[ { "msg_contents": "Hi,\n\npg_stat_io has I/O statistics that are collected when track_io_timing is\nenabled, but it is not mentioned in the description of track_io_timing.\nI think it's better to add a description of pg_stat_io for easy reference.\nWhat do you think?\n\nRegards,\n--\nHajime Matsunaga\nNTT DATA Group Corporation", "msg_date": "Thu, 27 Jun 2024 09:06:23 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Doc: fix track_io_timing description to mention pg_stat_io" }, { "msg_contents": "On Thu, Jun 27, 2024 at 2:06 PM <[email protected]> wrote:\n\n> Hi,\n>\n> pg_stat_io has I/O statistics that are collected when track_io_timing is\n> enabled, but it is not mentioned in the description of track_io_timing.\n> I think it's better to add a description of pg_stat_io for easy reference.\n> What do you think?\n>\nIts always good to add new things.\n\n>\n> Regards,\n> --\n> Hajime Matsunaga\n> NTT DATA Group Corporation\n>\n\nOn Thu, Jun 27, 2024 at 2:06 PM <[email protected]> wrote:\n\n\nHi,\n\n\n\n\npg_stat_io has I/O statistics that are collected when track_io_timing is\n\nenabled, but it is not mentioned in the description of track_io_timing.\n\nI think it's better to add a description of pg_stat_io for easy reference. \n\nWhat do you think?Its always good to add new things. \n\n\n\n\nRegards,\n\n\n--\nHajime Matsunaga\nNTT DATA Group Corporation", "msg_date": "Thu, 27 Jun 2024 14:08:31 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc: fix track_io_timing description to mention pg_stat_io" }, { "msg_contents": "Hi,\n\npg_stat_io has I/O statistics that are collected when track_io_timing is\nenabled, but it is not mentioned in the description of track_io_timing.\nI think it's better to add a description of pg_stat_io for easy reference.\nWhat do you think?\nIts always good to add new things.\nThanks for your positive comments.\n\nRegards,\n--\nHajime Matsunaga\nNTT DATA Group Corporation\n\n\n\n\n\n\n\n\n\n\nHi,\n\n\n\n\npg_stat_io has I/O statistics that are collected when track_io_timing is\n\nenabled, but it is not mentioned in the description of track_io_timing.\n\nI think it's better to add a description of pg_stat_io for easy reference. \n\nWhat do you think?\n\n\nIts always good to add new things. \n\n\nThanks for your positive comments.\n\n\n\n\nRegards,\n\n\n--\nHajime Matsunaga\nNTT DATA Group Corporation", "msg_date": "Thu, 27 Jun 2024 09:33:38 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Doc: fix track_io_timing description to mention pg_stat_io" }, { "msg_contents": "On Thu, Jun 27, 2024 at 5:06 AM <[email protected]> wrote:\n>\n> Hi,\n>\n> pg_stat_io has I/O statistics that are collected when track_io_timing is\n> enabled, but it is not mentioned in the description of track_io_timing.\n> I think it's better to add a description of pg_stat_io for easy reference.\n> What do you think?\n\nSeems quite reasonable to me given that track_wal_io_timing mentions\npg_stat_wal. 
I noticed that the sentence about track_io_timing in the\nstatistics collection configuration section [1] only mentions reads\nand writes -- perhaps it should also mention extends and fsyncs?\n\n- Melanie\n\n[1] https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-STATS-SETUP\n\n\n", "msg_date": "Thu, 27 Jun 2024 07:29:47 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc: fix track_io_timing description to mention pg_stat_io" }, { "msg_contents": "Hi,\n\nOn Thu, 27 Jun 2024 at 14:30, Melanie Plageman\n<[email protected]> wrote:\n>\n> On Thu, Jun 27, 2024 at 5:06 AM <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > pg_stat_io has I/O statistics that are collected when track_io_timing is\n> > enabled, but it is not mentioned in the description of track_io_timing.\n> > I think it's better to add a description of pg_stat_io for easy reference.\n> > What do you think?\n>\n> Seems quite reasonable to me given that track_wal_io_timing mentions\n> pg_stat_wal. I noticed that the sentence about track_io_timing in the\n> statistics collection configuration section [1] only mentions reads\n> and writes -- perhaps it should also mention extends and fsyncs?\n\nBoth suggestions look good to me. If what you said will be\nimplemented, maybe track_wal_io_timing too should mention fsyncs?\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Thu, 27 Jun 2024 15:00:33 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc: fix track_io_timing description to mention pg_stat_io" }, { "msg_contents": "> > On Thu, Jun 27, 2024 at 5:06 AM <[email protected]> wrote:\n> > >\n> > > Hi,\n> >\n> > > pg_stat_io has I/O statistics that are collected when track_io_timing is\n> > > enabled, but it is not mentioned in the description of track_io_timing.\n> > > I think it's better to add a description of pg_stat_io for easy reference.\n> > > What do you think?\n> >\n> > Seems quite reasonable to me given that track_wal_io_timing mentions\n> > pg_stat_wal. I noticed that the sentence about track_io_timing in the\n> > statistics collection configuration section [1] only mentions reads\n> > and writes -- perhaps it should also mention extends and fsyncs?\n> \n> Both suggestions look good to me. If what you said will be\n> implemented, maybe track_wal_io_timing too should mention fsyncs?\n\nThank you both for your positive comments.\nI also think your suggestions seem good.\n\n\nRegards,\n--\nHajime Matsunaga\nNTT DATA Group Corporation", "msg_date": "Fri, 28 Jun 2024 01:48:54 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Doc: fix track_io_timing description to mention pg_stat_io" }, { "msg_contents": "From: Nazir Bilal Yavuz <[email protected]>\nSent: Thursday, June 27, 2024 9:01 PM\n> \n> Hi,\n> \n> On Thu, 27 Jun 2024 at 14:30, Melanie Plageman\n> <[email protected]> wrote:\n> >\n> > On Thu, Jun 27, 2024 at 5:06 AM <[email protected]> wrote:\n> > >\n> > > Hi,\n> > >\n> > > pg_stat_io has I/O statistics that are collected when track_io_timing is\n> > > enabled, but it is not mentioned in the description of track_io_timing.\n> > > I think it's better to add a description of pg_stat_io for easy reference.\n> > > What do you think?\n> >\n> > Seems quite reasonable to me given that track_wal_io_timing mentions\n> > pg_stat_wal. 
I noticed that the sentence about track_io_timing in the\n> > statistics collection configuration section [1] only mentions reads\n> > and writes -- perhaps it should also mention extends and fsyncs?\n> \n> Both suggestions look good to me. If what you said will be\n> implemented, maybe track_wal_io_timing too should mention fsyncs?\n\nThanks for the suggestions the other day.\nI have created a patch that incorporates your suggestions.\n\nRegards,\n--\nHajime Matsunaga\nNTT DATA Group Corporation", "msg_date": "Wed, 3 Jul 2024 08:51:01 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "RE: Doc: fix track_io_timing description to mention pg_stat_io" }, { "msg_contents": "\n\nOn 2024/07/03 17:51, [email protected] wrote:\n> Thanks for the suggestions the other day.\n> I have created a patch that incorporates your suggestions.\n\n- <structname>pg_stat_database</structname></link>, in the output of\n+ <structname>pg_stat_database</structname></link> and\n+ <link linkend=\"monitoring-pg-stat-io-view\">\n+ <structname>pg_stat_io</structname></link>, in the output of\n\nI'm not a native English speaker, but it seems more natural to use a comma instead of \"and\" before \"pg_stat_io.\"\n\n- of block read and write times.\n+ of block read, block write, extend, and fsync times.\n\n\"block\" before \"fsync\" doesn't seem necessary.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 3 Jul 2024 19:43:40 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc: fix track_io_timing description to mention pg_stat_io" }, { "msg_contents": "From: Fujii Masao <[email protected]>\nSent: Wednesday, July 3, 2024 7:44 PM\n> \n> On 2024/07/03 17:51, [email protected] wrote:\n> > Thanks for the suggestions the other day.\n> > I have created a patch that incorporates your suggestions.\n> \n> - <structname>pg_stat_database</structname></link>, in the output of\n> + <structname>pg_stat_database</structname></link> and\n> + <link linkend=\"monitoring-pg-stat-io-view\">\n> + <structname>pg_stat_io</structname></link>, in the output of\n> \n> I'm not a native English speaker, but it seems more natural to use a comma instead of \"and\" before \"pg_stat_io.\"\n> \n> - of block read and write times.\n> + of block read, block write, extend, and fsync times.\n> \n> \"block\" before \"fsync\" doesn't seem necessary.\n\nI'm also not a native English speaker, so I appreciate your feedback.\nI have created a patch that incorporates your feedback.\n\n\nRegards,\n--\nHajime Matsunaga\nNTT DATA Group Corporation", "msg_date": "Mon, 8 Jul 2024 03:01:50 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "RE: Doc: fix track_io_timing description to mention pg_stat_io" }, { "msg_contents": "\n\nOn 2024/07/08 12:01, [email protected] wrote:\n> \n> \n> From: Fujii Masao <[email protected]>\n> Sent: Wednesday, July 3, 2024 7:44 PM\n>>\n>> On 2024/07/03 17:51, [email protected] wrote:\n>>> Thanks for the suggestions the other day.\n>>> I have created a patch that incorporates your suggestions.\n>>\n>> - <structname>pg_stat_database</structname></link>, in the output of\n>> + <structname>pg_stat_database</structname></link> and\n>> + <link linkend=\"monitoring-pg-stat-io-view\">\n>> + <structname>pg_stat_io</structname></link>, in the output of\n>>\n>> I'm not a native English speaker, but it seems more natural to use a comma instead of 
\"and\" before \"pg_stat_io.\"\n>>\n>> - of block read and write times.\n>> + of block read, block write, extend, and fsync times.\n>>\n>> \"block\" before \"fsync\" doesn't seem necessary.\n> \n> I'm also not a native English speaker, so I appreciate your feedback.\n> I have created a patch that incorporates your feedback.\n\nThanks for updating the patch! Pushed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 10 Jul 2024 16:02:32 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc: fix track_io_timing description to mention pg_stat_io" } ]
[ { "msg_contents": "Hello hackers,\n\nDuring work in the separate thread [1], I discovered more cases\nwhere the link in docs wasn't the canonical link [2].\n\n[1] https://postgr.es/m/CAKFQuwYEX9Pj9G0ZHJeWSmSbnqUyGH+FYcW-66eZjfVG4KOjiQ@mail.gmail.com\n[2] https://en.wikipedia.org/wiki/Canonical_link_element\n\nThe. below script e.g. doesn't parse SGML, and is broken in some other ways\nalso, but probably good enough to suggest changes that can then be manually\ncarefully verified.\n\n```\n#!/bin/bash\noutput_file=\"changes.log\"\n> $output_file\nextract_canonical() {\n local url=$1\n canonical=$(curl -s \"$url\" | sed -n 's/.*<link rel=\"canonical\" href=\"\\([^\"]*\\)\".*/\\1/p')\n if [[ -n \"$canonical\" && \"$canonical\" != \"$url\" ]]; then\n echo \"-$url\" >> $output_file\n echo \"+$canonical\" >> $output_file\n echo $canonical\n else\n echo $url\n fi\n}\nfind . -type f -name '*.sgml' | while read -r file; do\n urls=$(sed -n 's/.*\\(https:\\/\\/[^\"]*\\).*/\\1/p' \"$file\")\n for url in $urls; do\n canonical_url=$(extract_canonical \"$url\")\n if [[ \"$canonical_url\" != \"$url\" ]]; then\n # Replace the original URL with the canonical URL in the file\n sed -i '' \"s|$url|$canonical_url|g\" \"$file\"\n fi\n done\ndone\n```\n\nMost of what it found was indeed correct, but I had to undo some mistakes it did.\n\nAll the changes in the attached patch have been manually verified, by clicking\nthe original link, and observing the URL seen in the browser.\n\n/Joel", "msg_date": "Thu, 27 Jun 2024 11:27:45 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH] Fix docs to use canonical links" }, { "msg_contents": "On Thu, Jun 27, 2024 at 11:27:45AM +0200, Joel Jacobson wrote:\n> During work in the separate thread [1], I discovered more cases\n> where the link in docs wasn't the canonical link [2].\n> \n> [1] https://postgr.es/m/CAKFQuwYEX9Pj9G0ZHJeWSmSbnqUyGH+FYcW-66eZjfVG4KOjiQ@mail.gmail.com\n> [2] https://en.wikipedia.org/wiki/Canonical_link_element\n> \n> The. below script e.g. doesn't parse SGML, and is broken in some other ways\n> also, but probably good enough to suggest changes that can then be manually\n> carefully verified.\n\nThe 19 links you are updating here avoid redirections in Wikipedia and\nthe Postgres wiki. It's always a bit of a chicken-and-egg game in\nthis area, because links always change, still I don't mind the change.\n\nAny opinions from others?\n--\nMichael", "msg_date": "Mon, 1 Jul 2024 15:06:34 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix docs to use canonical links" }, { "msg_contents": "> On 1 Jul 2024, at 08:06, Michael Paquier <[email protected]> wrote:\n> \n> On Thu, Jun 27, 2024 at 11:27:45AM +0200, Joel Jacobson wrote:\n>> During work in the separate thread [1], I discovered more cases\n>> where the link in docs wasn't the canonical link [2].\n>> \n>> [1] https://postgr.es/m/CAKFQuwYEX9Pj9G0ZHJeWSmSbnqUyGH+FYcW-66eZjfVG4KOjiQ@mail.gmail.com\n>> [2] https://en.wikipedia.org/wiki/Canonical_link_element\n>> \n>> The. below script e.g. doesn't parse SGML, and is broken in some other ways\n>> also, but probably good enough to suggest changes that can then be manually\n>> carefully verified.\n> \n> The 19 links you are updating here avoid redirections in Wikipedia and\n> the Postgres wiki. 
It's always a bit of a chicken-and-egg game in\n> this area, because links always change, still I don't mind the change.\n\nAvoding redirects is generally a good thing, not everyone is on lightning fast\ninternet. Wikipedia is however not doing any 30X redirects so it's not really\nan issue for those links, it's all 200 requests.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 1 Jul 2024 09:35:09 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix docs to use canonical links" }, { "msg_contents": "On Mon, Jul 1, 2024, at 09:35, Daniel Gustafsson wrote:\n> Avoding redirects is generally a good thing, not everyone is on lightning fast\n> internet. Wikipedia is however not doing any 30X redirects so it's not really\n> an issue for those links, it's all 200 requests.\n\nYes, I noticed that too when observing the HTTPS traffic, so no issue there,\nexcept that it's a bit annoying that the address bar suddenly changes.\n\nHowever, I think David J had another good argument:\n\n\"If we are making wikipedia our authority we might as well use their standard for naming.\"\n\n/Joel\n\n\n", "msg_date": "Mon, 01 Jul 2024 13:09:30 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix docs to use canonical links" }, { "msg_contents": "> On 1 Jul 2024, at 13:09, Joel Jacobson <[email protected]> wrote:\n> \n> On Mon, Jul 1, 2024, at 09:35, Daniel Gustafsson wrote:\n>> Avoding redirects is generally a good thing, not everyone is on lightning fast\n>> internet. Wikipedia is however not doing any 30X redirects so it's not really\n>> an issue for those links, it's all 200 requests.\n> \n> Yes, I noticed that too when observing the HTTPS traffic, so no issue there,\n> except that it's a bit annoying that the address bar suddenly changes.\n\nRight, I was unclear, I'm not advocating against changing. It won't move the\nneedle compared to 30X redirects but it also won't hurt.\n\n> However, I think David J had another good argument:\n> \n> \"If we are making wikipedia our authority we might as well use their standard for naming.\"\n\nIt's a moving target, but so is most if not all links.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 1 Jul 2024 14:45:15 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix docs to use canonical links" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> On 1 Jul 2024, at 13:09, Joel Jacobson <[email protected]> wrote:\n>> However, I think David J had another good argument:\n>> \"If we are making wikipedia our authority we might as well use their standard for naming.\"\n\n> It's a moving target, but so is most if not all links.\n\nI see nothing wrong with this patch, so pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Jul 2024 16:39:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix docs to use canonical links" } ]
[ { "msg_contents": "Hi,\n\nI’m defining a custom type “MyType” with additional functions and an custom aggregate in a C-coded extension. From a PostgreSQL perspective it is a base type that piggybacks on the bytea type, i.e. LIKE = BYTEA.\n \nBut now I need to (re)define MyType to support type modifiers (e.g. MyType(1,14,18)) and I got that done using CREATE TYPE’s TYPMOD_IN and TYPMOD_OUT parameters resulting in the correct packed value getting stored in pg_attribute when I define a column of that type. \n\nBut when I pass a MyType value to a function defined in my C extension how would I access the type modifier value for the argument which could have been drawn from the catalog or the result of a cast. \n\nE.g. if I: \nSELECT MyFunc(‘xDEADBEEF’::MyType(1,14,18));\nIn the C function MyFunc calls I get a pointer to the data using PG_GETARG_BYTEA_P(0) macro and its length using the VARSIZE macro but I also need the given type modifiers (1, 14 and 18 in the example) before I can process the data correctly. Clearly I'd have to unpack the component values myself from the 16-bit atttypemod value into which the TYPMOD_OUT function has packed it, but where would I get access to that value? My type is written in C to be as fast as possible having to go do some SPI-level lookup or involved logic would slow it right down again.\n\nMy searches to date only yielded results referring to the value stored for a table in pg_attribute with the possibility of there being a value in HeapTupleHeader obtained by using the PG_GETARG_HEAPTUPLEHEADER(0) macro but that assumes the parameter is a tuple, not the individual value it actually is. I struggle to imagine that the type modifier value isn't being maintained by whatever casts are applied and not getting passed through to the extension already, but where to find it? \n\nCan someone please point me in the right direction here, the name of the structure containing the raw type modifier value, the name of the value in that structure, the name of a macro that accesses it, even if it’s just what keywords to search for in the documentation and/or archives? Even if it’s just a pointer to the code where e.g. the numeric type (which has type modifiers) is implemented so I can see how that code does it. Anything, I’m getting desperate. Perhaps not many before me needed to do this so it's not often mentioned, but sure it is in there somewhere, how else would type like numeric and even varchar actually work (given that the VARSIZE of a varlena gives its actual size, not the maximum as given when the column or value was created)?\n\nThank you in advance,\n\nMarthin Laubscher\n\n\n\n\n", "msg_date": "Thu, 27 Jun 2024 16:33:34 +0200", "msg_from": "Marthin Laubscher <[email protected]>", "msg_from_op": true, "msg_subject": "Custom type's modifiers" }, { "msg_contents": "Marthin Laubscher <[email protected]> writes:\n> But now I need to (re)define MyType to support type modifiers (e.g. MyType(1,14,18)) and I got that done using CREATE TYPE’s TYPMOD_IN and TYPMOD_OUT parameters resulting in the correct packed value getting stored in pg_attribute when I define a column of that type. \n\nOK ...\n\n> But when I pass a MyType value to a function defined in my C extension how would I access the type modifier value for the argument which could have been drawn from the catalog or the result of a cast. \n\nYou can't. 
Whatever info is needed by operations on the type had\nbetter be embedded in the value.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Jun 2024 11:06:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Custom type's modifiers" }, { "msg_contents": "On 2024/06/27, 17:06, \"Tom Lane\" <[email protected] <mailto:[email protected]>> wrote:\n> You can't. Whatever info is needed by operations on the type had better be embedded in the value.\n\nOK, thanks, that's clear and easy enough. I'll ensure the the third parameter to the input function is embedded in my opaque value. \n\nI don't see another function getting passed the value so I'd assume that (unless I return a MyType value from one of my own functions which would follow its internal logic to determine which type modifiers to use) the only way a MyType can get an initial value is via the input function. If the type is in a table column the input function would be called with the default value specified in external format if a value isn't specified during insert, but either way it would always originate from the eternal format. I suppose when a cast is involved it goes via the external format as well, right?\n\nAre those sound assumptions to make or am I still way off base here?\n\n --- Thanks for your time - Marthin Laubscher\n\n\n\n\n\n", "msg_date": "Thu, 27 Jun 2024 17:49:08 +0200", "msg_from": "Marthin Laubscher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Custom type's modifiers" }, { "msg_contents": "On Thu, Jun 27, 2024 at 8:49 AM Marthin Laubscher <[email protected]>\nwrote:\n\n>\n> I suppose when a cast is involved it goes via the external format as well,\n> right?\n>\n\nA cast between two types is going to accept a concrete instance of the\ninput type, in memory, as its argument and then produces a concrete\ninstance of the output type, in memory, as output. If the input data is\nserialized the constructor for the input type will handle deserialization.\n\nSee: create cast\n\nhttps://www.postgresql.org/docs/current/sql-createcast.html\n\nIn particular the phrasing: identical to or binary-coercible to\n\nDavid J.\n\nOn Thu, Jun 27, 2024 at 8:49 AM Marthin Laubscher <[email protected]> wrote:I suppose when a cast is involved it goes via the external format as well, right?A cast between two types is going to accept a concrete instance of the input type, in memory, as its argument and then produces a concrete instance of the output type, in memory, as output.  If the input data is serialized the constructor for the input type will handle deserialization.See: create casthttps://www.postgresql.org/docs/current/sql-createcast.htmlIn particular the phrasing: identical to or binary-coercible toDavid J.", "msg_date": "Thu, 27 Jun 2024 09:12:25 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Custom type's modifiers" }, { "msg_contents": "Marthin Laubscher <[email protected]> writes:\n> I don't see another function getting passed the value so I'd assume that (unless I return a MyType value from one of my own functions which would follow its internal logic to determine which type modifiers to use) the only way a MyType can get an initial value is via the input function. If the type is in a table column the input function would be called with the default value specified in external format if a value isn't specified during insert, but either way it would always originate from the eternal format. 
I suppose when a cast is involved it goes via the external format as well, right?\n\nThe only code anywhere in the system that can produce a MyType value\nis code you've written. So that has to come originally from your\ninput function, or cast functions or constructor functions you write.\n\nYou'd be well advised to read the CREATE CAST doco about how to\nsupport notations like 'something'::MyType(x,y,z). Although the\ninput function is expected to be able to apply a typemod if it's\npassed one, in most situations coercion to a specific typemod is\nhandled by invoking a multi-parameter cast function.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Jun 2024 12:24:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Custom type's modifiers" }, { "msg_contents": "On 2024/06/27, 18:13, \"David G. Johnston\" <mailto:[email protected]> wrote:\n> A cast between two types is going to accept a concrete instance of the input type, in memory, as its argument and then produces a concrete instance of the output type, in memory, as output.  If the input data is serialized the constructor for the input type will handle deserialization.\n\nI confess to some uncertainty whether the PostgreSQL specific x::y notation and the standards based CAST(x AS y) would both be addressed by creating a cast. What you’re saying means both forms engage the same code and defining a cast would cover the :: syntax as well. Thanks for that.\n\nIf my understanding is accurate, it means that even when both values are of MyType the CAST function would still be invoked so the type logic can determine how to handle (or reject) the cast. Cast would (obviously) require the target type modifiers as well, and the good news is that it’s already there as the second parameter of the function. So that’s the other function that receives the type modifier that I was missing. It’s starting to make plenty sense. \n\nTo summarise:\n- The type modifiers, encoded by the TYPMOD_IN function are passed directly as parameters to:- \n - the type's input function (parameter 3), and\n - any \"with function\" cast where the target type has type modifiers.\n- Regardless of the syntax used to invoke the cast, the same function will be called.\n- Depending on what casts are defined, converting from the external string format to a value of MyType will be handled either by the input function or a cast function. By default (without any casts) only values recognised by input can be converted to MyType values.\n\n -- Thanks for your time – Marthin Laubscher \n\n\n\n\n", "msg_date": "Thu, 27 Jun 2024 19:01:10 +0200", "msg_from": "Marthin Laubscher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Custom type's modifiers" }, { "msg_contents": "Marthin Laubscher <[email protected]> writes:\n> I confess to some uncertainty whether the PostgreSQL specific x::y notation and the standards based CAST(x AS y) would both be addressed by creating a cast.\n\nThose notations are precisely equivalent, modulo concerns about\noperator precedence (ie, if x isn't a single token you might have\nto parenthesize x to get (x)::y to mean the same as CAST(x AS y)).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Jun 2024 13:05:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Custom type's modifiers" } ]
[ { "msg_contents": "create event trigger ... on login is now available but it is not shown on\nDOCs or is it in another page ?\n\nhttps://www.postgresql.org/docs/devel/event-trigger-matrix.html\ndoesn't have it, should be there ?\n\nregards\nMarcos\n\ncreate event trigger ... on login is now available but it is not shown on DOCs or is it in another page ?https://www.postgresql.org/docs/devel/event-trigger-matrix.htmldoesn't have it, should be there ?regardsMarcos", "msg_date": "Thu, 27 Jun 2024 15:40:23 -0300", "msg_from": "Marcos Pegoraro <[email protected]>", "msg_from_op": true, "msg_subject": "Is missing LOGIN Event on Trigger Firing Matrix ?" }, { "msg_contents": "On 2024-06-27 Th 2:40 PM, Marcos Pegoraro wrote:\n> create event trigger ... on login is now available but it is not shown \n> on DOCs or is it in another page ?\n>\n> https://www.postgresql.org/docs/devel/event-trigger-matrix.html\n> doesn't have it, should be there ?\n>\n\nIt's not triggered by a statement. But see here:\n\n\nhttps://www.postgresql.org/docs/devel/event-trigger-database-login-example.html\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-06-27 Th 2:40 PM, Marcos\n Pegoraro wrote:\n\n\n\n\ncreate\n event trigger ... on login is now available but it is not\n shown on DOCs or is it in another page ?\n\n\nhttps://www.postgresql.org/docs/devel/event-trigger-matrix.html\n\ndoesn't\n have it, should be there ?\n\n\n\n\n\n\nIt's not triggered by a statement. But see here:\n\n\nhttps://www.postgresql.org/docs/devel/event-trigger-database-login-example.html\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Thu, 27 Jun 2024 15:39:45 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is missing LOGIN Event on Trigger Firing Matrix ?" }, { "msg_contents": "On Thu, Jun 27, 2024 at 12:39 PM Andrew Dunstan <[email protected]> wrote:\n\n> On 2024-06-27 Th 2:40 PM, Marcos Pegoraro wrote:\n>\n> create event trigger ... on login is now available but it is not shown on\n> DOCs or is it in another page ?\n>\n> https://www.postgresql.org/docs/devel/event-trigger-matrix.html\n> doesn't have it, should be there ?\n>\n>\n> It's not triggered by a statement. But see here:\n>\n>\n>\n> https://www.postgresql.org/docs/devel/event-trigger-database-login-example.html\n>\n>\n>\nI suggest adding a sentence and link from the main description [1] to that\npage.\n\nWe already do that for the ones that have commands:\n\"For a complete list of commands supported by the event trigger mechanism,\nsee Section 38.2.\"\n\nand login deserves something similar on that page.\n\nDavid J.\n\n[1] https://www.postgresql.org/docs/devel/event-trigger-definition.html\n\nOn Thu, Jun 27, 2024 at 12:39 PM Andrew Dunstan <[email protected]> wrote:\nOn 2024-06-27 Th 2:40 PM, Marcos\n Pegoraro wrote:\n\n\n\ncreate\n event trigger ... on login is now available but it is not\n shown on DOCs or is it in another page ?\n\n\nhttps://www.postgresql.org/docs/devel/event-trigger-matrix.html\n\ndoesn't\n have it, should be there ?\n\n\n\n\n\n\nIt's not triggered by a statement. 
But see here:\n\n\nhttps://www.postgresql.org/docs/devel/event-trigger-database-login-example.html\nI suggest adding a sentence and link from the main description [1] to that page.We already do that for the ones that have commands:\"For a complete list of commands supported by the event trigger mechanism, see Section 38.2.\"and login deserves something similar on that page.David J.[1] https://www.postgresql.org/docs/devel/event-trigger-definition.html", "msg_date": "Thu, 27 Jun 2024 13:00:33 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is missing LOGIN Event on Trigger Firing Matrix ?" }, { "msg_contents": "> On 27 Jun 2024, at 22:00, David G. Johnston <[email protected]> wrote:\n\n> I suggest adding a sentence and link from the main description [1] to that page.\n\nHow about the attached.\n\n--\nDaniel Gustafsson", "msg_date": "Mon, 1 Jul 2024 10:45:53 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is missing LOGIN Event on Trigger Firing Matrix ?" } ]
[ { "msg_contents": "Hi Hackers,\n\nHere is a proof-of-concept patch to inline set-returning functions (SRFs) besides those written in \nSQL. We already try to inline SQL-language functions,[1] but that means you must have a static SQL \nquery. There is no way to get an inline-able query by dynamically building the sql in, say, plpgsql.\n\nWe also have a SupportRequestSimplify request type for functions that use SUPPORT to declare a \nsupport function, and it can replace the FuncExpr with an arbitrary nodetree.[2] I think this was \nintended for constant-substitution, but we can also use it to let functions generate dynamic SQL and \nthen inline it. In this patch, if a SRF replaces itself with a Query node, then \ninline_set_returning_function will use that.\n\nSo far there are no tests or docs; I'm hoping to hear feedback on the idea before going further.\n\nHere is my concrete use-case: I wrote a function to do a temporal semijoin,[3] and I want it to be \ninlined. There is a support function that builds the same SQL and lets Postgres parse it into a \nQuery.[4] (In practice I would rewrite the main function in C too, so it could share the \nSQL-building code there, but this is just a POC.) If you build and install that extension on its \n`inlined` branch,[5] then you can do this:\n\n```\n\\i bench.sql\nexplain select * from temporal_semijoin('employees', 'id', 'valid_at', 'positions', 'employee_id', \n'valid_at') j(id bigint, valid_at daterange);\nexplain select * from temporal_semijoin('employees', 'id', 'valid_at', 'positions', 'employee_id', \n'valid_at') j(id bigint, valid_at daterange) where j.id = 10::bigint;\n```\n\nWithout this patch, you get `ERROR: unrecognized node type: 58`. But with this patch you get these \nplans:\n\n```\npostgres=# explain select * from temporal_semijoin('employees', 'id', 'valid_at', 'positions', \n'employee_id', 'valid_at') j(id bigint, valid_at daterange);\n QUERY PLAN\n----------------------------------------------------------------------------------------------\n ProjectSet (cost=4918.47..6177.06 rows=22300 width=40)\n -> Hash Join (cost=4918.47..6062.77 rows=223 width=53)\n Hash Cond: (employees.id = j.employee_id)\n Join Filter: (employees.valid_at && j.valid_at)\n -> Seq Scan on employees (cost=0.00..1027.39 rows=44539 width=21)\n -> Hash (cost=4799.15..4799.15 rows=9545 width=40)\n -> Subquery Scan on j (cost=4067.61..4799.15 rows=9545 width=40)\n -> HashAggregate (cost=4067.61..4703.70 rows=9545 width=40)\n Group Key: positions.employee_id\n Planned Partitions: 16\n -> Seq Scan on positions (cost=0.00..897.99 rows=44099 width=21)\n(11 rows)\n\npostgres=# explain select * from temporal_semijoin('employees', 'id', 'valid_at', 'positions', \n'employee_id', 'valid_at') j(id bigint, valid_at daterange) where j.id = 10::bigint;\n QUERY PLAN \n\n----------------------------------------------------------------------------------------------------------------------\n ProjectSet (cost=0.56..9.22 rows=100 width=40)\n -> Nested Loop (cost=0.56..8.71 rows=1 width=53)\n -> GroupAggregate (cost=0.28..4.39 rows=1 width=40)\n -> Index Only Scan using idx_positions_on_employee_id on positions \n(cost=0.28..4.36 rows=5 width=21)\n Index Cond: (employee_id = '10'::bigint)\n -> Index Only Scan using employees_pkey on employees (cost=0.28..4.30 rows=1 width=21)\n Index Cond: ((id = '10'::bigint) AND (valid_at && (range_agg(positions.valid_at))))\n(7 rows)\n\n```\n\nIn particular I'm excited to see in the second plan that the predicate gets pushed into the 
subquery.\n\nIf it seems good to let people use SupportRequestSimplify to make their SRFs be inlineable, I'm \nhappy to add tests and docs. We should really document the idea of inlined functions in general, so \nI'll do that too.\n\nAnother approach I considered is using a separate support request, e.g. SupportRequestInlineSRF, and \njust calling it from inline_set_returning_function. I didn't like having two support requests that \ndid almost exactly the same thing. OTOH my current approach means you'll get an error if you do this:\n\n```\npostgres=# select temporal_semijoin('employees', 'id', 'valid_at', 'positions', 'employee_id', \n'valid_at');\nERROR: unrecognized node type: 66\n```\n\nI'll look into ways to fix that.\n\nI think SupportRequestSimplify is a really cool feature. It is nearly like having macros.\nI'm dreaming about other ways I can (ab)use it. Just making inline-able SRFs has many applications. \n From my own client work, I could use this for a big permissions query or a query with complicated \npricing logic.\n\nThe sad part though is that SUPPORT functions must be written in C. That means few people will use \nthem, especially these days when so many are in the cloud. Since they take a Node and return a Node, \nmaybe there is no other way. But I would love to have a different mechanism that receives the \nfunction's arguments (evaluated) and returns a string, which we parse as a SQL query and then \ninline. The arguments would have to be const-reducible to strings, of course. You could specify that \nfunction with a new INLINE keyword when you create your target function. That feature would be less \npowerful, but with broader reach.\n\nI'd be glad to hear your thoughts!\n\n[1] https://wiki.postgresql.org/wiki/Inlining_of_SQL_functions (I couldn't find any mention in our \ndocs though, so we should add that.)\n[2] https://www.postgresql.org/docs/current/xfunc-optimization.html\n[3] https://github.com/pjungwir/temporal_ops/blob/master/temporal_ops--1.0.0.sql\n[4] https://github.com/pjungwir/temporal_ops/blob/inlined/temporal_ops.c\n[5] https://github.com/pjungwir/temporal_ops/tree/inlined\n\nYours,\n\n-- \nPaul ~{:-)\[email protected]", "msg_date": "Thu, 27 Jun 2024 15:01:23 -0700", "msg_from": "Paul Jungwirth <[email protected]>", "msg_from_op": true, "msg_subject": "Inline non-SQL SRFs using SupportRequestSimplify" }, { "msg_contents": "On 28/06/2024 01:01, Paul Jungwirth wrote:\n> If it seems good to let people use SupportRequestSimplify to make their SRFs be inlineable, I'm\n> happy to add tests and docs. We should really document the idea of inlined functions in general, so\n> I'll do that too.\n>\n> Another approach I considered is using a separate support request, e.g. SupportRequestInlineSRF, and\n> just calling it from inline_set_returning_function. I didn't like having two support requests that\n> did almost exactly the same thing. OTOH my current approach means you'll get an error if you do this:\n> \n> ```\n> postgres=# select temporal_semijoin('employees', 'id', 'valid_at', 'positions', 'employee_id',\n> 'valid_at');\n> ERROR: unrecognized node type: 66\n> ```\n> \n> I'll look into ways to fix that.\n\nIf the support function returns a Query, we end up having a FuncExpr \nwith a Query in the tree. A Query isnt an Expr, which is why you get \nthat error, and it seems like a recipe for confusion in general. 
Perhaps \nreturning a SubLink would be better.\n\nI think we should actually add an assertion after the call to the \nSupportRequestSimplify support function, to check that it returned an \nExpr node.\n\n+1 to the general feature of letting SRFs be simplified by the support \nfunction.\n\n> I think SupportRequestSimplify is a really cool feature. It is nearly like having macros.\n> I'm dreaming about other ways I can (ab)use it.\n\n:-D\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 2 Jul 2024 15:59:45 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inline non-SQL SRFs using SupportRequestSimplify" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> On 28/06/2024 01:01, Paul Jungwirth wrote:\n>> Another approach I considered is using a separate support request, e.g. SupportRequestInlineSRF, and\n>> just calling it from inline_set_returning_function. I didn't like having two support requests that\n>> did almost exactly the same thing. OTOH my current approach means you'll get an error if you do this:\n>> \n>> ```\n>> postgres=# select temporal_semijoin('employees', 'id', 'valid_at', 'positions', 'employee_id',\n>> 'valid_at');\n>> ERROR: unrecognized node type: 66\n>> ```\n>> \n>> I'll look into ways to fix that.\n\nI like this idea, but I like exactly nothing about this implementation.\nThe right thing is to have a separate SupportRequestInlineSRF request\nthat is called directly by inline_set_returning_function. It might be\n\"almost the same thing\" as SupportRequestSimplify, but \"almost\" only\ncounts in horseshoes and hand grenades. In particular, returning a\nQuery node is simply broken for SupportRequestSimplify (as your\nexample demonstrates), whereas it's the only correct result for\nSupportRequestInlineSRF.\n\nYou could imagine keeping it to one support request by adding a\nboolean field to the request struct to show which behavior is wanted,\nbut I think the principal result of that would be to break extensions\nthat weren't expecting such calls. The defined mechanism for\nextending the SupportRequest protocol is to add new support request\ncodes, not to whack around the APIs of existing ones.\n\n> I think we should actually add an assertion after the call to the \n> SupportRequestSimplify support function, to check that it returned an \n> Expr node.\n\nUm ... IsA(node, Expr) isn't going to work, and I'm not sure that\nit'd be useful to try to enumerate the set of Expr subtypes that\nshould be allowed there. But possibly it'd be worth asserting that\nit's not a Query, just in case anyone gets confused about the\ndifference between SupportRequestSimplify and SupportRequestInlineSRF.\n\nIt would be good to have an in-core test case for this request type,\nbut I don't really see any built-in SRFs for which expansion as a\nsub-SELECT would be an improvement.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Jul 2024 14:58:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inline non-SQL SRFs using SupportRequestSimplify" }, { "msg_contents": "On 7/26/24 11:58, Tom Lane wrote:\n > Heikki Linnakangas <[email protected]> writes:\n >> On 28/06/2024 01:01, Paul Jungwirth wrote:\n >>> Another approach I considered is using a separate support request, e.g. \nSupportRequestInlineSRF, and\n >>> just calling it from inline_set_returning_function. I didn't like having two support requests that\n >>> did almost exactly the same thing. 
OTOH my current approach means you'll get an error if you do \nthis:\n >>>\n >>> ```\n >>> postgres=# select temporal_semijoin('employees', 'id', 'valid_at', 'positions', 'employee_id',\n >>> 'valid_at');\n >>> ERROR: unrecognized node type: 66\n >>> ```\n >>>\n >>> I'll look into ways to fix that.\n >\n > I like this idea, but I like exactly nothing about this implementation.\n > The right thing is to have a separate SupportRequestInlineSRF request\n > that is called directly by inline_set_returning_function.\n\nHere are new patches using a new SupportRequestInlineSRF request type. They include patches and \ndocumentation.\n\nThe patches handle this:\n\n SELECT * FROM srf();\n\nbut not this:\n\n SELECT srf();\n\nIn the latter case, Postgres always calls the function in \"materialized mode\" and gets the whole \nresult up front, so inline_set_returning_function is never called, even for SQL functions.\n\nFor tests I added a `foo_from_bar(colname, tablename, filter)` PL/pgSQL function that does `SELECT \n$colname FROM $tablename [WHERE $colname = $filter]`, then the support function generates the same \nSQL and turns it into a Query node. This matches how I want to use the feature for my \ntemporal_semijoin etc functions. If you give a non-NULL filter, you get a Query with a Var node, so \nwe are testing something that isn't purely Const.\n\nThe SupportRequestSimplify type has some comments about supporting operators, but I don't think you \ncan have a set-returning operator, so I didn't repeat those comments for this new type.\n\nI split things up into three patch files because I couldn't get git to gracefully handle shifting a \nlarge block of code into an if statement. The first two patches have no changes except that \nindentation (and initializing one variable to NULL). They aren't meant to be committed separately.\n\nRebased to a83a944e9f.\n\nYours,\n\n-- \nPaul ~{:-)\[email protected]", "msg_date": "Fri, 30 Aug 2024 09:26:56 -0700", "msg_from": "Paul Jungwirth <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inline non-SQL SRFs using SupportRequestSimplify" }, { "msg_contents": "Paul Jungwirth <[email protected]> writes:\n> Here are new patches using a new SupportRequestInlineSRF request type. They include patches and \n> documentation.\n\nI took a look through this. I feel like we're still some way away\nfrom having something committable. I've got two main complaint\nareas:\n\n1. It doesn't seem like integrating this into\ninline_set_returning_function was the right thing after all, or\nmaybe just the way you did it isn't right. That function is pretty\nopinionated about what it is doing, and a lot of what it is doing\ndoesn't seem appropriate for a support-function-driven substitution.\nAs an example, it rejects WITH ORDINALITY, but who's to say that a\nsupport function couldn't handle that? More generally, I'm not sure\nif it's appropriate to make any tests on the function's properties,\nrather than assuming the support function knows what it's doing.\nI see you already hacked up the test on prolang, but the others in\nthe same if-clause seem equally dubious from here. I'm also unsure\nwhether it's our business to reject volatile functions or subplans\nin the function arguments. (Maybe it is, but not sure.) 
There is\nalso stuff towards the bottom of the function, particularly\ncheck_sql_fn_retval and parameter substitution, that I do not think\nmakes sense to apply to a non-SQL-language function; but if I'm\nreading this right you run all that code on the support function's\nresult.\n\nIt does make sense to require there to be just one RangeTblFunction in\nthe RTE, since it's not at all clear how we could combine the results\nif there's more than one. But I wonder if we should just pass the RTE\nnode to the support function, and let it make its own decision about\nrte->funcordinality. Or if that seems like a bad idea, pass the\nRangeTblFunction node. I think it's essential to do one of those\nthings rather than fake up a FuncExpr, because a support function for\na function returning RECORD would likely need access to the column\ndefinition list to figure out what to do.\n\nI notice that in the case of non-SRF function inlining, we handle\nsupport-function calling in a totally separate function\n(simplify_function) rather than try to integrate it into the\ncode that does SQL function inlining (inline_function). Maybe\na similar approach should be adopted here. We could have a\nwrapper function that implements the parts worth sharing, such\nas looking up the target function's pg_proc entry and doing\nthe permissions check. Or perhaps put that stuff into the sole\ncaller, preprocess_function_rtes.\n\nIf we do keep this in inline_set_returning_function, we need to\npay more than zero attention to updating that function's header\ncomment.\n\n2. The documentation needs to be a great deal more explicit\nabout what the function is supposed to return. It needs to\nbe a SELECT Query node that has been through parse analysis\nand rewriting. I don't think pointing to a regression test\nfunction is adequate, or even appropriate. The test function\nis a pretty bad example as-is, too. It aggressively disregards\nthe API recommendation in supportnodes.h:\n\n * Support functions must return a NULL pointer, not fail, if they do not\n * recognize the request node type or cannot handle the given case; this\n * allows for future extensions of the set of request cases.\n\nAs a more minor nit, I think SupportRequestInlineSRF should\ninclude \"struct PlannerInfo *root\", for the same reasons that\nSupportRequestSimplify does.\n\n> I split things up into three patch files because I couldn't get git to gracefully handle shifting a \n> large block of code into an if statement. The first two patches have no changes except that \n> indentation (and initializing one variable to NULL). They aren't meant to be committed separately.\n\nA hack I've used in the past is to have the main patch just add\n\n+\tif (...)\n+\t{\n...\n+\t}\n\naround the to-be-reindented code, and then apply pgindent as a\nseparate patch step. (We used to just leave it to the committer to\nrun pgindent, but I think nowadays the cfbot will whine at you if you\nsubmit not-pgindented code.) I think that's easier to review since\nthe reviewer can mechanically verify the pgindent patch. This problem\nmay be moot for this patch once we detangle the support function call\nfrom SQL-function inlining, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 03 Sep 2024 12:42:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inline non-SQL SRFs using SupportRequestSimplify" } ]
[ { "msg_contents": "fastgetattr and heap_getattr are converted to inline functions\nin e27f4ee0a701, while some comments still referring them as macros.\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Fri, 28 Jun 2024 11:04:40 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": true, "msg_subject": "stale comments about fastgetattr and heap_getattr" }, { "msg_contents": "Hi! Looks good to me. Please, register it in CF.\nBest regards, Stepan Neretin.\n\nOn Fri, Jun 28, 2024 at 10:05 AM Junwang Zhao <[email protected]> wrote:\n\n> fastgetattr and heap_getattr are converted to inline functions\n> in e27f4ee0a701, while some comments still referring them as macros.\n>\n> --\n> Regards\n> Junwang Zhao\n>\n\nHi! Looks good to me. Please, register it in CF.Best regards, Stepan Neretin.On Fri, Jun 28, 2024 at 10:05 AM Junwang Zhao <[email protected]> wrote:fastgetattr and heap_getattr are converted to inline functions\nin e27f4ee0a701, while some comments still referring them as macros.\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Fri, 28 Jun 2024 11:06:05 +0700", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stale comments about fastgetattr and heap_getattr" }, { "msg_contents": "On Fri, Jun 28, 2024 at 11:06:05AM +0700, Stepan Neretin wrote:\n> Hi! Looks good to me. Please, register it in CF.\n> Best regards, Stepan Neretin.\n\nNo need for that. I've looked at the patch, and you are right so I\nhave applied it now on HEAD.\n--\nMichael", "msg_date": "Fri, 28 Jun 2024 13:32:00 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stale comments about fastgetattr and heap_getattr" } ]
[ { "msg_contents": "Hi all,\n\nThe parallel vacuum we have today supports only for index vacuuming.\nTherefore, while multiple workers can work on different indexes in\nparallel, the heap table is always processed by the single process.\nI'd like to propose $subject, which enables us to have multiple\nworkers running on the single heap table. This would be helpful to\nspeedup vacuuming for tables without indexes or tables with\nINDEX_CLENAUP = off.\n\nI've attached a PoC patch for this feature. It implements only\nparallel heap scans in lazyvacum. We can extend this feature to\nsupport parallel heap vacuum as well in the future or in the same\npatch.\n\n# Overall idea (for parallel heap scan in lazy vacuum)\n\nAt the beginning of vacuum, we determine how many workers to launch\nbased on the table size like other parallel query operations. The\nnumber of workers is capped by max_parallel_maitenance_workers. Once\nwe decided to use parallel heap scan, we prepared DSM to share data\namong parallel workers and leader. The information include at least\nthe vacuum option such as aggressive, the counters collected during\nlazy vacuum such as scanned_pages, vacuum cutoff such as VacuumCutoffs\nand GlobalVisState, and parallel scan description.\n\nBefore starting heap scan in lazy vacuum, we launch parallel workers\nand then each worker (and the leader) process different blocks. Each\nworker does HOT-pruning on pages and collects dead tuple TIDs. When\nadding dead tuple TIDs, workers need to hold an exclusive lock on\nTidStore. At the end of heap scan phase, workers exit and the leader\nwill wait for all workers to exit. After that, the leader process\ngather the counters collected by parallel workers, and compute the\noldest relfrozenxid (and relminmxid). Then if parallel index vacuum is\nalso enabled, we launch other parallel workers for parallel index\nvacuuming.\n\nWhen it comes to parallel heap scan in lazy vacuum, I think we can use\nthe table_block_parallelscan_XXX() family. One tricky thing we need to\ndeal with is that if the TideStore memory usage reaches the limit, we\nstop the parallel scan, do index vacuum and table vacuum, and then\nresume the parallel scan from the previous state. In order to do that,\nin the patch, we store ParallelBlockTableScanWorker, per-worker\nparallel scan state, into DSM so that different parallel workers can\nresume the scan using the same parallel scan state.\n\nIn addition to that, since we could end up launching fewer workers\nthan requested, it could happen that some ParallelBlockTableScanWorker\ndata is used once and never be used while remaining unprocessed\nblocks. To handle this case, in the patch, the leader process checks\nat the end of the parallel scan if there is an uncompleted parallel\nscan. If so, the leader process does the scan using worker's\nParallelBlockTableScanWorker data on behalf of workers.\n\n# Discussions\n\nI'm somewhat convinced the brief design of this feature, but there are\nsome points regarding the implementation we need to discuss.\n\nIn the patch, I extended vacuumparalle.c to support parallel table\nscan (and vacuum in the future). So I was required to add some table\nAM callbacks such as DSM size estimation, DSM initialization, and\nactual table scans etc. 
We need to verify these APIs are appropriate.\nSpecifically, if we want to support both parallel heap scan and\nparallel heap vacuum, do we want to add separate callbacks for them?\nIt could be overkill since such a 2-pass vacuum strategy is specific\nto heap AM.\n\nAs another implementation idea, we might want to implement parallel\nheap scan/vacuum in lazyvacuum.c while minimizing changes for\nvacuumparallel.c. That way, we would not need to add table AM\ncallbacks. However, we would end up having duplicate codes related to\nparallel operation in vacuum such as vacuum delays.\n\nAlso, we might need to add some functions to share GlobalVisState\namong parallel workers, since GlobalVisState is a private struct.\n\nOther points I'm somewhat uncomfortable with or need to be discussed\nremain in the code with XXX comments.\n\n# Benchmark results\n\n* Test-1: parallel heap scan on the table without indexes\n\nI created 20GB table, made garbage on the table, and run vacuum while\nchanging parallel degree:\n\ncreate unlogged table test (a int) with (autovacuum_enabled = off);\ninsert into test select generate_series(1, 600000000); --- 20GB table\ndelete from test where a % 5 = 0;\nvacuum (verbose, parallel 0) test;\n\nHere are the results (total time and heap scan time):\n\nPARALLEL 0: 21.99 s (single process)\nPARALLEL 1: 11.39 s\nPARALLEL 2: 8.36 s\nPARALLEL 3: 6.14 s\nPARALLEL 4: 5.08 s\n\n* Test-2: parallel heap scan on the table with one index\n\nI used a similar table to the test case 1 but created one btree index on it:\n\ncreate unlogged table test (a int) with (autovacuum_enabled = off);\ninsert into test select generate_series(1, 600000000); --- 20GB table\ncreate index on test (a);\ndelete from test where a % 5 = 0;\nvacuum (verbose, parallel 0) test;\n\nI've measured the total execution time as well as the time of each\nvacuum phase (from left heap scan time, index vacuum time, and heap\nvacuum time):\n\nPARALLEL 0: 45.11 s (21.89, 16.74, 6.48)\nPARALLEL 1: 42.13 s (12.75, 22.04, 7.23)\nPARALLEL 2: 39.27 s (8.93, 22.78, 7.45)\nPARALLEL 3: 36.53 s (6.76, 22.00, 7.65)\nPARALLEL 4: 35.84 s (5.85, 22.04, 7.83)\n\nOverall, I can see the parallel heap scan in lazy vacuum has a decent\nscalability; In both test-1 and test-2, the execution time of heap\nscan got ~4x faster with 4 parallel workers. On the other hand, when\nit comes to the total vacuum execution time, I could not see much\nperformance improvement in test-2 (45.11 vs. 35.84). Looking at the\nresults PARALLEL 0 vs. PARALLEL 1 in test-2, the heap scan got faster\n(21.89 vs. 12.75) whereas index vacuum got slower (16.74 vs. 22.04),\nand heap scan in case 2 was not as fast as in case 1 with 1 parallel\nworker (12.75 vs. 11.39).\n\nI think the reason is the shared TidStore is not very scalable since\nwe have a single lock on it. In all cases in the test-1, we don't use\nthe shared TidStore since all dead tuples are removed during heap\npruning. So the scalability was better overall than in test-2. In\nparallel 0 case in test-2, we use the local TidStore, and from\nparallel degree of 1 in test-2, we use the shared TidStore and\nparallel worker concurrently update it. Also, I guess that the lookup\nperformance of the local TidStore is better than the shared TidStore's\nlookup performance because of the differences between a bump context\nand an DSA area. I think that this difference contributed the fact\nthat index vacuuming got slower (16.74 vs. 
22.04).\n\nThere are two obvious improvement ideas to improve overall vacuum\nexecution time: (1) improve the shared TidStore scalability and (2)\nsupport parallel heap vacuum. For (1), several ideas are proposed by\nthe ART authors[1]. I've not tried these ideas but it might be\napplicable to our ART implementation. But I prefer to start with (2)\nsince it would be easier. Feedback is very welcome.\n\nRegards,\n\n[1] https://db.in.tum.de/~leis/papers/artsync.pdf\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 28 Jun 2024 13:13:25 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": true, "msg_subject": "Parallel heap vacuum" }, { "msg_contents": "On Fri, Jun 28, 2024 at 9:44 AM Masahiko Sawada <[email protected]> wrote:\n>\n> # Benchmark results\n>\n> * Test-1: parallel heap scan on the table without indexes\n>\n> I created 20GB table, made garbage on the table, and run vacuum while\n> changing parallel degree:\n>\n> create unlogged table test (a int) with (autovacuum_enabled = off);\n> insert into test select generate_series(1, 600000000); --- 20GB table\n> delete from test where a % 5 = 0;\n> vacuum (verbose, parallel 0) test;\n>\n> Here are the results (total time and heap scan time):\n>\n> PARALLEL 0: 21.99 s (single process)\n> PARALLEL 1: 11.39 s\n> PARALLEL 2: 8.36 s\n> PARALLEL 3: 6.14 s\n> PARALLEL 4: 5.08 s\n>\n> * Test-2: parallel heap scan on the table with one index\n>\n> I used a similar table to the test case 1 but created one btree index on it:\n>\n> create unlogged table test (a int) with (autovacuum_enabled = off);\n> insert into test select generate_series(1, 600000000); --- 20GB table\n> create index on test (a);\n> delete from test where a % 5 = 0;\n> vacuum (verbose, parallel 0) test;\n>\n> I've measured the total execution time as well as the time of each\n> vacuum phase (from left heap scan time, index vacuum time, and heap\n> vacuum time):\n>\n> PARALLEL 0: 45.11 s (21.89, 16.74, 6.48)\n> PARALLEL 1: 42.13 s (12.75, 22.04, 7.23)\n> PARALLEL 2: 39.27 s (8.93, 22.78, 7.45)\n> PARALLEL 3: 36.53 s (6.76, 22.00, 7.65)\n> PARALLEL 4: 35.84 s (5.85, 22.04, 7.83)\n>\n> Overall, I can see the parallel heap scan in lazy vacuum has a decent\n> scalability; In both test-1 and test-2, the execution time of heap\n> scan got ~4x faster with 4 parallel workers. On the other hand, when\n> it comes to the total vacuum execution time, I could not see much\n> performance improvement in test-2 (45.11 vs. 35.84). Looking at the\n> results PARALLEL 0 vs. PARALLEL 1 in test-2, the heap scan got faster\n> (21.89 vs. 12.75) whereas index vacuum got slower (16.74 vs. 22.04),\n> and heap scan in case 2 was not as fast as in case 1 with 1 parallel\n> worker (12.75 vs. 11.39).\n>\n> I think the reason is the shared TidStore is not very scalable since\n> we have a single lock on it. In all cases in the test-1, we don't use\n> the shared TidStore since all dead tuples are removed during heap\n> pruning. So the scalability was better overall than in test-2. In\n> parallel 0 case in test-2, we use the local TidStore, and from\n> parallel degree of 1 in test-2, we use the shared TidStore and\n> parallel worker concurrently update it. Also, I guess that the lookup\n> performance of the local TidStore is better than the shared TidStore's\n> lookup performance because of the differences between a bump context\n> and an DSA area. I think that this difference contributed the fact\n> that index vacuuming got slower (16.74 vs. 
22.04).\n>\n> There are two obvious improvement ideas to improve overall vacuum\n> execution time: (1) improve the shared TidStore scalability and (2)\n> support parallel heap vacuum. For (1), several ideas are proposed by\n> the ART authors[1]. I've not tried these ideas but it might be\n> applicable to our ART implementation. But I prefer to start with (2)\n> since it would be easier. Feedback is very welcome.\n>\n\nStarting with (2) sounds like a reasonable approach. We should study a\nfew more things like (a) the performance results where there are 3-4\nindexes, (b) What is the reason for performance improvement seen with\nonly heap scans. We normally get benefits of parallelism because of\nusing multiple CPUs but parallelizing scans (I/O) shouldn't give much\nbenefits. Is it possible that you are seeing benefits because most of\nthe data is either in shared_buffers or in memory? We can probably try\nvacuuming tables by restarting the nodes to ensure the data is not in\nmemory.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 28 Jun 2024 17:36:23 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel heap vacuum" }, { "msg_contents": "Dear Sawada-san,\r\n\r\n> The parallel vacuum we have today supports only for index vacuuming.\r\n> Therefore, while multiple workers can work on different indexes in\r\n> parallel, the heap table is always processed by the single process.\r\n> I'd like to propose $subject, which enables us to have multiple\r\n> workers running on the single heap table. This would be helpful to\r\n> speedup vacuuming for tables without indexes or tables with\r\n> INDEX_CLENAUP = off.\r\n\r\nSounds great. IIUC, vacuuming is still one of the main weak point of postgres.\r\n\r\n> I've attached a PoC patch for this feature. It implements only\r\n> parallel heap scans in lazyvacum. We can extend this feature to\r\n> support parallel heap vacuum as well in the future or in the same\r\n> patch.\r\n\r\nBefore diving into deep, I tested your PoC but found unclear point.\r\nWhen the vacuuming is requested with parallel > 0 with almost the same workload\r\nas yours, only the first page was scanned and cleaned up. 
\r\n\r\nWhen parallel was set to zero, I got:\r\n```\r\nINFO: vacuuming \"postgres.public.test\"\r\nINFO: finished vacuuming \"postgres.public.test\": index scans: 0\r\npages: 0 removed, 2654868 remain, 2654868 scanned (100.00% of total)\r\ntuples: 120000000 removed, 480000000 remain, 0 are dead but not yet removable\r\nremovable cutoff: 752, which was 0 XIDs old when operation ended\r\nnew relfrozenxid: 739, which is 1 XIDs ahead of previous value\r\nfrozen: 0 pages from table (0.00% of total) had 0 tuples frozen\r\nindex scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed\r\navg read rate: 344.639 MB/s, avg write rate: 344.650 MB/s\r\nbuffer usage: 2655045 hits, 2655527 misses, 2655606 dirtied\r\nWAL usage: 1 records, 1 full page images, 937 bytes\r\nsystem usage: CPU: user: 39.45 s, system: 20.74 s, elapsed: 60.19 s\r\n```\r\n\r\nThis meant that all pages were surely scanned and dead tuples were removed.\r\nHowever, when parallel was set to one, I got another result:\r\n\r\n```\r\nINFO: vacuuming \"postgres.public.test\"\r\nINFO: launched 1 parallel vacuum worker for table scanning (planned: 1)\r\nINFO: finished vacuuming \"postgres.public.test\": index scans: 0\r\npages: 0 removed, 2654868 remain, 1 scanned (0.00% of total)\r\ntuples: 12 removed, 0 remain, 0 are dead but not yet removable\r\nremovable cutoff: 752, which was 0 XIDs old when operation ended\r\nfrozen: 0 pages from table (0.00% of total) had 0 tuples frozen\r\nindex scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed\r\navg read rate: 92.952 MB/s, avg write rate: 0.845 MB/s\r\nbuffer usage: 96 hits, 660 misses, 6 dirtied\r\nWAL usage: 1 records, 1 full page images, 937 bytes\r\nsystem usage: CPU: user: 0.05 s, system: 0.00 s, elapsed: 0.05 s\r\n```\r\n\r\nIt looked like that only a page was scanned and 12 tuples were removed.\r\nIt looks very strange for me...\r\n\r\nAttached script emulate my test. IIUC it was almost the same as yours, but\r\nthe instance was restarted before vacuuming.\r\n\r\nCan you reproduce and see the reason? Based on the requirement I can\r\nprovide further information.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/", "msg_date": "Fri, 5 Jul 2024 09:51:22 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Parallel heap vacuum" }, { "msg_contents": "On Fri, Jul 5, 2024 at 6:51 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Sawada-san,\n>\n> > The parallel vacuum we have today supports only for index vacuuming.\n> > Therefore, while multiple workers can work on different indexes in\n> > parallel, the heap table is always processed by the single process.\n> > I'd like to propose $subject, which enables us to have multiple\n> > workers running on the single heap table. This would be helpful to\n> > speedup vacuuming for tables without indexes or tables with\n> > INDEX_CLENAUP = off.\n>\n> Sounds great. IIUC, vacuuming is still one of the main weak point of postgres.\n>\n> > I've attached a PoC patch for this feature. It implements only\n> > parallel heap scans in lazyvacum. 
We can extend this feature to\n> > support parallel heap vacuum as well in the future or in the same\n> > patch.\n>\n> Before diving into deep, I tested your PoC but found unclear point.\n> When the vacuuming is requested with parallel > 0 with almost the same workload\n> as yours, only the first page was scanned and cleaned up.\n>\n> When parallel was set to zero, I got:\n> ```\n> INFO: vacuuming \"postgres.public.test\"\n> INFO: finished vacuuming \"postgres.public.test\": index scans: 0\n> pages: 0 removed, 2654868 remain, 2654868 scanned (100.00% of total)\n> tuples: 120000000 removed, 480000000 remain, 0 are dead but not yet removable\n> removable cutoff: 752, which was 0 XIDs old when operation ended\n> new relfrozenxid: 739, which is 1 XIDs ahead of previous value\n> frozen: 0 pages from table (0.00% of total) had 0 tuples frozen\n> index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed\n> avg read rate: 344.639 MB/s, avg write rate: 344.650 MB/s\n> buffer usage: 2655045 hits, 2655527 misses, 2655606 dirtied\n> WAL usage: 1 records, 1 full page images, 937 bytes\n> system usage: CPU: user: 39.45 s, system: 20.74 s, elapsed: 60.19 s\n> ```\n>\n> This meant that all pages were surely scanned and dead tuples were removed.\n> However, when parallel was set to one, I got another result:\n>\n> ```\n> INFO: vacuuming \"postgres.public.test\"\n> INFO: launched 1 parallel vacuum worker for table scanning (planned: 1)\n> INFO: finished vacuuming \"postgres.public.test\": index scans: 0\n> pages: 0 removed, 2654868 remain, 1 scanned (0.00% of total)\n> tuples: 12 removed, 0 remain, 0 are dead but not yet removable\n> removable cutoff: 752, which was 0 XIDs old when operation ended\n> frozen: 0 pages from table (0.00% of total) had 0 tuples frozen\n> index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed\n> avg read rate: 92.952 MB/s, avg write rate: 0.845 MB/s\n> buffer usage: 96 hits, 660 misses, 6 dirtied\n> WAL usage: 1 records, 1 full page images, 937 bytes\n> system usage: CPU: user: 0.05 s, system: 0.00 s, elapsed: 0.05 s\n> ```\n>\n> It looked like that only a page was scanned and 12 tuples were removed.\n> It looks very strange for me...\n>\n> Attached script emulate my test. IIUC it was almost the same as yours, but\n> the instance was restarted before vacuuming.\n>\n> Can you reproduce and see the reason? Based on the requirement I can\n> provide further information.\n\nThank you for the test!\n\nI could reproduce this issue and it's a bug; it skipped even\nnon-all-visible pages. I've attached the new version patch.\n\nBTW since we compute the number of parallel workers for the heap scan\nbased on the table size, it's possible that we launch multiple workers\neven if most blocks are all-visible. 
It seems to be better if we\ncalculate it based on (relpages - relallvisible).\n\nRegards,\n\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 8 Jul 2024 15:10:56 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel heap vacuum" }, { "msg_contents": "On Fri, Jun 28, 2024 at 9:06 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Jun 28, 2024 at 9:44 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > # Benchmark results\n> >\n> > * Test-1: parallel heap scan on the table without indexes\n> >\n> > I created 20GB table, made garbage on the table, and run vacuum while\n> > changing parallel degree:\n> >\n> > create unlogged table test (a int) with (autovacuum_enabled = off);\n> > insert into test select generate_series(1, 600000000); --- 20GB table\n> > delete from test where a % 5 = 0;\n> > vacuum (verbose, parallel 0) test;\n> >\n> > Here are the results (total time and heap scan time):\n> >\n> > PARALLEL 0: 21.99 s (single process)\n> > PARALLEL 1: 11.39 s\n> > PARALLEL 2: 8.36 s\n> > PARALLEL 3: 6.14 s\n> > PARALLEL 4: 5.08 s\n> >\n> > * Test-2: parallel heap scan on the table with one index\n> >\n> > I used a similar table to the test case 1 but created one btree index on it:\n> >\n> > create unlogged table test (a int) with (autovacuum_enabled = off);\n> > insert into test select generate_series(1, 600000000); --- 20GB table\n> > create index on test (a);\n> > delete from test where a % 5 = 0;\n> > vacuum (verbose, parallel 0) test;\n> >\n> > I've measured the total execution time as well as the time of each\n> > vacuum phase (from left heap scan time, index vacuum time, and heap\n> > vacuum time):\n> >\n> > PARALLEL 0: 45.11 s (21.89, 16.74, 6.48)\n> > PARALLEL 1: 42.13 s (12.75, 22.04, 7.23)\n> > PARALLEL 2: 39.27 s (8.93, 22.78, 7.45)\n> > PARALLEL 3: 36.53 s (6.76, 22.00, 7.65)\n> > PARALLEL 4: 35.84 s (5.85, 22.04, 7.83)\n> >\n> > Overall, I can see the parallel heap scan in lazy vacuum has a decent\n> > scalability; In both test-1 and test-2, the execution time of heap\n> > scan got ~4x faster with 4 parallel workers. On the other hand, when\n> > it comes to the total vacuum execution time, I could not see much\n> > performance improvement in test-2 (45.11 vs. 35.84). Looking at the\n> > results PARALLEL 0 vs. PARALLEL 1 in test-2, the heap scan got faster\n> > (21.89 vs. 12.75) whereas index vacuum got slower (16.74 vs. 22.04),\n> > and heap scan in case 2 was not as fast as in case 1 with 1 parallel\n> > worker (12.75 vs. 11.39).\n> >\n> > I think the reason is the shared TidStore is not very scalable since\n> > we have a single lock on it. In all cases in the test-1, we don't use\n> > the shared TidStore since all dead tuples are removed during heap\n> > pruning. So the scalability was better overall than in test-2. In\n> > parallel 0 case in test-2, we use the local TidStore, and from\n> > parallel degree of 1 in test-2, we use the shared TidStore and\n> > parallel worker concurrently update it. Also, I guess that the lookup\n> > performance of the local TidStore is better than the shared TidStore's\n> > lookup performance because of the differences between a bump context\n> > and an DSA area. I think that this difference contributed the fact\n> > that index vacuuming got slower (16.74 vs. 
22.04).\n> >\n\nThank you for the comments!\n\n> > There are two obvious improvement ideas to improve overall vacuum\n> > execution time: (1) improve the shared TidStore scalability and (2)\n> > support parallel heap vacuum. For (1), several ideas are proposed by\n> > the ART authors[1]. I've not tried these ideas but it might be\n> > applicable to our ART implementation. But I prefer to start with (2)\n> > since it would be easier. Feedback is very welcome.\n> >\n>\n> Starting with (2) sounds like a reasonable approach. We should study a\n> few more things like (a) the performance results where there are 3-4\n> indexes,\n\nHere are the results with 4 indexes (and restarting the server before\nthe benchmark):\n\nPARALLEL 0: 115.48 s (32.76, 64.46, 18.24)\nPARALLEL 1: 74.88 s (17.11, 44.43, 13.25)\nPARALLEL 2: 71.15 s (14.13, 44.82, 12.12)\nPARALLEL 3: 46.78 s (10.74, 24.50, 11.43)\nPARALLEL 4: 46.42 s (8.95, 24.96, 12.39) (launched 4 workers for heap\nscan and 3 workers for index vacuum)\n\n> (b) What is the reason for performance improvement seen with\n> only heap scans. We normally get benefits of parallelism because of\n> using multiple CPUs but parallelizing scans (I/O) shouldn't give much\n> benefits. Is it possible that you are seeing benefits because most of\n> the data is either in shared_buffers or in memory? We can probably try\n> vacuuming tables by restarting the nodes to ensure the data is not in\n> memory.\n\nI think it depends on the storage performance. FYI I use an EC2\ninstance (m6id.metal).\n\nI've run the same benchmark script (table with no index) with\nrestarting the server before executing the vacuum, and here are the\nresults:\n\nPARALLEL 0: 32.75 s\nPARALLEL 1: 17.46 s\nPARALLEL 2: 13.41 s\nPARALLEL 3: 10.31 s\nPARALLEL 4: 8.48 s\n\nWith the above two tests, I used the updated patch that I just submitted[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoAWHHnCg9OvtoEJnnvCc-3isyOyAGn%2B2KYoSXEv%3DvXauw%40mail.gmail.com\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 8 Jul 2024 15:14:13 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel heap vacuum" }, { "msg_contents": "Dear Sawada-san,\r\n\r\n> Thank you for the test!\r\n> \r\n> I could reproduce this issue and it's a bug; it skipped even\r\n> non-all-visible pages. I've attached the new version patch.\r\n> \r\n> BTW since we compute the number of parallel workers for the heap scan\r\n> based on the table size, it's possible that we launch multiple workers\r\n> even if most blocks are all-visible. It seems to be better if we\r\n> calculate it based on (relpages - relallvisible).\r\n\r\nThanks for updating the patch. I applied and confirmed all pages are scanned.\r\nI used almost the same script (just changed max_parallel_maintenance_workers)\r\nand got below result. I think the tendency was the same as yours.\r\n\r\n```\r\nparallel 0: 61114.369 ms\r\nparallel 1: 34870.316 ms\r\nparallel 2: 23598.115 ms\r\nparallel 3: 17732.363 ms\r\nparallel 4: 15203.271 ms\r\nparallel 5: 13836.025 ms\r\n```\r\n\r\nI started to read your codes but takes much time because I've never seen before...\r\nBelow part contains initial comments.\r\n\r\n1.\r\nThis patch cannot be built when debug mode is enabled. 
See [1].\r\nIIUC, this was because NewRelminMxid was moved from struct LVRelState to PHVShared.\r\nSo you should update like \" vacrel->counters->NewRelminMxid\".\r\n\r\n2.\r\nI compared parallel heap scan and found that it does not have compute_worker API.\r\nCan you clarify the reason why there is an inconsistency?\r\n(I feel it is intentional because the calculation logic seems to depend on the heap structure,\r\nso should we add the API for table scan as well?)\r\n\r\n[1]:\r\n```\r\nvacuumlazy.c: In function ‘lazy_scan_prune’:\r\nvacuumlazy.c:1666:34: error: ‘LVRelState’ {aka ‘struct LVRelState’} has no member named ‘NewRelminMxid’\r\n Assert(MultiXactIdIsValid(vacrel->NewRelminMxid));\r\n ^~\r\n....\r\n```\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Thu, 25 Jul 2024 09:58:03 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Parallel heap vacuum" }, { "msg_contents": "On Thu, Jul 25, 2024 at 2:58 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Sawada-san,\n>\n> > Thank you for the test!\n> >\n> > I could reproduce this issue and it's a bug; it skipped even\n> > non-all-visible pages. I've attached the new version patch.\n> >\n> > BTW since we compute the number of parallel workers for the heap scan\n> > based on the table size, it's possible that we launch multiple workers\n> > even if most blocks are all-visible. It seems to be better if we\n> > calculate it based on (relpages - relallvisible).\n>\n> Thanks for updating the patch. I applied and confirmed all pages are scanned.\n> I used almost the same script (just changed max_parallel_maintenance_workers)\n> and got below result. I think the tendency was the same as yours.\n>\n> ```\n> parallel 0: 61114.369 ms\n> parallel 1: 34870.316 ms\n> parallel 2: 23598.115 ms\n> parallel 3: 17732.363 ms\n> parallel 4: 15203.271 ms\n> parallel 5: 13836.025 ms\n> ```\n\nThank you for testing!\n\n>\n> I started to read your codes but takes much time because I've never seen before...\n> Below part contains initial comments.\n>\n> 1.\n> This patch cannot be built when debug mode is enabled. See [1].\n> IIUC, this was because NewRelminMxid was moved from struct LVRelState to PHVShared.\n> So you should update like \" vacrel->counters->NewRelminMxid\".\n\nRight, will fix.\n\n> 2.\n> I compared parallel heap scan and found that it does not have compute_worker API.\n> Can you clarify the reason why there is an inconsistency?\n> (I feel it is intentional because the calculation logic seems to depend on the heap structure,\n> so should we add the API for table scan as well?)\n\nThere is room to consider a better API design, but yes, the reason is\nthat the calculation logic depends on table AM implementation. For\nexample, I thought it might make sense to consider taking the number\nof all-visible pages into account for the calculation of the number of\nparallel workers as we don't want to launch many workers on the table\nwhere most pages are all-visible. Which might not work for other table\nAMs.\n\nI'm updating the patch to implement parallel heap vacuum and will\nshare the updated patch. 
It might take time as it requires to\nimplement shared iteration support in radx tree.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 26 Jul 2024 11:02:22 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel heap vacuum" }, { "msg_contents": "Dear Sawada-san,\r\n\r\n> Thank you for testing!\r\n\r\nI tried to profile the vacuuming with the larger case (40 workers for the 20G table)\r\nand attached FlameGraph showed the result. IIUC, I cannot find bottlenecks.\r\n\r\n> > 2.\r\n> > I compared parallel heap scan and found that it does not have compute_worker\r\n> API.\r\n> > Can you clarify the reason why there is an inconsistency?\r\n> > (I feel it is intentional because the calculation logic seems to depend on the\r\n> heap structure,\r\n> > so should we add the API for table scan as well?)\r\n> \r\n> There is room to consider a better API design, but yes, the reason is\r\n> that the calculation logic depends on table AM implementation. For\r\n> example, I thought it might make sense to consider taking the number\r\n> of all-visible pages into account for the calculation of the number of\r\n> parallel workers as we don't want to launch many workers on the table\r\n> where most pages are all-visible. Which might not work for other table\r\n> AMs.\r\n>\r\n\r\nOkay, thanks for confirming. I wanted to ask others as well.\r\n\r\n\r\n> I'm updating the patch to implement parallel heap vacuum and will\r\n> share the updated patch. It might take time as it requires to\r\n> implement shared iteration support in radx tree.\r\n\r\nHere are other preliminary comments for v2 patch. This does not contain\r\ncosmetic ones. \r\n\r\n1.\r\nShared data structure PHVShared does not contain the mutex lock. Is it intentional\r\nbecause they are accessed by leader only after parallel workers exit?\r\n\r\n2.\r\nPer my understanding, the vacuuming goes like below steps.\r\n\r\na. paralell workers are launched for scanning pages\r\nb. leader waits until scans are done\r\nc. leader does vacuum alone (you may extend here...)\r\nd. parallel workers are launched again to cleanup indeces\r\n\r\nIf so, can we reuse parallel workers for the cleanup? Or, this is painful\r\nengineering than the benefit?\r\n\r\n3.\r\nAccording to LaunchParallelWorkers(), the bgw_name and bgw_type are hardcoded as\r\n\"parallel worker ...\" Can we extend this to improve the trackability in the\r\npg_stat_activity?\r\n\r\n4.\r\nI'm not the expert TidStore, but as you said TidStoreLockExclusive() might be a\r\nbottleneck when tid is added to the shared TidStore. My another primitive idea\r\nis that to prepare per-worker TidStores (in the PHVScanWorkerState or LVRelCounters?)\r\nand gather after the heap scanning. If you extend like parallel workers do vacuuming,\r\nthe gathering may not be needed: they can access own TidStore and clean up.\r\nOne downside is that the memory consumption may be quite large.\r\n\r\n\r\nHow do you think?\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Wed, 31 Jul 2024 03:54:22 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Parallel heap vacuum" } ]
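A minimal sketch of the "(relpages - relallvisible)" worker-count idea discussed in the thread above is shown below. This is only an illustration, not code from the posted patch: the function name parallel_heap_scan_compute_workers, the PAGES_PER_WORKER chunk size, and the capping rule are all assumptions made for the example.

/*
 * Illustrative sketch (assumed names, not from the posted patch): choose a
 * worker count for the parallel heap scan phase of lazy vacuum based on the
 * number of pages that are not already all-visible, so that a mostly
 * all-visible table does not get workers it cannot use.
 */
#include "postgres.h"

#include "utils/rel.h"

#define PAGES_PER_WORKER	4096	/* assumed chunk size, for illustration */

static int
parallel_heap_scan_compute_workers(Relation rel, int max_workers)
{
	int32		relpages = rel->rd_rel->relpages;
	int32		relallvisible = rel->rd_rel->relallvisible;
	int32		pages_to_scan;
	int			nworkers;

	/* Nothing sensible to do without stats or for an empty table. */
	if (relpages <= 0)
		return 0;

	/* Count only pages the heap scan will actually have to visit. */
	pages_to_scan = Max(relpages - relallvisible, 0);

	/* One worker per chunk of to-be-scanned pages, capped by the caller. */
	nworkers = pages_to_scan / PAGES_PER_WORKER;
	return Min(nworkers, max_workers);
}

With a heuristic along these lines, the caller would pass something like max_parallel_maintenance_workers as the cap, and a table whose pages are mostly all-visible would launch few or no extra workers even when its raw relpages would otherwise justify several, which matches the concern raised up-thread about launching workers for blocks that end up being skipped.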
[ { "msg_contents": "Attachment protected by Amazon:\r\n\r\n[0001-Handle-Sleep-interrupts.patch]\r\nhttps://us-east-1.secure-attach.amazon.com/fcdc82ce-7887-4aa1-af9e-c1161a6b1d6f/bc81fa24-41de-48f9-a767-a6d15801754b\r\n\r\nAmazon has replaced attachment with download link. Downloads will be available until July 28, 2024, 19:59 (UTC+00:00).\r\n[Tell us what you think] https://amazonexteu.qualtrics.com/jfe/form/SV_ehuz6zGo8YnsRKK\r\n[For more information click here] https://docs.secure-attach.amazon.com/guide\r\n\r\n\r\nHi,\r\n\r\nIn the proposal by Bertrand [1] to implement vacuum cost delay tracking\r\nin pg_stat_progress_vacuum, it was discovered that the vacuum cost delay\r\nends early on the leader process of a parallel vacuum due to parallel workers\r\nreporting progress on index vacuuming, which was introduced in 17\r\nwith commit [2]. With this patch, everytime a parallel worker\r\ncompletes a vacuum index, it will send a completion message to the leader.\r\n\r\nThe facility that allows a parallel worker to report progress to the leader was\r\nintroduced in commit [3].\r\n\r\nIn the case of the patch being proposed by Bertrand, the number of interrupts\r\nwill be much more frequent as parallel workers would send a message to the leader\r\nto update the vacuum delay counters every vacuum_delay_point call.\r\n\r\nLooking into this, the ideal solution appears to provide the ability for a pg_usleep\r\ncall to restart after being interrupted. Thanks to the usage of nanosleep as\r\nof commit[4], this should be trivial to do as nanosleep\r\nprovides a remaining time, which can be used to restart the sleep. This\r\nidea was also mentioned in [5].\r\n\r\nI am attaching a POC patch that does the $SUBJECT. Rather than changing the\r\nexisting pg_usleep, the patch introduces a function, pg_usleep_handle_interrupt,\r\nthat takes in the sleep duration and a boolean flag to force handling of the\r\ninterrupt, or not.\r\n\r\nThis function can replace pg_usleep inside vacuum_delay_point, and could be\r\nuseful in other cases in which it’s important to handle interrupts.\r\n\r\nFor Windows, the “force” = “true” flag of the new function uses SleepEx which\r\nfrom what I can tell based on the code comments is a non-interruptible sleep.\r\n\r\nThoughts?\r\n\r\n[1] https://www.postgresql.org/message-id/ZnbIIUQpFJKAJx2W%40ip-10-97-1-34.eu-west-3.compute.internal\r\n[2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=46ebdfe164c61fbac961d1eb7f40e9a684289ae6\r\n[3] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=f1889729dd3ab0352dc0ccc2ffcc1b1901f8e39f\r\n[4] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=a948e49e2ef11815be0b211723bfc5b53b7f75a8\r\n[5] https://www.postgresql.org/message-id/24848.1473607575%40sss.pgh.pa.us\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)", "msg_date": "Fri, 28 Jun 2024 19:59:15 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": true, "msg_subject": "Restart pg_usleep when interrupted" }, { "msg_contents": "Hi,\n\nI think you need to find a way to disable this \"Attachment protected\nby Amazon\" thing:\n\nhttp://postgr.es/m/01000190606e3d2a-116ead16-84d2-4449-8d18-5053da66b1f4-000000@email.amazonses.com\n\nWe want patches to be in the pgsql-hackers archives, not temporarily\naccessible via some link.\n\n...Robert\n\n\n", "msg_date": "Fri, 28 Jun 2024 16:29:14 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart 
pg_usleep when interrupted" }, { "msg_contents": "> I think you need to find a way to disable this \"Attachment protected\r\n> by Amazon\" thing:\r\n\r\nYes, I am looking into that. I only noticed after sending the email.\r\n\r\nSorry about that.\r\n\r\nSami \r\n\r\n\r\n\r\n\r\n\r\n", "msg_date": "Fri, 28 Jun 2024 20:32:27 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "> We want patches to be in the pgsql-hackers archives, not temporarily\n> accessible via some link.\n> \n> …Robert\n> \n\nMoving to another email going forward.\n\nReattaching the patch.\n\n\n\n\n\n\nRegards,\n\nSami Imseih\nAmazon Web Services", "msg_date": "Fri, 28 Jun 2024 16:14:36 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Sami Imseih <[email protected]> writes:\n> Reattaching the patch.\n\nI feel like this is fundamentally a wrong solution, for the reasons\ncited in the comment for pg_usleep: long sleeps are a bad idea\nbecause of the resulting uncertainty about whether we'll respond to\ninterrupts and such promptly. An example here is that if we get\na query cancel interrupt, we should probably not insist on finishing\nout the current sleep before responding.\n\nTherefore, rather than \"improving\" pg_usleep (and uglifying its API),\nthe right answer is to fix parallel vacuum leaders to not depend on\npg_usleep in the first place. A better idea might be to use\npg_sleep() or equivalent code.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 28 Jun 2024 17:34:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "On Sat, Jun 29, 2024 at 9:34 AM Tom Lane <[email protected]> wrote:\n> Therefore, rather than \"improving\" pg_usleep (and uglifying its API),\n> the right answer is to fix parallel vacuum leaders to not depend on\n> pg_usleep in the first place. A better idea might be to use\n> pg_sleep() or equivalent code.\n\nIn case it's useful for someone looking into that, in earlier\ndiscussions we figured out that it is possible to have high resolution\ntimeouts AND support latch multiplexing on Linux, FreeBSD, macOS:\n\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKG%2BhC9mFx8tEcBsyo7-cAfWgtbRy1eDizeFuff2K7T%3D4bA%40mail.gmail.com\n\n\n", "msg_date": "Sat, 29 Jun 2024 10:11:39 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Thanks for the feedback!\n\n> On Jun 28, 2024, at 4:34 PM, Tom Lane <[email protected]> wrote:\n> \n> Sami Imseih <[email protected]> writes:\n>> Reattaching the patch.\n> \n> I feel like this is fundamentally a wrong solution, for the reasons\n> cited in the comment for pg_usleep: long sleeps are a bad idea\n> because of the resulting uncertainty about whether we'll respond to\n> interrupts and such promptly. An example here is that if we get\n> a query cancel interrupt, we should probably not insist on finishing\n> out the current sleep before responding.\n\nThe case which brought up this discussion is the pg_usleep that \nis called within the vacuum_delay_point being interrupted. \n\nWhen I read the same code comment you cited, it sounded to me \nthat “long sleeps” are those that are in seconds or minutes. 
The longest \nvacuum delay allowed is 100ms.\n\n\n> Therefore, rather than \"improving\" pg_usleep (and uglifying its API),\n> the right answer is to fix parallel vacuum leaders to not depend on\n> pg_usleep in the first place. A better idea might be to use\n> pg_sleep() or equivalent code.\n\nYes, that is a good idea to explore and it will not require introducing\nan awkward new API. I will look into using something similar to\npg_sleep.\n\nRegards,\n\nSami\n\n\n\n", "msg_date": "Fri, 28 Jun 2024 17:50:20 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "> \n>> Therefore, rather than \"improving\" pg_usleep (and uglifying its API),\n>> the right answer is to fix parallel vacuum leaders to not depend on\n>> pg_usleep in the first place. A better idea might be to use\n>> pg_sleep() or equivalent code.\n> \n> Yes, that is a good idea to explore and it will not require introducing\n> an awkward new API. I will look into using something similar to\n> pg_sleep.\n\n\nLooking through the history of the sleep in vacuum_delay_point, commit\n720de00af49 replaced WaitLatch with pg_usleep to allow for microsecond\nsleep precision [1]. \n\nThomas has proposed a WaitLatchUs implementation in [2], but I have not\nyet tried it. \n\nSo I see there are 2 possible options here to deal with the interrupt of a \nparallel vacuum leader when a message is sent by a parallel vacuum worker. \n\nOption 1/ something like my initial proposal which is\nto create a function similar to pg_usleep that is able to deal with\ninterrupts in a sleep.
This could be a function scoped only to vacuum.c,\nso it can only be used for vacuum delay purposes.\n\n—— \nOption 2/ to explore the WaitLatchUs implementation by\nThomas which will give both a latch implementation for a sleep with\nthe microsecond precision.\n\nIt is worth mentioning that if we do end up using WaitLatch(Us) inside\nvacuum_delay_point, it will need to set only WL_TIMEOUT and \nWL_EXIT_ON_PM_DEATH.\n\ni.e.\n(void) WaitLatch(MyLatch, WL_TIMEOUT| WL_EXIT_ON_PM_DEATH,\n\t\t\t\t\t msec\n\t\t\t\t\t WAIT_EVENT_VACUUM_DELAY);\n\nThis way it is not interrupted by a WL_LATCH_SET when a message\nis set by a parallel worker.\n\n——\n\nUltimately, I think option 2 may be worth a closer look as it is a cleaner\nand safer approach, to detect a postmaster death.\n\n\nThoughts?\n\n\n[1] https://postgr.es/m/CAAKRu_b-q0hXCBUCAATh0Z4Zi6UkiC0k2DFgoD3nC-r3SkR3tg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CA%2BhUKGKVbJE59JkwnUj5XMY%2B-rzcTFciV9vVC7i%3DLUfWPds8Xw%40mail.gmail.com", "msg_date": "Mon, 1 Jul 2024 11:16:32 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Hi,\n\nOn Fri, Jun 28, 2024 at 05:50:20PM -0500, Sami Imseih wrote:\n> \n> Thanks for the feedback!\n> \n> > On Jun 28, 2024, at 4:34 PM, Tom Lane <[email protected]> wrote:\n> > \n> > Sami Imseih <[email protected]> writes:\n> >> Reattaching the patch.\n> > \n> > I feel like this is fundamentally a wrong solution, for the reasons\n> > cited in the comment for pg_usleep: long sleeps are a bad idea\n> > because of the resulting uncertainty about whether we'll respond to\n> > interrupts and such promptly.
An example here is that if we get\n> > a query cancel interrupt, we should probably not insist on finishing\n> > out the current sleep before responding.\n> \n> The case which brought up this discussion is the pg_usleep that \n> is called within the vacuum_delay_point being interrupted. \n> \n> When I read the same code comment you cited, it sounded to me \n> that “long sleeps” are those that are in seconds or minutes. The longest \n> vacuum delay allowed is 100ms.\n\nI think that with the proposed patch the actual wait time can be \"long\".\n\nIndeed, the time between the interruptions and restarts of the nanosleep() call\nwill lead to drift (as mentioned in the nanosleep() man page). So, with a large\nnumber of interruptions, the actual wait time could be way longer than the\nexpected wait time.\n\nTo put numbers, I did a few tests with the patch (and with v2 shared in [1]):\n\ncost delay is 1ms and cost limit is 10.\n\nWith 50 indexes and 10 parallel workers I can see things like:\n\n2024-07-02 08:22:23.789 UTC [2189616] LOG: expected 1.000000, actual 239.378368\n2024-07-02 08:22:24.575 UTC [2189616] LOG: expected 0.100000, actual 224.331737\n2024-07-02 08:22:25.363 UTC [2189616] LOG: expected 1.300000, actual 230.462793\n2024-07-02 08:22:26.154 UTC [2189616] LOG: expected 1.000000, actual 225.980803\n\nMeans we waited more than the max allowed cost delay (100ms).\n\nWith 49 parallel workers, it's worst as I can see things like:\n\n2024-07-02 08:26:36.069 UTC [2189807] LOG: expected 1.000000, actual 1106.790136\n2024-07-02 08:26:36.298 UTC [2189807] LOG: expected 1.000000, actual 218.148985\n\nThe first actual wait time is about 1 second (it has been interrupted about\n16300 times during this second).\n\nTo avoid this drift, the nanosleep() man page suggests to use clock_nanosleep()\nwith an absolute time value, that might be another idea to explore.\n\n[1]: https://www.postgresql.org/message-id/flat/ZmaXmWDL829fzAVX%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 2 Jul 2024 09:05:40 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "> With 50 indexes and 10 parallel workers I can see things like:\n> \n> 2024-07-02 08:22:23.789 UTC [2189616] LOG: expected 1.000000, actual 239.378368\n> 2024-07-02 08:22:24.575 UTC [2189616] LOG: expected 0.100000, actual 224.331737\n> 2024-07-02 08:22:25.363 UTC [2189616] LOG: expected 1.300000, actual 230.462793\n> 2024-07-02 08:22:26.154 UTC [2189616] LOG: expected 1.000000, actual 225.980803\n> \n> Means we waited more than the max allowed cost delay (100ms).\n> \n> With 49 parallel workers, it's worst as I can see things like:\n> \n> 2024-07-02 08:26:36.069 UTC [2189807] LOG: expected 1.000000, actual 1106.790136\n> 2024-07-02 08:26:36.298 UTC [2189807] LOG: expected 1.000000, actual 218.148985\n> \n> The first actual wait time is about 1 second (it has been interrupted about\n> 16300 times during this second).\n> \n> To avoid this drift, the nanosleep() man page suggests to use clock_nanosleep()\n> with an absolute time value, that might be another idea to explore.\n> \n> [1]: https://www.postgresql.org/message-id/flat/ZmaXmWDL829fzAVX%40ip-10-97-1-34.eu-west-3.compute.internal\n> \n\n\n\nI could not reproduce the same time you drift you observed on my\nmachine, so I am guessing the 
time drift could be worse on certain\nplatforms than others.\n\nI also looked into the WaitLatchUs patch proposed by Thomas in [1]\nand since my system does have epoll_pwait(2) available, I could not\nachieve the sub-millisecond wait times. \n\nA more portable approach which could be to continue using nanosleep and\nadd checks to ensure that nanosleep exists whenever\nit goes past an absolute time. This was suggested by Bertrand in an offline\nconversation. I am not yet fully convinced of this idea, but posting the patch\nthat implements this idea for anyone interested in looking.\n\nSince sub-millisecond sleep times are not guaranteed as suggested by\nthe vacuum_cost_delay docs ( see below ), an alternative idea\nis to use clock_nanosleep for vacuum delay when it’s available, else\nfallback to WaitLatch.\n\n\"While vacuum_cost_delay can be set to fractional-millisecond values, \nsuch delays may not be measured accurately on older platforms”\n\n\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGKVbJE59JkwnUj5XMY%2B-rzcTFciV9vVC7i%3DLUfWPds8Xw%40mail.gmail.com\n\nRegards, \n\nSami\n\n", "msg_date": "Fri, 5 Jul 2024 11:49:45 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "> \n> A more portable approach which could be to continue using nanosleep and\n> add checks to ensure that nanosleep exists whenever\n> it goes past an absolute time. This was suggested by Bertrand in an offline\n> conversation. I am not yet fully convinced of this idea, but posting the patch\n> that implements this idea for anyone interested in looking.\n\noops, forgot to attach the patch. Here it is.", "msg_date": "Fri, 5 Jul 2024 11:57:37 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Hi,\n\nOn Fri, Jul 05, 2024 at 11:49:45AM -0500, Sami Imseih wrote:\n> \n> > With 50 indexes and 10 parallel workers I can see things like:\n> > \n> > 2024-07-02 08:22:23.789 UTC [2189616] LOG: expected 1.000000, actual 239.378368\n> > 2024-07-02 08:22:24.575 UTC [2189616] LOG: expected 0.100000, actual 224.331737\n> > 2024-07-02 08:22:25.363 UTC [2189616] LOG: expected 1.300000, actual 230.462793\n> > 2024-07-02 08:22:26.154 UTC [2189616] LOG: expected 1.000000, actual 225.980803\n> > \n> > Means we waited more than the max allowed cost delay (100ms).\n> > \n> > With 49 parallel workers, it's worst as I can see things like:\n> > \n> > 2024-07-02 08:26:36.069 UTC [2189807] LOG: expected 1.000000, actual 1106.790136\n> > 2024-07-02 08:26:36.298 UTC [2189807] LOG: expected 1.000000, actual 218.148985\n> > \n> > The first actual wait time is about 1 second (it has been interrupted about\n> > 16300 times during this second).\n> > \n> > To avoid this drift, the nanosleep() man page suggests to use clock_nanosleep()\n> > with an absolute time value, that might be another idea to explore.\n> > \n> > [1]: https://www.postgresql.org/message-id/flat/ZmaXmWDL829fzAVX%40ip-10-97-1-34.eu-west-3.compute.internal\n> > \n> \n> \n> A more portable approach which could be to continue using nanosleep and\n> add checks to ensure that nanosleep exists whenever\n> it goes past an absolute time. This was suggested by Bertrand in an offline\n> conversation. I am not yet fully convinced of this idea, but posting the patch\n> that implements this idea for anyone interested in looking.\n\nThanks! 
\n\nI did a few tests with the patch and did not see any \"large\" drifts like the\nones observed above.\n\nAs far the patch, not thoroughly review (as it's still one option among others\nbeing discussed)):\n\n+ struct timespec current;\n+ float time_diff;\n+\n+ clock_gettime(PG_INSTR_CLOCK, &current);\n+\n+ time_diff = (absolute.tv_sec - current.tv_sec) + (absolute.tv_nsec - current.tv_nsec) / 1000000000.0;\n\nI think it could be \"simplified\" by making use of instr_time instead of timespec\nfor current and absolute. Then I think it would be enough to compare their\nticks.\n\n> Since sub-millisecond sleep times are not guaranteed as suggested by\n> the vacuum_cost_delay docs ( see below ), an alternative idea\n> is to use clock_nanosleep for vacuum delay when it’s available, else\n> fallback to WaitLatch.\n\nWouldn't that increase even more the cases where sub-millisecond won't be\nguaranteed?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 9 Jul 2024 10:44:05 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "> I did a few tests with the patch and did not see any \"large\" drifts like the\n> ones observed above.\n\nThanks for testing.\n\n> I think it could be \"simplified\" by making use of instr_time instead of timespec\n> for current and absolute. Then I think it would be enough to compare their\n> ticks.\n\nCorrect I attached a v2 of this patch that uses instr_time to check the elapsed\ntime and break out of the loop. It needs some more benchmarking.\n\n>> Since sub-millisecond sleep times are not guaranteed as suggested by\n>> the vacuum_cost_delay docs ( see below ), an alternative idea\n>> is to use clock_nanosleep for vacuum delay when it’s available, else\n>> fallback to WaitLatch.\n> \n> Wouldn't that increase even more the cases where sub-millisecond won't be\n> guaranteed?\n\nYes, nanosleep is going to provide the most coverage as it’s widely available.\n\nRegards,\n\n\nSami\n\n", "msg_date": "Thu, 11 Jul 2024 10:15:41 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "> \n> Correct I attached a v2 of this patch that uses instr_time to check the elapsed\n> time and break out of the loop. It needs some more benchmarking.\n\nMy email client has an issue sending attachments it seems. Reattaching \n\n\n\nRegards,\n\nSami", "msg_date": "Thu, 11 Jul 2024 10:19:22 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "+\t\t/*\n+\t\t * We allow nanosleep to handle interrupts and retry with the remaining time.\n+\t\t * However, since nanosleep is susceptible to time drift when interrupted\n+\t\t * frequently, we add a safeguard to break out of the nanosleep whenever the\n+\t\t * total time of the sleep exceeds the requested sleep time. Using nanosleep\n+\t\t * is a more portable approach than clock_nanosleep.\n+\t\t */\n\nI'm curious why we wouldn't just subtract \"elapsed_time\" from \"delay\" at\nthe bottom of the while loop to avoid needing this extra check. 
Also, I\nthink we need some commentary about why we want to retry after an interrupt\nin this case.\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 11 Jul 2024 10:34:06 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "\n> \n> I'm curious why we wouldn't just subtract \"elapsed_time\" from \"delay\" at\n> the bottom of the while loop to avoid needing this extra check. \n\nCan you elaborate further? I am not sure how this will work since delay is a timespec \nand elapsed time is an instr_time. \n\nAlso, in every iteration of the loop, the delay must be set to the remaining time. The\npurpose of the elapsed_time is to make sure that we don’t surpass requested time\ndelay as an additional safeguard.\n\n> Also, I\n> think we need some commentary about why we want to retry after an interrupt\n> in this case.\n\nI will elaborate further in the comments for the next revision.\n\n\nRegards,\n\nSami \n\n\n\n", "msg_date": "Thu, 11 Jul 2024 13:10:25 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "On Thu, Jul 11, 2024 at 01:10:25PM -0500, Sami Imseih wrote:\n>> I'm curious why we wouldn't just subtract \"elapsed_time\" from \"delay\" at\n>> the bottom of the while loop to avoid needing this extra check. \n> \n> Can you elaborate further? I am not sure how this will work since delay is a timespec \n> and elapsed time is an instr_time. \n> \n> Also, in every iteration of the loop, the delay must be set to the remaining time. The\n> purpose of the elapsed_time is to make sure that we don�t surpass requested time\n> delay as an additional safeguard.\n\nI'm imagining something like this:\n\n struct timespec delay;\n TimestampTz end_time;\n\n end_time = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), msec);\n\n do\n {\n long secs;\n int microsecs;\n\n TimestampDifference(GetCurrentTimestamp(), end_time,\n &secs, &microsecs);\n\n delay.tv_sec = secs;\n delay.tv_nsec = microsecs * 1000;\n\n } while (nanosleep(&delay, NULL) == -1 && errno == EINTR);\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 11 Jul 2024 14:56:44 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Hi,\n\nOn Thu, Jul 11, 2024 at 10:15:41AM -0500, Sami Imseih wrote:\n> \n> > I did a few tests with the patch and did not see any \"large\" drifts like the\n> > ones observed above.\n> \n> Thanks for testing.\n> \n> > I think it could be \"simplified\" by making use of instr_time instead of timespec\n> > for current and absolute. Then I think it would be enough to compare their\n> > ticks.\n> \n> Correct I attached a v2 of this patch that uses instr_time to check the elapsed\n> time and break out of the loop. 
It needs some more benchmarking.\n\nThanks!\n\nOutside of Nathan's comment:\n\n1 ===\n\n+ * However, since nanosleep is susceptible to time drift when interrupted\n+ * frequently, we add a safeguard to break out of the nanosleep whenever the\n\nI'm not sure that \"nanosleep is susceptible to time drift when interrupted frequently\"\nis a correct wording.\n\nWhat about?\n\n\"\nHowever, since the time between frequent interruptions and restarts of the\nnanosleep calls can substantially lead to drift in the time when the sleep\nfinally completes, we add....\"\n\n2 ===\n\n+static void vacuum_sleep(double msec);\n\nWhat about a more generic name that could be used outside of vacuum?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 12 Jul 2024 07:54:51 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "On 2024-Jul-11, Nathan Bossart wrote:\n\n> I'm imagining something like this:\n> \n> struct timespec delay;\n> TimestampTz end_time;\n> \n> end_time = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), msec);\n> \n> do\n> {\n> long secs;\n> int microsecs;\n> \n> TimestampDifference(GetCurrentTimestamp(), end_time,\n> &secs, &microsecs);\n> \n> delay.tv_sec = secs;\n> delay.tv_nsec = microsecs * 1000;\n> \n> } while (nanosleep(&delay, NULL) == -1 && errno == EINTR);\n\nThis looks nicer. We could deal with clock drift easily (in case the\nsysadmin winds the clock back) by testing that tv_sec+tv_nsec is not\nhigher than the initial time to sleep. I don't know how common this\nsituation is nowadays, but I remember debugging a system years ago where\nautovacuum was sleeping for a very long time because of that. I can't\nremember now if we did anything in the code to cope, or just told\nsysadmins not to do that anymore :-)\n\nFWIW my (Linux's) nanosleep() manpage contains this note:\n\n If the interval specified in req is not an exact multiple of the granu‐\n larity underlying clock (see time(7)), then the interval will be rounded\n up to the next multiple. 
Furthermore, after the sleep completes, there\n may still be a delay before the CPU becomes free to once again execute\n the calling thread.\n\nIt's not clear to me what happens if the time to sleep is zero, so maybe\nthere should be a \"if tv_sec == 0 && tv_nsec == 0 then break\" statement\nat the bottom of the loop, to quit without sleeping one more time than\nneeded.\n\nFor Windows, this [1] looks like an interesting and possibly relevant\nread (though maybe SleepEx already does what we want to do here.)\n\n[1] https://randomascii.wordpress.com/2020/10/04/windows-timer-resolution-the-great-rule-change/\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Having your biases confirmed independently is how scientific progress is\nmade, and hence made our great society what it is today\" (Mary Gardiner)\n\n\n", "msg_date": "Fri, 12 Jul 2024 11:39:54 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "> \n> I'm imagining something like this:\n> \n> struct timespec delay;\n> TimestampTz end_time;\n> \n> end_time = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), msec);\n> \n> do\n> {\n> long secs;\n> int microsecs;\n> \n> TimestampDifference(GetCurrentTimestamp(), end_time,\n> &secs, &microsecs);\n> \n> delay.tv_sec = secs;\n> delay.tv_nsec = microsecs * 1000;\n> \n> } while (nanosleep(&delay, NULL) == -1 && errno == EINTR);\n> \n\nI do agree that this is cleaner code, but I am not sure I like this.\n\n\n1/ TimestampDifference has a dependency on gettimeofday, \nwhile my proposal utilizes clock_gettime. There are old discussions\nthat did not reach a conclusion comparing both mechanisms. \nMy main conclusion from these hacker discussions [1], [2] and other \nonline discussions on the topic is clock_gettime should replace\ngetimeofday when possible. Precision is the main reason.\n\n2/ It no longer uses the remain time. I think the remain time\nis still required here. I did a unrealistic stress test which shows \nthe original proposal can handle frequent interruptions much better.\n\n#1 in one session kicked off a vacuum\n\n set vacuum_cost_delay = 10;\n set vacuum_cost_limit = 1;\n set client_min_messages = log;\n update large_tbl set version = 1;\n vacuum (verbose, parallel 4) large_tbl;\n\n#2 in another session, ran a loop to continually\ninterrupt the vacuum leader. 
This was during the\n“heap scan” phase of the vacuum.\n\nPID=< pid of vacuum leader >\nwhile :\ndo\n kill -USR1 $PID\ndone\n\n\nUsing the proposed loop with the remainder, I noticed that\nthe actual time reported remains close to the requested\ndelay time.\n\nLOG: 10.000000,10.013420\nLOG: 10.000000,10.011188\nLOG: 10.000000,10.010860\nLOG: 10.000000,10.014839\nLOG: 10.000000,10.004542\nLOG: 10.000000,10.006035\nLOG: 10.000000,10.012230\nLOG: 10.000000,10.014535\nLOG: 10.000000,10.009645\nLOG: 10.000000,10.000817\nLOG: 10.000000,10.002162\nLOG: 10.000000,10.011721\nLOG: 10.000000,10.011655\n\nUsing the approach mentioned by Nathan, there\nare large differences between requested and actual time.\n\nLOG: 10.000000,17.801778\nLOG: 10.000000,12.795450\nLOG: 10.000000,11.793723\nLOG: 10.000000,11.796317\nLOG: 10.000000,13.785993\nLOG: 10.000000,11.803775\nLOG: 10.000000,15.782767\nLOG: 10.000000,31.783901\nLOG: 10.000000,19.792440\nLOG: 10.000000,21.795795\nLOG: 10.000000,18.800412\nLOG: 10.000000,16.782886\nLOG: 10.000000,10.795197\nLOG: 10.000000,14.793333\nLOG: 10.000000,29.806556\nLOG: 10.000000,18.810784\nLOG: 10.000000,11.804956\nLOG: 10.000000,24.809812\nLOG: 10.000000,25.815600\nLOG: 10.000000,22.809493\nLOG: 10.000000,22.790908\nLOG: 10.000000,19.699097\nLOG: 10.000000,23.795613\nLOG: 10.000000,24.797078\n\nLet me know what you think?\n\n[1] https://www.postgresql.org/message-id/flat/31856.1400021891%40sss.pgh.pa.us\n[2] https://www.postgresql.org/message-id/flat/E1cO7fR-0003y0-9E%40gemulon.postgresql.org\n\n\n\nRegards,\n\nSami \n\n\n", "msg_date": "Fri, 12 Jul 2024 12:14:56 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "On Fri, Jul 12, 2024 at 12:14:56PM -0500, Sami Imseih wrote:\n> 1/ TimestampDifference has a dependency on gettimeofday, \n> while my proposal utilizes clock_gettime. There are old discussions\n> that did not reach a conclusion comparing both mechanisms. \n> My main conclusion from these hacker discussions [1], [2] and other \n> online discussions on the topic is clock_gettime should replace\n> getimeofday when possible. Precision is the main reason.\n> \n> 2/ It no longer uses the remain time. I think the remain time\n> is still required here. I did a unrealistic stress test which shows \n> the original proposal can handle frequent interruptions much better.\n\nMy comment was mostly about coding style and not about gettimeofday()\nversus clock_gettime(). 
What does your testing show when you don't have\nthe extra check, i.e.,\n\n\tstruct timespec delay;\n\tstruct timespec remain;\n\n\tdelay.tv_sec = microsec / 1000000L;\n\tdelay.tv_nsec = (microsec % 1000000L) * 1000;\n\n\twhile (nanosleep(&delay, &remain) == -1 && errno == EINTR)\n\t\tdelay = remain;\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 12 Jul 2024 14:18:37 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "> What does your testing show when you don't have\n> the extra check, i.e.,\n> \n> \tstruct timespec delay;\n> \tstruct timespec remain;\n> \n> \tdelay.tv_sec = microsec / 1000000L;\n> \tdelay.tv_nsec = (microsec % 1000000L) * 1000;\n> \n> \twhile (nanosleep(&delay, &remain) == -1 && errno == EINTR)\n> \t\tdelay = remain;\n> \n\n\nThis is similar to the first attempt [1], \n\n+pg_usleep_handle_interrupt(long microsec, bool force)\n {\n if (microsec > 0)\n {\n #ifndef WIN32\n struct timespec delay;\n+ struct timespec remaining;\n \n delay.tv_sec = microsec / 1000000L;\n delay.tv_nsec = (microsec % 1000000L) * 1000;\n- (void) nanosleep(&delay, NULL);\n+\n+ if (force)\n+ while (nanosleep(&delay, &remaining) == -1 && errno == EINTR)\n+ delay = remaining;\n\n\nbut Bertrand found long drifts [2] which I could not reproduce.\nTo safeguard the long drifts, continue to use the &remain time with an \nadditional safeguard to make sure the actual sleep does not exceed the \nrequested sleep time.\n\n[1] https://www.postgresql.org/message-id/7D50DC5B-80C6-47B5-8DA8-A6C68A115EE5%40gmail.com\n[2] https://www.postgresql.org/message-id/ZoPC5IeP4k7sZpki%40ip-10-97-1-34.eu-west-3.compute.internal\n\n\nRegards,\n\nSami \n\n\n", "msg_date": "Fri, 12 Jul 2024 15:39:57 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "On Fri, Jul 12, 2024 at 03:39:57PM -0500, Sami Imseih wrote:\n> but Bertrand found long drifts [2] which I could not reproduce.\n> To safeguard the long drifts, continue to use the &remain time with an \n> additional safeguard to make sure the actual sleep does not exceed the \n> requested sleep time.\n\nBertrand, does this safeguard fix the long drifts you saw?\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 15 Jul 2024 12:20:29 -0500", 
"msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Hi,\n\nOn Mon, Jul 15, 2024 at 12:20:29PM -0500, Nathan Bossart wrote:\n> On Fri, Jul 12, 2024 at 03:39:57PM -0500, Sami Imseih wrote:\n> > but Bertrand found long drifts [2[ which I could not reproduce.\n> > To safeguard the long drifts, continue to use the &remain time with an \n> > additional safeguard to make sure the actual sleep does not exceed the \n> > requested sleep time.\n> \n> Bertrand, does this safeguard fix the long drifts you saw?\n\nYeah, it was the case with the first version using the safeguard (see [1]) and\nit's also the case with the last one shared in [2].\n\n[1]: https://www.postgresql.org/message-id/Zo0UdeE3i9d0Wt5E%40ip-10-97-1-34.eu-west-3.compute.internal\n[2]: https://www.postgresql.org/message-id/C18017A1-EDFD-4F2F-BCA1-0572D4CCC92B%40gmail.com \n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 16 Jul 2024 05:06:47 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "I am attaching v3 of the patch which addresses the comments made\nearlier by Bertrand about the comment in the patch [1]. Also I will stick with\nvacuum_sleep as the name as the function will be inside vacuum.c. I am not\nsure we should make this function available outside of vacuum, but I would like\nto hear other thoughts.\n\nAlso, earlier in the thread, Alvaro mentions what happens\nif the sleep time is 0 [2]. In that case, we do not do anything as we check\nif sleep time is > 0 microseconds before proceeding with the sleep\n\n\n[1] https://www.postgresql.org/message-id/ZpDhS4nFX66ItAze%40ip-10-97-1-34.eu-west-3.compute.internal\n[2] https://www.postgresql.org/message-id/202407120939.pr6wpjffmxov%40alvherre.pgsql\n\n\nRegards,\n\nSami\n\n", "msg_date": "Thu, 25 Jul 2024 17:27:15 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "attaching the patch again. 
Something is strange with my email client.\n\nRegards,\n\nSami", "msg_date": "Thu, 25 Jul 2024 17:29:06 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Hi,\n\nOn Thu, Jul 25, 2024 at 05:27:15PM -0500, Sami Imseih wrote:\n> I am attaching v3 of the patch which addresses the comments made\n> earlier by Bertrand about the comment in the patch [1].\n\nThanks!\n\nLooking at it:\n\n1 ===\n\n+ struct instr_time start_time;\n\nI think we can get rid of the \"struct\" keyword here.\n\n2 ===\n\n+ struct instr_time current_time;\n+ struct instr_time elapsed_time;\n\nSame as above.\n\n3 ===\n\nI gave more thoughts and I think it can be simplified a bit to reduce the\nnumber of operations in the while loop.\n\nWhat about relying on a \"absolute\" time that way:\n\n\tinstr_time absolute;\n absolute.ticks = start_time.ticks + msec * 1000000;\n\nand then in the while loop:\n\n while (nanosleep(&delay, &remain) == -1 && errno == EINTR)\n {\n instr_time current_time;\n INSTR_TIME_SET_CURRENT(current_time);\n\n if (current_time.ticks > absolute.ticks)\n {\n break;\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 26 Jul 2024 08:27:25 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "\n\n> On Jul 26, 2024, at 3:27 AM, Bertrand Drouvot <[email protected]> wrote:\n> \n> Hi,\n> \n> On Thu, Jul 25, 2024 at 05:27:15PM -0500, Sami Imseih wrote:\n>> I am attaching v3 of the patch which addresses the comments made\n>> earlier by Bertrand about the comment in the patch [1].\n> \n> Thanks!\n> \n> Looking at it:\n> \n> 1 ===\n> \n> + struct instr_time start_time;\n> \n> I think we can get rid of the \"struct\" keyword here.\n> \n> 2 ===\n> \n> + struct instr_time current_time;\n> + struct instr_time elapsed_time;\n> \n> Same as above.\n\nWill fix those 2.\n\n> \n> 3 ===\n> \n> I gave more thoughts and I think it can be simplified a bit to reduce the\n> number of operations in the while loop.\n> \n> What about relying on a \"absolute\" time that way:\n> \n> \tinstr_time absolute;\n> absolute.ticks = start_time.ticks + msec * 1000000;\n> \n> and then in the while loop:\n> \n> while (nanosleep(&delay, &remain) == -1 && errno == EINTR)\n> {\n> instr_time current_time;\n> INSTR_TIME_SET_CURRENT(current_time);\n> \n> if (current_time.ticks > absolute.ticks)\n> {\n> break;\n\nWhile I agree this code is cleaner, myy hesitation there is we don’t \nhave any other place in which we access .ticks directly and the \ncommon practice is to use the intsr_time.h APIs.\n\n\nWhat do you think?\n\n\nRegards,\n\nSami \n\n\n\n\n", "msg_date": "Mon, 29 Jul 2024 18:15:49 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Hi,\n\nOn Mon, Jul 29, 2024 at 06:15:49PM -0500, Sami Imseih wrote:\n> > On Jul 26, 2024, at 3:27 AM, Bertrand Drouvot <[email protected]> wrote:\n> > 3 ===\n> > \n> > I gave more thoughts and I think it can be simplified a bit to reduce the\n> > number of operations in the while loop.\n> > \n> > What about relying on a \"absolute\" time that way:\n> > \n> > \tinstr_time absolute;\n> > absolute.ticks = start_time.ticks + msec * 1000000;\n> > \n> > and then in the while loop:\n> > \n> > while 
(nanosleep(&delay, &remain) == -1 && errno == EINTR)\n> > {\n> > instr_time current_time;\n> > INSTR_TIME_SET_CURRENT(current_time);\n> > \n> > if (current_time.ticks > absolute.ticks)\n> > {\n> > break;\n> \n> While I agree this code is cleaner, myy hesitation there is we don’t \n> have any other place in which we access .ticks directly and the \n> common practice is to use the intsr_time.h APIs.\n\nyeah, we already have a few macros that access the .ticks, so maybe we could add\n2 new ones, say:\n\n1. INSTR_TIME_ADD_MS(t1, msec)\n2. INSTR_TIME_IS_GREATER(t1, t2)\n\nI think the less operations is done in the while loop the better.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 30 Jul 2024 06:31:34 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "> yeah, we already have a few macros that access the .ticks, so maybe we could add\n> 2 new ones, say:\n> \n> 1. INSTR_TIME_ADD_MS(t1, msec)\n> 2. INSTR_TIME_IS_GREATER(t1, t2)\n> \n> I think the less operations is done in the while loop the better.\n> \n\nSee v4. it includes 2 new instr_time.h macros to simplify the \ncode insidethe while loop.\n\nRegards,\n\nSami", "msg_date": "Mon, 5 Aug 2024 15:07:34 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Hi,\n\nOn Mon, Aug 05, 2024 at 03:07:34PM -0500, Sami Imseih wrote:\n\n> \n> > yeah, we already have a few macros that access the .ticks, so maybe we could add\n> > 2 new ones, say:\n> > \n> > 1. INSTR_TIME_ADD_MS(t1, msec)\n> > 2. INSTR_TIME_IS_GREATER(t1, t2)\n> > \n> > I think the less operations is done in the while loop the better.\n> > \n> \n> See v4. it includes 2 new instr_time.h macros to simplify the \n> code insidethe while loop.\n\nThanks!\n\n1 ===\n\n+#define INSTR_TIME_IS_GREATER(x,y) \\\n+ ((bool) (x).ticks > (y).ticks)\n\nAround parentheses are missing, that should be ((bool) ((x).ticks > (y).ticks)).\nI did not pay attention to it initially but found it was the culprit of breaks\nnot occuring (while my test case produces some).\n\nThat said, I don't think the cast is necessary here and that we could get rid of\nit. \n\n2 ===\n\nWhat about providing a quick comment about the 2 new macros in header of \ninstr_time.h? 
(like it is done for the others macros)\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 6 Aug 2024 09:00:03 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "v5 takes care of the latest comments by Bertrand.\n\nRegards,\n\nSami", "msg_date": "Tue, 6 Aug 2024 12:36:44 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Hi,\n\nOn Tue, Aug 06, 2024 at 12:36:44PM -0500, Sami Imseih wrote:\n\n> \n> \n> v5 takes care of the latest comments by Bertrand.\n> \n\nThanks!\n\nPlease find attached a rebase version (due to 39a138fbef) and in passing I changed\none comment:\n\n\"add t in microseconds to a instr_time\" -> \"add t (in microseconds) to x\"\n\nDoes that make sense to you?\n\nThat said, it looks good to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 7 Aug 2024 06:00:53 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "> On Aug 7, 2024, at 1:00 AM, Bertrand Drouvot <[email protected]> wrote:\n> \n> add t (in microseconds) to x”\n\n\nI was attempting to be more verbose in the comment,\nbut what you have above matches the format of\nthe other comments. I am ok with your revision. \n\nRegards.\n\nSami \n\n\nOn Aug 7, 2024, at 1:00 AM, Bertrand Drouvot <[email protected]> wrote:add t (in microseconds) to x”I was attempting to be more verbose in the comment,but what you have above matches the format ofthe other comments. I am ok with your revision. Regards.Sami", "msg_date": "Wed, 7 Aug 2024 09:11:19 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Hi,\n\nOn Wed, Aug 07, 2024 at 09:11:19AM -0500, Sami Imseih wrote:\n> \n> \n> > On Aug 7, 2024, at 1:00 AM, Bertrand Drouvot <[email protected]> wrote:\n> > \n> > add t (in microseconds) to x”\n> \n> \n> I was attempting to be more verbose in the comment,\n> but what you have above matches the format of\n> the other comments. I am ok with your revision. \n> \n\nThanks!\n\nI'm marking the CF entry [1], as RFC.\n\n[1]: https://commitfest.postgresql.org/49/5161/\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 7 Aug 2024 14:23:41 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "On Wed, Aug 07, 2024 at 06:00:53AM +0000, Bertrand Drouvot wrote:\n> +\t\tSleepEx((microsec < 500 ? 1 : (microsec + 500) / 1000), FALSE);\n\nI think this deserves a comment.\n\n> +#define INSTR_TIME_ADD_MICROSEC(x,t) \\\n> +\t((x).ticks += t * NS_PER_US)\n\nI'd add parentheses around \"t\" to ensure any expressions given as \"t\" are\nevaluated first.\n\nAlso, do we need to worry about overflow here? 
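
For illustration, the parenthesized form being requested could look like this (a sketch only; NS_PER_US is the constant the quoted macro already relies on, and whether the multiplication also needs an explicit widening cast is part of the overflow question just raised):

	#define INSTR_TIME_ADD_MICROSEC(x,t) \
		((x).ticks += (t) * NS_PER_US)
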
It looks like the rest of\ninstr_time.h is oblivious about overflow, so maybe this is better discussed\nin a separate thread...\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 7 Aug 2024 09:36:59 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "> \n> On Wed, Aug 07, 2024 at 06:00:53AM +0000, Bertrand Drouvot wrote:\n>> +\t\tSleepEx((microsec < 500 ? 1 : (microsec + 500) / 1000), FALSE);\n> \n> I think this deserves a comment.\n> \n\nDone\n\n>> +#define INSTR_TIME_ADD_MICROSEC(x,t) \\\n>> +\t((x).ticks += t * NS_PER_US)\n> \n> I'd add parentheses around \"t\" to ensure any expressions given as \"t\" are\n> evaluated first.\n> \n\nDone\n\n\n> Also, do we need to worry about overflow here? It looks like the rest of\n> instr_time.h is oblivious about overflow, so maybe this is better discussed\n> in a separate thread...\n> \n\nI agree, this needs to be handled in a different thread.\n\nPlease see v7.\n\n\nRegards,\n\nSami", "msg_date": "Wed, 7 Aug 2024 16:22:22 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Hi,\n\nOn Wed, Aug 07, 2024 at 09:36:59AM -0500, Nathan Bossart wrote:\n> Also, do we need to worry about overflow here? It looks like the rest of\n> instr_time.h is oblivious about overflow, so maybe this is better discussed\n> in a separate thread...\n\nYeah, a separate thread would be better.\n\nFWIW and just out of curiosity:\n\n1. it seems to me that most of the time (always?) we are manipulating instr_time(s)\nas interval(s) which (with int64) gives “space” for about 292 years interval time\nmeasurement (if my maths are correct).\n\n2. for \"absolute\" manipulation (if any) it would depend of the PG_INSTR_CLOCK.\n\nA \"man clock_gettime\" mentions:\n\n 2.1 CLOCK_MONOTONIC: on Linux, time since the system was booted. Not sure what\nthe longest Linux uptime record is but can't be more than since the 90's.\n\n 2.2 CLOCK_REALTIME: Its time represents seconds and nanoseconds since the Epoch.\nIt means that we’re currently about 237 years away from the limit. So even,\nif we were to say add 2 \"recents\" of them we are still about 183 years away from\nthe limit.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 8 Aug 2024 06:49:27 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "On Thu, Aug 08, 2024 at 06:49:27AM +0000, Bertrand Drouvot wrote:\n> 2.2 CLOCK_REALTIME: Its time represents seconds and nanoseconds since the Epoch.\n> It means that we�re currently about 237 years away from the limit. So even,\n> if we were to say add 2 \"recents\" of them we are still about 183 years away from\n> the limit.\n\nI hope to be retired before then.\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 8 Aug 2024 14:52:07 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "On Wed, Aug 07, 2024 at 04:22:22PM -0500, Sami Imseih wrote:\n> Please see v7.\n\nThanks. 
This one looks pretty good to me, and so I plan to commit it in\nthe near future unless anyone voices concerns about the approach.\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 8 Aug 2024 15:01:20 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "On Thu, Aug 08, 2024 at 03:01:20PM -0500, Nathan Bossart wrote:\n> Thanks. This one looks pretty good to me, and so I plan to commit it in\n> the near future unless anyone voices concerns about the approach.\n\nAs I am preparing this for commit, I'm wondering whether it makes sense to\nname the new function vacuum_sleep() and keep it private to vacuum.c.\nNothing about this function is terribly specific to vacuum, and it's not\ninconceivable that it might be useful elsewhere. Perhaps we should move it\nto pgsleep.c and rename it to something to the effect of\npg_usleep_non_interruptable().\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 9 Aug 2024 14:03:55 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Hi,\n\nOn Fri, Aug 09, 2024 at 02:03:55PM -0500, Nathan Bossart wrote:\n> On Thu, Aug 08, 2024 at 03:01:20PM -0500, Nathan Bossart wrote:\n> > Thanks. This one looks pretty good to me, and so I plan to commit it in\n> > the near future unless anyone voices concerns about the approach.\n> \n> As I am preparing this for commit, I'm wondering whether it makes sense to\n> name the new function vacuum_sleep() and keep it private to vacuum.c.\n> Nothing about this function is terribly specific to vacuum, and it's not\n> inconceivable that it might be useful elsewhere. Perhaps we should move it\n> to pgsleep.c and rename it to something to the effect of\n> pg_usleep_non_interruptable().\n\nYeah, I had the same thought in [1], so +1.\n\n[1]: https://www.postgresql.org/message-id/ZpDhS4nFX66ItAze%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 9 Aug 2024 19:26:46 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "> \n> Yeah, I had the same thought in [1], so +1.\n> \n> [1]: https://www.postgresql.org/message-id/ZpDhS4nFX66ItAze%40ip-10-97-1-34.eu-west-3.compute.internal\n> \n\nThe intention ( see start of the thread ) was to make this a general API,\nbut I was not sure if there are use cases outside of vacuum.c. 
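
A minimal sketch of what such a general API could look like on the non-Windows and Windows paths, pieced together from snippets quoted elsewhere in this thread (the exact name was still being settled at this point, and the patch itself also adds an instr_time-based cap on total elapsed time to guard against drift, which is omitted below; assume the usual includes already used by pgsleep.c):

	void
	pg_usleep_non_interruptible(long microsec)
	{
		if (microsec > 0)
		{
	#ifndef WIN32
			struct timespec delay;
			struct timespec remain;

			delay.tv_sec = microsec / 1000000L;
			delay.tv_nsec = (microsec % 1000000L) * 1000;

			/* restart with the remaining time whenever a signal interrupts the sleep */
			while (nanosleep(&delay, &remain) == -1 && errno == EINTR)
				delay = remain;
	#else
			/* round to milliseconds, sleeping at least 1 ms; per the discussion, SleepEx() is effectively non-interruptible here */
			SleepEx((microsec < 500 ? 1 : (microsec + 500) / 1000), FALSE);
	#endif
		}
	}
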
\n\nIn v8, I moved the function to pgsleep.c/signals.c and called it pg_usleep_non_interruptible.\nThe function, unlike vacuum_sleep, will take in micros as an argument to match with pg_usleep.\n\nRegards\n\nSami", "msg_date": "Fri, 9 Aug 2024 19:01:55 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "v9 has some has some minor corrections to the comments.\n\n\nRegards,\n\nSami", "msg_date": "Sat, 10 Aug 2024 08:27:56 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Hi,\n\nOn Sat, Aug 10, 2024 at 08:27:56AM -0500, Sami Imseih wrote:\n\n> \n> \n> v9 has some has some minor corrections to the comments.\n> \n\nThanks!\n\n1 ===\n\n+ * Unlike pg_usleep, This function continues\n\ns/This/this/?\n\nApart from the above, LGTM.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 12 Aug 2024 05:24:49 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "> \n> + * Unlike pg_usleep, This function continues\n> \n> s/This/this/?\n> \n> Apart from the above, LGTM.\n> \n\n\n\n\n\nThanks for this catch. Uploaded v10.\n\nRegards,\n\nSami", "msg_date": "Mon, 12 Aug 2024 10:13:38 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "My email client attached the last response for\nsome reason :( \n\nv10 attached in the previous message addresses \nBertrands last comment and replaces “This” with “this\"\n\nRegards,\n\nSami \n\n", "msg_date": "Mon, 12 Aug 2024 10:19:56 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Hi,\n\nOn Mon, Aug 12, 2024 at 10:19:56AM -0500, Sami Imseih wrote:\n> v10 attached in the previous message addresses \n> Bertrands last comment and replaces “This” with “this\"\n> \n\nThanks, v10 LGTM.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 12 Aug 2024 15:56:18 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "On Mon, Aug 12, 2024 at 03:56:18PM +0000, Bertrand Drouvot wrote:\n> Thanks, v10 LGTM.\n\nAs cfbot points out [0], this is missing a Windows/frontend implementation.\n\n[0] https://cirrus-ci.com/task/6393555868975104\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 12 Aug 2024 14:50:21 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "> As cfbot points out [0], this is missing a Windows/frontend implementation.\n>\n> [0] https://cirrus-ci.com/task/6393555868975104\n\nLooks like the pgsleep implementation is missing the\n#ifndef WIN32\n\nI followed what is done in pg_usleep.\n\nv11 should now build on Windows, hopefully.\n\nStrangely, v10 build on a Windows machine I have locally.\n\n\nRegards,\n\nSami", "msg_date": "Mon, 12 Aug 2024 15:51:35 -0500", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, 
"msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "I'm trying to understand what the point of this patch is, so I went to \nread this thread from the beginning:\n\n> In the proposal by Bertrand [1] to implement vacuum cost delay tracking\n> in pg_stat_progress_vacuum, it was discovered that the vacuum cost delay\n> ends early on the leader process of a parallel vacuum due to parallel workers > reporting progress on index vacuuming, which was introduced in 17\n> with commit [2]. With this patch, everytime a parallel worker\n> completes a vacuum index, it will send a completion message to the leader.\n\nOk, so we might sometimes skip the sleep, if an interrupt is received. I \nagree that's a bit sloppy, but probably won't make any difference in \npractice.\n\n> The facility that allows a parallel worker to report progress to the leader was\n> introduced in commit [3].\n>\n> In the case of the patch being proposed by Bertrand, the number of interrupts > will be much more frequent as parallel workers would send a message \nto the leader\n> to update the vacuum delay counters every vacuum_delay_point call.\n\nHmm, I wonder if that's a good design, if it results in a lot of interrupts.\n\nOn the patch itself: Making the sleeps in vacuum uninterruptible means \nthat vacuum will be more slow to respond to interrupts. If you SIGTERM a \nvacuum process, or hit CTRL-C, you *would* want to exit the sleep ASAP.\n\nTom raised that concern earlier in this thread \n(https://www.postgresql.org/message-id/2100439.1719610468%40sss.pgh.pa.us), \nbut it seems the discussion wandered off to the details of how to do the \nsleep, and left that unaddressed.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 13 Aug 2024 00:04:28 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "On Tue, Aug 13, 2024 at 12:04:28AM +0300, Heikki Linnakangas wrote:\n>> In the case of the patch being proposed by Bertrand, the number of\n>> interrupts > will be much more frequent as parallel workers would send a\n>> message\n> to the leader\n>> to update the vacuum delay counters every vacuum_delay_point call.\n> \n> Hmm, I wonder if that's a good design, if it results in a lot of interrupts.\n\nSkimming the last few messages of that thread [0], it looks like Bertrand\nis exploring ways to avoid so many interrupts. I guess the unavoidable\nquestion is whether this work is still worthwhile given that improvement.\n\n> On the patch itself: Making the sleeps in vacuum uninterruptible means that\n> vacuum will be more slow to respond to interrupts. If you SIGTERM a vacuum\n> process, or hit CTRL-C, you *would* want to exit the sleep ASAP.\n\nSince the delay will typically be pretty small (2 milliseconds by default\nfor autovacuum), I'm assuming this won't ordinarily be noticeable. But I\ndo think it is an important consideration.\n\n[0] https://commitfest.postgresql.org/49/5027/\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 12 Aug 2024 17:04:02 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "\n> Skimming the last few messages of that thread [0], it looks like Bertrand\n> is exploring ways to avoid so many interrupts. 
I guess the unavoidable\n> question is whether this work is still worthwhile given that improvement.\nThe way the instrumentation in [0] dealt with interrupts was too complex,\nwhich is why it seemed better to handle the restart the remainder of the\nsleep in the sleep function\n>> On the patch itself: Making the sleeps in vacuum uninterruptible means that\n>> vacuum will be more slow to respond to interrupts. If you SIGTERM a vacuum\n>> process, or hit CTRL-C, you *would* want to exit the sleep ASAP.\n> Since the delay will typically be pretty small (2 milliseconds by default\n> for autovacuum), I'm assuming this won't ordinarily be noticeable. But I\n> do think it is an important consideration.\n>\nAt most, the sleep will be 100ms for vacuum.\n\n>\n> Tom raised that concern earlier in this thread \n> (https://www.postgresql.org/message-id/2100439.1719610468%40sss.pgh.pa.us), \n> but it seems the discussion wandered off to the details of how to do \n> the sleep, and left that unaddressed.\n>\nDoing something like pg_sleep, using WaitLatch [1], was explored. \nHowever this\ndoes not support microsecond sleeps which was allowed in 720de00af49\nThomas shared WaitLatchUs [2], which supports higher precision sleeps, but\nit requires epoll_pwait(2) on the system, thus it's not very portable.\nUsing nanosleep with remain time and checking for drift was the most \nportable\napproach.\n\nRegards,\n\nSami\n\n[0] https://commitfest.postgresql.org/49/5027/\n[1] https://www.postgresql.org/message-id/67072E39-3B4E-4240-8373-AC45E23721E7%40gmail.com\n[2] https://www.postgresql.org/message-id/CA+hUKGKVbJE59JkwnUj5XMY+-rzcTFciV9vVC7i=LUfWPds8Xw@mail.gmail.com\n\n\n\n\n", "msg_date": "Mon, 12 Aug 2024 17:35:08 -0500", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Hi,\n\nOn Mon, Aug 12, 2024 at 05:04:02PM -0500, Nathan Bossart wrote:\n> On Tue, Aug 13, 2024 at 12:04:28AM +0300, Heikki Linnakangas wrote:\n> >> In the case of the patch being proposed by Bertrand, the number of\n> >> interrupts > will be much more frequent as parallel workers would send a\n> >> message\n> > to the leader\n> >> to update the vacuum delay counters every vacuum_delay_point call.\n> > \n> > Hmm, I wonder if that's a good design, if it results in a lot of interrupts.\n> \n> Skimming the last few messages of that thread [0], it looks like Bertrand\n> is exploring ways to avoid so many interrupts.\n\nYeah, that was mainly to avoid the side effect of the interrupts making the \nvacuum faster as compared to the master branch (as in [0], the leader is not\nhonouring the delays when the parallel workers report their delayed time): that\ncould be noticeable depending of the amount of work and the number of parallel\nworkers involved in the vacuum.\n\nThat could be solved thanks to this thread. Once this thread is over (and whatever\nthe outcome is), I'll resume my testing as far the cost delay report is concerned.\n\n> I guess the unavoidable\n> question is whether this work is still worthwhile given that improvement.\n\nThe delay not being honored already affects the vacuum since we allow a parallel\nworker to report progress to the leader (see [1]). 
The interrupts are far less\nfrequent (as compare to [1]) though.\n\n[0]: https://commitfest.postgresql.org/49/5027/\n[1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=f1889729dd\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 13 Aug 2024 05:41:16 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "On Mon, Aug 12, 2024 at 05:35:08PM -0500, Imseih (AWS), Sami wrote:\n>> Skimming the last few messages of that thread [0], it looks like Bertrand\n>> is exploring ways to avoid so many interrupts. I guess the unavoidable\n>> question is whether this work is still worthwhile given that improvement.\n>\n> The way the instrumentation in [0] dealt with interrupts was too complex,\n> which is why it seemed better to handle the restart the remainder of the\n> sleep in the sleep function\n\nCan you elaborate on how it is too complex?\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 13 Aug 2024 10:09:57 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "\nOn 8/13/24 10:09 AM, Nathan Bossart wrote:\n> On Mon, Aug 12, 2024 at 05:35:08PM -0500, Imseih (AWS), Sami wrote:\n>>> Skimming the last few messages of that thread [0], it looks like Bertrand\n>>> is exploring ways to avoid so many interrupts. I guess the unavoidable\n>>> question is whether this work is still worthwhile given that improvement.\n>> The way the instrumentation in [0] dealt with interrupts was too complex,\n>> which is why it seemed better to handle the restart the remainder of the\n>> sleep in the sleep function\n> Can you elaborate on how it is too complex?\n>\n[0] made vacuum_delay_point more complex as it has to\ninstrument cost_delay at an interval to reduce the number\nof interrupts to the leader.\n\nOn the other hand, with allowing the sleep to deal with\ninterrupts,no additional logic to space out instrumentation\nis required.\n\n\nRegards,\n\nSami\n\n\n[0] https://commitfest.postgresql.org/49/5027/\n\n\n", "msg_date": "Tue, 13 Aug 2024 10:47:51 -0500", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "On Tue, Aug 13, 2024 at 10:47:51AM -0500, Imseih (AWS), Sami wrote:\n> On 8/13/24 10:09 AM, Nathan Bossart wrote:\n>> On Mon, Aug 12, 2024 at 05:35:08PM -0500, Imseih (AWS), Sami wrote:\n>> > > Skimming the last few messages of that thread [0], it looks like Bertrand\n>> > > is exploring ways to avoid so many interrupts. I guess the unavoidable\n>> > > question is whether this work is still worthwhile given that improvement.\n>> > The way the instrumentation in [0] dealt with interrupts was too complex,\n>> > which is why it seemed better to handle the restart the remainder of the\n>> > sleep in the sleep function\n>> Can you elaborate on how it is too complex?\n>> \n> [0] made vacuum_delay_point more complex as it has to\n> instrument cost_delay at an interval to reduce the number\n> of interrupts to the leader.\n\nSure, but looking at the patch [0], it adds maybe an extra 10 lines of code\nto limit the reports to 1 Hz. 
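
For context, that 1 Hz limit amounts to something like the following inside vacuum_delay_point() (a sketch; the progress-column and counter names are illustrative rather than taken verbatim from that patch):

	static instr_time last_report;
	instr_time	now,
				elapsed;

	INSTR_TIME_SET_CURRENT(now);
	elapsed = now;
	INSTR_TIME_SUBTRACT(elapsed, last_report);

	if (INSTR_TIME_GET_MILLISEC(elapsed) >= 1000)
	{
		/* parallel workers push their accumulated delay to the leader at most once per second */
		pgstat_progress_parallel_incr_param(PROGRESS_VACUUM_TIME_DELAYED,
											delayed_usec);
		delayed_usec = 0;
		last_report = now;
	}
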
That doesn't strike me as too complex...\n\n[0] https://postgr.es/m/ZnlPZZZJCRu/8fka%40ip-10-97-1-34.eu-west-3.compute.internal\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 13 Aug 2024 10:57:24 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "\nOn 8/13/24 10:57 AM, Nathan Bossart wrote:\n> On Tue, Aug 13, 2024 at 10:47:51AM -0500, Imseih (AWS), Sami wrote:\n>> On 8/13/24 10:09 AM, Nathan Bossart wrote:\n>>> On Mon, Aug 12, 2024 at 05:35:08PM -0500, Imseih (AWS), Sami wrote:\n>>>>> Skimming the last few messages of that thread [0], it looks like Bertrand\n>>>>> is exploring ways to avoid so many interrupts. I guess the unavoidable\n>>>>> question is whether this work is still worthwhile given that improvement.\n>>>> The way the instrumentation in [0] dealt with interrupts was too complex,\n>>>> which is why it seemed better to handle the restart the remainder of the\n>>>> sleep in the sleep function\n>>> Can you elaborate on how it is too complex?\n>>>\n>> [0] made vacuum_delay_point more complex as it has to\n>> instrument cost_delay at an interval to reduce the number\n>> of interrupts to the leader.\n> Sure, but looking at the patch [0], it adds maybe an extra 10 lines of code\n> to limit the reports to 1 Hz. That doesn't strike me as too complex...\n>\n> [0] https://postgr.es/m/ZnlPZZZJCRu/8fka%40ip-10-97-1-34.eu-west-3.compute.internal\nPerhaps \"complex\" may not be the correct way to describe it.\n\nHaving to add special handling to space out instrumentation\ndirectly in vacuum_delay_point seems very odd to me. I don't\nthink vacuum_delay_point should have to worry about this.\n\nAlso,\n1/ what is an appropriate interval to collect these stats?\n2/ What if there are other callers in the future that wish\nto instrument parallel vacuum workers? they will need to implement\nsimilar logic.\n\nRegards,\n\nSami\n\n\n", "msg_date": "Tue, 13 Aug 2024 11:07:46 -0500", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Please disregards this point from the last reply:\n\n\n\"\"\"\n\n2/ What if there are other callers in the future that wish\nto instrument parallel vacuum workers? they will need to implement\nsimilar logic.\n\n\"\"\"\n\nI misspoke about this and this point does not matter since only\nvacuum sleep matters for this current discussion.\n\n\nRegards,\n\nSami\n\n\n", "msg_date": "Tue, 13 Aug 2024 11:15:56 -0500", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "On Tue, Aug 13, 2024 at 11:07:46AM -0500, Imseih (AWS), Sami wrote:\n> Having to add special handling to space out instrumentation\n> directly in vacuum_delay_point seems very odd to me. I don't\n> think vacuum_delay_point should have to worry about this.\n> \n> Also,\n> 1/ what is an appropriate interval to collect these stats?\n> 2/ What if there are other callers in the future that wish\n> to instrument parallel vacuum workers? they will need to implement\n> similar logic.\n\nNone of this seems intractable to me. 1 Hz seems like an entirely\nreasonable place to start, and it is very easy to change (or to even make\nconfigurable). pg_stat_progress_vacuum might show slightly old values in\nthis column, but that should be easy enough to explain in the docs if we\nare really concerned about it. 
If other callers want to do something\nsimilar, maybe we should add a more generic implementation in\nbackend_progress.c.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 13 Aug 2024 11:40:27 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "> None of this seems intractable to me. 1 Hz seems like an entirely\n> reasonable place to start, and it is very easy to change (or to even make\n> configurable). pg_stat_progress_vacuum might show slightly old values in\n> this column, but that should be easy enough to explain in the docs if we\n> are really concerned about it. If other callers want to do something\n> similar, maybe we should add a more generic implementation in\n> backend_progress.c.\n>\nI don't know if I agree. Making vacuum sleep more robust to handle\ninterrupts seems like a cleaner general solution than to add\neven more code to handle this case or have to explain the behavior of\ncost delay instrumentation in docs.\n\n\nRegards,\n\n\nSami\n\n\n", "msg_date": "Tue, 13 Aug 2024 13:12:30 -0500", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "On Tue, Aug 13, 2024 at 01:12:30PM -0500, Imseih (AWS), Sami wrote:\n>> None of this seems intractable to me. 1 Hz seems like an entirely\n>> reasonable place to start, and it is very easy to change (or to even make\n>> configurable). pg_stat_progress_vacuum might show slightly old values in\n>> this column, but that should be easy enough to explain in the docs if we\n>> are really concerned about it. If other callers want to do something\n>> similar, maybe we should add a more generic implementation in\n>> backend_progress.c.\n>> \n> I don't know if I agree. Making vacuum sleep more robust to handle\n> interrupts seems like a cleaner general solution than to add\n> even more code to handle this case or have to explain the behavior of\n> cost delay instrumentation in docs.\n\nAnother concern is the huge number of PqMsg_Progress messages sent by\nparallel workers with that approach. In Bertrand's tests, he was seeing\nnearly 350K interrupts for a ~19 minute vacuum (~300 interrupts per\nsecond). That seems a bit extreme to me. I don't see how anyone could\npossibly need stats about vacuum delays with that level of accuracy.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 13 Aug 2024 16:30:46 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "On Tue, Aug 13, 2024 at 4:30 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Tue, Aug 13, 2024 at 01:12:30PM -0500, Imseih (AWS), Sami wrote:\n> >> None of this seems intractable to me. 1 Hz seems like an entirely\n> >> reasonable place to start, and it is very easy to change (or to even\n> make\n> >> configurable). pg_stat_progress_vacuum might show slightly old values\n> in\n> >> this column, but that should be easy enough to explain in the docs if we\n> >> are really concerned about it. If other callers want to do something\n> >> similar, maybe we should add a more generic implementation in\n> >> backend_progress.c.\n> >>\n> > I don't know if I agree. 
Making vacuum sleep more robust to handle\n> > interrupts seems like a cleaner general solution than to add\n> > even more code to handle this case or have to explain the behavior of\n> > cost delay instrumentation in docs.\n>\n> Another concern is the huge number of PqMsg_Progress messages sent by\n> parallel workers with that approach. In Bertrand's tests, he was seeing\n> nearly 350K interrupts for a ~19 minute vacuum (~300 interrupts per\n> second). That seems a bit extreme to me. I don't see how anyone could\n> possibly need stats about vacuum delays with that level of accuracy.\n>\n> --\n> nathan\n\n\n> Fair point. If there is a clear benefit to spacing out the vacuum sleep\ndelay instrumentation, that could be taken up in that thread. This will\nreduce the interrupts, but not eliminate them.\n\nThere could still be benefit to ensure that vacuum sleeps can deal\nwith interrupts and sleep the requested time consistently.\n\nRegards,\n\nSami\n\nOn Tue, Aug 13, 2024 at 4:30 PM Nathan Bossart <[email protected]> wrote:On Tue, Aug 13, 2024 at 01:12:30PM -0500, Imseih (AWS), Sami wrote:\n>> None of this seems intractable to me.  1 Hz seems like an entirely\n>> reasonable place to start, and it is very easy to change (or to even make\n>> configurable).  pg_stat_progress_vacuum might show slightly old values in\n>> this column, but that should be easy enough to explain in the docs if we\n>> are really concerned about it.  If other callers want to do something\n>> similar, maybe we should add a more generic implementation in\n>> backend_progress.c.\n>> \n> I don't know if I agree. Making vacuum sleep more robust to handle\n> interrupts seems like a cleaner general solution than to add\n> even more code to handle this case or have to explain the behavior of\n> cost delay instrumentation in docs.\n\nAnother concern is the huge number of PqMsg_Progress messages sent by\nparallel workers with that approach.  In Bertrand's tests, he was seeing\nnearly 350K interrupts for a ~19 minute vacuum (~300 interrupts per\nsecond).  That seems a bit extreme to me.  I don't see how anyone could\npossibly need stats about vacuum delays with that level of accuracy.\n\n-- \nnathan\nFair point. If there is a clear benefit to spacing out the vacuum sleep delay instrumentation, that could be taken up in that thread. This willreduce the interrupts,  but not eliminate them. There could still be benefit to ensure that vacuum sleeps can deal with interrupts and sleep the requested time consistently. Regards,Sami", "msg_date": "Tue, 13 Aug 2024 18:18:22 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Hi,\n\nOn Tue, Aug 13, 2024 at 04:30:46PM -0500, Nathan Bossart wrote:\n> On Tue, Aug 13, 2024 at 01:12:30PM -0500, Imseih (AWS), Sami wrote:\n> >> None of this seems intractable to me. 1 Hz seems like an entirely\n> >> reasonable place to start, and it is very easy to change (or to even make\n> >> configurable). pg_stat_progress_vacuum might show slightly old values in\n> >> this column, but that should be easy enough to explain in the docs if we\n> >> are really concerned about it. If other callers want to do something\n> >> similar, maybe we should add a more generic implementation in\n> >> backend_progress.c.\n> >> \n> > I don't know if I agree. 
Making vacuum sleep more robust to handle\n> > interrupts seems like a cleaner general solution than to add\n> > even more code to handle this case or have to explain the behavior of\n> > cost delay instrumentation in docs.\n> \n> Another concern is the huge number of PqMsg_Progress messages sent by\n> parallel workers with that approach. In Bertrand's tests, he was seeing\n> nearly 350K interrupts for a ~19 minute vacuum (~300 interrupts per\n> second). That seems a bit extreme to me. I don't see how anyone could\n> possibly need stats about vacuum delays with that level of accuracy.\n> \n\nI gave it more thoughts and I don't think we have to choose between the two.\nThe 1 Hz approach reduces the number of interrupts and Sami's patch provides a\nway to get \"accurate\" delay in case of interrupts. I think both have their own\nbenefit. \n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 14 Aug 2024 06:00:06 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "On Wed, Aug 14, 2024 at 06:00:06AM +0000, Bertrand Drouvot wrote:\n> I gave it more thoughts and I don't think we have to choose between the two.\n> The 1 Hz approach reduces the number of interrupts and Sami's patch provides a\n> way to get \"accurate\" delay in case of interrupts. I think both have their own\n> benefit. \n\nIs it really that important to delay with that level of accuracy? In most\ncases, the chances of actually interrupting a given vacuum delay point are\npretty small. Even in the extreme scenario you tested with ~350K\ninterrupts in a 19 minute vacuum, you only saw a 10-15% difference in total\ntime. I wouldn't say I'm diametrically opposed to this patch, but I do\nthink we need to carefully consider whether it's worth the extra code.\n\nSeparately, I've been wondering whether it's worth allowing the sleep to be\ninterrupted in certain cases, such as SIGINT and SIGTERM. That should\naddress one of Heikki's points.\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 15 Aug 2024 16:13:29 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "> time. I wouldn't say I'm diametrically opposed to this patch, but I do\n> think we need to carefully consider whether it's worth the extra code.\n\nFWIW, besides the patch that Bertrand is proposing [1], there is another parallel \nvacuum case being discussed to allow for parallel heap scan [2].\n\nBeing able to support both instrumentation of sleep time by a parallel workers\nand ensuring that actual sleep times are as close as possible to the \nrequested times is a good think, IMO.\n\n> Separately, I've been wondering whether it's worth allowing the sleep to be\n> interrupted in certain cases, such as SIGINT and SIGTERM. That should\n> address one of Heikki's points.\n\nAn idea may be to check for pending interrupts inside the \npg_usleep_non_interruptible nanosleep loop. If there is a\npending interrupt and the interrupt is QueryCancelPending or\nClientConnectionLost, we can break out immediately.\n\nI am not sure yet how this can work for Windows, since for\nthis patch, we are using a simple SleepEx call which is \nnon-interruptible anyhow. 
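
Sketched out, that idea might look roughly like this on the non-Windows path (QueryCancelPending and ClientConnectionLost are the existing globals from miscadmin.h; whether ProcDiePending, i.e. SIGTERM, should be honored as well ties back to the concern Heikki and Nathan raise):

	while (nanosleep(&delay, &remain) == -1 && errno == EINTR)
	{
		/* bail out early only for the interrupts considered urgent */
		if (QueryCancelPending || ClientConnectionLost || ProcDiePending)
			break;

		delay = remain;
	}
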
\n\nIs it worth the effort and even more code to deal with specific\nInterrupts for such short sleeps ( less than 100ms for vacuum at most )?\n\nI am also thinking that pg_usleep_non_interruptuble routine should have\na cap on the sleep time allowed. That cap can be 100ms to match the \nmax vacuum_cost_delay. This will prevent anyone from trying to use\nthis API for much longer sleeps.\n\nWhat do you think?\n\n[1] https://www.postgresql.org/message-id/flat/ZmaXmWDL829fzAVX%40ip-10-97-1-34.eu-west-3.compute.internal\n[2] https://www.postgresql.org/message-id/CAD21AoAEfCNv-GgaDheDJ%2Bs-p_Lv1H24AiJeNoPGCmZNSwL1YA%40mail.gmail.com\n\n\nRegards,\n\nSami\n\n\n\n\n", "msg_date": "Sat, 17 Aug 2024 09:49:07 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "On Wed, Aug 14, 2024 at 9:30 AM Nathan Bossart <[email protected]> wrote:\n> Another concern is the huge number of PqMsg_Progress messages sent by\n> parallel workers with that approach. In Bertrand's tests, he was seeing\n> nearly 350K interrupts for a ~19 minute vacuum (~300 interrupts per\n> second). That seems a bit extreme to me. I don't see how anyone could\n> possibly need stats about vacuum delays with that level of accuracy.\n\nI suspect CF #5118 would fix lots of cases of ProcSignal() senders\ngoing berserk, because it deletes SendProcSignal(), and introduces\nSendInterrupt(), which calls SetLatch(), which doesn't send a signal if\nthe latch is already set. Even if the latch is not already set, it\nonly sends a signal if the latch is currently being waited on\n(\"maybe_sleeping\" flag). Even when it sends a signal, it goes to a\nsignalfd, kqueue or NT event flag on common platforms.\n\nOf course that is only talking about the receiving side. I'm sure we\ncan improve the senders too. There's nothing we can do about NOTIFY,\nbecause that's under user control, but that PqMsg_Progress case sounds\npretty bad, and the recovery conflict system could probably be made\nmore precise in its logic about who to wake up and when, etc.\n\nOther backends going bananas with SendProcSignal() is the reason\ndsm_impl_posix_resize() has to block signals while calling\nposix_fallocate(). Unlike nanosleep(), which you can fix by tracking\nremaining time, posix_fallocate() is all-or-nothing, it has no way to\nreport partial progress, so it must therefore undo its work if\ninterrupted, so its EINTR retry loop could get stuck forever when\nother backends are trigger-happy with signals, which was a real\nproduction issue. I guess both of these issues go away in practice if\nCF #5118 goes in.\n\n\n", "msg_date": "Sun, 18 Aug 2024 11:12:22 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "On Sun, Aug 18, 2024 at 11:12 AM Thomas Munro <[email protected]> wrote:\n> I guess both of these issues go away in practice if\n> CF #5118 goes in.\n\nTo be more precise, if you just keep doing pg_usleep() the issue goes\naway, and likewise for posix_fallocate() it goes away... 
But if you\nswitch to WaitLatchUs() so you can handle latch wakeups in vacuum\ndelays, which really you should because the latch might be an urgent\nrequest for you to CHECK_FOR_INTERRUPTS(), because another backend is\nwaiting for all backends to service a ProcSignalBarrier (we need a new\nname for that), well now you'll get wakeups, so you're back to square\none if someone is sending them very fast.\n\n\n", "msg_date": "Sun, 18 Aug 2024 11:27:36 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Hi,\n\nOn Thu, Aug 15, 2024 at 04:13:29PM -0500, Nathan Bossart wrote:\n> On Wed, Aug 14, 2024 at 06:00:06AM +0000, Bertrand Drouvot wrote:\n> > I gave it more thoughts and I don't think we have to choose between the two.\n> > The 1 Hz approach reduces the number of interrupts and Sami's patch provides a\n> > way to get \"accurate\" delay in case of interrupts. I think both have their own\n> > benefit. \n> \n> Is it really that important to delay with that level of accuracy? In most\n> cases, the chances of actually interrupting a given vacuum delay point are\n> pretty small. Even in the extreme scenario you tested with ~350K\n> interrupts in a 19 minute vacuum, you only saw a 10-15% difference in total\n> time. I wouldn't say I'm diametrically opposed to this patch, but I do\n> think we need to carefully consider whether it's worth the extra code.\n>\n\nI'm not 100% sure that it is worth it but on OTOH I'm thinking that could be the\ncase for someone that cares enough to change the cost delay from say 2ms to 3ms.\n\nI mean, given the granularity we expose in the cost delay, one could expect to\nget \"accurate\" delay. The doc is cautious enough to mention that \"such delays may\nnot be measured accurately on older platforms\" which makes me think that could\nbe worth to implement it.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 19 Aug 2024 14:29:57 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Hi,\n\nOn Tue, Aug 13, 2024 at 11:40:27AM -0500, Nathan Bossart wrote:\n> On Tue, Aug 13, 2024 at 11:07:46AM -0500, Imseih (AWS), Sami wrote:\n> > Having to add special handling to space out instrumentation\n> > directly in vacuum_delay_point seems very odd to me. I don't\n> > think vacuum_delay_point should have to worry about this.\n> > \n> > Also,\n> > 1/ what is an appropriate interval to collect these stats?\n> > 2/ What if there are other callers in the future that wish\n> > to instrument parallel vacuum workers? they will need to implement\n> > similar logic.\n> \n> None of this seems intractable to me. 1 Hz seems like an entirely\n> reasonable place to start, and it is very easy to change (or to even make\n> configurable). pg_stat_progress_vacuum might show slightly old values in\n> this column, but that should be easy enough to explain in the docs if we\n> are really concerned about it. 
If other callers want to do something\n> similar, maybe we should add a more generic implementation in\n> backend_progress.c.\n> \n\nAs it looks like we have a consensus that reducing the number of interrupts also \nmakes sense, I just provided a rebase version of the 1 Hz version (see [0], that\nalso makes clear in the doc that the new field might show slightly old values).\n\n[0]: https://www.postgresql.org/message-id/ZsSQnS9OW9EWSOk4%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 20 Aug 2024 12:55:32 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "> As it looks like we have a consensus that reducing the number of interrupts also \n> makes sense, I just provided a rebase version of the 1 Hz version (see [0], that\n> also makes clear in the doc that the new field might show slightly old values).\n\nThat makes sense. However, I suspect the \"1 Hz\" work code will no longer\nbe needed if CF entry 5118 [1] mentioned by Thomas [2] a few days back \ngoes through. Maybe this extra work can be removed if [1] goes through.\nWhat do you think?\n\nWith regards to CF 5118 and what it means to the current discussion, below\nare my thoughts.\n\nI tested with both CF 5118 [1] and the cost-delay tracking patch. With that in place, \npg_usleep is able to sleep the full requested, as mentioned by Thomas [3]. This is\nbecause certain interrupts like Parallel Message and others are not signaled\nby SIGUSR1 any longer, but with latches.\n\n From this discussion, there is desire for a sleep function that:\n1/ Sleeps for the full duration of the requested time\n2/ Continues to handle important interrupts during the sleep\n\nWhile something like CF 5118 will take care of point #1, it will not deal\nwith #2. Also, the v11 [4] patch does not do #2 either. So I think\nin the sleep loop, we need a C_F_I call. The same type of loop can\nalso be used to call WaitForSingleObject.\n\nIf CF 5118 gets committed, will still need similar loop that calls C_F_I, \nbut the function will need to call WaitLatchUs [5].\n\nThoughts?\n\n--\nSami \n\n\n[1] https://commitfest.postgresql.org/49/5118/\n[2] https://www.postgresql.org/message-id/CA%2BhUKG%2Bf-nEc_SowDLW1JMUa6Of5sCK-JZ%3Dv-KhL1xgXk83fiw%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/CA%2BhUKGKpo3fM%3DrnfdxHqt%2BjNGh_zUNcL1ob4hMsb%3DjFfKn9Aag%40mail.gmail.com\n[4] https://www.postgresql.org/message-id/e3297b5e-0b33-4f45-afcd-4b00ba0b3547%40gmail.com\n[5] https://www.postgresql.org/message-id/CA+hUKGKVbJE59JkwnUj5XMY+-rzcTFciV9vVC7i=LUfWPds8Xw@mail.gmail.com\n\n\n\n\n\n\n", "msg_date": "Tue, 20 Aug 2024 14:25:19 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "Hi,\n\nOn Tue, Aug 20, 2024 at 02:25:19PM -0500, Sami Imseih wrote:\n> > As it looks like we have a consensus that reducing the number of interrupts also \n> > makes sense, I just provided a rebase version of the 1 Hz version (see [0], that\n> > also makes clear in the doc that the new field might show slightly old values).\n> \n> That makes sense. However, I suspect the \"1 Hz\" work code will no longer\n> be needed if CF entry 5118 [1] mentioned by Thomas [2] a few days back \n> goes through. 
Maybe this extra work can be removed if [1] goes through.\n> What do you think?\n\nYeah agree that the \"1 Hz\" work wouldn't be needed anymore if the CF entry 5118\ngoes through *and* if vacuum delays keep doing pg_usleep() (as the leader won't\nreceive SIGUSR1 from the parallel workers while executing nanosleep() anymore).\nIt could still receive SIGHUP or such but that's outside of the PqMsg_Progress\ncase though.\n\n> With regards to CF 5118 and what it means to the current discussion, below\n> are my thoughts.\n> \n> I tested with both CF 5118 [1] and the cost-delay tracking patch. With that in place, \n> pg_usleep is able to sleep the full requested, as mentioned by Thomas [3]. This is\n> because certain interrupts like Parallel Message and others are not signaled\n> by SIGUSR1 any longer, but with latches.\n>\n\nYeah.\n \n> From this discussion, there is desire for a sleep function that:\n> 1/ Sleeps for the full duration of the requested time\n> 2/ Continues to handle important interrupts during the sleep\n> \n> While something like CF 5118 will take care of point #1,\n\nI'm not sure, even with CF entry 5118, nanosleep() could be interrupted. But I\nagree that the leader won't be interrupted by PqMsg_Progress anymore.\n\n> it will not deal\n> with #2. Also, the v11 [4] patch does not do #2 either. So I think\n> in the sleep loop, we need a C_F_I call. The same type of loop can\n> also be used to call WaitForSingleObject.\n> \n> If CF 5118 gets committed, will still need similar loop that calls C_F_I, \n> but the function will need to call WaitLatchUs [5].\n> \n> Thoughts?\n\nIf CF 5118 gets committed, then I think we would need to move to using WaitLatchUs()\nto react to urgent request. We'd also need to find a way to ensure that we'd\nwait for the full duration of the requested time depending of the reason why we\nwaked up (well, only if we agree that 1/ is needed and I'm not sure we got a\nconsensus).\n\nSo I think that:\n\n1. we should still implement the \"1 Hz\" stuff as 1.1/ it could be useful if CF\n5118 gets committed and we move to WaitLatchUs() and 2.2/ it won't break anything\nif CF gets committed and we don't move to WaitLatchUs(). For 1.1 it would still\nprevent the leader to be waked up too frequently by the parallel workers.\n\n2. still discuss the \"need\" and get a consensus regarding a sleep that could\nensure the wait duration (in some cases and depending of the reason why the\nprocess is waked up).\n\nThoughts?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 21 Aug 2024 15:06:30 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" }, { "msg_contents": "> From this discussion, there is desire for a sleep function that:\n> 1/ Sleeps for the full duration of the requested time\n> 2/ Continues to handle important interrupts during the sleep\n> \n> While something like CF 5118 will take care of point #1,\n\n\n> I'm not sure, even with CF entry 5118, nanosleep() could be interrupted. But I\n> agree that the leader won't be interrupted by PqMsg_Progress anymore.\n\nCorrect.\n\n\n> 1. we should still implement the \"1 Hz\" stuff as 1.1/ it could be useful if CF\n> 5118 gets committed and we move to WaitLatchUs() and 2.2/ it won't break anything\n> if CF gets committed and we don't move to WaitLatchUs(). 
For 1.1 it would still\n> prevent the leader to be waked up too frequently by the parallel workers.\n\nYes, regardless of any changes that may occur in the future that change the behavior\nof pg_usleep, preventing a leader from being woken up too frequently is\ngood to have. The \"1 Hz\" work is still worth having.\n\n\n> 2. still discuss the \"need\" and get a consensus regarding a sleep that could\n> ensure the wait duration (in some cases and depending of the reason why the\n> process is waked up).\n\nUnless there is objection, I will withdraw the CF [1] entry for this patch next week.\n\nThis discussion, however, should be one of the points that CF 5118 must deal with.\nThat is, 5118 will change the behavior of pg_usleep when dealing with interrupts\npreviously signaled by SIGUSR1. \n\n\n[1] https://commitfest.postgresql.org/49/5161/\n\nRegards, \n\nSami\n\n\n\n\n\n", "msg_date": "Fri, 30 Aug 2024 10:50:49 -0500", "msg_from": "Sami Imseih <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restart pg_usleep when interrupted" } ]
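As a footnote to the thread above: the "restart with the remaining time" idea can be sketched with plain POSIX nanosleep(), which returns the unslept remainder in its second argument when a signal interrupts it. This is only an illustration, not the patch discussed in the thread; the function name, the 100ms cap (matching the maximum vacuum_cost_delay that Sami mentions), and the placement of CHECK_FOR_INTERRUPTS() are assumptions made for the example.

#include "postgres.h"
#include "miscadmin.h"

#include <errno.h>
#include <time.h>

/* assumed cap, matching the 100ms maximum of vacuum_cost_delay */
#define SLEEP_CAP_USEC 100000L

static void
sleep_full_duration(long microsec)
{
    struct timespec req;
    struct timespec rem;

    if (microsec > SLEEP_CAP_USEC)
        microsec = SLEEP_CAP_USEC;

    req.tv_sec = microsec / 1000000L;
    req.tv_nsec = (microsec % 1000000L) * 1000L;

    /* nanosleep() stores the remaining time in rem when it is interrupted */
    while (nanosleep(&req, &rem) == -1 && errno == EINTR)
    {
        CHECK_FOR_INTERRUPTS();   /* service urgent requests; may throw and not return */
        req = rem;                /* sleep again for whatever time is left */
    }
}

If CF 5118 lands and the delay moves to WaitLatchUs(), the same bookkeeping of how much sleep time is left would apply around the latch wait instead.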
[ { "msg_contents": "A documentation comment came in [1] causing me to review some of our backup\ndocumentation and I felt the current content and location of the standalone\nbackups was odd. I propose to move it to a better place, under file system\nbackups.\n\nAdding to commitfest.\n\nDavid J.\n\n[1]\nhttps://www.postgresql.org/message-id/CAKFQuwZ%3DWxdWJ6O66yQ9dnWTLO12p7h3HpfhowCj%2B0U_bNrzdg%40mail.gmail.com", "msg_date": "Fri, 28 Jun 2024 13:56:56 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Doc: Move standalone backup section, mention -X argument" }, { "msg_contents": "I compiled the patch and it worked without any problems.\n\nI think the patch makes sense, because of the structure of the current\ndocs. It seems more logical to have this section in this part of the\ndocumentation, where it is useful and not only described in another\nchapter, because it won't even work with the current chapter it is\nreferenced in (\"Continuous Archiving and Point-in-Time Recovery\n(PITR)\").\n\nI am still new to Postgres, so I can't tell whether it could be written\nin more detail or not. But I really like that it is in a more fitting\nchapter, in my opinion.\n\n\nRegards,\nMarlene Reiterer\n\n\nAm Mo., 16. Sept. 2024 um 10:35 Uhr schrieb David G. Johnston\n<[email protected]>:\n>\n> A documentation comment came in [1] causing me to review some of our backup documentation and I felt the current content and location of the standalone backups was odd. I propose to move it to a better place, under file system backups.\n>\n> Adding to commitfest.\n>\n> David J.\n>\n> [1] https://www.postgresql.org/message-id/CAKFQuwZ%3DWxdWJ6O66yQ9dnWTLO12p7h3HpfhowCj%2B0U_bNrzdg%40mail.gmail.com\n>\n\n\n", "msg_date": "Mon, 16 Sep 2024 10:45:47 +0200", "msg_from": "Marlene Reiterer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Doc: Move standalone backup section, mention -X argument" } ]
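For readers looking for the concrete shape of the standalone backup that the moved section (and the -X argument in the subject) is about: it is a base backup that carries all the WAL it needs, so it can be restored without a WAL archive. A minimal example invocation, with a placeholder target directory:

pg_basebackup -D /path/to/backupdir -X stream -P

With -X stream the required WAL is streamed over a second connection while the backup runs; -X fetch collects it at the end of the backup instead.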
[ { "msg_contents": "I played around with incremental backup yesterday and tried $subject\n\nThe WAL summarizer is running on the standby server, but when I try \nto take an incremental backup, I get an error that I understand to mean \nthat WAL summarizing hasn't caught up yet.\n\nI am not sure if that is working as designed, but if it is, I think it \nshould be documented.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Sat, 29 Jun 2024 07:01:04 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Incremental backup from a streaming replication standby" }, { "msg_contents": "On Sat, 2024-06-29 at 07:01 +0200, Laurenz Albe wrote:\n> I played around with incremental backup yesterday and tried $subject\n> \n> The WAL summarizer is running on the standby server, but when I try \n> to take an incremental backup, I get an error that I understand to mean \n> that WAL summarizing hasn't caught up yet.\n> \n> I am not sure if that is working as designed, but if it is, I think it \n> should be documented.\n\nI played with this some more. Here is the exact error message:\n\nERROR: manifest requires WAL from final timeline 1 ending at 0/1967C260, but this backup starts at 0/1967C190\n\nBy trial and error I found that when I run a CHECKPOINT on the primary,\ntaking an incremental backup on the standby works.\n\nI couldn't fathom the cause of that, but I think that that should either\nbe addressed or documented before v17 comes out.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 15 Jul 2024 17:26:58 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incremental backup from a streaming replication standby fails" }, { "msg_contents": "On Mon, Jul 15, 2024 at 11:27 AM Laurenz Albe <[email protected]> wrote:\n> On Sat, 2024-06-29 at 07:01 +0200, Laurenz Albe wrote:\n> > I played around with incremental backup yesterday and tried $subject\n> >\n> > The WAL summarizer is running on the standby server, but when I try\n> > to take an incremental backup, I get an error that I understand to mean\n> > that WAL summarizing hasn't caught up yet.\n> >\n> > I am not sure if that is working as designed, but if it is, I think it\n> > should be documented.\n>\n> I played with this some more. Here is the exact error message:\n>\n> ERROR: manifest requires WAL from final timeline 1 ending at 0/1967C260, but this backup starts at 0/1967C190\n>\n> By trial and error I found that when I run a CHECKPOINT on the primary,\n> taking an incremental backup on the standby works.\n>\n> I couldn't fathom the cause of that, but I think that that should either\n> be addressed or documented before v17 comes out.\n\nI had a feeling this was going to be confusing. I'm not sure what to\ndo about it, but I'm open to suggestions.\n\nSuppose you take a full backup F; replay of that backup will begin\nwith a checkpoint CF. Then you try to take an incremental backup I;\nreplay will begin from a checkpoint CI. For the incremental backup to\nbe valid, it must include all blocks modified after CF and before CI.\nBut when the backup is taken on a standby, no new checkpoint is\npossible. Hence, CI will be the most recent restartpoint on the\nstandby that has occurred before the backup starts. So, if F is taken\non the primary and then I is immediately taken on the standby without\nthe standby having done a new restartpoint, or if both F and I are\ntaken on the standby and no restartpoint intervenes, then CF=CI. 
In\nthat scenario, an incremental backup is pretty much pointless: every\nsingle incremental file would contain 0 blocks. You might as well just\nuse the backup you already have, unless one of the non-relation files\nhas changed. So, except in that unusual corner case, the fact that the\nbackup fails isn't really costing you anything. In fact, there's a\ndecent chance that it's saving you from taking a completely useless\nbackup.\n\nOn the primary, this doesn't occur, because there, each new backup\ntriggers a new checkpoint, so you always have CI>CF.\n\nThe error message is definitely confusing. The reason I'm not sure how\nto do better is that there is a large class of errors that a user\ncould make that would trigger an error of this general type. I'm\nguessing that attempting a standby backup with CF=CI will turn out to\nbe the most common one, but I don't think it'll be the only one that\never comes up. The code in PrepareForIncrementalBackup() focuses on\nwhat has gone wrong on a technical level rather than on what you\nprobably did to create that situation. Indeed, the server doesn't\nreally know what you did to create that situation. You could trigger\nthe same error by taking a full backup on the primary and then try to\ntake an incremental based on that full backup on a time-delayed\nstandby (or a lagging standby) whose replay position was behind the\nprimary, i.e. CI<CF.\n\nMore perversely, you could trigger the error by spinning up a standby,\npromoting it, taking a full backup, destroying the standby, removing\nthe timeline history file from the archive, spinning up a new standby,\npromoting onto the same timeline ID as the previous one, and then\ntrying to take an incremental backup relative to the full backup. This\nmight actually succeed, if you take the incremental backup at a later\nLSN than the previous full backup, but, as you may guess, terrible\nthings will happen to you if you try to use such a backup. (I hope you\nwill agree that this would be a self-inflicted injury; I can't see any\nway of detecting such cases.) If the incremental backup LSN is earlier\nthan the previous full backup LSN, this error will trigger.\n\nSo, given all the above, what can we do here?\n\nOne option might be to add an errhint() to the message. I had trouble\nthinking of something that was compact enough to be reasonable to\ninclude and yet reasonably accurate and useful, but maybe we can\nbrainstorm and figure something out. Another option might be to add\nmore to the documentation, but it's all so complicated that I'm not\nsure what to write. 
It feels hard to make something that is brief\nenough to be worth including, accurate enough to help more than it\nhurts, and understandable enough that people who run into this will be\nable to make use of it.\n\nI think I'm a little too close to this to really know what the best\nthing to do is, so I'm happy to hear suggestions from you and others.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 19 Jul 2024 10:52:45 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incremental backup from a streaming replication standby fails" }, { "msg_contents": "On 7/19/24 21:52, Robert Haas wrote:\n> On Mon, Jul 15, 2024 at 11:27 AM Laurenz Albe <[email protected]> wrote:\n>> On Sat, 2024-06-29 at 07:01 +0200, Laurenz Albe wrote:\n>>> I played around with incremental backup yesterday and tried $subject\n>>>\n>>> The WAL summarizer is running on the standby server, but when I try\n>>> to take an incremental backup, I get an error that I understand to mean\n>>> that WAL summarizing hasn't caught up yet.\n>>>\n>>> I am not sure if that is working as designed, but if it is, I think it\n>>> should be documented.\n>>\n>> I played with this some more. Here is the exact error message:\n>>\n>> ERROR: manifest requires WAL from final timeline 1 ending at 0/1967C260, but this backup starts at 0/1967C190\n>>\n>> By trial and error I found that when I run a CHECKPOINT on the primary,\n>> taking an incremental backup on the standby works.\n>>\n>> I couldn't fathom the cause of that, but I think that that should either\n>> be addressed or documented before v17 comes out.\n> \n> I had a feeling this was going to be confusing. I'm not sure what to\n> do about it, but I'm open to suggestions.\n> \n> Suppose you take a full backup F; replay of that backup will begin\n> with a checkpoint CF. Then you try to take an incremental backup I;\n> replay will begin from a checkpoint CI. For the incremental backup to\n> be valid, it must include all blocks modified after CF and before CI.\n> But when the backup is taken on a standby, no new checkpoint is\n> possible. Hence, CI will be the most recent restartpoint on the\n> standby that has occurred before the backup starts. So, if F is taken\n> on the primary and then I is immediately taken on the standby without\n> the standby having done a new restartpoint, or if both F and I are\n> taken on the standby and no restartpoint intervenes, then CF=CI. In\n> that scenario, an incremental backup is pretty much pointless: every\n> single incremental file would contain 0 blocks. You might as well just\n> use the backup you already have, unless one of the non-relation files\n> has changed. So, except in that unusual corner case, the fact that the\n> backup fails isn't really costing you anything. In fact, there's a\n> decent chance that it's saving you from taking a completely useless\n> backup.\n\n<snip>\n\n> I think I'm a little too close to this to really know what the best\n> thing to do is, so I'm happy to hear suggestions from you and others.\n\nI think it would be enough just to add a hint such as:\n\nHINT: this is possible when making a standby backup with little or no \nactivity.\n\nMy guess is in production environments this will be uncommon.\n\nFor example, over the years we (pgBackRest) have gotten numerous bug \nreports that time-targeted PITR does not work. 
In every case we found \nthat the user was just testing procedures and the database had no \nactivity between backups -- therefore recovery had no commit timestamps \nto use to end recovery. Test environments sometimes produce weird results.\n\nHaving said that, I think it would be better if it worked even if it \ndoes produce an empty backup. An empty backup wastes some disk space but \nif it produces less friction and saves an admin having to intervene then \nit is probably worth it. I don't immediately see how to do that in a \nreliable way, though, and in any case it seems like something to \nconsider for PG18.\n\nRegards,\n-David\n\n\n", "msg_date": "Fri, 19 Jul 2024 22:32:06 +0700", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incremental backup from a streaming replication standby fails" }, { "msg_contents": "On Fri, Jul 19, 2024 at 11:32 AM David Steele <[email protected]> wrote:\n> I think it would be enough just to add a hint such as:\n>\n> HINT: this is possible when making a standby backup with little or no\n> activity.\n\nThat could work (with \"this\" capitalized).\n\n> My guess is in production environments this will be uncommon.\n\nI think so too, but when it does happen, confusion may be common.\n\n> Having said that, I think it would be better if it worked even if it\n> does produce an empty backup. An empty backup wastes some disk space but\n> if it produces less friction and saves an admin having to intervene then\n> it is probably worth it. I don't immediately see how to do that in a\n> reliable way, though, and in any case it seems like something to\n> consider for PG18.\n\nYeah, I'm pretty reluctant to weaken the sanity checks here, at least\nin the short term. Note that what the check is actually complaining\nabout is that the previous backup thinks that the WAL it needs to\nreplay to reach consistency ends after the start of the current\nbackup. Even in this scenario, I'm not positive that everything would\nbe OK if we let the backup proceed, and it's easy to think of\nscenarios where it definitely isn't. 
Plus, it's not quite clear how to\ndistinguish the cases where it's OK from the cases where it isn't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 19 Jul 2024 12:59:56 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incremental backup from a streaming replication standby fails" }, { "msg_contents": "On Fri, 2024-07-19 at 12:59 -0400, Robert Haas wrote:\nThanks for looking at this.\n\n> On Fri, Jul 19, 2024 at 11:32 AM David Steele <[email protected]> wrote:\n> > I think it would be enough just to add a hint such as:\n> > \n> > HINT: this is possible when making a standby backup with little or no\n> > activity.\n> \n> That could work (with \"this\" capitalized).\n> \n> > My guess is in production environments this will be uncommon.\n> \n> I think so too, but when it does happen, confusion may be common.\n\nI guess this will most likely happen during tests like the one I made.\n\nI'd be alright with the hint, but I'd say \"during making an *incremental*\nstandby backup\", because that's the only case where it can happen.\n\nI think it would also be sufficient if we document that possibility.\nWhen I got the error, I looked at the documentation of incremental\nbackup for any limitations with standby servers, but didn't find any.\nA remark in the documentation would have satisfied me.\n\nYours,\nLaurenz\n\n\n", "msg_date": "Fri, 19 Jul 2024 20:41:50 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incremental backup from a streaming replication standby fails" }, { "msg_contents": "On Fri, Jul 19, 2024 at 2:41 PM Laurenz Albe <[email protected]> wrote:\n> I'd be alright with the hint, but I'd say \"during making an *incremental*\n> standby backup\", because that's the only case where it can happen.\n>\n> I think it would also be sufficient if we document that possibility.\n> When I got the error, I looked at the documentation of incremental\n> backup for any limitations with standby servers, but didn't find any.\n> A remark in the documentation would have satisfied me.\n\nWould you like to propose a patch adding a hint and/or adjusting the\ndocumentation? Or are you wanting me to do that?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 19 Jul 2024 16:03:34 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incremental backup from a streaming replication standby fails" }, { "msg_contents": "On Fri, 2024-07-19 at 16:03 -0400, Robert Haas wrote:\n> On Fri, Jul 19, 2024 at 2:41 PM Laurenz Albe <[email protected]> wrote:\n> > I'd be alright with the hint, but I'd say \"during making an *incremental*\n> > standby backup\", because that's the only case where it can happen.\n> > \n> > I think it would also be sufficient if we document that possibility.\n> > When I got the error, I looked at the documentation of incremental\n> > backup for any limitations with standby servers, but didn't find any.\n> > A remark in the documentation would have satisfied me.\n> \n> Would you like to propose a patch adding a hint and/or adjusting the\n> documentation? 
Or are you wanting me to do that?\n\nHere is a patch.\nI went for both the errhint and some documentation.\n\nYours,\nLaurenz Albe", "msg_date": "Sat, 20 Jul 2024 00:07:36 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incremental backup from a streaming replication standby fails" }, { "msg_contents": "On Sat, Jun 29, 2024 at 07:01:04AM +0200, Laurenz Albe wrote:\n> The WAL summarizer is running on the standby server, but when I try \n> to take an incremental backup, I get an error that I understand to mean \n> that WAL summarizing hasn't caught up yet.\n\nAdded an open item for this one.\n--\nMichael", "msg_date": "Mon, 22 Jul 2024 15:05:50 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incremental backup from a streaming replication standby" }, { "msg_contents": "On Fri, Jul 19, 2024 at 6:07 PM Laurenz Albe <[email protected]> wrote:\n> Here is a patch.\n> I went for both the errhint and some documentation.\n\nHmm, the hint doesn't end up using the word \"standby\" anywhere. That\nseems like it might not be optimal?\n\n+ Like a base backup, you can take an incremental backup from a streaming\n+ replication standby server. But since a backup of a standby server cannot\n+ initiate a checkpoint, it is possible that an incremental backup taken\n+ right after a base backup will fail with an error, since it would have\n+ to start with the same checkpoint as the base backup and would therefore\n+ be empty.\n\nHmm. I feel like I'm about to be super-nitpicky, but this seems\nimprecise to me in multiple ways. First, an incremental backup is a\nkind of base backup, or at least, it's something you take with\npg_basebackup. Note that later in the paragraph, you use the term\n\"base backup\" to refer to what I have been calling the \"prior\" or\n\"previous\" backup or \"the backup upon which it depends,\" but that\nearlier backup could be either a full or an incremental backup.\nSecond, the standby need not be using streaming replication, even\nthough it probably will be in practice. Third, the failing incremental\nbackup doesn't necessarily have to be attempted immediately after the\nprevious one - the intervening time could be quite long on an idle\nsystem. Fourth, it makes it sound like the backup being empty is a\nreason for it to fail, which is debatable; I think we should try to\ncast this more as an implementation restriction.\n\nHow about something like this:\n\nAn incremental backup is only possible if replay would begin from a\nlater checkpoint than for the previous backup upon which it depends.\nOn the primary, this condition is always satisfied, because each\nbackup triggers a new checkpoint. On a standby, replay begins from the\nmost recent restartpoint. As a result, an incremental backup may fail\non a standby if there has been very little activity since the previous\nbackup. 
Attempting to take an incremental backup that is lagging\nbehind the primary (or some other standby) using a prior backup taken\nat a later WAL position may fail for the same reason.\n\nI'm not saying that's perfect, but let me know your thoughts.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Jul 2024 09:37:26 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incremental backup from a streaming replication standby fails" }, { "msg_contents": "On Mon, 2024-07-22 at 09:37 -0400, Robert Haas wrote:\n> How about something like this:\n> \n> An incremental backup is only possible if replay would begin from a\n> later checkpoint than for the previous backup upon which it depends.\n> On the primary, this condition is always satisfied, because each\n> backup triggers a new checkpoint. On a standby, replay begins from the\n> most recent restartpoint. As a result, an incremental backup may fail\n> on a standby if there has been very little activity since the previous\n> backup. Attempting to take an incremental backup that is lagging\n> behind the primary (or some other standby) using a prior backup taken\n> at a later WAL position may fail for the same reason.\n\nBefore I write a v2, a small question for clarification:\nI believe I remember that during my experiments, I ran CHECKPOINT\non the standby server between the first backup and the incremental\nbackup, and that was not enough to make it work. I had to run\na CHECKPOINT on the primary server.\n\nDoes CHECKPOINT on the standby not trigger a restartpoint, or do\nI simply misremember?\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 22 Jul 2024 19:05:46 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incremental backup from a streaming replication standby fails" }, { "msg_contents": "On Mon, Jul 22, 2024 at 1:05 PM Laurenz Albe <[email protected]> wrote:\n> Before I write a v2, a small question for clarification:\n> I believe I remember that during my experiments, I ran CHECKPOINT\n> on the standby server between the first backup and the incremental\n> backup, and that was not enough to make it work. I had to run\n> a CHECKPOINT on the primary server.\n>\n> Does CHECKPOINT on the standby not trigger a restartpoint, or do\n> I simply misremember?\n\nIt's only possible for the standby to create a restartpoint at a\nwrite-ahead log position where the master created a checkpoint. With\ntypical configuration, every or nearly every checkpoint on the primary\nwill trigger a restartpoint on the standby, but for example if you set\nmax_wal_size bigger and checkpoint_timeout longer on the standby than\non the primary, then you might end up with only some of those\ncheckpoints ending up becoming restartpoints and others not.\n\nLooking at the code in CreateRestartPoint(), it looks like what\nhappens if you run CHECKPOINT is that it tries to turn the\nmost-recently replayed checkpoint into a restartpoint if that wasn't\ndone already; otherwise it just returns without doing anything. 
See\nthe comment that begins with \"If the last checkpoint record we've\nreplayed is already our last\".\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Jul 2024 13:41:18 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incremental backup from a streaming replication standby fails" }, { "msg_contents": "On Mon, 2024-07-22 at 09:37 -0400, Robert Haas wrote:\n> On Fri, Jul 19, 2024 at 6:07 PM Laurenz Albe <[email protected]> wrote:\n> > Here is a patch.\n> > I went for both the errhint and some documentation.\n> \n> Hmm, the hint doesn't end up using the word \"standby\" anywhere. That\n> seems like it might not be optimal?\n\nI guessed that the user was aware that she is taking the backup on\na standby server...\n\nAnyway, I reworded the hint to\n\n This can happen for incremental backups on a standby if there was\n little activity since the previous backup.\n\n> Hmm. I feel like I'm about to be super-nitpicky, but this seems\n> imprecise to me in multiple ways.\n\nOn the contrary, cour comments and explanations are valuable.\n\n> How about something like this:\n> \n> An incremental backup is only possible if replay would begin from a\n> later checkpoint than for the previous backup upon which it depends.\n> On the primary, this condition is always satisfied, because each\n> backup triggers a new checkpoint. On a standby, replay begins from the\n> most recent restartpoint. As a result, an incremental backup may fail\n> on a standby if there has been very little activity since the previous\n> backup. Attempting to take an incremental backup that is lagging\n> behind the primary (or some other standby) using a prior backup taken\n> at a later WAL position may fail for the same reason.\n> \n> I'm not saying that's perfect, but let me know your thoughts.\n\nI tinkered with this some more, and the attached patch has\n\n An incremental backup is only possible if replay would begin from a later\n checkpoint than the checkpoint that started the previous backup upon which\n it depends. If you take the incremental backup on the primary, this\n condition is always satisfied, because each backup triggers a new\n checkpoint. On a standby, replay begins from the most recent restartpoint.\n Therefore, an incremental backup of a standby server can fail if there has\n been very little activity since the previous backup, since no new\n restartpoint might have been created.\n\nYours,\nLaurenz Albe", "msg_date": "Wed, 24 Jul 2024 12:46:52 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incremental backup from a streaming replication standby fails" }, { "msg_contents": "On Wed, Jul 24, 2024 at 6:46 AM Laurenz Albe <[email protected]> wrote:\n> An incremental backup is only possible if replay would begin from a later\n> checkpoint than the checkpoint that started the previous backup upon which\n> it depends.\n\nMy concern here is that the previous backup might have been taken on a\nstandby, and therefore it did not start with a checkpoint. For a\nstandby backup, replay will begin from a checkpoint record, but that\nrecord may be quite a bit earlier in the WAL. For instance, imagine\ncheckpoint_timeout is set to 30 minutes on the standby. When the\nbackup is taken, the most recent restartpoint could be up to 30\nminutes ago -- and it is the checkpoint record for that restartpoint\nfrom which replay will begin. 
I think that in my phrasing, it's always\nabout the checkpoint from which replay would begin (which is always\nwell-defined) not the checkpoint that started the backup (which is\nonly logical on the primary).\n\n> If you take the incremental backup on the primary, this\n> condition is always satisfied, because each backup triggers a new\n> checkpoint. On a standby, replay begins from the most recent restartpoint.\n> Therefore, an incremental backup of a standby server can fail if there has\n> been very little activity since the previous backup, since no new\n> restartpoint might have been created.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Jul 2024 15:27:02 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incremental backup from a streaming replication standby fails" }, { "msg_contents": "On Wed, 2024-07-24 at 15:27 -0400, Robert Haas wrote:\n> On Wed, Jul 24, 2024 at 6:46 AM Laurenz Albe <[email protected]> wrote:\n> >    An incremental backup is only possible if replay would begin from a later\n> >    checkpoint than the checkpoint that started the previous backup upon which\n> >    it depends.\n> \n> My concern here is that the previous backup might have been taken on a\n> standby, and therefore it did not start with a checkpoint. For a\n> standby backup, replay will begin from a checkpoint record, but that\n> record may be quite a bit earlier in the WAL. For instance, imagine\n> checkpoint_timeout is set to 30 minutes on the standby. When the\n> backup is taken, the most recent restartpoint could be up to 30\n> minutes ago -- and it is the checkpoint record for that restartpoint\n> from which replay will begin. I think that in my phrasing, it's always\n> about the checkpoint from which replay would begin (which is always\n> well-defined) not the checkpoint that started the backup (which is\n> only logical on the primary).\n\nI see.\n\nThe attached patch uses your wording for the first sentence.\n\nI left out the last sentence from your suggestion, because it sounded\nlike it is likely to confuse the reader. I think you just wanted to\nsay that there are other possible causes for an incremental backup to\nfail. I want to keep the text as simple as possible and focus on the case\nthat I hit, because I expect that a lot of people who experiment with\nincremental backup or run tests could run into the same problem.\n\nI don't think it will be a frequent occurrence during normal operation.\n\nYours,\nLaurenz Albe", "msg_date": "Thu, 25 Jul 2024 14:50:58 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incremental backup from a streaming replication standby fails" }, { "msg_contents": "On Thu, Jul 25, 2024 at 8:51 AM Laurenz Albe <[email protected]> wrote:\n> The attached patch uses your wording for the first sentence.\n>\n> I left out the last sentence from your suggestion, because it sounded\n> like it is likely to confuse the reader. I think you just wanted to\n> say that there are other possible causes for an incremental backup to\n> fail. 
I want to keep the text as simple as possible and focus on the case\n> that I hit, because I expect that a lot of people who experiment with\n> incremental backup or run tests could run into the same problem.\n>\n> I don't think it will be a frequent occurrence during normal operation.\n\nCommitted this version to master and v17.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 25 Jul 2024 16:12:01 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incremental backup from a streaming replication standby fails" }, { "msg_contents": "On Thu, 2024-07-25 at 16:12 -0400, Robert Haas wrote:\n> On Thu, Jul 25, 2024 at 8:51 AM Laurenz Albe <[email protected]> wrote:\n> > The attached patch uses your wording for the first sentence.\n> > \n> > I left out the last sentence from your suggestion, because it sounded\n> > like it is likely to confuse the reader. I think you just wanted to\n> > say that there are other possible causes for an incremental backup to\n> > fail. I want to keep the text as simple as possible and focus on the case\n> > that I hit, because I expect that a lot of people who experiment with\n> > incremental backup or run tests could run into the same problem.\n> > \n> > I don't think it will be a frequent occurrence during normal operation.\n> \n> Committed this version to master and v17.\n\nThanks for taking care of this.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 26 Jul 2024 07:09:29 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incremental backup from a streaming replication standby fails" }, { "msg_contents": "On Fri, Jul 26, 2024 at 1:09 AM Laurenz Albe <[email protected]> wrote:\n> > Committed this version to master and v17.\n>\n> Thanks for taking care of this.\n\nSure thing!\n\nI knew it was going to confuse someone ... I just wasn't sure what to\ndo about it. Now we've at least done something, which is hopefully\nsuperior to nothing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Jul 2024 09:10:57 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incremental backup from a streaming replication standby fails" }, { "msg_contents": "On Fri, Jul 26, 2024 at 4:11 PM Robert Haas <[email protected]> wrote:\n> On Fri, Jul 26, 2024 at 1:09 AM Laurenz Albe <[email protected]> wrote:\n> > > Committed this version to master and v17.\n> >\n> > Thanks for taking care of this.\n>\n> Sure thing!\n>\n> I knew it was going to confuse someone ... I just wasn't sure what to\n> do about it. Now we've at least done something, which is hopefully\n> superior to nothing.\n\nGreat! Should we mark the corresponding v17 open item as closed?\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Fri, 26 Jul 2024 23:13:41 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incremental backup from a streaming replication standby fails" }, { "msg_contents": "On Fri, Jul 26, 2024 at 4:13 PM Alexander Korotkov <[email protected]> wrote:\n> Great! Should we mark the corresponding v17 open item as closed?\n\nDone.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Jul 2024 12:21:08 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incremental backup from a streaming replication standby fails" } ]
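For reference, the command sequence behind the scenario discussed in this thread looks roughly like the following on PostgreSQL 17. The paths are placeholders, summarize_wal must already be enabled on the server being backed up, and this sketch is not taken from the thread itself.

# take a full backup first
pg_basebackup -D /backups/full -X stream

# later, take an incremental backup relative to the full one
pg_basebackup -D /backups/incr1 -X stream \
    --incremental=/backups/full/backup_manifest

# at restore time, reconstruct a complete data directory from the chain
pg_combinebackup /backups/full /backups/incr1 -o /restore/data

When the incremental step points at a standby, the error quoted above appears if the standby has not produced a restartpoint newer than the checkpoint of the previous backup; in the thread, forcing a checkpoint on the primary (which the standby can then turn into a restartpoint) was enough to let the incremental backup proceed.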
[ { "msg_contents": "\nHi,\n\nmerge sorts requires all the tuples in each input are pre-sorted (1),\nand in tuplesort.c, when the workmem is full, we dumptuples into\ndestTape. (2). it looks to me that we need a unlimited number of Tapes\nif the number of input tuples is big enough. However we set a maxTapes\nin 'inittapes' and we may use a pre-used tapes for storing new tuples in\n'selectnewtape'. Wouldn't this break the prerequisite (1)? \n\nselectnewtape(Tuplesortstate *state)\n{\n\tif (state->nOutputTapes < state->maxTapes)\n\t{..}\n else\n {\n\t\t/*\n\t\t * We have reached the max number of tapes. Append to an existing\n\t\t * tape.\n\t\t */\n\t\tstate->destTape = state->outputTapes[state->nOutputRuns % state->nOutputTapes];\n\t\tstate->nOutputRuns++;\n\t}\n}\n\n\nfor example, at the first use of outputTapes[x], it stores (1, 3, 5, 7),\nand later (2, 4, 6, 8) are put into it. so the overall of (1, 3, 5, 7,\n2, 4, 6, 8) are not sorted? Where did I go wrong?\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Sun, 30 Jun 2024 09:48:44 +0000", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Question about maxTapes & selectnewtape & dumptuples" }, { "msg_contents": "On 30/06/2024 12:48, Andy Fan wrote:\n> merge sorts requires all the tuples in each input are pre-sorted (1),\n> and in tuplesort.c, when the workmem is full, we dumptuples into\n> destTape. (2). it looks to me that we need a unlimited number of Tapes\n> if the number of input tuples is big enough. However we set a maxTapes\n> in 'inittapes' and we may use a pre-used tapes for storing new tuples in\n> 'selectnewtape'. Wouldn't this break the prerequisite (1)?\n> \n> selectnewtape(Tuplesortstate *state)\n> {\n> \tif (state->nOutputTapes < state->maxTapes)\n> \t{..}\n> else\n> {\n> \t\t/*\n> \t\t * We have reached the max number of tapes. Append to an existing\n> \t\t * tape.\n> \t\t */\n> \t\tstate->destTape = state->outputTapes[state->nOutputRuns % state->nOutputTapes];\n> \t\tstate->nOutputRuns++;\n> \t}\n> }\n> \n> \n> for example, at the first use of outputTapes[x], it stores (1, 3, 5, 7),\n> and later (2, 4, 6, 8) are put into it. so the overall of (1, 3, 5, 7,\n> 2, 4, 6, 8) are not sorted? Where did I go wrong?\n\nThere's a distinction between \"runs\" and \"tapes\". Each \"run\" is sorted, \nbut there can be multiple runs on a tape. In that case, multiple merge \npasses are needed. Note the call to markrunend() between the runs. In \nyour example, the tape would look like (1, 3, 5, 7, <end marker>, 2, 4, \n6, 8).\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Sun, 30 Jun 2024 13:22:21 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about maxTapes & selectnewtape & dumptuples" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n\n> On 30/06/2024 12:48, Andy Fan wrote:\n\n>> for example, at the first use of outputTapes[x], it stores (1, 3, 5,\n>> 7),\n>> and later (2, 4, 6, 8) are put into it. so the overall of (1, 3, 5, 7,\n>> 2, 4, 6, 8) are not sorted? Where did I go wrong?\n>\n> There's a distinction between \"runs\" and \"tapes\". Each \"run\" is sorted,\n> but there can be multiple runs on a tape. In that case, multiple merge\n> passes are needed. Note the call to markrunend() between the runs. 
In\n> your example, the tape would look like (1, 3, 5, 7, <end marker>, 2, 4,\n> 6, 8).\n\nI see. Thank you for your answer!\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Sun, 30 Jun 2024 11:17:09 +0000", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Question about maxTapes & selectnewtape & dumptuples" } ]
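The runs-versus-tapes point above can be seen with a toy program. This is not tuplesort's real on-tape format, just integers in an array, with END standing in for the marker that markrunend() writes between runs.

#include <stdio.h>

#define END (-1)

int
main(void)
{
    /* one "tape" holding two sorted runs separated by run-end markers */
    int     tape[] = {1, 3, 5, 7, END, 2, 4, 6, 8, END};
    int     i = 0;              /* read position in the first run */
    int     j = 0;

    while (tape[j] != END)      /* find where the second run starts */
        j++;
    j++;

    /* one merge pass: because each run is sorted, the merged output is sorted */
    while (tape[i] != END || tape[j] != END)
    {
        if (tape[j] == END || (tape[i] != END && tape[i] <= tape[j]))
            printf("%d ", tape[i++]);
        else
            printf("%d ", tape[j++]);
    }
    printf("\n");               /* prints 1 2 3 4 5 6 7 8 */
    return 0;
}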
[ { "msg_contents": "Hi hackers,\n\nIs there any extension that uses meson as build systems?\nI'm starting a extension project that written in c++, cmake is my\ninitial choice as the build system, but since PostgreSQL has adopted\nMeson, so I'm wondering if there is any extension that also uses\nmeson that I can reference.\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Sun, 30 Jun 2024 21:17:04 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": true, "msg_subject": "Extension using Meson as build system" }, { "msg_contents": "Hi\n\nne 30. 6. 2024 v 15:17 odesílatel Junwang Zhao <[email protected]> napsal:\n\n> Hi hackers,\n>\n> Is there any extension that uses meson as build systems?\n> I'm starting a extension project that written in c++, cmake is my\n> initial choice as the build system, but since PostgreSQL has adopted\n> Meson, so I'm wondering if there is any extension that also uses\n> meson that I can reference.\n>\n\nany extension from contrib package\n\nhttps://github.com/postgres/postgres/tree/master/contrib\n\nprobably only these\n\nRegards\n\nPavel\n\n>\n> --\n> Regards\n> Junwang Zhao\n>\n>\n>\n\nHine 30. 6. 2024 v 15:17 odesílatel Junwang Zhao <[email protected]> napsal:Hi  hackers,\n\nIs there any extension that uses meson as build systems?\nI'm starting a extension project that written in c++, cmake is my\ninitial choice as the build system, but since PostgreSQL has adopted\nMeson, so I'm wondering if there is any extension that also uses\nmeson that I can reference.any extension from contrib packagehttps://github.com/postgres/postgres/tree/master/contribprobably only theseRegardsPavel \n\n-- \nRegards\nJunwang Zhao", "msg_date": "Sun, 30 Jun 2024 15:20:02 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extension using Meson as build system" }, { "msg_contents": "On Sun, Jun 30, 2024 at 9:20 PM Pavel Stehule <[email protected]> wrote:\n>\n> Hi\n>\n> ne 30. 6. 2024 v 15:17 odesílatel Junwang Zhao <[email protected]> napsal:\n>>\n>> Hi hackers,\n>>\n>> Is there any extension that uses meson as build systems?\n>> I'm starting a extension project that written in c++, cmake is my\n>> initial choice as the build system, but since PostgreSQL has adopted\n>> Meson, so I'm wondering if there is any extension that also uses\n>> meson that I can reference.\n>\n>\n> any extension from contrib package\n\nAh, yeah, but I'm not sure these extensions have any dependencies\non the main project? Never mind, I'll take a look, thanks.\n\n>\n> https://github.com/postgres/postgres/tree/master/contrib\n>\n> probably only these\n>\n> Regards\n>\n> Pavel\n>>\n>>\n>> --\n>> Regards\n>> Junwang Zhao\n>>\n>>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Sun, 30 Jun 2024 21:28:02 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extension using Meson as build system" }, { "msg_contents": "ne 30. 6. 2024 v 15:28 odesílatel Junwang Zhao <[email protected]> napsal:\n\n> On Sun, Jun 30, 2024 at 9:20 PM Pavel Stehule <[email protected]>\n> wrote:\n> >\n> > Hi\n> >\n> > ne 30. 6. 
2024 v 15:17 odesílatel Junwang Zhao <[email protected]>\n> napsal:\n> >>\n> >> Hi hackers,\n> >>\n> >> Is there any extension that uses meson as build systems?\n> >> I'm starting a extension project that written in c++, cmake is my\n> >> initial choice as the build system, but since PostgreSQL has adopted\n> >> Meson, so I'm wondering if there is any extension that also uses\n> >> meson that I can reference.\n> >\n> >\n> > any extension from contrib package\n>\n> Ah, yeah, but I'm not sure these extensions have any dependencies\n> on the main project? Never mind, I'll take a look, thanks.\n>\n\nany postgres extension has dependency on main project\n\nwhat can be different - if the extension is build inside or outside source\ncode tree\n\n\n>\n> >\n> > https://github.com/postgres/postgres/tree/master/contrib\n> >\n> > probably only these\n> >\n> > Regards\n> >\n> > Pavel\n> >>\n> >>\n> >> --\n> >> Regards\n> >> Junwang Zhao\n> >>\n> >>\n>\n>\n> --\n> Regards\n> Junwang Zhao\n>\n\nne 30. 6. 2024 v 15:28 odesílatel Junwang Zhao <[email protected]> napsal:On Sun, Jun 30, 2024 at 9:20 PM Pavel Stehule <[email protected]> wrote:\n>\n> Hi\n>\n> ne 30. 6. 2024 v 15:17 odesílatel Junwang Zhao <[email protected]> napsal:\n>>\n>> Hi  hackers,\n>>\n>> Is there any extension that uses meson as build systems?\n>> I'm starting a extension project that written in c++, cmake is my\n>> initial choice as the build system, but since PostgreSQL has adopted\n>> Meson, so I'm wondering if there is any extension that also uses\n>> meson that I can reference.\n>\n>\n> any extension from contrib package\n\nAh, yeah, but I'm not sure these extensions have any dependencies\non the main project? Never mind, I'll take a look, thanks.any postgres extension has dependency on main projectwhat can be different - if the extension is build inside or outside source code tree \n\n>\n> https://github.com/postgres/postgres/tree/master/contrib\n>\n> probably only these\n>\n> Regards\n>\n> Pavel\n>>\n>>\n>> --\n>> Regards\n>> Junwang Zhao\n>>\n>>\n\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Sun, 30 Jun 2024 15:31:19 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extension using Meson as build system" }, { "msg_contents": "On Sun, Jun 30, 2024 at 9:31 PM Pavel Stehule <[email protected]> wrote:\n>\n>\n>\n> ne 30. 6. 2024 v 15:28 odesílatel Junwang Zhao <[email protected]> napsal:\n>>\n>> On Sun, Jun 30, 2024 at 9:20 PM Pavel Stehule <[email protected]> wrote:\n>> >\n>> > Hi\n>> >\n>> > ne 30. 6. 2024 v 15:17 odesílatel Junwang Zhao <[email protected]> napsal:\n>> >>\n>> >> Hi hackers,\n>> >>\n>> >> Is there any extension that uses meson as build systems?\n>> >> I'm starting a extension project that written in c++, cmake is my\n>> >> initial choice as the build system, but since PostgreSQL has adopted\n>> >> Meson, so I'm wondering if there is any extension that also uses\n>> >> meson that I can reference.\n>> >\n>> >\n>> > any extension from contrib package\n>>\n>> Ah, yeah, but I'm not sure these extensions have any dependencies\n>> on the main project? 
Never mind, I'll take a look, thanks.\n>\n>\n> any postgres extension has dependency on main project\n>\n> what can be different - if the extension is build inside or outside source code tree\n\nTake contrib/ltree as an example, in Makefile, there are some lines:\n\nifdef USE_PGXS\nPG_CONFIG = pg_config\nPGXS := $(shell $(PG_CONFIG) --pgxs)\ninclude $(PGXS)\nelse\nsubdir = contrib/ltree\ntop_builddir = ../..\ninclude $(top_builddir)/src/Makefile.global\ninclude $(top_srcdir)/contrib/contrib-global.mk\nendif\n\nI am taking these as the reason that extension can build outside of postgres\nsource code, but I can't find an equivalent in meson.build.\n\n\n>\n>>\n>>\n>> >\n>> > https://github.com/postgres/postgres/tree/master/contrib\n>> >\n>> > probably only these\n>> >\n>> > Regards\n>> >\n>> > Pavel\n>> >>\n>> >>\n>> >> --\n>> >> Regards\n>> >> Junwang Zhao\n>> >>\n>> >>\n>>\n>>\n>> --\n>> Regards\n>> Junwang Zhao\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Sun, 30 Jun 2024 21:39:39 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extension using Meson as build system" }, { "msg_contents": "ne 30. 6. 2024 v 15:39 odesílatel Junwang Zhao <[email protected]> napsal:\n\n> On Sun, Jun 30, 2024 at 9:31 PM Pavel Stehule <[email protected]>\n> wrote:\n> >\n> >\n> >\n> > ne 30. 6. 2024 v 15:28 odesílatel Junwang Zhao <[email protected]>\n> napsal:\n> >>\n> >> On Sun, Jun 30, 2024 at 9:20 PM Pavel Stehule <[email protected]>\n> wrote:\n> >> >\n> >> > Hi\n> >> >\n> >> > ne 30. 6. 2024 v 15:17 odesílatel Junwang Zhao <[email protected]>\n> napsal:\n> >> >>\n> >> >> Hi hackers,\n> >> >>\n> >> >> Is there any extension that uses meson as build systems?\n> >> >> I'm starting a extension project that written in c++, cmake is my\n> >> >> initial choice as the build system, but since PostgreSQL has adopted\n> >> >> Meson, so I'm wondering if there is any extension that also uses\n> >> >> meson that I can reference.\n> >> >\n> >> >\n> >> > any extension from contrib package\n> >>\n> >> Ah, yeah, but I'm not sure these extensions have any dependencies\n> >> on the main project? Never mind, I'll take a look, thanks.\n> >\n> >\n> > any postgres extension has dependency on main project\n> >\n> > what can be different - if the extension is build inside or outside\n> source code tree\n>\n> Take contrib/ltree as an example, in Makefile, there are some lines:\n>\n> ifdef USE_PGXS\n> PG_CONFIG = pg_config\n> PGXS := $(shell $(PG_CONFIG) --pgxs)\n> include $(PGXS)\n> else\n> subdir = contrib/ltree\n> top_builddir = ../..\n> include $(top_builddir)/src/Makefile.global\n> include $(top_srcdir)/contrib/contrib-global.mk\n> endif\n>\n> I am taking these as the reason that extension can build outside of\n> postgres\n> source code, but I can't find an equivalent in meson.build.\n>\n\nprobably nobody did it yet\n\n\n\n>\n>\n> >\n> >>\n> >>\n> >> >\n> >> > https://github.com/postgres/postgres/tree/master/contrib\n> >> >\n> >> > probably only these\n> >> >\n> >> > Regards\n> >> >\n> >> > Pavel\n> >> >>\n> >> >>\n> >> >> --\n> >> >> Regards\n> >> >> Junwang Zhao\n> >> >>\n> >> >>\n> >>\n> >>\n> >> --\n> >> Regards\n> >> Junwang Zhao\n>\n>\n>\n> --\n> Regards\n> Junwang Zhao\n>\n\nne 30. 6. 2024 v 15:39 odesílatel Junwang Zhao <[email protected]> napsal:On Sun, Jun 30, 2024 at 9:31 PM Pavel Stehule <[email protected]> wrote:\n>\n>\n>\n> ne 30. 6. 
2024 v 15:28 odesílatel Junwang Zhao <[email protected]> napsal:\n>>\n>> On Sun, Jun 30, 2024 at 9:20 PM Pavel Stehule <[email protected]> wrote:\n>> >\n>> > Hi\n>> >\n>> > ne 30. 6. 2024 v 15:17 odesílatel Junwang Zhao <[email protected]> napsal:\n>> >>\n>> >> Hi  hackers,\n>> >>\n>> >> Is there any extension that uses meson as build systems?\n>> >> I'm starting a extension project that written in c++, cmake is my\n>> >> initial choice as the build system, but since PostgreSQL has adopted\n>> >> Meson, so I'm wondering if there is any extension that also uses\n>> >> meson that I can reference.\n>> >\n>> >\n>> > any extension from contrib package\n>>\n>> Ah, yeah, but I'm not sure these extensions have any dependencies\n>> on the main project? Never mind, I'll take a look, thanks.\n>\n>\n> any postgres extension has dependency on main project\n>\n> what can be different - if the extension is build inside or outside source code tree\n\nTake contrib/ltree as an example, in Makefile, there are some lines:\n\nifdef USE_PGXS\nPG_CONFIG = pg_config\nPGXS := $(shell $(PG_CONFIG) --pgxs)\ninclude $(PGXS)\nelse\nsubdir = contrib/ltree\ntop_builddir = ../..\ninclude $(top_builddir)/src/Makefile.global\ninclude $(top_srcdir)/contrib/contrib-global.mk\nendif\n\nI am taking these as the reason that extension can build outside of postgres\nsource code, but I can't find an equivalent in meson.build.probably nobody did it yet \n\n\n>\n>>\n>>\n>> >\n>> > https://github.com/postgres/postgres/tree/master/contrib\n>> >\n>> > probably only these\n>> >\n>> > Regards\n>> >\n>> > Pavel\n>> >>\n>> >>\n>> >> --\n>> >> Regards\n>> >> Junwang Zhao\n>> >>\n>> >>\n>>\n>>\n>> --\n>> Regards\n>> Junwang Zhao\n\n\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Sun, 30 Jun 2024 15:48:35 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extension using Meson as build system" }, { "msg_contents": "On 30.06.24 15:17, Junwang Zhao wrote:\n> Is there any extension that uses meson as build systems?\n> I'm starting a extension project that written in c++, cmake is my\n> initial choice as the build system, but since PostgreSQL has adopted\n> Meson, so I'm wondering if there is any extension that also uses\n> meson that I can reference.\n\nI wrote a little demo:\n\nhttps://github.com/petere/plsh/blob/meson/meson.build\n\n\n\n", "msg_date": "Fri, 26 Jul 2024 17:06:38 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extension using Meson as build system" }, { "msg_contents": "Hi, Peter\n\nOn Fri, Jul 26, 2024 at 11:06 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 30.06.24 15:17, Junwang Zhao wrote:\n> > Is there any extension that uses meson as build systems?\n> > I'm starting a extension project that written in c++, cmake is my\n> > initial choice as the build system, but since PostgreSQL has adopted\n> > Meson, so I'm wondering if there is any extension that also uses\n> > meson that I can reference.\n>\n> I wrote a little demo:\n>\n> https://github.com/petere/plsh/blob/meson/meson.build\n>\n\nThanks, I will check it out ;)\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Fri, 26 Jul 2024 23:12:21 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extension using Meson as build system" }, { "msg_contents": "\nOn 2024-07-26 Fr 11:06 AM, Peter Eisentraut wrote:\n> On 30.06.24 15:17, Junwang Zhao wrote:\n>> Is there any extension that uses meson as build systems?\n>> I'm starting 
a extension project that written in c++, cmake is my\n>> initial choice as the build system, but since PostgreSQL has adopted\n>> Meson, so I'm wondering if there is any extension that also uses\n>> meson that I can reference.\n>\n> I wrote a little demo:\n>\n> https://github.com/petere/plsh/blob/meson/meson.build\n\n\n\nnifty!\n\n\ncheers\n\n\nandew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 26 Jul 2024 12:43:08 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extension using Meson as build system" } ]
[ { "msg_contents": "Hi Hackers\n When I saw this document:https://en.wikipedia.org/wiki/Branch_predictor,\nI thought of Linux likely() vs unlikely() and thus thought that there are\nsome code segments in src/backend/executor/execMain.c that can be optimized.\nFor example :\nif (ExecutorStart_hook)\n(*ExecutorStart_hook) (queryDesc, eflags);\nelse\nstandard_ExecutorStart(queryDesc, eflags);\n}\nvoid\nExecutorRun(QueryDesc *queryDesc,\nScanDirection direction, uint64 count,\nbool execute_once)\n{\nif (ExecutorRun_hook)\n(*ExecutorRun_hook) (queryDesc, direction, count, execute_once);\nelse\nstandard_ExecutorRun(queryDesc, direction, count, execute_once);\n}\n###\nif (unlikely(ExecutorRun_hook)),\n\nI hope to get guidance from community experts,Many thanks\n\nThanks\n\nHi Hackers   When I saw this document:https://en.wikipedia.org/wiki/Branch_predictor, I thought of Linux likely() vs unlikely() and thus thought that there are some code segments in src/backend/executor/execMain.c that can be optimized.For example :if (ExecutorStart_hook)\t\t(*ExecutorStart_hook) (queryDesc, eflags);\telse\t\tstandard_ExecutorStart(queryDesc, eflags);}voidExecutorRun(QueryDesc *queryDesc,\t\t\tScanDirection direction, uint64 count,\t\t\tbool execute_once){\tif (ExecutorRun_hook)\t\t(*ExecutorRun_hook) (queryDesc, direction, count, execute_once);\telse\t\tstandard_ExecutorRun(queryDesc, direction, count, execute_once);}###if (unlikely(ExecutorRun_hook)),I hope to get guidance from community experts,Many thanksThanks", "msg_date": "Sun, 30 Jun 2024 21:55:39 +0800", "msg_from": "wenhui qiu <[email protected]>", "msg_from_op": true, "msg_subject": "Linux likely() unlikely() for PostgreSQL" }, { "msg_contents": "On Sun, 30 Jun 2024, 15:56 wenhui qiu, <[email protected]> wrote:\n>\n> Hi Hackers\n> When I saw this document:https://en.wikipedia.org/wiki/Branch_predictor, I thought of Linux likely() vs unlikely() and thus thought that there are some code segments in src/backend/executor/execMain.c that can be optimized.\n> For example :\n> if (ExecutorStart_hook)\n> (*ExecutorStart_hook) (queryDesc, eflags);\n> else\n> standard_ExecutorStart(queryDesc, eflags);\n> }\n> void\n> ExecutorRun(QueryDesc *queryDesc,\n> ScanDirection direction, uint64 count,\n> bool execute_once)\n> {\n> if (ExecutorRun_hook)\n> (*ExecutorRun_hook) (queryDesc, direction, count, execute_once);\n> else\n> standard_ExecutorRun(queryDesc, direction, count, execute_once);\n> }\n> ###\n> if (unlikely(ExecutorRun_hook)),\n>\n> I hope to get guidance from community experts,Many thanks\n\nWhile hooks are generally not installed by default, I would advise\nagainst marking the hooks as unlikely, as that would unfairly penalize\nthe performance of extensions that do utilise this hook (or hooks in\ngeneral when applied to all hooks).\n\nIf this code is hot enough, then CPU branch prediction will likely\ncorrectly predict this branch correctly after a small amount of time\n(as hook null-ness is generally approximately constant for the\nduration of a backend lifetime); while if this code is cold, the\nbenefit is not worth the additional binary size overhead of the\nout-of-lined code section.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)\n\n\n", "msg_date": "Sun, 30 Jun 2024 16:47:02 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux likely() unlikely() for PostgreSQL" }, { "msg_contents": "Matthias van de Meent <[email protected]> writes:\n> On Sun, 30 Jun 2024, 15:56 
wenhui qiu, <[email protected]> wrote:\n>> if (unlikely(ExecutorRun_hook)),\n\n> While hooks are generally not installed by default, I would advise\n> against marking the hooks as unlikely, as that would unfairly penalize\n> the performance of extensions that do utilise this hook (or hooks in\n> general when applied to all hooks).\n\nIn general, we have a policy of using likely/unlikely very sparingly,\nand only in demonstrably hot code paths. This hook call certainly\ndoesn't qualify as hot.\n\nHaving said that ... something I've been wondering about is how to\nteach the compiler that error paths are unlikely. Doing this\nacross-the-board wouldn't be \"sparingly\", but I think surely it'd\nresult in better code quality on average. This'd be easy enough\nto do in Assert:\n\n #define Assert(condition) \\\n \tdo { \\\n-\t\tif (!(condition)) \\\n+\t\tif (unlikely(!(condition))) \\\n \t\t\tExceptionalCondition(#condition, __FILE__, __LINE__); \\\n \t} while (0)\n\nbut on the other hand we don't care that much about micro-optimization\nof assert-enabled builds, so maybe that's not worth the trouble. The\nreal win would be if constructs like\n\n\tif (trouble)\n\t\tereport(ERROR, ...);\n\ncould be interpreted as\n\n\tif (unlikely(trouble))\n\t\tereport(ERROR, ...);\n\nBut I surely don't want to make thousands of such changes manually.\nAnd it's possible that smart compilers already realize this, using\na heuristic that any path that ends in pg_unreachable() must be\nunlikely. Is there a way to encourage compilers to believe that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 30 Jun 2024 11:23:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux likely() unlikely() for PostgreSQL" }, { "msg_contents": "Hi Tom ,Matthias\n Thank you for your explanation.Maybe future compilers will be able to\ndo intelligent prediction?\n\n\nThanks\n\nTom Lane <[email protected]> 于2024年6月30日周日 23:23写道:\n\n> Matthias van de Meent <[email protected]> writes:\n> > On Sun, 30 Jun 2024, 15:56 wenhui qiu, <[email protected]> wrote:\n> >> if (unlikely(ExecutorRun_hook)),\n>\n> > While hooks are generally not installed by default, I would advise\n> > against marking the hooks as unlikely, as that would unfairly penalize\n> > the performance of extensions that do utilise this hook (or hooks in\n> > general when applied to all hooks).\n>\n> In general, we have a policy of using likely/unlikely very sparingly,\n> and only in demonstrably hot code paths. This hook call certainly\n> doesn't qualify as hot.\n>\n> Having said that ... something I've been wondering about is how to\n> teach the compiler that error paths are unlikely. Doing this\n> across-the-board wouldn't be \"sparingly\", but I think surely it'd\n> result in better code quality on average. This'd be easy enough\n> to do in Assert:\n>\n> #define Assert(condition) \\\n> do { \\\n> - if (!(condition)) \\\n> + if (unlikely(!(condition))) \\\n> ExceptionalCondition(#condition, __FILE__,\n> __LINE__); \\\n> } while (0)\n>\n> but on the other hand we don't care that much about micro-optimization\n> of assert-enabled builds, so maybe that's not worth the trouble. 
The\n> real win would be if constructs like\n>\n> if (trouble)\n> ereport(ERROR, ...);\n>\n> could be interpreted as\n>\n> if (unlikely(trouble))\n> ereport(ERROR, ...);\n>\n> But I surely don't want to make thousands of such changes manually.\n> And it's possible that smart compilers already realize this, using\n> a heuristic that any path that ends in pg_unreachable() must be\n> unlikely. Is there a way to encourage compilers to believe that?\n>\n> regards, tom lane\n>\n\nHi Tom ,Matthias    Thank you  for your explanation.Maybe future compilers will be able to do intelligent prediction?ThanksTom Lane <[email protected]> 于2024年6月30日周日 23:23写道:Matthias van de Meent <[email protected]> writes:\n> On Sun, 30 Jun 2024, 15:56 wenhui qiu, <[email protected]> wrote:\n>> if (unlikely(ExecutorRun_hook)),\n\n> While hooks are generally not installed by default, I would advise\n> against marking the hooks as unlikely, as that would unfairly penalize\n> the performance of extensions that do utilise this hook (or hooks in\n> general when applied to all hooks).\n\nIn general, we have a policy of using likely/unlikely very sparingly,\nand only in demonstrably hot code paths.  This hook call certainly\ndoesn't qualify as hot.\n\nHaving said that ... something I've been wondering about is how to\nteach the compiler that error paths are unlikely.  Doing this\nacross-the-board wouldn't be \"sparingly\", but I think surely it'd\nresult in better code quality on average.  This'd be easy enough\nto do in Assert:\n\n #define Assert(condition) \\\n        do { \\\n-               if (!(condition)) \\\n+               if (unlikely(!(condition))) \\\n                        ExceptionalCondition(#condition, __FILE__, __LINE__); \\\n        } while (0)\n\nbut on the other hand we don't care that much about micro-optimization\nof assert-enabled builds, so maybe that's not worth the trouble.  The\nreal win would be if constructs like\n\n        if (trouble)\n                ereport(ERROR, ...);\n\ncould be interpreted as\n\n        if (unlikely(trouble))\n                ereport(ERROR, ...);\n\nBut I surely don't want to make thousands of such changes manually.\nAnd it's possible that smart compilers already realize this, using\na heuristic that any path that ends in pg_unreachable() must be\nunlikely.  Is there a way to encourage compilers to believe that?\n\n                        regards, tom lane", "msg_date": "Mon, 1 Jul 2024 09:52:45 +0800", "msg_from": "wenhui qiu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Linux likely() unlikely() for PostgreSQL" }, { "msg_contents": "On Sun, Jun 30, 2024 at 11:23 AM Tom Lane <[email protected]> wrote:\n> The real win would be if constructs like\n>\n> if (trouble)\n> ereport(ERROR, ...);\n>\n> could be interpreted as\n>\n> if (unlikely(trouble))\n> ereport(ERROR, ...);\n>\n> But I surely don't want to make thousands of such changes manually.\n> And it's possible that smart compilers already realize this, using\n> a heuristic that any path that ends in pg_unreachable() must be\n> unlikely. 
Is there a way to encourage compilers to believe that?\n\nIsn't that what commit 913ec71d68 did, by adding a call to\npg_attribute_cold to ereport?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 30 Jun 2024 22:07:40 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux likely() unlikely() for PostgreSQL" }, { "msg_contents": "On Mon, 1 Jul 2024 at 14:08, Peter Geoghegan <[email protected]> wrote:\n>\n> On Sun, Jun 30, 2024 at 11:23 AM Tom Lane <[email protected]> wrote:\n> > if (trouble)\n> > ereport(ERROR, ...);\n> >\n> > could be interpreted as\n> >\n> > if (unlikely(trouble))\n> > ereport(ERROR, ...);\n> >\n> > But I surely don't want to make thousands of such changes manually.\n> > And it's possible that smart compilers already realize this, using\n> > a heuristic that any path that ends in pg_unreachable() must be\n> > unlikely. Is there a way to encourage compilers to believe that?\n>\n> Isn't that what commit 913ec71d68 did, by adding a call to\n> pg_attribute_cold to report?\n\nYes.\n\nDavid\n\n\n", "msg_date": "Mon, 1 Jul 2024 14:22:57 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux likely() unlikely() for PostgreSQL" }, { "msg_contents": "Em dom., 30 de jun. de 2024 às 12:24, Tom Lane <[email protected]> escreveu:\n\n> Matthias van de Meent <[email protected]> writes:\n> > On Sun, 30 Jun 2024, 15:56 wenhui qiu, <[email protected]> wrote:\n> >> if (unlikely(ExecutorRun_hook)),\n>\n> > While hooks are generally not installed by default, I would advise\n> > against marking the hooks as unlikely, as that would unfairly penalize\n> > the performance of extensions that do utilise this hook (or hooks in\n> > general when applied to all hooks).\n>\n> In general, we have a policy of using likely/unlikely very sparingly,\n> and only in demonstrably hot code paths. This hook call certainly\n> doesn't qualify as hot.\n>\n> Having said that ... something I've been wondering about is how to\n> teach the compiler that error paths are unlikely. Doing this\n> across-the-board wouldn't be \"sparingly\", but I think surely it'd\n> result in better code quality on average. This'd be easy enough\n> to do in Assert:\n>\n> #define Assert(condition) \\\n> do { \\\n> - if (!(condition)) \\\n> + if (unlikely(!(condition))) \\\n> ExceptionalCondition(#condition, __FILE__,\n> __LINE__); \\\n> } while (0)\n>\n> but on the other hand we don't care that much about micro-optimization\n> of assert-enabled builds, so maybe that's not worth the trouble. 
The\n> real win would be if constructs like\n>\n> if (trouble)\n> ereport(ERROR, ...);\n>\n> could be interpreted as\n>\n> if (unlikely(trouble))\n> ereport(ERROR, ...);\n>\nHi Tom.\n\nLet's do it?\nBut we will need a 1 hour window to apply the patch.\nI can generate it in 30 minutes.\nThe current size is 1.6MB.\n\nAll expressions for ERROR, FATAL and PANIC.\n\nif (unlikely(expr))\n ereport(ERROR, ...)\n\nAny other expression was left out, such as:\n\nif (expr)\n{\n ereport(ERROR, ...)\n}\n\nThis attached version needs manual adjustment.\nfor cases of:\nif (expr) /* comments */\n\nIf I apply it, I can adjust it.\n\nbest regards,\nRanier Vilela", "msg_date": "Thu, 4 Jul 2024 08:43:46 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux likely() unlikely() for PostgreSQL" }, { "msg_contents": "Hi,\n\nOn 2024-06-30 16:47:02 +0200, Matthias van de Meent wrote:\n> While hooks are generally not installed by default, I would advise\n> against marking the hooks as unlikely, as that would unfairly penalize\n> the performance of extensions that do utilise this hook (or hooks in\n> general when applied to all hooks).\n\nI don't think this could be fairly described as penalizing use of the hooks.\nlikely()/unlikely() isn't the same as __attribute__((cold)).\n\nSee https://godbolt.org/z/qdnz65ez8 - only the version using\n__attribute__((cold)) gets some code put into a separate section.\n\n\nWhat's the possible argument for saying that the generated code should be\noptimized for the presence of hooks?\n\n\nNote that I'm not saying that we ought to use likely() here (nor the\nopposite), just that this argument for not doing so doesn't seem to hold\nwater.\n\n\n> If this code is hot enough, then CPU branch prediction will likely\n> correctly predict this branch correctly after a small amount of time\n> (as hook null-ness is generally approximately constant for the\n> duration of a backend lifetime);\n\nIMO that's not quite right. The branch predictor has a limited\ncapacity. Because postgres executes a lot of code it frequently executes\n\"outer\" code without benefit of the branch predictor, just due to the amount\nof code that was executed - even if the outer code matters for performance. In\nwhich case the static branch prediction can matter (i.e. backward branches are\nassumed to be taken, forward branches not).\n\n\n> while if this code is cold, the benefit is not worth the additional binary\n> size overhead of the out-of-lined code section.\n\nThere wouldn't be any such overhead, afaict.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Jul 2024 13:36:16 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux likely() unlikely() for PostgreSQL" }, { "msg_contents": "On Sun, Jun 30, 2024 at 10:24 PM Tom Lane <[email protected]> wrote:\n\n> In general, we have a policy of using likely/unlikely very sparingly,\n> and only in demonstrably hot code paths. This hook call certainly\n> doesn't qualify as hot.\n\nI just remembered an article I read a while back by a compiler\nengineer who said that compilers have heuristics that treat NULL\npointers as \"unlikely\" since the most common reason to test that is so\nwe can actually dereference it and do something with it, and in that\ncase a NULL pointer is an exceptional condition. He also said it\nwasn't documented so you can only see this by looking at the source\ncode. 
Instead of taking the time to verify that, I created some toy\nexamples and it seems to be true:\n\nhttps://godbolt.org/z/dv6M5ecY5\n\nThis works all the way back to clang 3.1 and (at least, because\nCompiler Explorer doesn't go back any further) gcc 3.4.6, so it's an\nancient heuristic. If we can rely on this, we could remove the handful\nof statements of the form \"if (unlikely(foo_ptr == NULL))\" and\ndocument it in c.h. It may be iffy to rely on an undocumented\nheuristic, but it also seems silly to have a use-sparingly policy if\nsome of our current uses have zero effect on the emitted binary (I\nhaven't checked yet).\n\n\n", "msg_date": "Mon, 12 Aug 2024 15:34:26 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux likely() unlikely() for PostgreSQL" }, { "msg_contents": "On Mon, 12 Aug 2024 at 20:34, John Naylor <[email protected]> wrote:\n> I just remembered an article I read a while back by a compiler\n> engineer who said that compilers have heuristics that treat NULL\n> pointers as \"unlikely\" since the most common reason to test that is so\n> we can actually dereference it and do something with it, and in that\n> case a NULL pointer is an exceptional condition. He also said it\n> wasn't documented so you can only see this by looking at the source\n> code. Instead of taking the time to verify that, I created some toy\n> examples and it seems to be true:\n>\n> https://godbolt.org/z/dv6M5ecY5\n\nInteresting.\n\nI agree with Andres' comment about the no increase in binary size\noverhead. The use of unlikely() should just be a swap in the \"call\"\noperands so the standard_Executor* function goes in the happy path.\n(These are all sibling calls, so in reality, most compilers should do\n\"jmp\" instead of \"call\")\n\nI also agree with Tom with his comment about the tests for these\nexecutor hooks not being hot. Each of them are executed once per\nquery, so an unlikely() isn't going to make any difference to the\nperformance of a query.\n\nTaking into account what your analysis uncovered, I think what might\nbe worth spending some time on would be looking for hot var == NULL\ntests where the common path is true and seeing if using likely() on\nany of those makes a meaningful impact on performance. Maybe something\nlike [1] could be used to inject a macro or function call in each if\n(<var> == NULL test to temporarily install some telemetry and look for\n__FILE__ / __LINE__ combinations that are almost always true. Grouping\nthose results by __FILE__, __LINE__ and sorting by count(*) desc might\nyield something interesting.\n\nDavid\n\n[1] https://coccinelle.gitlabpages.inria.fr/website/\n\n\n", "msg_date": "Tue, 13 Aug 2024 01:14:48 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux likely() unlikely() for PostgreSQL" } ]
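A reader's note on the thread above: the following is a minimal, self-contained C sketch of the pattern being debated. The likely()/unlikely() macros are written with the same __builtin_expect idiom PostgreSQL's c.h uses on GCC/Clang; everything else (demo_hook, demo_run, standard_demo_run, report_problem) is a hypothetical stand-in, not PostgreSQL code. The point it illustrates is the distinction raised above: a branch hint only biases static prediction and block layout, while __attribute__((cold)) is what actually lets the compiler move a path out of the hot text section.

```c
/*
 * Toy example, not PostgreSQL source.  demo_hook and the demo_* functions
 * are hypothetical names; only the hinting pattern is the point.
 */
#include <stddef.h>
#include <stdio.h>

/* Same __builtin_expect idiom that c.h uses when the builtin is available */
#define likely(x)	__builtin_expect((x) != 0, 1)
#define unlikely(x) __builtin_expect((x) != 0, 0)

typedef void (*demo_hook_type) (int arg);
static demo_hook_type demo_hook = NULL;	/* unset in the common case */

/* cold: compiler may place this body in a separate, rarely-used section */
__attribute__((cold)) static void
report_problem(const char *msg)
{
	fprintf(stderr, "problem: %s\n", msg);
}

static void
standard_demo_run(int arg)
{
	/* hint only: both paths stay inline, just layout/prediction changes */
	if (unlikely(arg < 0))
		report_problem("negative argument");
	else
		printf("running with %d\n", arg);
}

void
demo_run(int arg)
{
	/*
	 * The ExecutorRun-style hook test.  Whether to hint it is exactly what
	 * the thread debates: the hint does not out-of-line the hook path, it
	 * only biases prediction toward the no-hook case.
	 */
	if (unlikely(demo_hook != NULL))
		demo_hook(arg);
	else
		standard_demo_run(arg);
}

int
main(void)
{
	demo_run(7);
	demo_run(-1);
	return 0;
}
```

Comparing the -O2 assembly of this file with and without the hints (and with the cold attribute removed) is the same kind of experiment the godbolt links above are doing, and makes it easy to see which change merely reorders blocks and which one actually relocates code.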
[ { "msg_contents": "Hello,\n\nThis patches changes the HeapKeyTest macro to add handling for SK_SEARCHNULL\nand SK_SEARCHNOTNULL. While currently no core codes uses these ScanKey flags\nit would be useful for extensions if it was supported so extensions\ndont have to implement\nhandling for those by themself.\n\n-- \nRegards, Sven Klemm", "msg_date": "Sun, 30 Jun 2024 20:45:24 +0200", "msg_from": "Sven Klemm <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH] Handle SK_SEARCHNULL and SK_SEARCHNOTNULL in HeapKeyTest" }, { "msg_contents": "Hi,\n\n> This patches changes the HeapKeyTest macro to add handling for SK_SEARCHNULL\n> and SK_SEARCHNOTNULL. While currently no core codes uses these ScanKey flags\n> it would be useful for extensions if it was supported so extensions\n> dont have to implement\n> handling for those by themself.\n\nAs I recall, previously it was argued that changes like this should\nhave some use within the core [1].\n\nCan you think of any such use?\n\n[1]: https://commitfest.postgresql.org/42/4180/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 1 Jul 2024 16:48:30 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Handle SK_SEARCHNULL and SK_SEARCHNOTNULL in HeapKeyTest" }, { "msg_contents": "On Mon, 1 Jul 2024 at 15:48, Aleksander Alekseev\n<[email protected]> wrote:\n> As I recall, previously it was argued that changes like this should\n> have some use within the core [1].\n\nI don't see that argument anywhere in the thread honestly. I did see\nheiki asking why it would be useful for extensions, but that was\nanswered there.\n\n\n", "msg_date": "Tue, 2 Jul 2024 09:26:03 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Handle SK_SEARCHNULL and SK_SEARCHNOTNULL in HeapKeyTest" }, { "msg_contents": "Hi,\n\n> > As I recall, previously it was argued that changes like this should\n> > have some use within the core [1].\n>\n> I don't see that argument anywhere in the thread honestly. I did see\n> heiki asking why it would be useful for extensions, but that was\n> answered there.\n\nThe referred patch was rejected at first because it didn't modify\nnodeSeqScan.c to make use of the change within the core.\n\nI'm not saying this is necessarily applicable to this particular patch\nor that this is a general rule though.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 2 Jul 2024 11:15:22 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Handle SK_SEARCHNULL and SK_SEARCHNOTNULL in HeapKeyTest" }, { "msg_contents": "On Tue, 2 Jul 2024 at 10:15, Aleksander Alekseev\n<[email protected]> wrote:\n> The referred patch was rejected at first because it didn't modify\n> nodeSeqScan.c to make use of the change within the core.\n\nI guess we interpret Heikis email differently. I read it as: \"If this\nimproves performance, then let's also start using it in core. If not,\nwhy do extensions need it?\" And I think you quite clearly explained\nthat even if perf is not better, then the usability for extensions\nthat don't want to use SPI is better.\n\nI don't think Heiki meant his response as not using it in core being a\nblocker for the patch. 
But maybe my interpretation of his response is\nincorrect.\n\n\n", "msg_date": "Tue, 2 Jul 2024 11:22:44 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Handle SK_SEARCHNULL and SK_SEARCHNOTNULL in HeapKeyTest" } ]
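To make the extension use case in this thread concrete, here is a short hedged sketch of the kind of ScanKey an extension would want a SK_SEARCHNULL-aware HeapKeyTest() to honor. The helper name is made up for illustration; the flag combination and the ScanKeyEntryInitialize() call follow the same shape the executor already uses when it builds IS NULL / IS NOT NULL keys for index scans.

```c
/*
 * Hypothetical extension fragment: my_build_null_test_key() is not a real
 * PostgreSQL function.  It builds a key meaning "attno IS [NOT] NULL";
 * with the proposed HeapKeyTest() change such a key could be handed to a
 * heap scan instead of re-implementing the NULL test by hand.
 */
#include "postgres.h"

#include "access/skey.h"
#include "access/stratnum.h"

static void
my_build_null_test_key(ScanKey key, AttrNumber attno, bool want_null)
{
	ScanKeyEntryInitialize(key,
						   want_null ? (SK_ISNULL | SK_SEARCHNULL)
						   : (SK_ISNULL | SK_SEARCHNOTNULL),
						   attno,	/* attribute number to test */
						   InvalidStrategy, /* no operator strategy */
						   InvalidOid,	/* no strategy subtype */
						   InvalidOid,	/* no collation */
						   InvalidOid,	/* no comparison proc */
						   (Datum) 0);	/* no comparison value */
}
```

The flag pairing (SK_ISNULL together with SK_SEARCHNULL or SK_SEARCHNOTNULL, with no strategy, procedure or argument) mirrors what the executor produces for NullTest index quals, which is presumably the convention a SK_SEARCHNULL-capable HeapKeyTest() would be expected to stay compatible with.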
[ { "msg_contents": "Hi all,\n\nAs some of you may have noticed, a new stable branch named\nREL_17_STABLE has just been created, meaning that we are on time to\nbegin the development cycle for v18.\n\nCommit fest 2024-07 (https://commitfest.postgresql.org/48/) is still\nopen for patches, and will be switched as active in the next 12 hours\nor so, mapping with the beginning of 2024-07-01 AoE:\nhttps://www.timeanddate.com/time/zones/aoe\n\nIf you have any patches you would like to register, please make sure\nto add them to the app.\n\nI may have missed some messages, but it appears that we don't have a\nCFM for the moment? I am planning to focus on my usual review work\nand issue triaging without being the top-level CFM, FWIW.\n\nAnd happy v18 hacking. Thanks,\n--\nMichael", "msg_date": "Mon, 1 Jul 2024 08:35:34 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Creation of REL_17_STABLE and upcoming Commit Fest 2024-07" } ]
[ { "msg_contents": "Hi,\n\nIn HEAD, I encountered the following assertion failure when I enabled summarize_wal\nand ran pg_createsubscriber.\n\n2024-07-01 14:42:15.697 JST [19195] LOG: database system is ready to accept connections\nTRAP: failed Assert(\"switchpoint >= state->EndRecPtr\"), File: \"walsummarizer.c\", Line: 1382, PID: 19200\n0 postgres 0x0000000105c46c5d ExceptionalCondition + 189\n1 postgres 0x000000010590b1e4 summarizer_read_local_xlog_page + 340\n2 postgres 0x00000001054e401e ReadPageInternal + 542\n3 postgres 0x00000001054e24c0 XLogDecodeNextRecord + 464\n4 postgres 0x00000001054e2283 XLogReadAhead + 67\n5 postgres 0x00000001054e2185 XLogReadRecord + 53\n6 postgres 0x000000010590a3ab SummarizeWAL + 1115\n7 postgres 0x000000010590963a WalSummarizerMain + 1242\n8 postgres 0x00000001058fd10a postmaster_child_launch + 234\n9 postgres 0x000000010590133d StartChildProcess + 29\n10 postgres 0x0000000105904582 MaybeStartWalSummarizer + 82\n11 postgres 0x0000000105901af1 ServerLoop + 1153\n12 postgres 0x00000001059007ca PostmasterMain + 6554\n13 postgres 0x00000001057a3782 main + 818\n14 dyld 0x00007ff80e5e2366 start + 1942\n2024-07-01 14:42:15.912 JST [19195] LOG: WAL summarizer process (PID 19200) was terminated by signal 6: Abort trap: 6\n2024-07-01 14:42:15.913 JST [19195] LOG: terminating any other active server processes\n\nHere are the steps to reproduce this issue.\n\n--------------------------------\ninitdb -D pub\ncat <<EOF >> pub/postgresql.conf\nwal_level = 'logical'\nsummarize_wal = on\nEOF\npg_ctl -D pub start\npgbench -i\npgbench -T 600 &\npg_basebackup -D sub -c fast -R\npg_createsubscriber -d postgres -D sub -p 5433 -P \"port=5432\"\n--------------------------------\n\nIs this the known issue?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 1 Jul 2024 14:54:56 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Assertion failure with summarize_wal enabled during\n pg_createsubscriber" }, { "msg_contents": "On Mon, Jul 01, 2024 at 02:54:56PM +0900, Fujii Masao wrote:\n> In HEAD, I encountered the following assertion failure when I enabled summarize_wal\n> and ran pg_createsubscriber.\n> \n> Is this the known issue?\n\nNope. So, Open Item, here we go.\n--\nMichael", "msg_date": "Mon, 1 Jul 2024 15:07:46 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with summarize_wal enabled during\n pg_createsubscriber" }, { "msg_contents": "On Mon, Jul 1, 2024 at 2:08 AM Michael Paquier <[email protected]> wrote:\n> Nope. So, Open Item, here we go.\n\nSome initial investigation:\n\nIn this test case, pg_subscriber, during the \"starting the subscriber\"\nsection of its work, starts up the database in the \"sub\" directory as\na standby. It enters standby mode, begins redo, and is then promoted,\nselecting timeline 2. The WAL summarizer is supposed to end\nsummarization at the point at which timeline 1 ended and then resume\nsummarizing from the beginning of timeline 2. But instead, it fails an\nassertion:\n\n Assert(switchpoint >= state->EndRecPtr);\n\nThis assertion is trying to verify that, when a new timeline is\nspawned, we don't read past the switchpoint on the original timeline.\nHere, we have apparently done that. In one test, I got switchpoint =\n0/51000510, state->EndRecPtr = 0/51000600. 
According to pg_waldump, on\ntimeline 1, we have this record at that LSN:\n\nrmgr: Heap len (rec/tot): 54/ 54, tx: 2313637, lsn:\n0/51000510, prev 0/510004D0, desc: DELETE xmax: 2313637, off: 3,\ninfobits: [KEYS_UPDATED], flags: 0x00, blkref #0: rel 1663/5/6104 blk\n0\n\nAnd on timeline 2, we have this at that LSN:\n\nrmgr: XLOG len (rec/tot): 114/ 114, tx: 0, lsn:\n0/51000510, prev 0/510004D0, desc: CHECKPOINT_SHUTDOWN redo\n0/51000510; tli 2; prev tli 1; fpw true; xid 0:2313638; oid 24576;\nmulti 1; offset 0; oldest xid 730 in DB 1; oldest multi 1 in DB 1;\noldest/newest commit timestamp xid: 0/0; oldest running xid 0;\nshutdown\n\nIt appears that pg_subscriber creates a recovery.conf that includes:\n\nrecovery_target_timeline = 'latest'\nrecovery_target_inclusive = true\nrecovery_target_lsn = '%X/%X'\n\n...where %X/%X represents a valid LSN.\n\nI think the problem here is that the WAL summarizer believes that when\na new timeline appears, it should pick up from where the old timeline\nended. And here, that doesn't happen: the new timeline branches off\nbefore the end of the old timeline, because of the recovery target.\n\nI'm not yet sure what should be done about this. The obvious answer is\n\"remove the assertion,\" and maybe that is all we need to do. However,\nI'm not quite sure what the actual behavior will be if we just do\nthat, so I think more investigation is needed. I'll keep looking at\nthis, although given the US holiday I may not have results until next\nweek.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Jul 2024 13:07:11 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with summarize_wal enabled during\n pg_createsubscriber" }, { "msg_contents": "On Wed, Jul 3, 2024 at 1:07 PM Robert Haas <[email protected]> wrote:\n> I think the problem here is that the WAL summarizer believes that when\n> a new timeline appears, it should pick up from where the old timeline\n> ended. And here, that doesn't happen: the new timeline branches off\n> before the end of the old timeline, because of the recovery target.\n>\n> I'm not yet sure what should be done about this. The obvious answer is\n> \"remove the assertion,\" and maybe that is all we need to do. However,\n> I'm not quite sure what the actual behavior will be if we just do\n> that, so I think more investigation is needed. I'll keep looking at\n> this, although given the US holiday I may not have results until next\n> week.\n\nHere is a draft patch that attempts to fix this problem. I'm not\ncertain that it's completely correct, but it does seem to fix the\nreported issue.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 10 Jul 2024 17:01:34 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with summarize_wal enabled during\n pg_createsubscriber" }, { "msg_contents": "On Wed, Jul 10, 2024 at 5:01 PM Robert Haas <[email protected]> wrote:\n> Here is a draft patch that attempts to fix this problem. I'm not\n> certain that it's completely correct, but it does seem to fix the\n> reported issue.\n\nI tried to write a test case for this and discovered that there are\nactually two separate problems in this area. First, as shown by the\nassertion failure reported by Fujii Masao, the WAL summarizer thinks\nthat it should never need to back up to an earlier LSN, and the test\ncase he provided shows that this is incorrect. 
Second, the WAL\nsummarizer can end up in a bad state after the startup process renames\nthe last WAL file on the old timeline to a .partial file. If this\nhappens before the file has been summarized, then the WAL summarizer\ncan't access it any more and errors out. Promotion also removes WAL\nfiles from the old timeline completely, but only if they're after the\nswitch point, and summarization doesn't care about those anyway. So\nthe partial file seems to be the only problem case.\n\nIn theory, the problem with the partial file could be handled in a\nvariety of ways: we could teach summarization to read the partial\nfile, perhaps, or postpone adding the .partial suffix until after\nsummarization has happened. But in practice, given where we are in the\nrelease cycle, the only reasonable approach that I can see is to have\npromotion wait for summarization to catch up, so that's what I did in\n0003.\n\n0002 is the same as what I posted previously as 0001, and teaches the\nsummarizer about backing up when we switch timelines. 0001 adds a\nmissing call to ConditionVariableCancelSleep; AFAIK, that omission has\nno important consequences, but still seems like it should be fixed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 18 Jul 2024 12:45:40 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with summarize_wal enabled during\n pg_createsubscriber" }, { "msg_contents": "I went ahead and committed 0001, which is pretty trivial. Hopefully,\nat least, it's trivial enough that I didn't mess it up.\n\nHere's a rebase of 0002 and 0003, now 0001 and 0002, with one minor\nfix to hopefully avoid annoying CI.\n\nI'm still hoping for some review/feedback/testing of these before I\ncommit them, but I also don't want to wait too long.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 22 Jul 2024 11:02:51 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with summarize_wal enabled during\n pg_createsubscriber" }, { "msg_contents": "On Mon, Jul 22, 2024 at 11:02 AM Robert Haas <[email protected]> wrote:\n> I'm still hoping for some review/feedback/testing of these before I\n> commit them, but I also don't want to wait too long.\n\nHearing nothing, pushed 0001.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Jul 2024 10:09:22 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with summarize_wal enabled during\n pg_createsubscriber" }, { "msg_contents": "On Fri, Jul 26, 2024 at 10:09:22AM -0400, Robert Haas wrote:\n> On Mon, Jul 22, 2024 at 11:02 AM Robert Haas <[email protected]> wrote:\n>> I'm still hoping for some review/feedback/testing of these before I\n>> commit them, but I also don't want to wait too long.\n> \n> Hearing nothing, pushed 0001.\n\nnitpick: I think this one needs a pgindent.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 26 Jul 2024 10:51:42 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with summarize_wal enabled during\n pg_createsubscriber" }, { "msg_contents": "On Fri, Jul 26, 2024 at 11:51 AM Nathan Bossart\n<[email protected]> wrote:\n> nitpick: I think this one needs a pgindent.\n\nUgh, sorry. 
I thought of that while I was working on the commit but\nthen I messed up some other aspect of it and this went out of my head.\n\nFixed now, I hope.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Jul 2024 12:02:28 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with summarize_wal enabled during\n pg_createsubscriber" }, { "msg_contents": "On Fri, Jul 26, 2024 at 7:02 PM Robert Haas <[email protected]> wrote:\n> On Fri, Jul 26, 2024 at 11:51 AM Nathan Bossart\n> <[email protected]> wrote:\n> > nitpick: I think this one needs a pgindent.\n>\n> Ugh, sorry. I thought of that while I was working on the commit but\n> then I messed up some other aspect of it and this went out of my head.\n>\n> Fixed now, I hope.\n\n0002 could also use pg_indent and pgperltidy. And I have couple other\nnotes regarding 0002.\n\n> In the process of fixing these bugs, I realized that the logic to wait\n> for WAL summarization to catch up was spread out in a way that made\n> it difficult to reuse properly, so this code refactors things to make\n> it easier.\n\nIt would be nice to split refactoring out of material logic changed.\nThis way it would be easier to review and would look cleaner in the\ngit history.\n\n> To make this fix work, also teach the WAL summarizer that after a\n> promotion has occurred, no more WAL can appear on the previous\n> timeline: previously, the WAL summarizer wouldn't switch to the new\n> timeline until we actually started writing WAL there, but that meant\n> that when the startup process was waiting for the WAL summarizer, it\n> was waiting for an action that the summarizer wasn't yet prepared to\n> take.\n\nI think this is a pretty long sentence, and I'm not sure I can\nunderstand it correctly. Does startup process wait for the WAL\nsummarizer without this patch? If not, it's not clear for me that\nword \"previously\" doesn't distribute to this part of sentence.\nBreaking this into multiple sentences could improve the clarity for\nme.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Fri, 26 Jul 2024 23:10:29 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with summarize_wal enabled during\n pg_createsubscriber" }, { "msg_contents": "On Fri, Jul 26, 2024 at 4:10 PM Alexander Korotkov <[email protected]> wrote:\n> 0002 could also use pg_indent and pgperltidy. And I have couple other\n> notes regarding 0002.\n\nAs Tom said elsewhere, we don't have a practice of requiring perltidy\nfor every commit, and I normally don't bother. The tree is\npgindent-clean currently so I believe I got that part right.\n\n> > In the process of fixing these bugs, I realized that the logic to wait\n> > for WAL summarization to catch up was spread out in a way that made\n> > it difficult to reuse properly, so this code refactors things to make\n> > it easier.\n>\n> It would be nice to split refactoring out of material logic changed.\n> This way it would be easier to review and would look cleaner in the\n> git history.\n\nI support that idea in general but felt it was overkill in this case:\nit's new code, and there was only one existing caller of the function\nthat got refactored, and I'm not a huge fan of cluttering the git\nhistory with a bunch of tiny little refactoring commits to fix a\nsingle bug. 
I might have changed it if I'd seen this note before\ncommitting, though.\n\n> > To make this fix work, also teach the WAL summarizer that after a\n> > promotion has occurred, no more WAL can appear on the previous\n> > timeline: previously, the WAL summarizer wouldn't switch to the new\n> > timeline until we actually started writing WAL there, but that meant\n> > that when the startup process was waiting for the WAL summarizer, it\n> > was waiting for an action that the summarizer wasn't yet prepared to\n> > take.\n>\n> I think this is a pretty long sentence, and I'm not sure I can\n> understand it correctly. Does startup process wait for the WAL\n> summarizer without this patch? If not, it's not clear for me that\n> word \"previously\" doesn't distribute to this part of sentence.\n> Breaking this into multiple sentences could improve the clarity for\n> me.\n\nYes, I think that phrasing is muddled. It's too late to amend the\ncommit message, but for posterity: I initially thought that I could\nfix this bug by just teaching the startup process to wait for WAL\nsummarization before performing the .partial renaming, but that was\nnot enough by itself. The problem is that the WAL summarizer would\nread all the records that were present in the final WAL file on the\nold timeline, but it wouldn't actually write out a summary file,\nbecause that only happens when it reaches XLOG_CHECKPOINT_REDO or a\ntimeline switch point. Since no WAL had appeared on the new timeline\nyet, it didn't view the end of the old timeline as a switch point, so\nit just sat there waiting, without ever writing out a file; and the\nstartup process sat there waiting for it. So the second part of the\nfix is to make the WAL summarizer realize that once the startup\nprocess has chosen a new timeline ID, no more WAL is going to appear\non the old timeline, and thus it can (and should) write out the final\nsummary file for the old timeline and prepare to read WAL from the new\ntimeline.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Jul 2024 12:20:45 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with summarize_wal enabled during\n pg_createsubscriber" }, { "msg_contents": "On Mon, Jul 29, 2024 at 7:20 PM Robert Haas <[email protected]> wrote:\n> On Fri, Jul 26, 2024 at 4:10 PM Alexander Korotkov <[email protected]> wrote:\n> > 0002 could also use pg_indent and pgperltidy. And I have couple other\n> > notes regarding 0002.\n>\n> As Tom said elsewhere, we don't have a practice of requiring perltidy\n> for every commit, and I normally don't bother. The tree is\n> pgindent-clean currently so I believe I got that part right.\n\nGot it, thanks.\n\n> > > In the process of fixing these bugs, I realized that the logic to wait\n> > > for WAL summarization to catch up was spread out in a way that made\n> > > it difficult to reuse properly, so this code refactors things to make\n> > > it easier.\n> >\n> > It would be nice to split refactoring out of material logic changed.\n> > This way it would be easier to review and would look cleaner in the\n> > git history.\n>\n> I support that idea in general but felt it was overkill in this case:\n> it's new code, and there was only one existing caller of the function\n> that got refactored, and I'm not a huge fan of cluttering the git\n> history with a bunch of tiny little refactoring commits to fix a\n> single bug. 
I might have changed it if I'd seen this note before\n> committing, though.\n\nI understand your point. I'm also not huge fan of a flood of small\ncommits. Nevertheless, I find splitting refactoring from other\nchanges generally useful. That could be a single commit of many small\nrefactorings, not many small commits. The point for me is easier\nreview: you can expect refactoring commit to contain \"isomorphic\"\nchanges, while other commits implementing material logic changes. But\nthat might be a committer preference though.\n\n> > > To make this fix work, also teach the WAL summarizer that after a\n> > > promotion has occurred, no more WAL can appear on the previous\n> > > timeline: previously, the WAL summarizer wouldn't switch to the new\n> > > timeline until we actually started writing WAL there, but that meant\n> > > that when the startup process was waiting for the WAL summarizer, it\n> > > was waiting for an action that the summarizer wasn't yet prepared to\n> > > take.\n> >\n> > I think this is a pretty long sentence, and I'm not sure I can\n> > understand it correctly. Does startup process wait for the WAL\n> > summarizer without this patch? If not, it's not clear for me that\n> > word \"previously\" doesn't distribute to this part of sentence.\n> > Breaking this into multiple sentences could improve the clarity for\n> > me.\n>\n> Yes, I think that phrasing is muddled. It's too late to amend the\n> commit message, but for posterity: I initially thought that I could\n> fix this bug by just teaching the startup process to wait for WAL\n> summarization before performing the .partial renaming, but that was\n> not enough by itself. The problem is that the WAL summarizer would\n> read all the records that were present in the final WAL file on the\n> old timeline, but it wouldn't actually write out a summary file,\n> because that only happens when it reaches XLOG_CHECKPOINT_REDO or a\n> timeline switch point. Since no WAL had appeared on the new timeline\n> yet, it didn't view the end of the old timeline as a switch point, so\n> it just sat there waiting, without ever writing out a file; and the\n> startup process sat there waiting for it. So the second part of the\n> fix is to make the WAL summarizer realize that once the startup\n> process has chosen a new timeline ID, no more WAL is going to appear\n> on the old timeline, and thus it can (and should) write out the final\n> summary file for the old timeline and prepare to read WAL from the new\n> timeline.\n\nGreat, thank you for the explanation. Now that's clear.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Wed, 31 Jul 2024 16:49:54 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with summarize_wal enabled during\n pg_createsubscriber" }, { "msg_contents": "On Wed, Jul 31, 2024 at 04:49:54PM +0300, Alexander Korotkov wrote:\n> On Mon, Jul 29, 2024 at 7:20 PM Robert Haas <[email protected]> wrote:\n>> I support that idea in general but felt it was overkill in this case:\n>> it's new code, and there was only one existing caller of the function\n>> that got refactored, and I'm not a huge fan of cluttering the git\n>> history with a bunch of tiny little refactoring commits to fix a\n>> single bug. I might have changed it if I'd seen this note before\n>> committing, though.\n> \n> I understand your point. I'm also not huge fan of a flood of small\n> commits. Nevertheless, I find splitting refactoring from other\n> changes generally useful. 
That could be a single commit of many small\n> refactorings, not many small commits. The point for me is easier\n> review: you can expect refactoring commit to contain \"isomorphic\"\n> changes, while other commits implementing material logic changes.\n\nFor review, it also tends to matter a lot to me, especially if the\nsame areas of code are changed across multiple commits. That's more\nannoying for authors as the splits are annoying to maintain. For a\nsingle caller introduced, what Robert has done is fine IMO.\n\n> But that might be a committer preference though.\n\nI tend to prefer refactorings if it comes to a cleaner git history,\nstill that's always case-by-case, and all of us have our own habits.\n--\nMichael", "msg_date": "Thu, 1 Aug 2024 15:57:17 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with summarize_wal enabled during\n pg_createsubscriber" } ]
[ { "msg_contents": "Hello hackers,\n\nAttached patch introduces an optimization of mul_var() in numeric.c, targeting cases where the multiplicands consist of only one or two base-NBASE digits. Such small multiplicands can fit into an int64 and thus be computed directly, resulting in a significant performance improvement, between 26% - 34% benchmarked on Intel Core i9-14900K.\n\nThis optimization is similar to commit d1b307eef2, that also targeted one and two base-NBASE digit operands, but optimized div_var().\n\nRegards,\nJoel", "msg_date": "Mon, 01 Jul 2024 08:04:06 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Mon, Jul 1, 2024, at 08:04, Joel Jacobson wrote:\n> * 0001-optimize-numeric-mul_var-small-factors.patch\n\nNew version to silence maybe-uninitialized error reported by cfbot.\n\n/Joel", "msg_date": "Mon, 01 Jul 2024 10:07:19 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "\"Joel Jacobson\" <[email protected]> writes:\n\n> Hello hackers,\n>\n> Attached patch introduces an optimization of mul_var() in numeric.c,\n> targeting cases where the multiplicands consist of only one or two\n> base-NBASE digits. Such small multiplicands can fit into an int64 and\n> thus be computed directly, resulting in a significant performance\n> improvement, between 26% - 34% benchmarked on Intel Core i9-14900K.\n>\n> This optimization is similar to commit d1b307eef2, that also targeted\n> one and two base-NBASE digit operands, but optimized div_var().\n\ndiv_var() also has an optimisation for 3- and 4-digit operands under\nHAVE_INT128 (added in commit 0aa38db56bf), would that make sense in\nmul_var() too?\n\n> Regards,\n> Joel\n\n- ilmari\n\n\n", "msg_date": "Mon, 01 Jul 2024 13:25:08 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE\n digit multiplicands." }, { "msg_contents": "On Mon, Jul 1, 2024, at 14:25, Dagfinn Ilmari Mannsåker wrote:\n> div_var() also has an optimisation for 3- and 4-digit operands under\n> HAVE_INT128 (added in commit 0aa38db56bf), would that make sense in\n> mul_var() too?\n\nI considered it, but it only gives a marginal speed-up on Intel Core i9-14900K,\nand is actually slower on Apple M3 Max.\nNot really sure why. 
Maybe the code I tried can be optimized further:\n\n```\n#ifdef HAVE_INT128\n\t/*\n\t * If var1 and var2 are up to four digits, their product will fit in\n\t * an int128 can be computed directly, which is significantly faster.\n\t */\n\tif (var2ndigits <= 4)\n\t{\n\t\tint128\t\tproduct = 0;\n\n\t\tswitch (var1ndigits)\n\t\t{\n\t\t\tcase 1:\n\t\t\t\tproduct = var1digits[0];\n\t\t\t\tbreak;\n\t\t\tcase 2:\n\t\t\t\tproduct = var1digits[0] * NBASE + var1digits[1];\n\t\t\t\tbreak;\n\t\t\tcase 3:\n\t\t\t\tproduct = ((int128) var1digits[0] * NBASE + var1digits[1])\n\t\t\t\t\t\t* NBASE + var1digits[2];\n\t\t\t\tbreak;\n\t\t\tcase 4:\n\t\t\t\tproduct = (((int128) var1digits[0] * NBASE + var1digits[1])\n\t\t\t\t\t\t* NBASE + var1digits[2]) * NBASE + var1digits[3];\n\t\t\t\tbreak;\n\t\t}\n\n\t\tswitch (var2ndigits)\n\t\t{\n\t\t\tcase 1:\n\t\t\t\tproduct *= var2digits[0];\n\t\t\t\tbreak;\n\t\t\tcase 2:\n\t\t\t\tproduct *= var2digits[0] * NBASE + var2digits[1];\n\t\t\t\tbreak;\n\t\t\tcase 3:\n\t\t\t\tproduct = ((int128) var2digits[0] * NBASE + var2digits[1])\n\t\t\t\t\t\t* NBASE + var2digits[2];\n\t\t\t\tbreak;\n\t\t\tcase 4:\n\t\t\t\tproduct = (((int128) var2digits[0] * NBASE + var2digits[1])\n\t\t\t\t\t\t* NBASE + var2digits[2]) * NBASE + var2digits[3];\n\t\t\t\tbreak;\n\t\t}\n\n\t\talloc_var(result, res_ndigits);\n\t\tres_digits = result->digits;\n\t\tfor (i = res_ndigits - 1; i >= 0; i--)\n\t\t{\n\t\t\tres_digits[i] = product % NBASE;\n\t\t\tproduct /= NBASE;\n\t\t}\n\t\tAssert(product == 0);\n\n\t\t/*\n\t\t * Finally, round the result to the requested precision.\n\t\t */\n\t\tresult->weight = res_weight;\n\t\tresult->sign = res_sign;\n\n\t\t/* Round to target rscale (and set result->dscale) */\n\t\tround_var(result, rscale);\n\n\t\t/* Strip leading and trailing zeroes */\n\t\tstrip_var(result);\n\n\t\treturn;\n\t}\n#endif\n```\n\nBenchmark 1, testing 2 ndigits * 2 ndigits:\n\nSELECT\n timeit.pretty_time(total_time_a / 1e6 / executions,3) AS execution_time_a,\n timeit.pretty_time(total_time_b / 1e6 / executions,3) AS execution_time_b,\n total_time_a::numeric/total_time_b AS performance_ratio\nFROM timeit.cmp(\n 'numeric_mul',\n 'numeric_mul_patched',\n input_values := ARRAY[\n '11112222',\n '33334444'\n ],\n min_time := 1000000,\n timeout := '10 s'\n);\n\nApple M3 Max:\n\n execution_time_a | execution_time_b | performance_ratio\n------------------+------------------+--------------------\n 32.2 ns | 20.5 ns | 1.5700112246809388\n(1 row)\n\nIntel Core i9-14900K:\n\n execution_time_a | execution_time_b | performance_ratio\n------------------+------------------+--------------------\n 30.2 ns | 21.4 ns | 1.4113042510107371\n(1 row)\n\nSo 57% and 41% faster.\n\nBenchmark 2, testing 4 ndigits * 4 ndigits:\n\nSELECT\n timeit.pretty_time(total_time_a / 1e6 / executions,3) AS execution_time_a,\n timeit.pretty_time(total_time_b / 1e6 / executions,3) AS execution_time_b,\n total_time_a::numeric/total_time_b AS performance_ratio\nFROM timeit.cmp(\n 'numeric_mul',\n 'numeric_mul_patched',\n input_values := ARRAY[\n '1111222233334444',\n '5555666677778888'\n ],\n min_time := 1000000,\n timeout := '10 s'\n);\n\nApple M3 Max:\n\n execution_time_a | execution_time_b | performance_ratio\n------------------+------------------+------------------------\n 41.9 ns | 51.3 ns | 0.81733655797170943614\n(1 row)\n\nIntel Core i9-14900K:\n\n execution_time_a | execution_time_b | performance_ratio\n------------------+------------------+--------------------\n 40 ns | 38 ns | 1.0515610914706320\n(1 row)\n\nSo 18% slower on Apple 
M3 Max and just 5% faster on Intel Core i9-14900K.\n\n/Joel\n\n\n", "msg_date": "Mon, 01 Jul 2024 15:11:43 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Mon, Jul 1, 2024, at 15:11, Joel Jacobson wrote:\n> Not really sure why. Maybe the code I tried can be optimized further:\n\nIf anyone want experiment with the int128 version,\nhere is a patch that adds a separate numeric_mul_patched() function,\nso it's easier to benchmark against the unmodified numeric_mul().\n\n/Joel", "msg_date": "Mon, 01 Jul 2024 15:14:58 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Mon, Jul 1, 2024, at 15:14, Joel Jacobson wrote:\n> * 0001-Optimize-mul_var-for-var2ndigits-4.patch\n\nFound a typo, fixed in new version.\n\nThe int128 version is still slower though,\nI wonder if there is something that can be done to speed it up further.\n\nBelow is a more realistic benchmark than just microbenchmarking mul_var(),\nnot testing the int128 version, but the code for up to 2 NBASE-digits:\n\n```\nCREATE TABLE bench_mul_var (num1 numeric, num2 numeric);\n\nINSERT INTO bench_mul_var (num1, num2)\nSELECT random(0::numeric,1e8::numeric), random(0::numeric,1e8::numeric) FROM generate_series(1,1e8);\n\nSELECT SUM(num1*num2) FROM bench_mul_var;\nTime: 8331.953 ms (00:08.332)\nTime: 7415.241 ms (00:07.415)\nTime: 7298.296 ms (00:07.298)\nTime: 7314.754 ms (00:07.315)\nTime: 7289.560 ms (00:07.290)\n\nSELECT SUM(numeric_mul_patched(num1,num2)) FROM bench_mul_var;\nTime: 6403.426 ms (00:06.403)\nTime: 6401.797 ms (00:06.402)\nTime: 6366.136 ms (00:06.366)\nTime: 6376.049 ms (00:06.376)\nTime: 6317.282 ms (00:06.317)\n``\n\nBenchmarked on a Intel Core i9-14900K machine.\n\n/Joel", "msg_date": "Mon, 01 Jul 2024 21:56:23 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." 
}, { "msg_contents": "On Mon, 1 Jul 2024 at 20:56, Joel Jacobson <[email protected]> wrote:\n>\n> Below is a more realistic benchmark\n>\n> CREATE TABLE bench_mul_var (num1 numeric, num2 numeric);\n>\n> INSERT INTO bench_mul_var (num1, num2)\n> SELECT random(0::numeric,1e8::numeric), random(0::numeric,1e8::numeric) FROM generate_series(1,1e8);\n>\n> SELECT SUM(num1*num2) FROM bench_mul_var;\n\nI had a play with this, and came up with a slightly different way of\ndoing it that works for var2 of any size, as long as var1 is just 1 or\n2 digits.\n\nRepeating your benchmark where both numbers have up to 2 NBASE-digits,\nthis new approach was slightly faster:\n\nSELECT SUM(num1*num2) FROM bench_mul_var; -- HEAD\nTime: 4762.990 ms (00:04.763)\nTime: 4332.166 ms (00:04.332)\nTime: 4276.211 ms (00:04.276)\nTime: 4247.321 ms (00:04.247)\nTime: 4166.738 ms (00:04.167)\n\nSELECT SUM(num1*num2) FROM bench_mul_var; -- v2 patch\nTime: 4398.812 ms (00:04.399)\nTime: 3672.668 ms (00:03.673)\nTime: 3650.227 ms (00:03.650)\nTime: 3611.420 ms (00:03.611)\nTime: 3534.218 ms (00:03.534)\n\nSELECT SUM(num1*num2) FROM bench_mul_var; -- this patch\nTime: 3350.596 ms (00:03.351)\nTime: 3336.224 ms (00:03.336)\nTime: 3335.599 ms (00:03.336)\nTime: 3336.990 ms (00:03.337)\nTime: 3351.453 ms (00:03.351)\n\n(This was on an older Intel Core i9-9900K, so I'm not sure why all the\ntimings are faster. What compiler settings are you using?)\n\nThe approach taken in this patch only uses 32-bit integers, so in\ntheory it could be extended to work for var1ndigits = 3, 4, or even\nmore, but the code would get increasingly complex, and I ran out of\nsteam at 2 digits. It might be worth trying though.\n\nRegards,\nDean", "msg_date": "Mon, 1 Jul 2024 23:19:36 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Tue, Jul 2, 2024, at 00:19, Dean Rasheed wrote:\n> I had a play with this, and came up with a slightly different way of\n> doing it that works for var2 of any size, as long as var1 is just 1 or\n> 2 digits.\n>\n> Repeating your benchmark where both numbers have up to 2 NBASE-digits,\n> this new approach was slightly faster:\n>\n...\n>\n> (This was on an older Intel Core i9-9900K, so I'm not sure why all the\n> timings are faster. What compiler settings are you using?)\n\nStrange. I just did `./configure` with a --prefix.\n\nCompiler settings on my Intel Core i9-14900K machine:\n\n$ pg_config | grep -E '^(CC|CFLAGS|CPPFLAGS|LDFLAGS)'\nCC = gcc\nCPPFLAGS = -D_GNU_SOURCE\nCFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wshadow=compatible-local -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2\nCFLAGS_SL = -fPIC\nLDFLAGS = -Wl,--as-needed -Wl,-rpath,'/home/joel/pg-dev/lib',--enable-new-dtags\nLDFLAGS_EX =\nLDFLAGS_SL =\n\n> The approach taken in this patch only uses 32-bit integers, so in\n> theory it could be extended to work for var1ndigits = 3, 4, or even\n> more, but the code would get increasingly complex, and I ran out of\n> steam at 2 digits. 
It might be worth trying though.\n>\n> Regards,\n> Dean\n>\n> Attachments:\n> * optimize-numeric-mul_var-small-var1-arbitrary-var2.patch.txt\n\nReally nice!\n\nI've benchmarked your patch on my three machines with great results.\nI added a setseed() step, to make the benchmarks reproducible,\nshouldn't matter much since it should statistically average out, but I thought why not.\n\nCREATE TABLE bench_mul_var (num1 numeric, num2 numeric);\nSELECT setseed(0.12345);\nINSERT INTO bench_mul_var (num1, num2)\nSELECT random(0::numeric,1e8::numeric), random(0::numeric,1e8::numeric) FROM generate_series(1,1e8);\n\\timing\n\n/*\n * Apple M3 Max\n */\n\nSELECT SUM(num1*num2) FROM bench_mul_var; -- HEAD\nTime: 3622.342 ms (00:03.622)\nTime: 3029.786 ms (00:03.030)\nTime: 3046.497 ms (00:03.046)\nTime: 3035.910 ms (00:03.036)\nTime: 3034.073 ms (00:03.034)\n\nSELECT SUM(num1*num2) FROM bench_mul_var; -- optimize-numeric-mul_var-small-var1-arbitrary-var2.patch.txt\nTime: 2484.685 ms (00:02.485)\nTime: 2478.341 ms (00:02.478)\nTime: 2494.397 ms (00:02.494)\nTime: 2470.987 ms (00:02.471)\nTime: 2490.215 ms (00:02.490)\n\n/*\n * Intel Core i9-14900K\n */\n\nSELECT SUM(num1*num2) FROM bench_mul_var; -- HEAD\nTime: 2555.569 ms (00:02.556)\nTime: 2523.145 ms (00:02.523)\nTime: 2518.671 ms (00:02.519)\nTime: 2514.501 ms (00:02.515)\nTime: 2516.919 ms (00:02.517)\n\nSELECT SUM(num1*num2) FROM bench_mul_var; -- optimize-numeric-mul_var-small-var1-arbitrary-var2.patch.txt\nTime: 2246.441 ms (00:02.246)\nTime: 2243.900 ms (00:02.244)\nTime: 2245.350 ms (00:02.245)\nTime: 2245.080 ms (00:02.245)\nTime: 2247.856 ms (00:02.248)\n\n/*\n * AMD Ryzen 9 7950X3D\n */\n\nSELECT SUM(num1*num2) FROM bench_mul_var; -- HEAD\nTime: 3037.497 ms (00:03.037)\nTime: 3010.037 ms (00:03.010)\nTime: 3000.956 ms (00:03.001)\nTime: 2989.424 ms (00:02.989)\nTime: 2984.911 ms (00:02.985)\n\nSELECT SUM(num1*num2) FROM bench_mul_var; -- optimize-numeric-mul_var-small-var1-arbitrary-var2.patch.txt\nTime: 2645.530 ms (00:02.646)\nTime: 2640.472 ms (00:02.640)\nTime: 2638.613 ms (00:02.639)\nTime: 2637.889 ms (00:02.638)\nTime: 2638.054 ms (00:02.638)\n\n/Joel\n\n\n", "msg_date": "Tue, 02 Jul 2024 09:49:49 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Tue, 2 Jul 2024 at 08:50, Joel Jacobson <[email protected]> wrote:\n>\n> On Tue, Jul 2, 2024, at 00:19, Dean Rasheed wrote:\n>\n> > Attachments:\n> > * optimize-numeric-mul_var-small-var1-arbitrary-var2.patch.txt\n>\n\nShortly after posting that, I realised that there was a small bug. This bit:\n\n case 2:\n newdig = (int) var1digits[1] * var2digits[res_ndigits - 4];\n\nisn't quite right in the case where rscale is less than the full\nresult. In that case, the least significant result digit has a\ncontribution from one more var2 digit, so it needs to be:\n\n newdig = (int) var1digits[1] * var2digits[res_ndigits - 4];\n if (res_ndigits - 3 < var2ndigits)\n newdig += (int) var1digits[0] * var2digits[res_ndigits - 3];\n\nThat doesn't noticeably affect the performance though. Update attached.\n\n> I've benchmarked your patch on my three machines with great results.\n> I added a setseed() step, to make the benchmarks reproducible,\n> shouldn't matter much since it should statistically average out, but I thought why not.\n\nNice. 
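To picture what the special-cased var1ndigits = 1 path being benchmarked here boils down to, a standalone sketch of the underlying single-pass schoolbook step is shown below. It is not the attached patch (which works on NumericVar and still has to deal with sign, weight, dscale and rounding), and the helper name is made up for illustration; it is just the multiply-with-carry loop such a fast path is built around:

```c
#define NBASE 10000

/*
 * Illustrative only: multiply an array of base-NBASE digits (index 0 is
 * the most significant digit) by a single base-NBASE digit, propagating
 * the carry upwards from the least significant end.  Each intermediate
 * product is at most (NBASE - 1)^2 + (NBASE - 1), which fits easily in
 * an int, so no wide accumulator array is needed.
 */
void
mul_digits_1(const short *var2digits, int var2ndigits,
			 short var1digit, short *resdigits)
{
	int		carry = 0;

	/* resdigits must have var2ndigits + 1 entries */
	for (int i = var2ndigits - 1; i >= 0; i--)
	{
		int		prod = (int) var1digit * var2digits[i] + carry;

		resdigits[i + 1] = (short) (prod % NBASE);
		carry = prod / NBASE;
	}
	resdigits[0] = (short) carry;
}
```

The two-digit case adds one more partial product per result digit but still stays within int range, which is why plain 32-bit arithmetic is enough for var1ndigits up to 2.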
The results on the Apple M3 Max look particularly impressive.\n\nI think it'd probably be worth trying to extend this to 3 or maybe 4\nvar1 digits, since that would cover a lot of \"everyday\" sized numeric\nvalues that a lot of people might be using. I don't think we should go\nbeyond that though.\n\nRegards,\nDean", "msg_date": "Tue, 2 Jul 2024 09:22:15 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Tue, Jul 2, 2024, at 10:22, Dean Rasheed wrote:\n> Shortly after posting that, I realised that there was a small bug. This bit:\n>\n> case 2:\n> newdig = (int) var1digits[1] * var2digits[res_ndigits - 4];\n>\n> isn't quite right in the case where rscale is less than the full\n> result. In that case, the least significant result digit has a\n> contribution from one more var2 digit, so it needs to be:\n>\n> newdig = (int) var1digits[1] * var2digits[res_ndigits - 4];\n> if (res_ndigits - 3 < var2ndigits)\n> newdig += (int) var1digits[0] * var2digits[res_ndigits - 3];\n>\n> That doesn't noticeably affect the performance though. Update attached.\n\nNice catch. Could we add a test somehow that would test mul_var()\nwith rscale less than the full result, that would catch bugs like this one\nand others?\n\nI created a new benchmark, that specifically tests different var1ndigits.\nI've only run it on Apple M3 Max yet. More to come.\n\n\\timing\nSELECT setseed(0.12345);\nCREATE TABLE bench_mul_var_var1ndigits_1 (var1 numeric, var2 numeric);\nINSERT INTO bench_mul_var_var1ndigits_1 (var1, var2)\nSELECT random(0::numeric,9999::numeric), random(10000000::numeric,1e32::numeric) FROM generate_series(1,1e8);\nCREATE TABLE bench_mul_var_var1ndigits_2 (var1 numeric, var2 numeric);\nINSERT INTO bench_mul_var_var1ndigits_2 (var1, var2)\nSELECT random(10000000::numeric,99999999::numeric), random(100000000000::numeric,1e32::numeric) FROM generate_series(1,1e8);\nCREATE TABLE bench_mul_var_var1ndigits_3 (var1 numeric, var2 numeric);\nINSERT INTO bench_mul_var_var1ndigits_3 (var1, var2)\nSELECT random(100000000000::numeric,999999999999::numeric), random(1000000000000000::numeric,1e32::numeric) FROM generate_series(1,1e8);\nCREATE TABLE bench_mul_var_var1ndigits_4 (var1 numeric, var2 numeric);\nINSERT INTO bench_mul_var_var1ndigits_4 (var1, var2)\nSELECT random(1000000000000000::numeric,9999999999999999::numeric), random(10000000000000000000::numeric,1e32::numeric) FROM generate_series(1,1e8);\n\n/*\n * Apple M3 Max\n */\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_1; -- HEAD\nTime: 2986.952 ms (00:02.987)\nTime: 2991.765 ms (00:02.992)\nTime: 2987.253 ms (00:02.987)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_1; -- v2-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 2874.841 ms (00:02.875)\nTime: 2883.070 ms (00:02.883)\nTime: 2899.973 ms (00:02.900)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_2; -- HEAD\nTime: 3459.556 ms (00:03.460)\nTime: 3304.983 ms (00:03.305)\nTime: 3299.728 ms (00:03.300)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_2; -- v2-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 3053.140 ms (00:03.053)\nTime: 3065.227 ms (00:03.065)\nTime: 3069.995 ms (00:03.070)\n\n/*\n * Just for completeness, also testing var1ndigits 3 and 4,\n * although no change is expected since not yet implemented.\n */\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3; -- HEAD\nTime: 
3809.005 ms (00:03.809)\nTime: 3438.260 ms (00:03.438)\nTime: 3453.920 ms (00:03.454)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3; -- v2-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 3437.592 ms (00:03.438)\nTime: 3457.586 ms (00:03.458)\nTime: 3474.344 ms (00:03.474)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- HEAD\nTime: 4133.193 ms (00:04.133)\nTime: 3554.477 ms (00:03.554)\nTime: 3560.855 ms (00:03.561)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- v2-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 3508.540 ms (00:03.509)\nTime: 3566.721 ms (00:03.567)\nTime: 3524.083 ms (00:03.524)\n\n> I think it'd probably be worth trying to extend this to 3 or maybe 4\n> var1 digits, since that would cover a lot of \"everyday\" sized numeric\n> values that a lot of people might be using. I don't think we should go\n> beyond that though.\n\nI think so too. I'm working on var1ndigits=3 now.\n\n/Joel\n\n\n", "msg_date": "Tue, 02 Jul 2024 11:05:03 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Tue, Jul 2, 2024, at 11:05, Joel Jacobson wrote:\n> On Tue, Jul 2, 2024, at 10:22, Dean Rasheed wrote:\n>> Shortly after posting that, I realised that there was a small bug. This bit:\n>>\n>> case 2:\n>> newdig = (int) var1digits[1] * var2digits[res_ndigits - 4];\n>>\n>> isn't quite right in the case where rscale is less than the full\n>> result. In that case, the least significant result digit has a\n>> contribution from one more var2 digit, so it needs to be:\n>>\n>> newdig = (int) var1digits[1] * var2digits[res_ndigits - 4];\n>> if (res_ndigits - 3 < var2ndigits)\n>> newdig += (int) var1digits[0] * var2digits[res_ndigits - 3];\n>>\n\nJust to be able to verify mul_var() is working as expected when\nrscale is less than the full result, I've added a numeric_mul_patched()\nfunction that takes a third rscale_adjustment parameter:\n\nI've tried to get a different result with and without the fix,\nbut I'm failing to hit the bug.\n\nCan you think of an example that should trigger the bug?\n\nExample usage:\n\nSELECT numeric_mul_patched(9999.999,2,0);\nvar1ndigits=1 var2ndigits=2 rscale=3\n numeric_mul_patched\n---------------------\n 19999.998\n(1 row)\n\nSELECT numeric_mul_patched(9999.999,2,1);\nvar1ndigits=1 var2ndigits=2 rscale=4\n numeric_mul_patched\n---------------------\n 19999.9980\n(1 row)\n\nSELECT numeric_mul_patched(9999.999,2,-1);\nvar1ndigits=1 var2ndigits=2 rscale=2\n numeric_mul_patched\n---------------------\n 20000.00\n(1 row)\n\nSELECT numeric_mul_patched(9999.999,2,-2);\nvar1ndigits=1 var2ndigits=2 rscale=1\n numeric_mul_patched\n---------------------\n 20000.0\n(1 row)\n\n/Joel\n\n\n", "msg_date": "Tue, 02 Jul 2024 12:23:36 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." 
}, { "msg_contents": "On Tue, 2 Jul 2024 at 11:23, Joel Jacobson <[email protected]> wrote:\n>\n> Just to be able to verify mul_var() is working as expected when\n> rscale is less than the full result, I've added a numeric_mul_patched()\n> function that takes a third rscale_adjustment parameter:\n\nYeah, we could expose such a function, and maybe it would be generally\nuseful, not just for testing, but that's really a separate proposal.\nIn general, mul_var() with reduced rscale doesn't guarantee correctly\nrounded results though, which might make it less generally acceptable.\n\nFor this patch though, the aim is just to ensure the results are the\nsame as before.\n\n> I've tried to get a different result with and without the fix,\n> but I'm failing to hit the bug.\n>\n> Can you think of an example that should trigger the bug?\n\n9999.0001 * 5000.9999_9999 with rscale = 0 triggers it (returned\n50004999 instead of 50005000).\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 2 Jul 2024 12:44:18 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Tue, Jul 2, 2024, at 13:44, Dean Rasheed wrote:\n>> Can you think of an example that should trigger the bug?\n>\n> 9999.0001 * 5000.9999_9999 with rscale = 0 triggers it (returned\n> 50004999 instead of 50005000).\n\nThanks, helpful.\n\nAttached patch adds the var1ndigits=3 case.\n\nBenchmark:\n\n/*\n * Apple M3 Max\n */\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3; -- HEAD\nTime: 3809.005 ms (00:03.809)\nTime: 3438.260 ms (00:03.438)\nTime: 3453.920 ms (00:03.454)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3; -- v3-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 3230.017 ms (00:03.230)\nTime: 3244.094 ms (00:03.244)\nTime: 3246.370 ms (00:03.246)\n\n/*\n * Intel Core i9-14900K\n */\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3; -- HEAD\nTime: 4082.759 ms (00:04.083)\nTime: 4077.372 ms (00:04.077)\nTime: 4080.830 ms (00:04.081)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3; -- v3-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 3989.021 ms (00:03.989)\nTime: 3978.263 ms (00:03.978)\nTime: 3955.331 ms (00:03.955)\n\n/*\n * AMD Ryzen 9 7950X3D\n */\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3; -- HEAD\nTime: 3791.271 ms (00:03.791)\nTime: 3760.138 ms (00:03.760)\nTime: 3758.773 ms (00:03.759)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3; -- v3-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 3400.824 ms (00:03.401)\nTime: 3373.714 ms (00:03.374)\nTime: 3345.505 ms (00:03.346)\n\nRegards,\nJoel", "msg_date": "Tue, 02 Jul 2024 18:20:35 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." 
}, { "msg_contents": "On Tue, Jul 2, 2024, at 18:20, Joel Jacobson wrote:\n> * v3-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\n\nHmm, v3 contains a bug which I haven't been able to solve yet.\nReporting now to avoid time waste reviewing it since it's buggy.\n\nThe attached patch is how I tested and found the bug.\nIt contains a file test-mul-var.sql, which tests mul_var()\nwith varying rscale, using the SQL-callable numeric_mul_patched(),\nwhich third argument is the rscale_adjustment.\n\nOut of 2481600 random tests, all passed except 4:\n\nSELECT\n result = numeric_mul_patched(var1,var2,rscale_adjustment),\n COUNT(*)\nFROM test_numeric_mul_patched\nGROUP BY 1\nORDER BY 1;\n ?column? | count\n----------+---------\n f | 4\n t | 2481596\n(2 rows)\n\nSELECT\n var1,\n var2,\n var1*var2 AS full_resolution,\n rscale_adjustment,\n result AS expected,\n numeric_mul_patched(var1,var2,rscale_adjustment) AS actual\nFROM test_numeric_mul_patched\nWHERE result <> numeric_mul_patched(var1,var2,rscale_adjustment);\n var1 | var2 | full_resolution | rscale_adjustment | expected | actual\n-------------------+----------------+---------------------------+-------------------+----------+--------\n 676.797214075 | 0.308068877759 | 208.500158210502929257925 | -21 | 209 | 208\n 555.07029 | 0.381033094735 | 211.50015039415392315 | -17 | 212 | 211\n 0.476637718921 | 66.088276 | 31.500165120061470196 | -18 | 32 | 31\n 0.060569165063082 | 998.85933 | 60.50007563356949425506 | -20 | 61 | 60\n(4 rows)\n\nTrying to wrap my head around what could cause this.\n\nIt's rounding down instead of up, and these cases all end with decimal .500XXXX.\n\nRegards,\nJoel", "msg_date": "Tue, 02 Jul 2024 20:53:23 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Tue, Jul 2, 2024, at 20:53, Joel Jacobson wrote:\n> Trying to wrap my head around what could cause this.\n>\n> It's rounding down instead of up, and these cases all end with decimal .500XXXX.\n\nInteresting, I actually think there is a bug in the normal mul_var() code.\nFound a case that rounds down, when it should round up:\n\nCalling mul_var() with:\nvar1=51.2945442386599\nvar2=0.828548712212\nrscale=0\n\nreturns 42, but I think it should return 43,\nsince 51.2945442386599*0.828548712212=42.5000285724431241296446988\n\nBut maybe this is expected and OK, having to do with MUL_GUARD_DIGITS?\n\n/Joel\n\n\n", "msg_date": "Tue, 02 Jul 2024 21:55:19 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." 
}, { "msg_contents": "On Tue, Jul 2, 2024, at 21:55, Joel Jacobson wrote:\n> On Tue, Jul 2, 2024, at 20:53, Joel Jacobson wrote:\n>> Trying to wrap my head around what could cause this.\n\nI found the bug in the case 3 code,\nand it turns out the same type of bug also exists in the case 2 code:\n\n\t\t\tcase 2:\n\t\t\t\tnewdig = (int) var1digits[1] * var2digits[res_ndigits - 4];\n\nThe problem here is that res_ndigits could become less than 4,\nif rscale is low enough,\nand then we would try to access a negative array index in var2digits.\n\nFix:\n\n\t\t\tcase 2:\n\t\t\t\tif (res_ndigits - 4 >= 0 && res_ndigits - 4 < var2ndigits)\n\t\t\t\t\tnewdig = (int) var1digits[1] * var2digits[res_ndigits - 4];\n\t\t\t\telse\n\t\t\t\t\tnewdig = 0;\n\nAnother problem in the case 2 code:\n\n\t\t\t\tif (res_ndigits - 3 < var2ndigits)\n\t\t\t\t\tnewdig += (int) var1digits[0] * var2digits[res_ndigits - 3];\n\nFix:\n\n\t\t\t\tif (res_ndigits - 3 >= 0 && res_ndigits - 3 < var2ndigits)\n\t\t\t\t\tnewdig += (int) var1digits[0] * var2digits[res_ndigits - 3];\n\nNew version attached that fixes both the case 2 and case 3 code.\n\nRegards,\nJoel", "msg_date": "Tue, 02 Jul 2024 22:10:27 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Tue, Jul 2, 2024, at 22:10, Joel Jacobson wrote:\n> * v4-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\n\nInstead of these boundary checks, maybe it would be cleaner to just\nskip this optimization if there are too few res_ndigits?\n\n/Joel\n\n\n", "msg_date": "Tue, 02 Jul 2024 22:13:04 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Tue, 2 Jul 2024 at 20:55, Joel Jacobson <[email protected]> wrote:\n>\n> Interesting, I actually think there is a bug in the normal mul_var() code.\n> Found a case that rounds down, when it should round up:\n>\n> Calling mul_var() with:\n> var1=51.2945442386599\n> var2=0.828548712212\n> rscale=0\n>\n> returns 42, but I think it should return 43,\n> since 51.2945442386599*0.828548712212=42.5000285724431241296446988\n>\n> But maybe this is expected and OK, having to do with MUL_GUARD_DIGITS?\n>\n\nNo, that's not a bug. It's to be expected. The point of using only\nMUL_GUARD_DIGITS extra digits is to get the correctly rounded result\n*most of the time*, while saving a lot of effort by not computing the\nfull result.\n\nThe callers of mul_var() that ask for reduced rscale results are the\ntranscendental functions like ln_var() and exp_var(), which are\nworking to some limited precision intended to compute the final result\nreasonably accurately to a particular rscale.\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 3 Jul 2024 11:41:58 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Tue, 2 Jul 2024 at 21:10, Joel Jacobson <[email protected]> wrote:\n>\n> I found the bug in the case 3 code,\n> and it turns out the same type of bug also exists in the case 2 code:\n>\n> case 2:\n> newdig = (int) var1digits[1] * var2digits[res_ndigits - 4];\n>\n> The problem here is that res_ndigits could become less than 4,\n\nYes. 
It can't be less than 3 though (per an earlier test), so the case\n2 code was correct.\n\nI've been hacking on this a bit and trying to tidy it up. Firstly, I\nmoved it to a separate function, because it was starting to look messy\nhaving so much extra code in mul_var(). Then I added a bunch more\ncomments to explain what's going on, and the limits of the various\nvariables. Note that most of the boundary checks are actually\nunnecessary -- in particular all the ones in or after the main loop,\nprovided you pull out the first 2 result digits from the main loop in\nthe 3-digit case. That does seem to work very well, but...\n\nI wasn't entirely happy with how messy that code is getting, so I\ntried a different approach. Similar to div_var_int(), I tried writing\na mul_var_int() function instead. This can be used for 1 and 2 digit\nfactors, and we could add a similar mul_var_int64() function on\nplatforms with 128-bit integers. The code looks quite a lot neater, so\nit's probably less likely to contain bugs (though I have just written\nit in a hurry,so it might still have bugs). In testing, it seemed to\ngive a decent speedup, but perhaps a little less than before. But\nthat's to be balanced against having more maintainable code, and also\na function that might be useful elsewhere in numeric.c.\n\nAnyway, here are both patches for comparison. I'll stop hacking for a\nwhile and let you see what you make of these.\n\nRegards,\nDean", "msg_date": "Wed, 3 Jul 2024 12:17:38 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "Em qua., 3 de jul. de 2024 às 08:18, Dean Rasheed <[email protected]>\nescreveu:\n\n> On Tue, 2 Jul 2024 at 21:10, Joel Jacobson <[email protected]> wrote:\n> >\n> > I found the bug in the case 3 code,\n> > and it turns out the same type of bug also exists in the case 2 code:\n> >\n> > case 2:\n> > newdig = (int) var1digits[1] *\n> var2digits[res_ndigits - 4];\n> >\n> > The problem here is that res_ndigits could become less than 4,\n>\n> Yes. It can't be less than 3 though (per an earlier test), so the case\n> 2 code was correct.\n>\n> I've been hacking on this a bit and trying to tidy it up. Firstly, I\n> moved it to a separate function, because it was starting to look messy\n> having so much extra code in mul_var(). Then I added a bunch more\n> comments to explain what's going on, and the limits of the various\n> variables. Note that most of the boundary checks are actually\n> unnecessary -- in particular all the ones in or after the main loop,\n> provided you pull out the first 2 result digits from the main loop in\n> the 3-digit case. That does seem to work very well, but...\n>\n> I wasn't entirely happy with how messy that code is getting, so I\n> tried a different approach. Similar to div_var_int(), I tried writing\n> a mul_var_int() function instead. This can be used for 1 and 2 digit\n> factors, and we could add a similar mul_var_int64() function on\n> platforms with 128-bit integers. The code looks quite a lot neater, so\n> it's probably less likely to contain bugs (though I have just written\n> it in a hurry,so it might still have bugs). In testing, it seemed to\n> give a decent speedup, but perhaps a little less than before. But\n> that's to be balanced against having more maintainable code, and also\n> a function that might be useful elsewhere in numeric.c.\n>\n> Anyway, here are both patches for comparison. 
I'll stop hacking for a\n> while and let you see what you make of these.\n>\nI liked v5-add-mul_var_int.patch better.\n\nI think that *var_digits can be const too.\n+ const NumericDigit *var_digits = var->digits;\n\nTypo In the comments:\n- by procssing\n+ by processing\n\nbest regards,\nRanier Vilela\n\n", "msg_date": "Wed, 3 Jul 2024 08:43:50 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands."
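The mul_var_int() idea described in the message above can be pictured with a small standalone example (illustrative only, not the actual patch; the helper name is made up, and the sign/weight/dscale and rounding handling that mul_var() does is ignored): fold a one- or two-digit var1 into a single C integer, d0 * NBASE + d1, and multiply var2's digit array by it in one pass. Since that folded factor can be as large as NBASE^2 - 1, the per-digit product needs a 64-bit intermediate:

```c
#include <stdint.h>

#define NBASE 10000

/*
 * Illustrative sketch: multiply an array of base-NBASE digits (index 0
 * is the most significant digit) by an integer factor < NBASE^2, i.e. a
 * folded one- or two-digit var1.  Each intermediate product is below
 * NBASE^3 (about 10^12), hence the 64-bit arithmetic; the final carry
 * is below NBASE^2 and supplies the top two result digits.
 */
void
mul_digits_by_int(const short *var2digits, int var2ndigits,
				  int32_t ifactor, short *resdigits)
{
	int64_t	carry = 0;

	/* resdigits must have var2ndigits + 2 entries */
	for (int i = var2ndigits - 1; i >= 0; i--)
	{
		int64_t	prod = (int64_t) ifactor * var2digits[i] + carry;

		resdigits[i + 2] = (short) (prod % NBASE);
		carry = prod / NBASE;
	}
	resdigits[1] = (short) (carry % NBASE);
	resdigits[0] = (short) (carry / NBASE);
}
```

Each iteration folds two schoolbook digit products into one 64-bit multiply, which is presumably where much of the gain for one- and two-digit var1 comes from; the later discussion about reduced rscale is about what happens when some of these folded terms are skipped.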
}, { "msg_contents": "On Mon, Jul 1, 2024 at 6:19 PM Dean Rasheed <[email protected]> wrote:\n> Repeating your benchmark where both numbers have up to 2 NBASE-digits,\n> this new approach was slightly faster:\n>\n> SELECT SUM(num1*num2) FROM bench_mul_var; -- HEAD\n> Time: 4762.990 ms (00:04.763)\n> Time: 4332.166 ms (00:04.332)\n> Time: 4276.211 ms (00:04.276)\n> Time: 4247.321 ms (00:04.247)\n> Time: 4166.738 ms (00:04.167)\n>\n> SELECT SUM(num1*num2) FROM bench_mul_var; -- v2 patch\n> Time: 4398.812 ms (00:04.399)\n> Time: 3672.668 ms (00:03.673)\n> Time: 3650.227 ms (00:03.650)\n> Time: 3611.420 ms (00:03.611)\n> Time: 3534.218 ms (00:03.534)\n>\n> SELECT SUM(num1*num2) FROM bench_mul_var; -- this patch\n> Time: 3350.596 ms (00:03.351)\n> Time: 3336.224 ms (00:03.336)\n> Time: 3335.599 ms (00:03.336)\n> Time: 3336.990 ms (00:03.337)\n> Time: 3351.453 ms (00:03.351)\n\nI don't have any particular technical insight on this topic, but I\njust want to mention that I'm excited about the work. Numeric\nperformance can be painfully slow, and these seem like quite\nsignificant speedups that will affect lots of real-world cases.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Jul 2024 08:22:35 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Wed, Jul 3, 2024, at 13:17, Dean Rasheed wrote:\n> On Tue, 2 Jul 2024 at 21:10, Joel Jacobson <[email protected]> wrote:\n>>\n>> I found the bug in the case 3 code,\n>> and it turns out the same type of bug also exists in the case 2 code:\n>>\n>> case 2:\n>> newdig = (int) var1digits[1] * var2digits[res_ndigits - 4];\n>>\n>> The problem here is that res_ndigits could become less than 4,\n>\n> Yes. It can't be less than 3 though (per an earlier test), so the case\n> 2 code was correct.\n\nHmm, I don't see how the case 2 code can be correct?\nIf, like you say, res_ndigits can't be less than 3, that means it can be 3, right?\nAnd if res_ndigits=3 then `var2digits[res_ndigits - 4]` would try to access `var2digits[-1]`.\n\n> I've been hacking on this a bit and trying to tidy it up. Firstly, I\n> moved it to a separate function, because it was starting to look messy\n> having so much extra code in mul_var(). Then I added a bunch more\n> comments to explain what's going on, and the limits of the various\n> variables. Note that most of the boundary checks are actually\n> unnecessary -- in particular all the ones in or after the main loop,\n> provided you pull out the first 2 result digits from the main loop in\n> the 3-digit case. That does seem to work very well, but...\n\nNice, I was starting to feel a bit uncomfortable with the level of increased complexity.\n\n> I wasn't entirely happy with how messy that code is getting, so I\n> tried a different approach. Similar to div_var_int(), I tried writing\n> a mul_var_int() function instead. This can be used for 1 and 2 digit\n> factors, and we could add a similar mul_var_int64() function on\n> platforms with 128-bit integers. The code looks quite a lot neater, so\n> it's probably less likely to contain bugs (though I have just written\n> it in a hurry,so it might still have bugs). In testing, it seemed to\n> give a decent speedup, but perhaps a little less than before. 
But\n> that's to be balanced against having more maintainable code, and also\n> a function that might be useful elsewhere in numeric.c.\n>\n> Anyway, here are both patches for comparison. I'll stop hacking for a\n> while and let you see what you make of these.\n\nI've tested both patches, and they produces the same output given the\nsame input as HEAD, when rscale is unmodified (full precision).\n\nHowever, for a reduced rscale, there are some differences:\n\nmul_var_small() seems more resilient to rscale reductions than mul_var_int().\n\nThe previous version we worked on, I've called \"mul_var inlined\" in the output below.\n\n```\nCREATE TABLE test_numeric_mul_patched (\n var1 numeric,\n var2 numeric,\n rscale_adjustment int,\n result numeric\n);\n\nDO $$\nDECLARE\nvar1 numeric;\nvar2 numeric;\nBEGIN\n FOR i IN 1..1000 LOOP\n RAISE NOTICE '%', i;\n FOR var1ndigits IN 1..4 LOOP\n FOR var2ndigits IN 1..4 LOOP\n FOR var1dscale IN 0..(var1ndigits*4) LOOP\n FOR var2dscale IN 0..(var2ndigits*4) LOOP\n FOR rscale_adjustment IN 0..(var1dscale+var2dscale) LOOP\n var1 := round(random(\n format('1%s',repeat('0',(var1ndigits-1)*4-1))::numeric,\n format('%s',repeat('9',var1ndigits*4))::numeric\n ) / 10::numeric^var1dscale, var1dscale);\n var2 := round(random(\n format('1%s',repeat('0',(var2ndigits-1)*4-1))::numeric,\n format('%s',repeat('9',var2ndigits*4))::numeric\n ) / 10::numeric^var2dscale, var2dscale);\n INSERT INTO test_numeric_mul_patched\n (var1, var2, rscale_adjustment)\n VALUES\n (var1, var2, -rscale_adjustment);\n END LOOP;\n END LOOP;\n END LOOP;\n END LOOP;\n END LOOP;\n END LOOP;\nEND $$;\n\nUPDATE test_numeric_mul_patched SET result = numeric_mul_head(var1, var2, rscale_adjustment);\n\nSELECT\n rscale_adjustment,\n COUNT(*),\n COUNT(*) FILTER (WHERE result IS DISTINCT FROM numeric_mul_patch_int(var1,var2,rscale_adjustment)) AS \"mul_var_int\",\n COUNT(*) FILTER (WHERE result IS DISTINCT FROM numeric_mul_patch_small(var1,var2,rscale_adjustment)) AS \"mul_var_small\",\n COUNT(*) FILTER (WHERE result IS DISTINCT FROM numeric_mul_patch_inline(var1,var2,rscale_adjustment)) AS \"mul_var inlined\"\nFROM test_numeric_mul_patched\nGROUP BY 1\nORDER BY 1;\n\n rscale_adjustment | count | mul_var_int | mul_var_small | mul_var inlined\n-------------------+---------+-------------+---------------+-----------------\n -32 | 1000 | 0 | 0 | 0\n -31 | 3000 | 0 | 0 | 0\n -30 | 6000 | 0 | 0 | 0\n -29 | 10000 | 0 | 0 | 0\n -28 | 17000 | 0 | 0 | 0\n -27 | 27000 | 0 | 0 | 0\n -26 | 40000 | 0 | 1 | 0\n -25 | 56000 | 1 | 11 | 0\n -24 | 78000 | 316 | 119 | 1\n -23 | 106000 | 498 | 1696 | 0\n -22 | 140000 | 531 | 2480 | 1\n -21 | 180000 | 591 | 3145 | 0\n -20 | 230000 | 1956 | 5309 | 1\n -19 | 290000 | 2189 | 5032 | 0\n -18 | 360000 | 2314 | 4868 | 0\n -17 | 440000 | 2503 | 4544 | 1\n -16 | 533000 | 5201 | 3633 | 0\n -15 | 631000 | 5621 | 3006 | 0\n -14 | 734000 | 5907 | 2631 | 0\n -13 | 842000 | 6268 | 2204 | 0\n -12 | 957000 | 9558 | 778 | 0\n -11 | 1071000 | 10597 | 489 | 0\n -10 | 1184000 | 10765 | 193 | 0\n -9 | 1296000 | 9452 | 0 | 0\n -8 | 1408000 | 1142 | 0 | 0\n -7 | 1512000 | 391 | 0 | 0\n -6 | 1608000 | 235 | 0 | 0\n -5 | 1696000 | 0 | 0 | 0\n -4 | 1776000 | 0 | 0 | 0\n -3 | 1840000 | 0 | 0 | 0\n -2 | 1888000 | 0 | 0 | 0\n -1 | 1920000 | 0 | 0 | 0\n 0 | 1936000 | 0 | 0 | 0\n(33 rows)\n\nSELECT\n result - numeric_mul_patch_int(var1,var2,rscale_adjustment),\n COUNT(*)\nFROM test_numeric_mul_patched\nGROUP BY 1\nORDER BY 1;\n\n ?column? 
| count\n----------------+----------\n 0 | 24739964\n 0.000000000001 | 2170\n 0.00000000001 | 234\n 0.0000000001 | 18\n 0.000000001 | 4\n 0.00000001 | 8927\n 0.0000001 | 882\n 0.000001 | 90\n 0.00001 | 6\n 0.0001 | 21963\n 0.001 | 2174\n 0.01 | 214\n 0.1 | 18\n 1 | 39336\n(14 rows)\n\nSELECT\n result - numeric_mul_patch_small(var1,var2,rscale_adjustment),\n COUNT(*)\nFROM test_numeric_mul_patched\nGROUP BY 1\nORDER BY 1;\n\n ?column? | count\n-------------------+----------\n -1 | 1233\n -0.01 | 9\n -0.001 | 73\n -0.0001 | 647\n -0.000001 | 2\n -0.0000001 | 9\n -0.00000001 | 116\n 0.000000000000000 | 24775861\n 0.00000001 | 1035\n 0.00000002 | 2\n 0.0000001 | 96\n 0.000001 | 9\n 0.0001 | 8771\n 0.0002 | 3\n 0.001 | 952\n 0.01 | 69\n 0.1 | 10\n 1 | 27098\n 2 | 5\n(19 rows)\n\n\nSELECT\n result - numeric_mul_patch_inline(var1,var2,rscale_adjustment),\n COUNT(*)\nFROM test_numeric_mul_patched\nGROUP BY 1\nORDER BY 1;\n\n ?column? | count\n----------+----------\n -1 | 4\n 0 | 24815996\n(2 rows)\n```\n\nI found these two interesting to look closer at:\n```\n 0.00000002 | 2\n 0.0002 | 3\n\nSELECT\n *,\n numeric_mul_patch_small(var1,var2,rscale_adjustment)\nFROM test_numeric_mul_patched\nWHERE result - numeric_mul_patch_small(var1,var2,rscale_adjustment) IN (0.00000002, 0.0002);\n\n var1 | var2 | rscale_adjustment | result | numeric_mul_patch_small\n-------------------+----------------+-------------------+------------+-------------------------\n 8952.12658563 | 0.902315486665 | -16 | 8077.6425 | 8077.6423\n 0.881715409579 | 0.843165739371 | -16 | 0.74343223 | 0.74343221\n 0.905322758954 | 0.756905996850 | -16 | 0.68524423 | 0.68524421\n 8464.043170546608 | 0.518100129611 | -20 | 4385.2219 | 4385.2217\n 5253.006296984449 | 0.989308019355 | -20 | 5196.8413 | 5196.8411\n(5 rows)\n```\n\nWhat can be said about mul_var()'s contract with regards to rscale?\nIt's the number of decimal digits requested by the caller, and if not\nrequesting full precision, then the decimal digits might not be accurate,\nbut can something be said about how far off they can be?\n\nThe mul_var_int() patch only produces a difference that is exactly\n1 less than the exact result, at the last non-zero decimal digit.\n\nCould the difference be more than 1 at the last non-zero digit,\nlike in the five cases found above?\n\nIt would be nice if we could define mul_var()'s contract with regards to\nrscale, in terms of what precision can be expected in the result.\n\nAttaching the hacked together version with all the patches, used to do the testing above.\n\nRegards,\nJoel", "msg_date": "Wed, 03 Jul 2024 15:48:58 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Wed, 3 Jul 2024 at 14:49, Joel Jacobson <[email protected]> wrote:\n>\n> Hmm, I don't see how the case 2 code can be correct?\n> If, like you say, res_ndigits can't be less than 3, that means it can be 3, right?\n> And if res_ndigits=3 then `var2digits[res_ndigits - 4]` would try to access `var2digits[-1]`.\n>\n\nAh yes, I think I was looking at a newer version of the code where I'd\nalready fixed that bug. 
Unless you think there are still bugs in any\nof the boundary checks, which is entirely possible.\n\n> I've tested both patches, and they produces the same output given the\n> same input as HEAD, when rscale is unmodified (full precision).\n>\n> However, for a reduced rscale, there are some differences:\n>\n> mul_var_small() seems more resilient to rscale reductions than mul_var_int().\n>\n\nAh, I can see what's going on. It's perhaps best illustrated with a\nsimple example. Suppose you are multiplying a 4-digit integer x by a\n2-digit integer y (both with dscale=0). Then the terms of the full\nproduct computed by mul_var() or mul_var_small() would look something\nlike this:\n\n x0 x1 x2 x3\n * y0 y1\n ---------------------------------\n x0*y0 x1*y0 x2*y0 x3*y0\n x0*y1 x1*y1 x2*y1 x3*y1\n\nIn the reduced-rscale case, it might perform a truncated computation,\ncomputing just the first 3 columns (say), and discarding the last two\ncolumns. Therefore it would skip the 3 rightmost digit products.\n\nHowever, in mul_var_int(), y0 and y1 have been combined into a single\ninteger equal to y0*NBASE+y1, and the terms of full product are\ncomputed as follows:\n\n x0*(y0*NBASE+y1) x1*(y0*NBASE+y1) x2*(y0*NBASE+y1) x3*(y0*NBASE+y1)\n\nIn the full product, that gives the same result, but if you follow the\nsame rule in the reduced-rscale case, skipping the last two terms, it\nwould actually discard 4 digit products, making it less accurate.\n\nThat could be avoided by increasing maxdigits by 1 in mul_var_int() so\nit would always be at least as accurate as it was before, but that\nmight not really be necessary. However, if we implemented\nmul_var_int64() in the same way, it would be discarding much\nhigher-order digit products, and so we probably would have to increase\nmaxdigits to get sufficiently accurate results. But there's an even\nbigger problem: the results would be different between platforms that\ndid and didn't have 128-bit integers, which I really don't like. We\ncould avoid that by not using it in reduced-rscale cases, but that\nwould involve another test condition.\n\nBy contrast, mul_var_small() is intended to replicate the arithmetic\nin mul_var() exactly (but in a different order) for all rscales. So if\nyou've found any cases where they give different results, that's a\nbug.\n\nIn light of that, it might be that mul_var_small() is the better\noption, rather than mul_var_int(), but it'd be interesting to see how\nthey compare in terms of performance first.\n\n> What can be said about mul_var()'s contract with regards to rscale?\n> It's the number of decimal digits requested by the caller, and if not\n> requesting full precision, then the decimal digits might not be accurate,\n> but can something be said about how far off they can be?\n>\n\nI wouldn't expect it to ever be off by more than 1, given that\nMUL_GUARD_DIGITS = 2, which corresponds to 8 decimal digits, and the\nnumber of digits in the smaller input (and hence the number of digit\nproducts in each column) is limited to something like 16,000 NBASE\ndigits.\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 3 Jul 2024 19:57:48 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." 
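To make the 3-versus-4 count in that explanation concrete: the full 4x2-digit product has five result columns, and truncating to the first three columns drops column 3 (x3*y0 and x2*y1) and column 4 (x3*y1), i.e. three digit products. In the folded scheme the unit of work is a whole term x_i*(y0*NBASE + y1), so dropping the last two terms discards x2*y0, x2*y1, x3*y0 and x3*y1, four digit products, including x2*y0 which the column-wise truncation would have kept. That extra discarded contribution is why mul_var_int() would need maxdigits increased by one to stay at least as accurate as the column-wise code.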
}, { "msg_contents": "On Wed, Jul 3, 2024, at 15:48, Joel Jacobson wrote:\n> On Wed, Jul 3, 2024, at 13:17, Dean Rasheed wrote:\n>> On Tue, 2 Jul 2024 at 21:10, Joel Jacobson <[email protected]> wrote:\n>>>\n>>> I found the bug in the case 3 code,\n>>> and it turns out the same type of bug also exists in the case 2 code:\n>>>\n>>> case 2:\n>>> newdig = (int) var1digits[1] * var2digits[res_ndigits - 4];\n>>>\n>>> The problem here is that res_ndigits could become less than 4,\n>>\n>> Yes. It can't be less than 3 though (per an earlier test), so the case\n>> 2 code was correct.\n>\n> Hmm, I don't see how the case 2 code can be correct?\n> If, like you say, res_ndigits can't be less than 3, that means it can \n> be 3, right?\n> And if res_ndigits=3 then `var2digits[res_ndigits - 4]` would try to \n> access `var2digits[-1]`.\n\nHere is an example on how to trigger the bug:\n\n```\n\t\t\tcase 2:\n\t\t\t\tif (res_ndigits - 4 < 0)\n\t\t\t\t{\n\t\t\t\t\tprintf(\"var1=%s\\n\",get_str_from_var(var1));\n\t\t\t\t\tprintf(\"var2=%s\\n\",get_str_from_var(var2));\n\t\t\t\t\tprintf(\"rscale=%d\\n\", rscale);\n\t\t\t\t\tprintf(\"res_ndigits - 4 < 0 => var2digits[%d]=%d\\n\", res_ndigits - 4, var2digits[res_ndigits - 4]);\n\t\t\t\t}\n```\n\nRunning through my tests, I hit lots of cases, including:\n\nvar1=0.10968501\nvar2=0.903728177113\nrscale=0\nres_ndigits - 4 < 0 => var2digits[-1]=-31105\n\nAll of the spotted cases had rscale=0.\n\nIf we know that mul_var() will never be called with rscale=0 when dealing with decimal inputs, perhaps we should enforce this with an Assert(), to prevent the otherwise possible out-of-bounds access (negative indexing) and provide early detection?\n\nRegards,\nJoel\n\n\n", "msg_date": "Wed, 03 Jul 2024 21:05:51 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Wed, Jul 3, 2024, at 20:57, Dean Rasheed wrote:\n> Ah yes, I think I was looking at a newer version of the code where I'd\n> already fixed that bug. Unless you think there are still bugs in any\n> of the boundary checks, which is entirely possible.\n\nAh, that explains it.\nAnd no, I can't find any other bugs in the boundary checks.\n\n> Ah, I can see what's going on. It's perhaps best illustrated with a\n> simple example. Suppose you are multiplying a 4-digit integer x by a\n> 2-digit integer y (both with dscale=0). Then the terms of the full\n> product computed by mul_var() or mul_var_small() would look something\n> like this:\n>\n> x0 x1 x2 x3\n> * y0 y1\n> ---------------------------------\n> x0*y0 x1*y0 x2*y0 x3*y0\n> x0*y1 x1*y1 x2*y1 x3*y1\n>\n> In the reduced-rscale case, it might perform a truncated computation,\n> computing just the first 3 columns (say), and discarding the last two\n> columns. 
Therefore it would skip the 3 rightmost digit products.\n>\n> However, in mul_var_int(), y0 and y1 have been combined into a single\n> integer equal to y0*NBASE+y1, and the terms of full product are\n> computed as follows:\n>\n> x0*(y0*NBASE+y1) x1*(y0*NBASE+y1) x2*(y0*NBASE+y1) x3*(y0*NBASE+y1)\n>\n> In the full product, that gives the same result, but if you follow the\n> same rule in the reduced-rscale case, skipping the last two terms, it\n> would actually discard 4 digit products, making it less accurate.\n>\n> That could be avoided by increasing maxdigits by 1 in mul_var_int() so\n> it would always be at least as accurate as it was before, but that\n> might not really be necessary. However, if we implemented\n> mul_var_int64() in the same way, it would be discarding much\n> higher-order digit products, and so we probably would have to increase\n> maxdigits to get sufficiently accurate results. But there's an even\n> bigger problem: the results would be different between platforms that\n> did and didn't have 128-bit integers, which I really don't like. We\n> could avoid that by not using it in reduced-rscale cases, but that\n> would involve another test condition.\n>\n> By contrast, mul_var_small() is intended to replicate the arithmetic\n> in mul_var() exactly (but in a different order) for all rscales. So if\n> you've found any cases where they give different results, that's a\n> bug.\n>\n> In light of that, it might be that mul_var_small() is the better\n> option, rather than mul_var_int(), but it'd be interesting to see how\n> they compare in terms of performance first.\n\nThanks for explaining, very helpful.\n\nI agree on your reasoning about the pros and cons.\nNot sure yet which version I prefer. Let's see how it evolves.\n\nI've done some benchmarks.\nHaven't tested Intel and AMD yet, but this is what I get on my Apple M3 Max:\n\n--\n-- varndigits=1\n--\n\n-- HEAD\nSELECT SUM(numeric_mul_head(var1,var2,0)) FROM bench_mul_var_var1ndigits_1;\nTime: 2976.896 ms (00:02.977)\nTime: 2984.759 ms (00:02.985)\nTime: 2970.364 ms (00:02.970)\n\n-- mul_var_int() patch\nSELECT SUM(numeric_mul_patch_int(var1,var2,0)) FROM bench_mul_var_var1ndigits_1;\nTime: 2790.227 ms (00:02.790)\nTime: 2786.338 ms (00:02.786)\nTime: 2784.957 ms (00:02.785)\n\n-- mul_var_small() patch\nSELECT SUM(numeric_mul_patch_small(var1,var2,0)) FROM bench_mul_var_var1ndigits_1;\nTime: 2770.211 ms (00:02.770)\nTime: 2760.685 ms (00:02.761)\nTime: 2773.221 ms (00:02.773)\n\n--\n-- varndigits=2\n--\n\n-- HEAD\nSELECT SUM(numeric_mul_head(var1,var2,0)) FROM bench_mul_var_var1ndigits_2;\nTime: 3353.258 ms (00:03.353)\nTime: 3273.055 ms (00:03.273)\nTime: 3266.392 ms (00:03.266)\n\n-- mul_var_int() patch\nSELECT SUM(numeric_mul_patch_int(var1,var2,0)) FROM bench_mul_var_var1ndigits_2;\nTime: 2694.169 ms (00:02.694)\nTime: 2687.935 ms (00:02.688)\nTime: 2692.398 ms (00:02.692)\n\n-- mul_var_small() patch\nSELECT SUM(numeric_mul_patch_small(var1,var2,0)) FROM bench_mul_var_var1ndigits_2;\nTime: 2997.685 ms (00:02.998)\nTime: 2984.418 ms (00:02.984)\nTime: 2986.976 ms (00:02.987)\n\n--\n-- varndigits=3\n--\n\n-- HEAD\nSELECT SUM(numeric_mul_head(var1,var2,0)) FROM bench_mul_var_var1ndigits_3;\nTime: 3471.391 ms (00:03.471)\nTime: 3384.114 ms (00:03.384)\nTime: 3387.031 ms (00:03.387)\n\n-- mul_var_int() patch\nSELECT SUM(numeric_mul_patch_int(var1,var2,0)) FROM bench_mul_var_var1ndigits_3;\nTime: 3384.428 ms (00:03.384)\nTime: 3398.044 ms (00:03.398)\nTime: 3393.727 ms (00:03.394)\n\n-- mul_var_small() patch\nSELECT 
SUM(numeric_mul_patch_small(var1,var2,0)) FROM bench_mul_var_var1ndigits_3;\nTime: 3100.567 ms (00:03.101)\nTime: 3114.225 ms (00:03.114)\nTime: 3116.137 ms (00:03.116)\n\nInteresting, mul_var_small() seems to be the winner for var1ndigits=3\nand mul_var_int() to be the winner for var1ndigits=2,\nand about the same for var1ndigits=1.\n\n>> What can be said about mul_var()'s contract with regards to rscale?\n>> It's the number of decimal digits requested by the caller, and if not\n>> requesting full precision, then the decimal digits might not be accurate,\n>> but can something be said about how far off they can be?\n>>\n>\n> I wouldn't expect it to ever be off by more than 1, given that\n> MUL_GUARD_DIGITS = 2, which corresponds to 8 decimal digits, and the\n> number of digits in the smaller input (and hence the number of digit\n> products in each column) is limited to something like 16,000 NBASE\n> digits.\n\nOK, so then the cases I found where it was off by 2 for the mul_var_int() patch\nare unexpected?\n\nRegards,\nJoel\n\n\n", "msg_date": "Wed, 03 Jul 2024 22:27:33 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Wed, Jul 3, 2024, at 22:27, Joel Jacobson wrote:\n> On Wed, Jul 3, 2024, at 20:57, Dean Rasheed wrote:\n>> I wouldn't expect it to ever be off by more than 1, given that\n>> MUL_GUARD_DIGITS = 2, which corresponds to 8 decimal digits, and the\n>> number of digits in the smaller input (and hence the number of digit\n>> products in each column) is limited to something like 16,000 NBASE\n>> digits.\n>\n> OK, so then the cases I found where it was off by 2 for the mul_var_int() patch\n> are unexpected?\n\nSorry, I meant off by 2 for the mul_var_small() patch, these cases that I found:\n\n var1 | var2 | rscale_adjustment | result | numeric_mul_patch_small\n-------------------+----------------+-------------------+------------+-------------------------\n 8952.12658563 | 0.902315486665 | -16 | 8077.6425 | 8077.6423\n 0.881715409579 | 0.843165739371 | -16 | 0.74343223 | 0.74343221\n 0.905322758954 | 0.756905996850 | -16 | 0.68524423 | 0.68524421\n 8464.043170546608 | 0.518100129611 | -20 | 4385.2219 | 4385.2217\n 5253.006296984449 | 0.989308019355 | -20 | 5196.8413 | 5196.8411\n(5 rows)\n\nRegards,\nJoel\n\n\n", "msg_date": "Wed, 03 Jul 2024 22:45:34 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Wed, Jul 3, 2024, at 13:17, Dean Rasheed wrote:\n> Anyway, here are both patches for comparison. 
I'll stop hacking for a\n> while and let you see what you make of these.\n>\n> Regards,\n> Dean\n>\n> Attachments:\n> * v5-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\n> * v5-add-mul_var_int.patch\n\nI've now benchmarked the patches on all my machines,\nsee bench_mul_var.sql for details.\n\nSummary of benchmark results:\n\n cpu | var1ndigits | winner\n----------------------+-------------+-------------------------------------------------------------\n AMD Ryzen 9 7950X3D | 1 | v5-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\n AMD Ryzen 9 7950X3D | 2 | v5-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\n AMD Ryzen 9 7950X3D | 3 | v5-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\n Apple M3 Max | 1 | v5-add-mul_var_int.patch\n Apple M3 Max | 2 | v5-add-mul_var_int.patch\n Apple M3 Max | 3 | v5-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\n Intel Core i9-14900K | 1 | v5-add-mul_var_int.patch\n Intel Core i9-14900K | 2 | v5-add-mul_var_int.patch\n Intel Core i9-14900K | 3 | v5-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\n(9 rows)\n\nPerformance ratio against HEAD per CPU and var1ndigits:\n\n cpu | var1ndigits | version | performance_ratio\n----------------------+-------------+-------------------------------------------------------------+-------------------\n AMD Ryzen 9 7950X3D | 1 | HEAD | 1.00\n AMD Ryzen 9 7950X3D | 1 | v4-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch | 1.11\n AMD Ryzen 9 7950X3D | 1 | v5-add-mul_var_int.patch | 1.07\n AMD Ryzen 9 7950X3D | 1 | v5-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch | 1.12\n AMD Ryzen 9 7950X3D | 2 | HEAD | 1.00\n AMD Ryzen 9 7950X3D | 2 | v4-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch | 1.10\n AMD Ryzen 9 7950X3D | 2 | v5-add-mul_var_int.patch | 1.11\n AMD Ryzen 9 7950X3D | 2 | v5-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch | 1.13\n AMD Ryzen 9 7950X3D | 3 | HEAD | 1.00\n AMD Ryzen 9 7950X3D | 3 | v4-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch | 1.10\n AMD Ryzen 9 7950X3D | 3 | v5-add-mul_var_int.patch | 0.98\n AMD Ryzen 9 7950X3D | 3 | v5-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch | 1.15\n Apple M3 Max | 1 | HEAD | 1.00\n Apple M3 Max | 1 | v4-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch | 1.07\n Apple M3 Max | 1 | v5-add-mul_var_int.patch | 1.08\n Apple M3 Max | 1 | v5-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch | 1.07\n Apple M3 Max | 2 | HEAD | 1.00\n Apple M3 Max | 2 | v4-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch | 1.09\n Apple M3 Max | 2 | v5-add-mul_var_int.patch | 1.21\n Apple M3 Max | 2 | v5-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch | 1.10\n Apple M3 Max | 3 | HEAD | 1.00\n Apple M3 Max | 3 | v4-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch | 1.09\n Apple M3 Max | 3 | v5-add-mul_var_int.patch | 0.99\n Apple M3 Max | 3 | v5-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch | 1.09\n Intel Core i9-14900K | 1 | HEAD | 1.00\n Intel Core i9-14900K | 1 | v4-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch | 1.05\n Intel Core i9-14900K | 1 | v5-add-mul_var_int.patch | 1.07\n Intel Core i9-14900K | 1 | v5-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch | 1.06\n Intel Core i9-14900K | 2 | HEAD | 1.00\n Intel Core i9-14900K | 2 | v4-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch | 1.06\n Intel Core i9-14900K | 2 | v5-add-mul_var_int.patch | 1.08\n Intel Core i9-14900K | 2 | 
v5-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch | 1.06\n Intel Core i9-14900K | 3 | HEAD | 1.00\n Intel Core i9-14900K | 3 | v4-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch | 1.04\n Intel Core i9-14900K | 3 | v5-add-mul_var_int.patch | 1.00\n Intel Core i9-14900K | 3 | v5-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch | 1.04\n(36 rows)\n\nThe queries to produce the above are in bench_csv_queries.txt\n\n/Joel", "msg_date": "Thu, 04 Jul 2024 09:38:44 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Thu, Jul 4, 2024, at 09:38, Joel Jacobson wrote:\n> Summary of benchmark results:\n>\n> cpu | var1ndigits | winner\n> ----------------------+-------------+-------------------------------------------------------------\n..\n> v5-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\n> AMD Ryzen 9 7950X3D | 3 | \n> v5-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\n...\n> Apple M3 Max | 3 | \n> v5-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\n...\n> Intel Core i9-14900K | 3 | \n> v5-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\n\nSince v5-add-mul_var_int.patch only implements (var1ndigits <= 2)\nit can't possibly win the var1ndigits=3 competition.\n\n/Joel\n\n\n", "msg_date": "Thu, 04 Jul 2024 12:38:59 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Wed, 3 Jul 2024 at 21:45, Joel Jacobson <[email protected]> wrote:\n>\n> > On Wed, Jul 3, 2024, at 20:57, Dean Rasheed wrote:\n> >> I wouldn't expect it to ever be off by more than 1\n> >\n> > OK, so then the cases I found where it was off by 2 for the mul_var_int() patch\n> > are unexpected?\n>\n> Sorry, I meant off by 2 for the mul_var_small() patch, these cases that I found:\n>\n\nYeah, so that was another bug in mul_var_small(). If rscale is made\nsmall enough, the result index for the digits computed before the main\nloop overlaps the ones after, so it would overwrite digits already\ncomputed.\n\nOf course, that's fairly easy to fix, but at this point I think the\nbetter solution is to only use mul_var_small() when an exact product\nis requested. We would have to do that for mul_var_int() anyway,\nbecause of its accuracy issues discussed earlier. I think this is a\nreasonable thing to do because only functions like ln_var() and\nexp_var() will ask mul_var() for a reduced-rscale result, and those\nfunctions are likely to be dominated by computations involving larger\nnumbers, for which this patch wouldn't help anyway. Also those\nfunctions are probably less widely used.\n\nIf we make that decision, a lot of the complexity in mul_var_small()\ngoes away, including all the conditional array accesses, making it\nsimpler and more efficient. v6 patch attached.\n\nI also updated the mul_var_int() patch so that it is also only invoked\nwhen an exact product is requested, and I noticed a couple of other\nminor optimisations that could be made. Then I decided to try\nimplementing mul_var_int64(). This gives a pretty decent speedup for\n3-digit inputs, but unfortunately it is much slower for 4-digit inputs\n(for which most values will go through the 128-bit code path). 
I'm\nattaching that too, just for information, but it's clearly not going\nto be acceptable as-is.\n\nRunning your benchmark queries, I got these results:\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_1;\nTime: 4520.874 ms (00:04.521) -- HEAD\nTime: 3937.536 ms (00:03.938) -- v5-mul_var_int.patch\nTime: 3919.495 ms (00:03.919) -- v5-mul_var_small.patch\nTime: 3916.964 ms (00:03.917) -- v6-mul_var_int64.patch\nTime: 3811.118 ms (00:03.811) -- v6-mul_var_small.patch\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_2;\nTime: 4762.528 ms (00:04.763) -- HEAD\nTime: 4075.546 ms (00:04.076) -- v5-mul_var_int.patch\nTime: 4055.180 ms (00:04.055) -- v5-mul_var_small.patch\nTime: 4037.866 ms (00:04.038) -- v6-mul_var_int64.patch\nTime: 4018.488 ms (00:04.018) -- v6-mul_var_small.patch\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3;\nTime: 5387.514 ms (00:05.388) -- HEAD\nTime: 5350.736 ms (00:05.351) -- v5-mul_var_int.patch\nTime: 4648.449 ms (00:04.648) -- v5-mul_var_small.patch\nTime: 4655.204 ms (00:04.655) -- v6-mul_var_int64.patch\nTime: 4645.962 ms (00:04.646) -- v6-mul_var_small.patch\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4;\nTime: 5617.150 ms (00:05.617) -- HEAD\nTime: 5505.913 ms (00:05.506) -- v5-mul_var_int.patch\nTime: 5486.441 ms (00:05.486) -- v5-mul_var_small.patch\nTime: 8203.081 ms (00:08.203) -- v6-mul_var_int64.patch\nTime: 5598.909 ms (00:05.599) -- v6-mul_var_small.patch\n\nSo v6-mul_var_int64 improves on v5-mul_var_int in the 3-digit case,\nbut is terrible in the 4-digit case. None of the other patches touch\nthe 4-digit case, but it might be interesting to try mul_var_small()\nwith 4 digits.\n\nRegards,\nDean", "msg_date": "Thu, 4 Jul 2024 19:43:10 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Thu, Jul 4, 2024, at 20:43, Dean Rasheed wrote:\n> On Wed, 3 Jul 2024 at 21:45, Joel Jacobson <[email protected]> wrote:\n>>\n>> > On Wed, Jul 3, 2024, at 20:57, Dean Rasheed wrote:\n>> >> I wouldn't expect it to ever be off by more than 1\n>> >\n>> > OK, so then the cases I found where it was off by 2 for the mul_var_int() patch\n>> > are unexpected?\n>>\n>> Sorry, I meant off by 2 for the mul_var_small() patch, these cases that I found:\n>>\n>\n> Yeah, so that was another bug in mul_var_small(). If rscale is made\n> small enough, the result index for the digits computed before the main\n> loop overlaps the ones after, so it would overwrite digits already\n> computed.\n>\n> Of course, that's fairly easy to fix, but at this point I think the\n> better solution is to only use mul_var_small() when an exact product\n> is requested. We would have to do that for mul_var_int() anyway,\n> because of its accuracy issues discussed earlier. I think this is a\n> reasonable thing to do because only functions like ln_var() and\n> exp_var() will ask mul_var() for a reduced-rscale result, and those\n> functions are likely to be dominated by computations involving larger\n> numbers, for which this patch wouldn't help anyway. Also those\n> functions are probably less widely used.\n>\n> If we make that decision, a lot of the complexity in mul_var_small()\n> goes away, including all the conditional array accesses, making it\n> simpler and more efficient. 
v6 patch attached.\n\nNice, I think that looks like the better trade-off.\n\n> I also updated the mul_var_int() patch so that it is also only invoked\n> when an exact product is requested, and I noticed a couple of other\n> minor optimisations that could be made.\n\nLooks nice.\n\n> Then I decided to try\n> implementing mul_var_int64(). This gives a pretty decent speedup for\n> 3-digit inputs, but unfortunately it is much slower for 4-digit inputs\n> (for which most values will go through the 128-bit code path). I'm\n> attaching that too, just for information, but it's clearly not going\n> to be acceptable as-is.\n>\n> Running your benchmark queries, I got these results:\n>\n> SELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_1;\n> Time: 4520.874 ms (00:04.521) -- HEAD\n> Time: 3937.536 ms (00:03.938) -- v5-mul_var_int.patch\n> Time: 3919.495 ms (00:03.919) -- v5-mul_var_small.patch\n> Time: 3916.964 ms (00:03.917) -- v6-mul_var_int64.patch\n> Time: 3811.118 ms (00:03.811) -- v6-mul_var_small.patch\n\nMy benchmarks also indicate v6-mul_var_small.patch (v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch)\nis the winner on all my three CPUs, for var1ndigits=1:\n\n-- Apple M3 Max\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_1; -- HEAD\nTime: 3046.668 ms (00:03.047)\nTime: 3053.327 ms (00:03.053)\nTime: 3045.517 ms (00:03.046)\nTime: 3049.626 ms (00:03.050)\nTime: 3075.101 ms (00:03.075)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_1; -- v6-add-mul_var_int64.patch\nTime: 2781.536 ms (00:02.782)\nTime: 2781.324 ms (00:02.781)\nTime: 2781.301 ms (00:02.781)\nTime: 2786.524 ms (00:02.787)\nTime: 2784.494 ms (00:02.784)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_1; -- v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 2711.167 ms (00:02.711)\nTime: 2723.647 ms (00:02.724)\nTime: 2706.372 ms (00:02.706)\nTime: 2708.883 ms (00:02.709)\nTime: 2704.621 ms (00:02.705)\n\n-- Intel Core i9-14900K\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_1; -- HEAD\nTime: 3496.714 ms (00:03.497)\nTime: 3278.909 ms (00:03.279)\nTime: 3278.631 ms (00:03.279)\nTime: 3277.658 ms (00:03.278)\nTime: 3276.121 ms (00:03.276)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_1; -- v6-add-mul_var_int64.patch\nTime: 3080.751 ms (00:03.081)\nTime: 3078.118 ms (00:03.078)\nTime: 3079.932 ms (00:03.080)\nTime: 3080.668 ms (00:03.081)\nTime: 3080.697 ms (00:03.081)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_1; -- v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 3043.635 ms (00:03.044)\nTime: 3043.606 ms (00:03.044)\nTime: 3041.391 ms (00:03.041)\nTime: 3041.997 ms (00:03.042)\nTime: 3045.464 ms (00:03.045)\n\n-- AMD Ryzen 9 7950X3D\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_1; -- HEAD\nTime: 3421.307 ms (00:03.421)\nTime: 3400.935 ms (00:03.401)\nTime: 3359.839 ms (00:03.360)\nTime: 3374.246 ms (00:03.374)\nTime: 3375.085 ms (00:03.375)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_1; -- v6-add-mul_var_int64.patch\nTime: 3002.214 ms (00:03.002)\nTime: 3016.042 ms (00:03.016)\nTime: 3010.541 ms (00:03.011)\nTime: 3009.204 ms (00:03.009)\nTime: 3002.088 ms (00:03.002)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_1; -- v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 2959.319 ms (00:02.959)\nTime: 2957.694 ms (00:02.958)\nTime: 2971.559 ms (00:02.972)\nTime: 2958.788 ms (00:02.959)\nTime: 2958.812 ms (00:02.959)\n\n> SELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_2;\n> 
Time: 4762.528 ms (00:04.763) -- HEAD\n> Time: 4075.546 ms (00:04.076) -- v5-mul_var_int.patch\n> Time: 4055.180 ms (00:04.055) -- v5-mul_var_small.patch\n> Time: 4037.866 ms (00:04.038) -- v6-mul_var_int64.patch\n> Time: 4018.488 ms (00:04.018) -- v6-mul_var_small.patch\n\nI get mixed results for var1ndigits=2:\nWinner on Apple M3 Max and AMD Ryzen 9 7950X3D is v6-add-mul_var_int64.patch.\nWinner on Intel Core i9-14900K is v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\n\n-- Apple M3 Max\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_2; -- HEAD\nTime: 3369.724 ms (00:03.370)\nTime: 3340.977 ms (00:03.341)\nTime: 3331.407 ms (00:03.331)\nTime: 3333.304 ms (00:03.333)\nTime: 3332.136 ms (00:03.332)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_2; -- v6-add-mul_var_int64.patch\nTime: 2768.108 ms (00:02.768)\nTime: 2736.996 ms (00:02.737)\nTime: 2730.217 ms (00:02.730)\nTime: 2743.556 ms (00:02.744)\nTime: 2746.725 ms (00:02.747)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_2; -- v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 2895.200 ms (00:02.895)\nTime: 2894.823 ms (00:02.895)\nTime: 2899.475 ms (00:02.899)\nTime: 2895.385 ms (00:02.895)\nTime: 2898.647 ms (00:02.899)\n\n-- Intel Core i9-14900K\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_2; -- HEAD\nTime: 3385.836 ms (00:03.386)\nTime: 3367.739 ms (00:03.368)\nTime: 3363.321 ms (00:03.363)\nTime: 3365.433 ms (00:03.365)\nTime: 3365.301 ms (00:03.365)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_2; -- v6-add-mul_var_int64.patch\nTime: 3086.253 ms (00:03.086)\nTime: 3085.272 ms (00:03.085)\nTime: 3085.769 ms (00:03.086)\nTime: 3089.128 ms (00:03.089)\nTime: 3086.735 ms (00:03.087)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_2; -- v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 3053.775 ms (00:03.054)\nTime: 3058.392 ms (00:03.058)\nTime: 3068.113 ms (00:03.068)\nTime: 3057.333 ms (00:03.057)\nTime: 3057.218 ms (00:03.057)\n\n-- AMD Ryzen 9 7950X3D\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_2; -- HEAD\nTime: 3619.441 ms (00:03.619)\nTime: 3609.553 ms (00:03.610)\nTime: 3574.277 ms (00:03.574)\nTime: 3578.031 ms (00:03.578)\nTime: 3558.545 ms (00:03.559)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_2; -- v6-add-mul_var_int64.patch\nTime: 3061.858 ms (00:03.062)\nTime: 3082.174 ms (00:03.082)\nTime: 3081.093 ms (00:03.081)\nTime: 3093.610 ms (00:03.094)\nTime: 3064.507 ms (00:03.065)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_2; -- v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 3100.287 ms (00:03.100)\nTime: 3100.264 ms (00:03.100)\nTime: 3097.207 ms (00:03.097)\nTime: 3101.477 ms (00:03.101)\nTime: 3098.035 ms (00:03.098)\n\n> SELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3;\n> Time: 5387.514 ms (00:05.388) -- HEAD\n> Time: 5350.736 ms (00:05.351) -- v5-mul_var_int.patch\n> Time: 4648.449 ms (00:04.648) -- v5-mul_var_small.patch\n> Time: 4655.204 ms (00:04.655) -- v6-mul_var_int64.patch\n> Time: 4645.962 ms (00:04.646) -- v6-mul_var_small.patch\n\nI get mixed results for var1ndigits=3:\nWinner on Apple M3 Max and AMD Ryzen 9 7950X3D is v6-add-mul_var_int64.patch.\nWinner on Intel Core i9-14900K is v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nSame winners as for var1ndigits=2.\n\n-- Apple M3 Max\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3; -- HEAD\nTime: 3466.932 ms (00:03.467)\nTime: 3447.001 ms (00:03.447)\nTime: 3457.259 ms 
(00:03.457)\nTime: 3445.938 ms (00:03.446)\nTime: 3443.310 ms (00:03.443)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3; -- v6-add-mul_var_int64.patch\nTime: 2988.444 ms (00:02.988)\nTime: 2976.036 ms (00:02.976)\nTime: 2982.756 ms (00:02.983)\nTime: 2986.436 ms (00:02.986)\nTime: 2973.457 ms (00:02.973)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3; -- v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 3026.666 ms (00:03.027)\nTime: 3024.458 ms (00:03.024)\nTime: 3022.976 ms (00:03.023)\nTime: 3029.526 ms (00:03.030)\nTime: 3021.543 ms (00:03.022)\n\n-- Intel Core i9-14900K\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3; -- HEAD\nTime: 4510.078 ms (00:04.510)\nTime: 4222.501 ms (00:04.223)\nTime: 4179.509 ms (00:04.180)\nTime: 4179.307 ms (00:04.179)\nTime: 4183.026 ms (00:04.183)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3; -- v6-add-mul_var_int64.patch\nTime: 3811.866 ms (00:03.812)\nTime: 3816.664 ms (00:03.817)\nTime: 3811.695 ms (00:03.812)\nTime: 3811.674 ms (00:03.812)\nTime: 3812.265 ms (00:03.812)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3; -- v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 4095.053 ms (00:04.095)\nTime: 3896.002 ms (00:03.896)\nTime: 3888.999 ms (00:03.889)\nTime: 3889.346 ms (00:03.889)\nTime: 3889.017 ms (00:03.889)\n\n-- AMD Ryzen 9 7950X3D\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3; -- HEAD\nTime: 3946.255 ms (00:03.946)\nTime: 3896.110 ms (00:03.896)\nTime: 3877.470 ms (00:03.877)\nTime: 3854.402 ms (00:03.854)\nTime: 3901.218 ms (00:03.901)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3; -- v6-add-mul_var_int64.patch\nTime: 3393.572 ms (00:03.394)\nTime: 3395.401 ms (00:03.395)\nTime: 3395.199 ms (00:03.395)\nTime: 3418.555 ms (00:03.419)\nTime: 3399.619 ms (00:03.400)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3; -- v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 3308.791 ms (00:03.309)\nTime: 3309.316 ms (00:03.309)\nTime: 3318.238 ms (00:03.318)\nTime: 3296.061 ms (00:03.296)\nTime: 3320.282 ms (00:03.320)\n\n> SELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4;\n> Time: 5617.150 ms (00:05.617) -- HEAD\n> Time: 5505.913 ms (00:05.506) -- v5-mul_var_int.patch\n> Time: 5486.441 ms (00:05.486) -- v5-mul_var_small.patch\n> Time: 8203.081 ms (00:08.203) -- v6-mul_var_int64.patch\n> Time: 5598.909 ms (00:05.599) -- v6-mul_var_small.patch\n> So v6-mul_var_int64 improves on v5-mul_var_int in the 3-digit case,\n> but is terrible in the 4-digit case. 
None of the other patches touch\n> the 4-digit case, but it might be interesting to try mul_var_small()\n> with 4 digits.\n\nInteresting you got so bad bench results for v6-mul_var_int64.patch\nfor var1ndigits=4, that patch is actually the winner on AMD Ryzen 9 7950X3D.\n\nOn Intel Core i9-14900K the winner is v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch.\nOn Apple M3 Max, HEAD is the winner.\n\n-- Apple M3 Max\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- HEAD\nTime: 3618.530 ms (00:03.619)\nTime: 3595.239 ms (00:03.595)\nTime: 3600.412 ms (00:03.600)\nTime: 3607.500 ms (00:03.607)\nTime: 3604.122 ms (00:03.604)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- v6-add-mul_var_int64.patch\nTime: 4498.993 ms (00:04.499)\nTime: 4527.302 ms (00:04.527)\nTime: 4493.613 ms (00:04.494)\nTime: 4482.194 ms (00:04.482)\nTime: 4493.660 ms (00:04.494)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 3628.177 ms (00:03.628)\nTime: 3613.975 ms (00:03.614)\nTime: 3612.213 ms (00:03.612)\nTime: 3614.026 ms (00:03.614)\nTime: 3622.959 ms (00:03.623)\n\n-- Intel Core i9-14900K\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- HEAD\nTime: 4130.810 ms (00:04.131)\nTime: 3836.462 ms (00:03.836)\nTime: 3810.604 ms (00:03.811)\nTime: 3805.443 ms (00:03.805)\nTime: 3803.055 ms (00:03.803)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- v6-add-mul_var_int64.patch\nTime: 3952.862 ms (00:03.953)\nTime: 3792.272 ms (00:03.792)\nTime: 3793.995 ms (00:03.794)\nTime: 3790.531 ms (00:03.791)\nTime: 3793.647 ms (00:03.794)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 3762.828 ms (00:03.763)\nTime: 3754.869 ms (00:03.755)\nTime: 3756.041 ms (00:03.756)\nTime: 3758.719 ms (00:03.759)\nTime: 3754.894 ms (00:03.755)\n\n-- AMD Ryzen 9 7950X3D\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- HEAD\nTime: 4075.988 ms (00:04.076)\nTime: 4067.702 ms (00:04.068)\nTime: 4035.191 ms (00:04.035)\nTime: 4022.411 ms (00:04.022)\nTime: 4016.475 ms (00:04.016)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- v6-add-mul_var_int64.patch\nTime: 3830.021 ms (00:03.830)\nTime: 3837.947 ms (00:03.838)\nTime: 3834.894 ms (00:03.835)\nTime: 3832.755 ms (00:03.833)\nTime: 3834.512 ms (00:03.835)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 4031.128 ms (00:04.031)\nTime: 4001.590 ms (00:04.002)\nTime: 4031.212 ms (00:04.031)\nTime: 4035.941 ms (00:04.036)\nTime: 4031.218 ms (00:04.031)\n\nRegards,\nJoel\n\n\n", "msg_date": "Fri, 05 Jul 2024 13:56:17 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." 
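A small stand-alone illustration of why var1ndigits = 4 is the awkward
case for the packed-integer approach benchmarked above (demonstration
code only, not part of any of the patches):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int
main(void)
{
    uint64_t    nbase = 10000;
    uint64_t    var1_2digits = nbase * nbase - 1;                 /* ~1e8 */
    uint64_t    var1_4digits = nbase * nbase * nbase * nbase - 1; /* ~1e16 */

    /* one partial product: packed var1 times a single var2 digit */
    printf("2-digit worst case: %" PRIu64 "\n", var1_2digits * (nbase - 1));
    printf("4-digit var1 max:   %" PRIu64 "\n", var1_4digits);
    printf("uint64 max:         %" PRIu64 "\n", (uint64_t) UINT64_MAX);

    /*
     * var1_4digits * (nbase - 1) would be ~1e20, well beyond UINT64_MAX
     * (~1.8e19), so a packed 4-digit var1 cannot stay in 64-bit
     * arithmetic and must take the slower 128-bit path, while 1- and
     * 2-digit inputs fit with a huge margin.
     */
    return 0;
}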
}, { "msg_contents": "On Fri, 5 Jul 2024 at 12:56, Joel Jacobson <[email protected]> wrote:\n>\n> Interesting you got so bad bench results for v6-mul_var_int64.patch\n> for var1ndigits=4, that patch is actually the winner on AMD Ryzen 9 7950X3D.\n\nInteresting.\n\n> On Intel Core i9-14900K the winner is v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch.\n\nThat must be random noise, since\nv6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch doesn't\ninvoke mul_var_small() for 4-digit inputs.\n\n> On Apple M3 Max, HEAD is the winner.\n\nImportantly, mul_var_int64() is around 1.25x slower there, and it was\neven worse on my machine.\n\nAttached is a v7 mul_var_small() patch adding 4-digit support. For me,\nthis gives a nice speedup:\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4;\nTime: 5617.150 ms (00:05.617) -- HEAD\nTime: 8203.081 ms (00:08.203) -- v6-mul_var_int64.patch\nTime: 4750.212 ms (00:04.750) -- v7-mul_var_small.patch\n\nThe other advantage, of course, is that it doesn't require 128-bit\ninteger support.\n\nRegards,\nDean", "msg_date": "Fri, 5 Jul 2024 16:41:33 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Fri, Jul 5, 2024, at 17:41, Dean Rasheed wrote:\n> On Fri, 5 Jul 2024 at 12:56, Joel Jacobson <[email protected]> wrote:\n>>\n>> Interesting you got so bad bench results for v6-mul_var_int64.patch\n>> for var1ndigits=4, that patch is actually the winner on AMD Ryzen 9 7950X3D.\n>\n> Interesting.\n\nI remeasured just to be sure, and yes, it was the winner among the previous patches,\nbut the new v7 beats it.\n\n>> On Intel Core i9-14900K the winner is v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch.\n>\n> That must be random noise, since\n> v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch doesn't\n> invoke mul_var_small() for 4-digit inputs.\n\nYes, something was off with the HEAD measurements for that one,\nI remeasured and then got almost identical results (as expected)\nbetween HEAD and v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nfor 4-digit inputs.\n\n>> On Apple M3 Max, HEAD is the winner.\n>\n> Importantly, mul_var_int64() is around 1.25x slower there, and it was\n> even worse on my machine.\n>\n> Attached is a v7 mul_var_small() patch adding 4-digit support. 
For me,\n> this gives a nice speedup:\n>\n> SELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4;\n> Time: 5617.150 ms (00:05.617) -- HEAD\n> Time: 8203.081 ms (00:08.203) -- v6-mul_var_int64.patch\n> Time: 4750.212 ms (00:04.750) -- v7-mul_var_small.patch\n>\n> The other advantage, of course, is that it doesn't require 128-bit\n> integer support.\n\nVery nice, v7-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nis now the winner on all my CPUs:\n\n-- Apple M3 Max\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- HEAD\nTime: 3574.865 ms (00:03.575)\nTime: 3573.678 ms (00:03.574)\nTime: 3576.953 ms (00:03.577)\nTime: 3580.536 ms (00:03.581)\nTime: 3589.007 ms (00:03.589)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- v7-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 3110.171 ms (00:03.110)\nTime: 3098.558 ms (00:03.099)\nTime: 3105.873 ms (00:03.106)\nTime: 3104.484 ms (00:03.104)\nTime: 3109.035 ms (00:03.109)\n\n-- Intel Core i9-14900K\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- HEAD\nTime: 3751.767 ms (00:03.752)\nTime: 3745.916 ms (00:03.746)\nTime: 3742.542 ms (00:03.743)\nTime: 3746.139 ms (00:03.746)\nTime: 3745.493 ms (00:03.745)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- v6-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 3747.640 ms (00:03.748)\nTime: 3747.231 ms (00:03.747)\nTime: 3747.965 ms (00:03.748)\nTime: 3748.309 ms (00:03.748)\nTime: 3746.498 ms (00:03.746)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- v7-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 3417.924 ms (00:03.418)\nTime: 3417.088 ms (00:03.417)\nTime: 3415.708 ms (00:03.416)\nTime: 3415.453 ms (00:03.415)\nTime: 3419.566 ms (00:03.420)\n\n-- AMD Ryzen 9 7950X3D\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- HEAD\nTime: 3970.131 ms (00:03.970)\nTime: 3924.335 ms (00:03.924)\nTime: 3927.863 ms (00:03.928)\nTime: 3924.761 ms (00:03.925)\nTime: 3926.290 ms (00:03.926)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- v6-add-mul_var_int64.patch\nTime: 3874.769 ms (00:03.875)\nTime: 3858.071 ms (00:03.858)\nTime: 3836.698 ms (00:03.837)\nTime: 3871.388 ms (00:03.871)\nTime: 3844.907 ms (00:03.845)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- v7-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\nTime: 3397.846 ms (00:03.398)\nTime: 3398.050 ms (00:03.398)\nTime: 3395.279 ms (00:03.395)\nTime: 3393.285 ms (00:03.393)\nTime: 3402.570 ms (00:03.403)\n\nCode wise I think it's now very nice and clear, with just enough comments.\n\nAlso nice to see that the var1ndigits=4 case isn't much more complex\nthan var1ndigits=3, since it follows the same pattern.\n\nRegards,\nJoel\n\n\n", "msg_date": "Fri, 05 Jul 2024 18:42:45 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." 
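To make the shape of the special case concrete, a simplified sketch of
the two-digit var1 case follows (illustration only -- this is not the
v7 patch; var1digits, var2digits and res_digits are assumed
NumericDigit arrays stored most-significant-digit first, NBASE is
10000, and res_ndigits is var2ndigits + 2).  Each result column gets at
most two digit products plus a running carry, so the column sum stays
far below 2^32 and one pass yields the exact product with no
renormalisation:

uint64      term;
uint64      carry;

/* least significant column: only the low var1 digit contributes */
term = (uint64) var1digits[1] * var2digits[var2ndigits - 1];
res_digits[res_ndigits - 1] = (NumericDigit) (term % NBASE);
carry = term / NBASE;

/* middle columns: two products plus the carry */
for (int i = var2ndigits - 2; i >= 0; i--)
{
    term = (uint64) var1digits[1] * var2digits[i] +
           (uint64) var1digits[0] * var2digits[i + 1] + carry;
    res_digits[i + 2] = (NumericDigit) (term % NBASE);
    carry = term / NBASE;
}

/* top two columns: the high var1 digit and the final carry */
term = (uint64) var1digits[0] * var2digits[0] + carry;
res_digits[1] = (NumericDigit) (term % NBASE);
res_digits[0] = (NumericDigit) (term / NBASE);

The 1-, 3- and 4-digit cases discussed above follow the same pattern
with one, three or four products per column.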
}, { "msg_contents": "On Fri, Jul 5, 2024, at 18:42, Joel Jacobson wrote:\n> Very nice, v7-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\n> is now the winner on all my CPUs:\n\nI thought it would be interesting to also measure the isolated effect\non just numeric_mul() without the query overhead.\n\nIncluded var1ndigits=5 var2ndigits=5, that should be unaffected,\njust to get a sense of the noise level.\n\nSELECT timeit.h('numeric_mul',array['9999','9999'],2,min_time:='1 s'::interval);\nSELECT timeit.h('numeric_mul',array['9999_9999','9999_9999'],2,min_time:='1 s'::interval);\nSELECT timeit.h('numeric_mul',array['9999_9999_9999','9999_9999_9999'],2,min_time:='1 s'::interval);\nSELECT timeit.h('numeric_mul',array['9999_9999_9999_9999','9999_9999_9999_9999'],2,min_time:='1 s'::interval);\nSELECT timeit.h('numeric_mul',array['9999_9999_9999_9999_9999','9999_9999_9999_9999_9999'],2,min_time:='1 s'::interval);\n\nCPU | var1ndigits | var2ndigits | HEAD | v7 | HEAD/v7\n---------------------+-------------+-------------+-------+-------+---------\nApple M3 Max | 1 | 1 | 28 ns | 18 ns | 1.56\nApple M3 Max | 2 | 2 | 32 ns | 18 ns | 1.78\nApple M3 Max | 3 | 3 | 38 ns | 21 ns | 1.81\nApple M3 Max | 4 | 4 | 42 ns | 24 ns | 1.75\nIntel Core i9-14900K | 1 | 1 | 25 ns | 20 ns | 1.25\nIntel Core i9-14900K | 2 | 2 | 28 ns | 20 ns | 1.40\nIntel Core i9-14900K | 3 | 3 | 33 ns | 24 ns | 1.38\nIntel Core i9-14900K | 4 | 4 | 37 ns | 25 ns | 1.48\nAMD Ryzen 9 7950X3D | 1 | 1 | 37 ns | 29 ns | 1.28\nAMD Ryzen 9 7950X3D | 2 | 2 | 43 ns | 31 ns | 1.39\nAMD Ryzen 9 7950X3D | 3 | 3 | 50 ns | 37 ns | 1.35\nAMD Ryzen 9 7950X3D | 4 | 4 | 55 ns | 39 ns | 1.41\n\nImpressive speed-up, between 25% - 81%.\n\nRegards,\nJoel\n\n\n", "msg_date": "Fri, 05 Jul 2024 19:37:12 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Fri, 5 Jul 2024 at 18:37, Joel Jacobson <[email protected]> wrote:\n>\n> On Fri, Jul 5, 2024, at 18:42, Joel Jacobson wrote:\n> > Very nice, v7-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\n> > is now the winner on all my CPUs:\n>\n> I thought it would be interesting to also measure the isolated effect\n> on just numeric_mul() without the query overhead.\n>\n> Impressive speed-up, between 25% - 81%.\n>\n\nCool. I think we should go with the mul_var_small() patch then, since\nit's more generally applicable.\n\nI also did some testing with much larger var2 values, and saw similar\nspeed-ups. One high-level function that benefits from that is\nfactorial(), which accepts inputs up to 32177, and so uses both the\n1-digit and 2-digit code with very large var2 values. I doubt anyone\nactually uses it with such large inputs, but it's interesting\nnonetheless:\n\nSELECT factorial(32177);\nTime: 923.117 ms -- HEAD\nTime: 534.375 ms -- mul_var_small() patch\n\nI did one more round of (mostly cosmetic) copy-editing. Aside from\nimproving some of the comments, it occurred to me that there's no need\nto pass rscale to mul_var_small(), or for it to call round_var(),\nsince it's always computing the exact result. That shaves off a few\nmore cycles.\n\nAdditionally, I didn't like how res_weight and res_ndigits were being\nset 1 higher than they needed to be. That makes sense in mul_var()\nbecause it may round the result, causing a non-zero carry to propagate\ninto the next digit up, but it's just confusing in mul_var_small(). 
So\nI've reduced those by 1, which makes the look much more logical. To be\nclear, this doesn't change how many digits we're calculating. But now\nres_ndigits is actually the number of digits being calculated, whereas\nbefore, res_ndigits was 1 larger and we were calculating res_ndigits -\n1 digits, which was confusing.\n\nI think this is good to go, so unless there are any further comments,\nI plan to commit it soon.\n\nPossible future work would be to try extending it to larger var1\nvalues. I have a feeling that might work quite well for 5 or 6 digits,\nbut at some point, we'll start seeing diminishing returns, and the\ncode bloat won't be worth it.\n\nRegards,\nDean", "msg_date": "Sat, 6 Jul 2024 10:34:18 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Sat, Jul 6, 2024, at 11:34, Dean Rasheed wrote:\n> On Fri, 5 Jul 2024 at 18:37, Joel Jacobson <[email protected]> wrote:\n>>\n>> On Fri, Jul 5, 2024, at 18:42, Joel Jacobson wrote:\n>> > Very nice, v7-optimize-numeric-mul_var-small-var1-arbitrary-var2.patch\n>> > is now the winner on all my CPUs:\n>>\n>> I thought it would be interesting to also measure the isolated effect\n>> on just numeric_mul() without the query overhead.\n>>\n>> Impressive speed-up, between 25% - 81%.\n>>\n>\n> Cool. I think we should go with the mul_var_small() patch then, since\n> it's more generally applicable.\n\nI agree.\n\n> I also did some testing with much larger var2 values, and saw similar\n> speed-ups. One high-level function that benefits from that is\n> factorial(), which accepts inputs up to 32177, and so uses both the\n> 1-digit and 2-digit code with very large var2 values. I doubt anyone\n> actually uses it with such large inputs, but it's interesting\n> nonetheless:\n>\n> SELECT factorial(32177);\n> Time: 923.117 ms -- HEAD\n> Time: 534.375 ms -- mul_var_small() patch\n\nNice!\n\n> I did one more round of (mostly cosmetic) copy-editing. Aside from\n> improving some of the comments, it occurred to me that there's no need\n> to pass rscale to mul_var_small(), or for it to call round_var(),\n> since it's always computing the exact result. That shaves off a few\n> more cycles.\n\nNice, also cleaner.\n\n> Additionally, I didn't like how res_weight and res_ndigits were being\n> set 1 higher than they needed to be. That makes sense in mul_var()\n> because it may round the result, causing a non-zero carry to propagate\n> into the next digit up, but it's just confusing in mul_var_small(). So\n> I've reduced those by 1, which makes the look much more logical. To be\n> clear, this doesn't change how many digits we're calculating. 
But now\n> res_ndigits is actually the number of digits being calculated, whereas\n> before, res_ndigits was 1 larger and we were calculating res_ndigits -\n> 1 digits, which was confusing.\n\nNice, much cleaner.\n\n> I think this is good to go, so unless there are any further comments,\n> I plan to commit it soon.\n\nLGTM.\n\nBenchmark, only on Apple M3 Max:\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_1; -- HEAD\nTime: 3042.157 ms (00:03.042)\nTime: 3027.711 ms (00:03.028)\nTime: 3078.215 ms (00:03.078)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_1; -- v8\nTime: 2700.676 ms (00:02.701)\nTime: 2713.594 ms (00:02.714)\nTime: 2704.139 ms (00:02.704)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_2; -- HEAD\nTime: 4506.064 ms (00:04.506)\nTime: 3316.204 ms (00:03.316)\nTime: 3321.086 ms (00:03.321)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_2; -- v8\nTime: 2904.786 ms (00:02.905)\nTime: 2921.996 ms (00:02.922)\nTime: 2919.269 ms (00:02.919)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3; -- HEAD\nTime: 4636.051 ms (00:04.636)\nTime: 3439.951 ms (00:03.440)\nTime: 3471.245 ms (00:03.471)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_3; -- v8\nTime: 3034.364 ms (00:03.034)\nTime: 3025.351 ms (00:03.025)\nTime: 3075.024 ms (00:03.075)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- HEAD\nTime: 4978.086 ms (00:04.978)\nTime: 3580.283 ms (00:03.580)\nTime: 3582.719 ms (00:03.583)\n\nSELECT SUM(var1*var2) FROM bench_mul_var_var1ndigits_4; -- v8\nTime: 3147.352 ms (00:03.147)\nTime: 3135.903 ms (00:03.136)\nTime: 3172.491 ms (00:03.172)\n\nRegards,\nJoel\n\n\n", "msg_date": "Sat, 06 Jul 2024 13:16:58 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Sat, 6 Jul 2024 at 12:17, Joel Jacobson <[email protected]> wrote:\n>\n> > I think this is good to go, so unless there are any further comments,\n> > I plan to commit it soon.\n>\n> LGTM.\n>\n\nOK, I have committed this.\n\nAt the last minute, I changed the name of the new function to\nmul_var_short() because \"short\" is probably a better term to use in\nthis context (we already use it in a preceding comment). \"Small\" is\npotentially misleading, because the numbers themselves could be\nnumerically very large.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 9 Jul 2024 10:11:24 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Tue, 9 Jul 2024 at 10:11, Dean Rasheed <[email protected]> wrote:\n>\n> OK, I have committed this.\n>\n\nBefore considering the other patches to optimise for larger inputs, I\nthink it's worth optimising the existing mul_var() code as much as\npossible.\n\nOne thing I noticed while testing the earlier patches on this thread\nwas that they were significantly faster if they used unsigned integers\nrather than signed integers. 
I think the reason is that operations\nlike \"x / 10000\" and \"x % 10000\" use fewer CPU instructions (on every\nplatform, according to godbolt.org) if x is unsigned.\n\nIn addition, this reduces the number of times the digit array needs to\nbe renormalised, which seems to be the biggest factor.\n\nAnother small optimisation that seems to be just about worthwhile is\nto pull the first digit of var1 out of the main loop, so that its\ncontributions can be set directly in dig[], rather than being added to\nit. This allows palloc() to be used to allocate dig[], rather than\npalloc0(), and only requires the upper part of dig[] to be initialised\nto zeros, rather than all of it.\n\nTogether, these seem to give a decent speed-up:\n\n NBASE digits | HEAD rate | patch rate\n--------------+---------------+---------------\n 5 | 5.8797105e+06 | 6.047134e+06\n 12 | 4.140017e+06 | 4.3429845e+06\n 25 | 2.5711072e+06 | 2.7530615e+06\n 50 | 1.0367389e+06 | 1.3370771e+06\n 100 | 367924.1 | 462437.38\n 250 | 77231.32 | 104593.95\n 2500 | 881.48694 | 1197.4739\n 15000 | 25.064987 | 32.78391\n\nThe largest gains are above around 50 NBASE digits, where the time\nspent renormalising dig[] becomes significant.\n\nRegards,\nDean", "msg_date": "Tue, 9 Jul 2024 13:01:35 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Tue, Jul 9, 2024, at 14:01, Dean Rasheed wrote:\n> Before considering the other patches to optimise for larger inputs, I\n> think it's worth optimising the existing mul_var() code as much as\n> possible.\n>\n> One thing I noticed while testing the earlier patches on this thread\n> was that they were significantly faster if they used unsigned integers\n> rather than signed integers. I think the reason is that operations\n> like \"x / 10000\" and \"x % 10000\" use fewer CPU instructions (on every\n> platform, according to godbolt.org) if x is unsigned.\n>\n> In addition, this reduces the number of times the digit array needs to\n> be renormalised, which seems to be the biggest factor.\n>\n> Another small optimisation that seems to be just about worthwhile is\n> to pull the first digit of var1 out of the main loop, so that its\n> contributions can be set directly in dig[], rather than being added to\n> it. 
This allows palloc() to be used to allocate dig[], rather than\n> palloc0(), and only requires the upper part of dig[] to be initialised\n> to zeros, rather than all of it.\n\nNice, really smart!\n\n> Together, these seem to give a decent speed-up:\n>\n> NBASE digits | HEAD rate | patch rate\n> --------------+---------------+---------------\n> 5 | 5.8797105e+06 | 6.047134e+06\n> 12 | 4.140017e+06 | 4.3429845e+06\n> 25 | 2.5711072e+06 | 2.7530615e+06\n> 50 | 1.0367389e+06 | 1.3370771e+06\n> 100 | 367924.1 | 462437.38\n> 250 | 77231.32 | 104593.95\n> 2500 | 881.48694 | 1197.4739\n> 15000 | 25.064987 | 32.78391\n>\n> The largest gains are above around 50 NBASE digits, where the time\n> spent renormalising dig[] becomes significant.\n\nI added some more ndigits test cases:\n\n/*\n * Intel Core i9-14900K\n */\n\n NBASE digits | HEAD rate | patch rate | relative difference\n--------------+----------------+----------------+---------------------\n 1 | 4.7846890e+07 | 4.7846890e+07 | 0.00%\n 2 | 4.9504950e+07 | 4.7393365e+07 | -4.27%\n 3 | 4.0816327e+07 | 4.0983607e+07 | 0.41%\n 4 | 4.1152263e+07 | 3.9370079e+07 | -4.33%\n 5 | 2.2573363e+07 | 2.1978022e+07 | -2.64%\n 6 | 2.1739130e+07 | 1.9646365e+07 | -9.63%\n 7 | 1.6393443e+07 | 1.6339869e+07 | -0.33%\n 8 | 1.6863406e+07 | 1.6778523e+07 | -0.50%\n 9 | 1.5105740e+07 | 1.6420361e+07 | 8.70%\n 10 | 1.3315579e+07 | 1.5527950e+07 | 16.61%\n 11 | 1.2360939e+07 | 1.4124294e+07 | 14.27%\n 12 | 1.1764706e+07 | 1.2836970e+07 | 9.11%\n 13 | 1.0060362e+07 | 1.1820331e+07 | 17.49%\n 14 | 9.0909091e+06 | 1.0000000e+07 | 10.00%\n 15 | 7.6923077e+06 | 8.0000000e+06 | 4.00%\n 16 | 9.0909091e+06 | 9.4339623e+06 | 3.77%\n 17 | 7.2992701e+06 | 9.0909091e+06 | 24.55%\n 18 | 7.0921986e+06 | 7.8125000e+06 | 10.16%\n 19 | 6.5789474e+06 | 6.6666667e+06 | 1.33%\n 20 | 6.2500000e+06 | 6.5789474e+06 | 5.26%\n 21 | 5.8479532e+06 | 6.1728395e+06 | 5.56%\n 22 | 5.5555556e+06 | 5.9880240e+06 | 7.78%\n 24 | 5.2631579e+06 | 5.8823529e+06 | 11.76%\n 25 | 5.2083333e+06 | 5.5555556e+06 | 6.67%\n 26 | 4.7619048e+06 | 5.2631579e+06 | 10.53%\n 27 | 4.5045045e+06 | 5.2083333e+06 | 15.63%\n 28 | 4.4247788e+06 | 4.7619048e+06 | 7.62%\n 29 | 4.1666667e+06 | 4.5454545e+06 | 9.09%\n 30 | 4.0000000e+06 | 4.3478261e+06 | 8.70%\n 31 | 3.4482759e+06 | 4.0000000e+06 | 16.00%\n 32 | 3.9840637e+06 | 4.2016807e+06 | 5.46%\n 50 | 2.0964361e+06 | 2.6595745e+06 | 26.86%\n 100 | 666666.67 | 869565.22 | 30.43%\n 250 | 142653.35 | 171526.59 | 20.24%\n 2500 | 1642.04 | 2197.80 | 33.85%\n 15000 | 41.67 | 52.63 | 26.32%\n(36 rows)\n\n/*\n * AMD Ryzen 9 7950X3D\n */\n\n NBASE digits | HEAD rate | patch rate | relative difference\n--------------+----------------+----------------+---------------------\n 1 | 3.6900369e+07 | 3.8022814e+07 | 3.04%\n 2 | 3.4602076e+07 | 3.5714286e+07 | 3.21%\n 3 | 2.8011204e+07 | 2.7777778e+07 | -0.83%\n 4 | 2.7932961e+07 | 2.8328612e+07 | 1.42%\n 5 | 1.6420361e+07 | 1.7123288e+07 | 4.28%\n 6 | 1.4705882e+07 | 1.5313936e+07 | 4.13%\n 7 | 1.3192612e+07 | 1.3888889e+07 | 5.28%\n 8 | 1.2121212e+07 | 1.2919897e+07 | 6.59%\n 9 | 1.1235955e+07 | 1.2135922e+07 | 8.01%\n 10 | 1.0000000e+07 | 1.1312217e+07 | 13.12%\n 11 | 9.0909091e+06 | 1.0000000e+07 | 10.00%\n 12 | 8.1967213e+06 | 8.4033613e+06 | 2.52%\n 13 | 7.2463768e+06 | 7.7519380e+06 | 6.98%\n 14 | 6.7567568e+06 | 7.1428571e+06 | 5.71%\n 15 | 5.5555556e+06 | 5.8823529e+06 | 5.88%\n 16 | 6.3291139e+06 | 5.7803468e+06 | -8.67%\n 17 | 5.8823529e+06 | 5.9880240e+06 | 1.80%\n 18 | 5.5555556e+06 | 5.7142857e+06 | 2.86%\n 19 | 5.2356021e+06 
| 5.6179775e+06 | 7.30%\n 20 | 4.9019608e+06 | 5.1020408e+06 | 4.08%\n 21 | 4.5454545e+06 | 4.8543689e+06 | 6.80%\n 22 | 4.1841004e+06 | 4.5871560e+06 | 9.63%\n 24 | 4.4642857e+06 | 4.4052863e+06 | -1.32%\n 25 | 4.1666667e+06 | 4.2194093e+06 | 1.27%\n 26 | 4.0000000e+06 | 3.9525692e+06 | -1.19%\n 27 | 3.8461538e+06 | 3.8022814e+06 | -1.14%\n 28 | 3.9062500e+06 | 3.8759690e+06 | -0.78%\n 29 | 3.7878788e+06 | 3.8022814e+06 | 0.38%\n 30 | 3.3898305e+06 | 3.7174721e+06 | 9.67%\n 31 | 2.7472527e+06 | 2.8571429e+06 | 4.00%\n 32 | 3.0395137e+06 | 3.1446541e+06 | 3.46%\n 50 | 1.7094017e+06 | 2.0576132e+06 | 20.37%\n 100 | 518134.72 | 609756.10 | 17.68%\n 250 | 108577.63 | 136612.02 | 25.82%\n 2500 | 1264.22 | 1610.31 | 27.38%\n 15000 | 34.48 | 43.48 | 26.09%\n(36 rows)\n\n/*\n * Apple M3 Max\n */\n\n NBASE digits | HEAD rate | patch rate | relative difference\n--------------+----------------+----------------+---------------------\n 1 | 4.9504950e+07 | 4.9504950e+07 | 0.00%\n 2 | 6.1349693e+07 | 5.9171598e+07 | -3.55%\n 3 | 5.2631579e+07 | 5.2083333e+07 | -1.04%\n 4 | 4.5248869e+07 | 4.5248869e+07 | 0.00%\n 5 | 2.1978022e+07 | 2.2727273e+07 | 3.41%\n 6 | 1.9920319e+07 | 2.0876827e+07 | 4.80%\n 7 | 1.7182131e+07 | 1.8018018e+07 | 4.86%\n 8 | 1.5822785e+07 | 1.6051364e+07 | 1.44%\n 9 | 1.3368984e+07 | 1.3333333e+07 | -0.27%\n 10 | 1.1709602e+07 | 1.1627907e+07 | -0.70%\n 11 | 1.0020040e+07 | 1.0526316e+07 | 5.05%\n 12 | 9.0909091e+06 | 9.0909091e+06 | 0.00%\n 13 | 8.2644628e+06 | 8.2644628e+06 | 0.00%\n 14 | 7.6923077e+06 | 7.6335878e+06 | -0.76%\n 15 | 7.1428571e+06 | 7.0921986e+06 | -0.71%\n 16 | 6.6225166e+06 | 6.5789474e+06 | -0.66%\n 17 | 5.9880240e+06 | 6.2111801e+06 | 3.73%\n 18 | 5.7803468e+06 | 5.5865922e+06 | -3.35%\n 19 | 5.2631579e+06 | 5.2356021e+06 | -0.52%\n 20 | 4.6296296e+06 | 4.8543689e+06 | 4.85%\n 21 | 4.4444444e+06 | 4.3859649e+06 | -1.32%\n 22 | 4.2016807e+06 | 4.0485830e+06 | -3.64%\n 24 | 3.7453184e+06 | 3.5714286e+06 | -4.64%\n 25 | 3.4843206e+06 | 3.4013605e+06 | -2.38%\n 26 | 3.2786885e+06 | 3.2786885e+06 | 0.00%\n 27 | 3.0674847e+06 | 3.1055901e+06 | 1.24%\n 28 | 2.8818444e+06 | 2.9069767e+06 | 0.87%\n 29 | 2.7322404e+06 | 2.7700831e+06 | 1.39%\n 30 | 2.5839793e+06 | 2.6246719e+06 | 1.57%\n 31 | 2.5062657e+06 | 2.4630542e+06 | -1.72%\n 32 | 4.5871560e+06 | 4.6082949e+06 | 0.46%\n 50 | 1.7513135e+06 | 1.9880716e+06 | 13.52%\n 100 | 714285.71 | 833333.33 | 16.67%\n 250 | 124223.60 | 149925.04 | 20.69%\n 2500 | 1440.92 | 1760.56 | 22.18%\n 15000 | 39.53 | 48.08 | 21.63%\n(36 rows)\n\nRegards,\nJoel\n\n\n", "msg_date": "Tue, 09 Jul 2024 16:11:22 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." 
}, { "msg_contents": "On Tue, Jul 9, 2024, at 16:11, Joel Jacobson wrote:\n> I added some more ndigits test cases:\n\nOps, please ignore previous benchmark;\nI had forgot to commit in between the measurements,\nso they all ran in the same db txn,\nwhich caused a lot of noise on few ndigits.\n\nNew benchmark:\n\n> /*\n> * Intel Core i9-14900K\n> */\n\n NBASE digits | HEAD rate | patch rate | relative difference\n--------------+----------------+----------------+---------------------\n 1 | 5.0251256e+07 | 5.2631579e+07 | 4.74%\n 2 | 4.8543689e+07 | 4.9751244e+07 | 2.49%\n 3 | 4.1493776e+07 | 4.3478261e+07 | 4.78%\n 4 | 4.1493776e+07 | 4.0816327e+07 | -1.63%\n 5 | 2.2371365e+07 | 2.3364486e+07 | 4.44%\n 6 | 2.1008403e+07 | 2.1186441e+07 | 0.85%\n 7 | 1.7152659e+07 | 1.6233766e+07 | -5.36%\n 8 | 1.7123288e+07 | 1.8450185e+07 | 7.75%\n 9 | 1.5290520e+07 | 1.7271157e+07 | 12.95%\n 10 | 1.3351135e+07 | 1.5384615e+07 | 15.23%\n 11 | 1.2453300e+07 | 1.4164306e+07 | 13.74%\n 12 | 1.1655012e+07 | 1.2936611e+07 | 11.00%\n 13 | 1.0373444e+07 | 1.1904762e+07 | 14.76%\n 14 | 9.0909091e+06 | 1.0162602e+07 | 11.79%\n 15 | 7.7519380e+06 | 8.1300813e+06 | 4.88%\n 16 | 9.0909091e+06 | 9.8039216e+06 | 7.84%\n 17 | 7.5757576e+06 | 9.0909091e+06 | 20.00%\n 18 | 7.2463768e+06 | 8.2644628e+06 | 14.05%\n 19 | 6.6225166e+06 | 7.5757576e+06 | 14.39%\n 20 | 6.4516129e+06 | 7.0422535e+06 | 9.15%\n 21 | 6.0606061e+06 | 6.5789474e+06 | 8.55%\n 22 | 5.7142857e+06 | 6.2500000e+06 | 9.38%\n 24 | 5.4054054e+06 | 6.0240964e+06 | 11.45%\n 25 | 5.2356021e+06 | 5.8139535e+06 | 11.05%\n 26 | 5.0251256e+06 | 5.8139535e+06 | 15.70%\n 27 | 4.7393365e+06 | 5.7142857e+06 | 20.57%\n 28 | 4.6082949e+06 | 5.2083333e+06 | 13.02%\n 29 | 4.3478261e+06 | 4.9504950e+06 | 13.86%\n 30 | 4.0816327e+06 | 4.6728972e+06 | 14.49%\n 31 | 3.4843206e+06 | 3.9682540e+06 | 13.89%\n 32 | 4.0000000e+06 | 4.1666667e+06 | 4.17%\n 50 | 2.1097046e+06 | 2.8571429e+06 | 35.43%\n 100 | 680272.11 | 909090.91 | 33.64%\n 250 | 141643.06 | 174216.03 | 23.00%\n 2500 | 1626.02 | 2188.18 | 34.57%\n 15000 | 41.67 | 52.63 | 26.32%\n(36 rows)\n\n\n> /*\n> * AMD Ryzen 9 7950X3D\n> */\n\n NBASE digits | HEAD rate | patch rate | relative difference\n--------------+----------------+----------------+---------------------\n 1 | 3.7037037e+07 | 3.8910506e+07 | 5.06%\n 2 | 3.5587189e+07 | 3.5971223e+07 | 1.08%\n 3 | 3.0581040e+07 | 2.9239766e+07 | -4.39%\n 4 | 2.7322404e+07 | 3.0303030e+07 | 10.91%\n 5 | 1.8050542e+07 | 1.9011407e+07 | 5.32%\n 6 | 1.5974441e+07 | 1.6233766e+07 | 1.62%\n 7 | 1.3106160e+07 | 1.3071895e+07 | -0.26%\n 8 | 1.2285012e+07 | 1.3106160e+07 | 6.68%\n 9 | 1.1534025e+07 | 1.2269939e+07 | 6.38%\n 10 | 1.1135857e+07 | 1.1507480e+07 | 3.34%\n 11 | 9.7943193e+06 | 1.0976948e+07 | 12.07%\n 12 | 9.5238095e+06 | 1.0256410e+07 | 7.69%\n 13 | 8.6206897e+06 | 8.7719298e+06 | 1.75%\n 14 | 7.3529412e+06 | 8.1967213e+06 | 11.48%\n 15 | 6.2893082e+06 | 6.7114094e+06 | 6.71%\n 16 | 7.2463768e+06 | 7.0422535e+06 | -2.82%\n 17 | 6.2893082e+06 | 7.2463768e+06 | 15.22%\n 18 | 6.3694268e+06 | 7.4626866e+06 | 17.16%\n 19 | 5.6818182e+06 | 6.6225166e+06 | 16.56%\n 20 | 5.2083333e+06 | 6.1728395e+06 | 18.52%\n 21 | 5.0251256e+06 | 5.7471264e+06 | 14.37%\n 22 | 4.5248869e+06 | 5.1282051e+06 | 13.33%\n 24 | 4.9261084e+06 | 5.1020408e+06 | 3.57%\n 25 | 4.6511628e+06 | 4.9504950e+06 | 6.44%\n 26 | 4.2553191e+06 | 4.6082949e+06 | 8.29%\n 27 | 3.9682540e+06 | 4.2918455e+06 | 8.15%\n 28 | 3.8910506e+06 | 4.1322314e+06 | 6.20%\n 29 | 3.8167939e+06 | 3.7593985e+06 | -1.50%\n 30 | 
3.5842294e+06 | 3.6101083e+06 | 0.72%\n 31 | 3.1948882e+06 | 3.1645570e+06 | -0.95%\n 32 | 3.4722222e+06 | 3.7174721e+06 | 7.06%\n 50 | 1.6474465e+06 | 2.1691974e+06 | 31.67%\n 100 | 555555.56 | 653594.77 | 17.65%\n 250 | 109409.19 | 140449.44 | 28.37%\n 2500 | 1236.09 | 1555.21 | 25.82%\n 15000 | 34.48 | 43.48 | 26.09%\n(36 rows)\n\n> /*\n> * Apple M3 Max\n> */\n\n NBASE digits | HEAD rate | patch rate | relative difference\n--------------+----------------+----------------+---------------------\n 1 | 4.7169811e+07 | 4.7619048e+07 | 0.95%\n 2 | 6.0240964e+07 | 5.8479532e+07 | -2.92%\n 3 | 5.2083333e+07 | 5.3191489e+07 | 2.13%\n 4 | 4.5871560e+07 | 4.6948357e+07 | 2.35%\n 5 | 2.2075055e+07 | 2.3529412e+07 | 6.59%\n 6 | 2.0080321e+07 | 2.1505376e+07 | 7.10%\n 7 | 1.7301038e+07 | 1.8975332e+07 | 9.68%\n 8 | 1.6025641e+07 | 1.6556291e+07 | 3.31%\n 9 | 1.3245033e+07 | 1.3717421e+07 | 3.57%\n 10 | 1.1709602e+07 | 1.2315271e+07 | 5.17%\n 11 | 1.0000000e+07 | 1.0989011e+07 | 9.89%\n 12 | 9.0909091e+06 | 9.7276265e+06 | 7.00%\n 13 | 8.3333333e+06 | 9.0090090e+06 | 8.11%\n 14 | 7.6923077e+06 | 8.0645161e+06 | 4.84%\n 15 | 7.0921986e+06 | 7.5187970e+06 | 6.02%\n 16 | 6.6666667e+06 | 7.0921986e+06 | 6.38%\n 17 | 6.2111801e+06 | 6.3694268e+06 | 2.55%\n 18 | 5.7803468e+06 | 5.9523810e+06 | 2.98%\n 19 | 5.2910053e+06 | 5.4347826e+06 | 2.72%\n 20 | 4.7846890e+06 | 5.0505051e+06 | 5.56%\n 21 | 4.5454545e+06 | 4.6728972e+06 | 2.80%\n 22 | 4.2372881e+06 | 4.3859649e+06 | 3.51%\n 24 | 3.7174721e+06 | 3.8759690e+06 | 4.26%\n 25 | 3.4722222e+06 | 3.6231884e+06 | 4.35%\n 26 | 3.2894737e+06 | 3.3898305e+06 | 3.05%\n 27 | 3.0674847e+06 | 3.1847134e+06 | 3.82%\n 28 | 2.9239766e+06 | 3.0120482e+06 | 3.01%\n 29 | 2.7548209e+06 | 2.8901734e+06 | 4.91%\n 30 | 2.6041667e+06 | 2.7322404e+06 | 4.92%\n 31 | 2.5000000e+06 | 2.5773196e+06 | 3.09%\n 32 | 4.6082949e+06 | 4.7846890e+06 | 3.83%\n 50 | 1.7241379e+06 | 2.0703934e+06 | 20.08%\n 100 | 719424.46 | 869565.22 | 20.87%\n 250 | 124688.28 | 157977.88 | 26.70%\n 2500 | 1455.60 | 1811.59 | 24.46%\n 15000 | 40.00 | 50.00 | 25.00%\n(36 rows)\n\nRegards,\nJoel\n\n\n", "msg_date": "Tue, 09 Jul 2024 21:48:57 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." }, { "msg_contents": "On Tue, Jul 9, 2024, at 14:01, Dean Rasheed wrote:\n> One thing I noticed while testing the earlier patches on this thread\n> was that they were significantly faster if they used unsigned integers\n> rather than signed integers. I think the reason is that operations\n> like \"x / 10000\" and \"x % 10000\" use fewer CPU instructions (on every\n> platform, according to godbolt.org) if x is unsigned.\n>\n> In addition, this reduces the number of times the digit array needs to\n> be renormalised, which seems to be the biggest factor.\n>\n> Another small optimisation that seems to be just about worthwhile is\n> to pull the first digit of var1 out of the main loop, so that its\n> contributions can be set directly in dig[], rather than being added to\n> it. 
This allows palloc() to be used to allocate dig[], rather than\n> palloc0(), and only requires the upper part of dig[] to be initialised\n> to zeros, rather than all of it.\n>\n> Together, these seem to give a decent speed-up:\n..\n> Attachments:\n> * optimise-mul_var.patch\n\nI've reviewed the patch now.\nCode is straightforward, and comments easy to understand.\nLGTM.\n\nRegards,\nJoel\n\n\n", "msg_date": "Tue, 09 Jul 2024 22:28:36 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric multiplication for one and two base-NBASE digit\n multiplicands." } ]
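As a closing illustration of the signed-versus-unsigned point in the
last two messages, the difference can be seen with a pair of trivial
functions (stand-alone demonstration code, not taken from the patch).
For signed operands the compiler must honour C's truncate-toward-zero
rule for negative values, so x / 10000 and x % 10000 need extra fix-up
instructions, while the unsigned forms reduce to a plain
multiply-and-shift sequence; comparing the two on godbolt.org, as
mentioned above, shows this on every common target:

#include <stdint.h>

int32_t  carry_signed(int32_t x)    { return x / 10000; }
int32_t  digit_signed(int32_t x)    { return x % 10000; }

uint32_t carry_unsigned(uint32_t x) { return x / 10000u; }
uint32_t digit_unsigned(uint32_t x) { return x % 10000u; }

The other tweak -- letting var1's first digit set the accumulator
columns directly so that dig[] can come from palloc() and only part of
it still needs zeroing -- simply avoids initialising memory that is
about to be written anyway.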
[ { "msg_contents": "Hi hackers,\n\nWhile working on a rebase for [1] due to 0cecc908e97, I noticed that\nCheckRelationLockedByMe() and CheckRelationOidLockedByMe() are used only in\nassertions.\n\nI think it would make sense to declare / define those functions only for\nassert enabled build: please find attached a tiny patch doing so.\n\nThoughts?\n\n[1]: https://www.postgresql.org/message-id/flat/ZiYjn0eVc7pxVY45%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 1 Jul 2024 06:42:46 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Surround CheckRelation[Oid]LockedByMe() with USE_ASSERT_CHECKING" }, { "msg_contents": "Hi,\n\nOn Mon, Jul 1, 2024 at 12:12 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi hackers,\n>\n> While working on a rebase for [1] due to 0cecc908e97, I noticed that\n> CheckRelationLockedByMe() and CheckRelationOidLockedByMe() are used only in\n> assertions.\n>\n> I think it would make sense to declare / define those functions only for\n> assert enabled build: please find attached a tiny patch doing so.\n>\n> Thoughts?\n\nIf turning the CheckRelationXXXLocked() compile for non-assert builds,\nwhy not do the same for LWLockHeldByMe, LWLockAnyHeldByMe and\nLWLockHeldByMeInMode that are debug-only and being used in asserts?\nWhile it might reduce the compiled binary size a bit for release\nbuilds, we may have to be cautious about external or out of core\nmodules using them.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 1 Jul 2024 12:35:34 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Surround CheckRelation[Oid]LockedByMe() with USE_ASSERT_CHECKING" }, { "msg_contents": "Hi,\n\nOn Mon, Jul 01, 2024 at 12:35:34PM +0530, Bharath Rupireddy wrote:\n> Hi,\n> \n> On Mon, Jul 1, 2024 at 12:12 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > Hi hackers,\n> >\n> > While working on a rebase for [1] due to 0cecc908e97, I noticed that\n> > CheckRelationLockedByMe() and CheckRelationOidLockedByMe() are used only in\n> > assertions.\n> >\n> > I think it would make sense to declare / define those functions only for\n> > assert enabled build: please find attached a tiny patch doing so.\n> >\n> > Thoughts?\n> \n> If turning the CheckRelationXXXLocked() compile for non-assert builds,\n> why not do the same for LWLockHeldByMe, LWLockAnyHeldByMe and\n> LWLockHeldByMeInMode that are debug-only and being used in asserts?\n> While it might reduce the compiled binary size a bit for release\n> builds, we may have to be cautious about external or out of core\n> modules using them.\n\nThanks for the feedback.\n\nCheckRelationOidLockedByMe() is new (as it has been added in 0cecc908e97. 
While\nits counterpart CheckRelationLockedByMe() has been added since a few years (2018)\nin commit b04aeb0a053, I thought it would make sense to surround both of them.\n\nWhile it's true that we could also surround LWLockHeldByMe() (added in e6cba71503f\n, 2004 and signature change in ea9df812d85, 2014), LWLockAnyHeldByMe() (added in\need959a457e, 2022) and LWLockHeldByMeInMode() (added in 016abf1fb83, 2016), I'm\nnot sure we should (due to their \"age\" and as you said we have to be cautious\nabout out of core modules / extensions that may use them).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 1 Jul 2024 07:52:59 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Surround CheckRelation[Oid]LockedByMe() with USE_ASSERT_CHECKING" }, { "msg_contents": "On Mon, Jul 01, 2024 at 06:42:46AM +0000, Bertrand Drouvot wrote:\n> While working on a rebase for [1] due to 0cecc908e97, I noticed that\n> CheckRelationLockedByMe() and CheckRelationOidLockedByMe() are used only in\n> assertions.\n> \n> I think it would make sense to declare / define those functions only for\n> assert enabled build: please find attached a tiny patch doing so.\n> \n> Thoughts?\n\nNot convinced that's a good idea. What about out-of-core code that\nmay use these routines for runtime checks in non-assert builds?\n--\nMichael", "msg_date": "Mon, 1 Jul 2024 17:01:46 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Surround CheckRelation[Oid]LockedByMe() with USE_ASSERT_CHECKING" }, { "msg_contents": "Hi,\n\nOn Mon, Jul 01, 2024 at 05:01:46PM +0900, Michael Paquier wrote:\n> On Mon, Jul 01, 2024 at 06:42:46AM +0000, Bertrand Drouvot wrote:\n> > While working on a rebase for [1] due to 0cecc908e97, I noticed that\n> > CheckRelationLockedByMe() and CheckRelationOidLockedByMe() are used only in\n> > assertions.\n> > \n> > I think it would make sense to declare / define those functions only for\n> > assert enabled build: please find attached a tiny patch doing so.\n> > \n> > Thoughts?\n> \n> Not convinced that's a good idea. What about out-of-core code that\n> may use these routines for runtime checks in non-assert builds?\n\nThanks for the feedback.\n\nYeah that could be an issue for CheckRelationLockedByMe() (CheckRelationOidLockedByMe()\nis too recent to be a concern).\n\nHaving said that 1. out of core could want to use CheckRelationOidLockedByMe() (\nprobably if it was already using CheckRelationLockedByMe()) and 2. 
I just\nsubmitted a rebase for [1] in which I thought that using\nCheckRelationOidLockedByMe() would be a good idea.\n\nSo I think that we can get rid of this proposal.\n\n[1]: https://www.postgresql.org/message-id/ZoJ5RVtMziIa3TQp%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 1 Jul 2024 09:41:31 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Surround CheckRelation[Oid]LockedByMe() with USE_ASSERT_CHECKING" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Mon, Jul 01, 2024 at 06:42:46AM +0000, Bertrand Drouvot wrote:\n>> I think it would make sense to declare / define those functions only for\n>> assert enabled build: please find attached a tiny patch doing so.\n\n> Not convinced that's a good idea. What about out-of-core code that\n> may use these routines for runtime checks in non-assert builds?\n\nYeah. Also, I believe it's possible for an extension that's been\nbuilt with assertions enabled to be used with a core server that\nwasn't. This is why, for example, ExceptionalCondition() is not\nifdef'd away in a non-assert build. Even if you think there's\nno use for CheckRelation[Oid]LockedByMe except in assertions,\nit'd still be plenty reasonable for an extension to call them\nin assertions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Jul 2024 10:21:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Surround CheckRelation[Oid]LockedByMe() with USE_ASSERT_CHECKING" }, { "msg_contents": "Hi,\n\nOn Mon, Jul 01, 2024 at 10:21:35AM -0400, Tom Lane wrote:\n> Michael Paquier <[email protected]> writes:\n> > On Mon, Jul 01, 2024 at 06:42:46AM +0000, Bertrand Drouvot wrote:\n> >> I think it would make sense to declare / define those functions only for\n> >> assert enabled build: please find attached a tiny patch doing so.\n> \n> > Not convinced that's a good idea. What about out-of-core code that\n> > may use these routines for runtime checks in non-assert builds?\n> \n> Yeah. Also, I believe it's possible for an extension that's been\n> built with assertions enabled to be used with a core server that\n> wasn't. This is why, for example, ExceptionalCondition() is not\n> ifdef'd away in a non-assert build. Even if you think there's\n> no use for CheckRelation[Oid]LockedByMe except in assertions,\n> it'd still be plenty reasonable for an extension to call them\n> in assertions.\n\nYeah good point, thanks for the feedback! I've withdrawn the CF entry.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 1 Jul 2024 14:38:44 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Surround CheckRelation[Oid]LockedByMe() with USE_ASSERT_CHECKING" } ]
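For reference, the withdrawn proposal amounted to roughly the following
(a sketch of the idea only, not the posted patch; the prototypes are
abbreviated from lmgr.h and may differ in detail):

/* in lmgr.h */
#ifdef USE_ASSERT_CHECKING
extern bool CheckRelationLockedByMe(Relation relation,
                                    LOCKMODE lockmode, bool orstronger);
extern bool CheckRelationOidLockedByMe(Oid relid,
                                       LOCKMODE lockmode, bool orstronger);
#endif

with the matching definitions compiled out the same way.  As the
replies point out, out-of-core code may call these for runtime checks
in non-assert builds, and an assert-enabled extension may run against a
server built without assertions, which is why the idea was dropped.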
[ { "msg_contents": "Currently, TupleDescData contains the descriptor's attributes in a\nvariable length array of FormData_pg_attribute allocated within the\nsame allocation as the TupleDescData. According to my IDE,\nsizeof(FormData_pg_attribute) == 104 bytes. It's that large mainly due\nto attname being 64 bytes. The TupleDescData.attrs[] array could end\nup quite large on tables with many columns and that could result in\npoor CPU cache hit ratios when deforming tuples.\n\nInstead, we could make TupleDescData contain an out-of-line pointer to\nthe array of FormData_pg_attribute and have a much more compact\ninlined array of some other struct that much more densely contains the\nfields required for tuple deformation. attname and many of the other\nfields are not required to deform a tuple.\n\nI've attached a patch series which does this.\n\n0001: Just fixes up some missing usages of TupleDescAttr(). (mostly\nmissed by me, apparently :-( )\n0002: Adjusts the TupleDescData.attrs array to make it out of line. I\nwanted to make sure nothing weird happened by doing this before doing\nthe bulk of the other changes to add the new struct.\n0003: Adds a very compact 8-byte struct named TupleDescDeformAttr,\nwhich can be used for tuple deformation. 8 columns fits on a 64-byte\ncacheline rather than 13 cachelines.\n0004: Adjusts the attalign to change it from char to uint8. See below.\n\nThe 0004 patch changes the TupleDescDeformAttr.attalign to a uint8\nrather than a char containing 'c', 's', 'i' or 'd'. This allows much\nmore simple code in the att_align_nominal() macro. What's in master is\nquite a complex expression to evaluate every time we deform a column\nas it much translate: 'c' -> 1, 's' -> 2, 'i' -> 4, 'd' -> 8. If we\njust store that numeric value in the struct that macro can become a\nsimple TYPEALIGN() so the operation becomes simple bit masking rather\nthan a poorly branch predictable series of compare and jump.\n\nThe state of this patch series is \"proof of concept\". I think the\ncurrent state should be enough to get an idea of the rough amount of\ncode churn this change would cause and also an idea of the expected\nperformance of the change. It certainly isn't in a finished state.\nI've not put much effort into updating comments or looking at READMEs\nto see what's now outdated.\n\nI also went with trying to patch a bunch of additional boolean columns\nfrom pg_attribute so they just take up 1 bit of space in the attflags\nfield in the new struct. I've not tested the performance of expanding\nthis out so these use 1 bool field each. That would make the struct\nbigger than 8 bytes. Having the struct be a power-of-2 size is also\nbeneficial as it allows fast bit-shifting to be used to get the array\nelement address rather than a more complex (and slower) LEA\ninstruction. I could try making the struct 16 bytes and see if there\nare any further wins by avoiding the bitwise AND on the\nTupleDescDeformAttr.attflags field.\n\nTo test the performance of this, I tried using the attached script\nwhich creates a table where the first column is a variable length\ncolumn and the final column is an int. The query I ran to test the\nperformance inserted 1 million rows into this table and performed a\nsum() on the final column. The attached graph shows that the query is\n30% faster than master with 15 columns between the first and last\ncolumn. For fewer columns, the speedup is less. 
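As a reading aid for patches 0003 and 0004 described above, the kind of
per-attribute layout being talked about looks roughly like this (field
names, widths and flag layout here are assumptions made for this
sketch, not the actual definition in the patch):

typedef struct TupleDescDeformAttr
{
    int32       attcacheoff;    /* cached start offset, or -1 */
    int16       attlen;         /* fixed length, or -1/-2 for varlena etc. */
    uint8       attalign;       /* alignment as a byte count: 1, 2, 4 or 8
                                 * instead of 'c', 's', 'i' or 'd' */
    uint8       attflags;       /* attbyval, attnotnull, ... as bit flags */
} TupleDescDeformAttr;

Eight bytes per attribute puts eight columns on one 64-byte cache line,
and with a numeric attalign the alignment step while deforming can
reduce to plain bit masking, something like

#define att_align_nominal(cur_offset, attalign) \
    TYPEALIGN(attalign, (cur_offset))

rather than the char-based translation the current macro does.  Both
effects feed into the deform-heavy numbers above.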
This is quite a\ndeform-heavy query so it's not like it speeds up every table with that\ncolumn arrangement by 30%, but certainly, some queries could see that\nmuch gain and even more seems possible. I didn't go to a great deal of\ntrouble to find the most deform-heavy workload.\n\nI'll stick this in the July CF. It would be good to get some feedback\non the idea and feedback on whether more work on this is worthwhile.\n\nAs mentioned, the 0001 patch just fixes up the missing usages of the\nTupleDescAttr() macro. I see no reason not to commit this now.\n\nThanks\n\nDavid", "msg_date": "Mon, 1 Jul 2024 20:55:44 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Make tuple deformation faster" }, { "msg_contents": "David Rowley <[email protected]> writes:\n\n> Currently, TupleDescData contains the descriptor's attributes in a\n> variable length array of FormData_pg_attribute allocated within the\n> same allocation as the TupleDescData. According to my IDE,\n> sizeof(FormData_pg_attribute) == 104 bytes. It's that large mainly due\n> to attname being 64 bytes. The TupleDescData.attrs[] array could end\n> up quite large on tables with many columns and that could result in\n> poor CPU cache hit ratios when deforming tuples.\n...\n>\n> To test the performance of this, I tried using the attached script\n> which creates a table where the first column is a variable length\n> column and the final column is an int. The query I ran to test the\n> performance inserted 1 million rows into this table and performed a\n> sum() on the final column. The attached graph shows that the query is\n> 30% faster than master with 15 columns between the first and last\n> column.\n\nYet another a wonderful optimization! I just want to know how did you\nfind this optimization (CPU cache hit) case and think it worths some\ntime. because before we invest our time to optimize something, it is\nbetter know that we can get some measurable improvement after our time\nis spend. At for this case, 30% is really a huge number even it is a\nartificial case.\n\nAnother case is Andrew introduced NullableDatum 5 years ago and said using\nit in TupleTableSlot could be CPU cache friendly, I can follow that, but\nhow much it can improve in an ideal case, is it possible to forecast it\nsomehow? I ask it here because both cases are optimizing for CPU cache..\n\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Mon, 01 Jul 2024 17:17:00 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make tuple deformation faster" }, { "msg_contents": "On Mon, 1 Jul 2024 at 21:17, Andy Fan <[email protected]> wrote:\n> Yet another a wonderful optimization! I just want to know how did you\n> find this optimization (CPU cache hit) case and think it worths some\n> time. because before we invest our time to optimize something, it is\n> better know that we can get some measurable improvement after our time\n> is spend. At for this case, 30% is really a huge number even it is a\n> artificial case.\n>\n> Another case is Andrew introduced NullableDatum 5 years ago and said using\n> it in TupleTableSlot could be CPU cache friendly, I can follow that, but\n> how much it can improve in an ideal case, is it possible to forecast it\n> somehow? 
I ask it here because both cases are optimizing for CPU cache..\n\nHave a look at:\n\nperf stat --pid=<backend pid>\n\nOn my AMD Zen4 machine running the 16 extra column test from the\nscript in my last email, I see:\n\n$ echo master && perf stat --pid=389510 sleep 10\nmaster\n\n Performance counter stats for process id '389510':\n\n 9990.65 msec task-clock:u # 0.999 CPUs utilized\n 0 context-switches:u # 0.000 /sec\n 0 cpu-migrations:u # 0.000 /sec\n 0 page-faults:u # 0.000 /sec\n 49407204156 cycles:u # 4.945 GHz\n 18529494 stalled-cycles-frontend:u # 0.04% frontend\ncycles idle\n 8505168 stalled-cycles-backend:u # 0.02% backend cycles idle\n 165442142326 instructions:u # 3.35 insn per cycle\n # 0.00 stalled\ncycles per insn\n 39409877343 branches:u # 3.945 G/sec\n 146350275 branch-misses:u # 0.37% of all branches\n\n 10.001012132 seconds time elapsed\n\n$ echo patched && perf stat --pid=380216 sleep 10\npatched\n\n Performance counter stats for process id '380216':\n\n 9989.14 msec task-clock:u # 0.998 CPUs utilized\n 0 context-switches:u # 0.000 /sec\n 0 cpu-migrations:u # 0.000 /sec\n 0 page-faults:u # 0.000 /sec\n 49781280456 cycles:u # 4.984 GHz\n 22922276 stalled-cycles-frontend:u # 0.05% frontend\ncycles idle\n 24259785 stalled-cycles-backend:u # 0.05% backend cycles idle\n 213688149862 instructions:u # 4.29 insn per cycle\n # 0.00 stalled\ncycles per insn\n 44147675129 branches:u # 4.420 G/sec\n 14282567 branch-misses:u # 0.03% of all branches\n\n 10.005034271 seconds time elapsed\n\nYou can see the branch predictor has done a *much* better job in the\npatched code vs master with about 10x fewer misses. This should have\nhelped contribute to the \"insn per cycle\" increase. 4.29 is quite\ngood for postgres. I often see that around 0.5. According to [1]\n(relating to Zen4), \"We get a ridiculous 12 NOPs per cycle out of the\nmicro-op cache\". I'm unsure how micro-ops translate to \"insn per\ncycle\" that's shown in perf stat. I thought 4-5 was about the maximum\npipeline size from today's era of CPUs. Maybe someone else can explain\nbetter than I can. In more simple terms, generally, the higher the\n\"insn per cycle\", the better. Also, the lower all of the idle and\nbranch miss percentages are that's generally also better. However,\nyou'll notice that the patched version has more front and backend\nstalls. I assume this is due to performing more instructions per cycle\nfrom improved branch prediction causing memory and instruction stalls\nto occur more frequently, effectively (I think) it's just hitting the\nnext bottleneck(s) - memory and instruction decoding. At least, modern\nCPUs should be able to out-pace RAM in many workloads, so perhaps it's\nnot that surprising that \"backend cycles idle\" has gone up due to such\na large increase in instructions per cycle due to improved branch\nprediction.\n\nIt would be nice to see this tested on some modern Intel CPU. 
A 13th\nseries or 14th series, for example, or even any intel from the past 5\nyears would be better than nothing.\n\nDavid\n\n[1] https://chipsandcheese.com/2022/11/05/amds-zen-4-part-1-frontend-and-execution-engine/\n\n\n", "msg_date": "Mon, 1 Jul 2024 22:07:09 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make tuple deformation faster" }, { "msg_contents": "On Mon, 1 Jul 2024 at 10:56, David Rowley <[email protected]> wrote:\n>\n> Currently, TupleDescData contains the descriptor's attributes in a\n> variable length array of FormData_pg_attribute allocated within the\n> same allocation as the TupleDescData. According to my IDE,\n> sizeof(FormData_pg_attribute) == 104 bytes. It's that large mainly due\n> to attname being 64 bytes. The TupleDescData.attrs[] array could end\n> up quite large on tables with many columns and that could result in\n> poor CPU cache hit ratios when deforming tuples.\n>\n> Instead, we could make TupleDescData contain an out-of-line pointer to\n> the array of FormData_pg_attribute and have a much more compact\n> inlined array of some other struct that much more densely contains the\n> fields required for tuple deformation. attname and many of the other\n> fields are not required to deform a tuple.\n\n+1\n\n> I've attached a patch series which does this.\n>\n> 0001: Just fixes up some missing usages of TupleDescAttr(). (mostly\n> missed by me, apparently :-( )\n> 0002: Adjusts the TupleDescData.attrs array to make it out of line. I\n> wanted to make sure nothing weird happened by doing this before doing\n> the bulk of the other changes to add the new struct.\n> 0003: Adds a very compact 8-byte struct named TupleDescDeformAttr,\n> which can be used for tuple deformation. 8 columns fits on a 64-byte\n> cacheline rather than 13 cachelines.\n\nCool, that's similar to, but even better than, my patch from 2021 over at [0].\n\nOne thing I'm slightly concerned about is that this allocates another\n8 bytes for each attribute in the tuple descriptor. While that's not a\nlot when compared with the ->attrs array, it's still quite a lot when\nwe might not care at all about this data; e.g. in temporary tuple\ndescriptors during execution, in intermediate planner nodes.\n\nDid you test for performance gains (or losses) with an out-of-line\nTupleDescDeformAttr array? One benefit from this would be that we\ncould reuse the deform array for suffix truncated TupleDescs, reuse of\nwhich currently would require temporarily updating TupleDesc->natts\nwith a smaller value; but with out-of-line ->attrs and ->deform_attrs,\nwe could reuse these arrays between TupleDescs if one is shorter than\nthe other, but has otherwise fully matching attributes. I know that\nbtree split code would benefit from this, as it wouldn't have to\nconstruct a full new TupleDesc when it creates a suffix-truncated\ntuple during page splits.\n\n> 0004: Adjusts the attalign to change it from char to uint8. See below.\n>\n> The 0004 patch changes the TupleDescDeformAttr.attalign to a uint8\n> rather than a char containing 'c', 's', 'i' or 'd'. This allows much\n> more simple code in the att_align_nominal() macro. What's in master is\n> quite a complex expression to evaluate every time we deform a column\n> as it much translate: 'c' -> 1, 's' -> 2, 'i' -> 4, 'd' -> 8. 
If we\n> just store that numeric value in the struct that macro can become a\n> simple TYPEALIGN() so the operation becomes simple bit masking rather\n> than a poorly branch predictable series of compare and jump.\n\n+1, that's something I'd missed in my patches, and is probably the\nlargest contributor to the speedup.\n\n> I'll stick this in the July CF. It would be good to get some feedback\n> on the idea and feedback on whether more work on this is worthwhile.\n\nDo you plan to remove the ->attcacheoff catalog field from the\nFormData_pg_attribute, now that (with your patch) it isn't used\nanymore as a placeholder field for fast (de)forming of tuples?\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/message-id/CAEze2Wh8-metSryZX_Ubj-uv6kb%2B2YnzHAejmEdubjhmGusBAg%40mail.gmail.com\n\n\n", "msg_date": "Mon, 1 Jul 2024 12:07:11 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make tuple deformation faster" }, { "msg_contents": "On Mon, 1 Jul 2024 at 22:07, Matthias van de Meent\n<[email protected]> wrote:\n> Cool, that's similar to, but even better than, my patch from 2021 over at [0].\n\nI'll have a read of that. Thanks for pointing it out.\n\n> One thing I'm slightly concerned about is that this allocates another\n> 8 bytes for each attribute in the tuple descriptor. While that's not a\n> lot when compared with the ->attrs array, it's still quite a lot when\n> we might not care at all about this data; e.g. in temporary tuple\n> descriptors during execution, in intermediate planner nodes.\n\nI've not done it in the patch, but one way to get some of that back is\nto ditch pg_attribute.attcacheoff. There's no need for it after this\npatch. That's only 4 out of 8 bytes, however. I think in most cases\ndue to FormData_pg_attribute being so huge, the aset.c power-of-2\nroundup behaviour will be unlikely to cross a power-of-2 boundary.\n\nThe following demonstrates which column counts that actually makes a\ndifference with:\n\nselect c as n_cols,old_bytes, new_bytes from (select c,24+104*c as\nold_bytes, 32+100*c+8*c as new_bytes from generate_series(1,1024) c)\nwhere position('1' in old_bytes::bit(32)::text) != position('1' in\nnew_bytes::bit(32)::text);\n\nThat returns just 46 column counts out of 1024 where we cross a power\nof 2 boundaries with the patched code that we didn't cross in master.\nOf course, larger pallocs will result in a malloc() directly, so\nperhaps that's not a good measure. At least for smaller column counts\nit should be mainly the same amount of memory used. There are only 6\nrows in there for column counts below 100. I think if we were worried\nabout memory there are likely 100 other things we could do to reclaim\nsome. It would only take some shuffling of fields in RelationData. I\ncount 50 bytes of holes in that struct out of the 488 bytes. There are\nprobably a few that could be moved without upsetting the\nstruct-field-order-lords too much.\n\n> Did you test for performance gains (or losses) with an out-of-line\n> TupleDescDeformAttr array? One benefit from this would be that we\n> could reuse the deform array for suffix truncated TupleDescs, reuse of\n> which currently would require temporarily updating TupleDesc->natts\n> with a smaller value; but with out-of-line ->attrs and ->deform_attrs,\n> we could reuse these arrays between TupleDescs if one is shorter than\n> the other, but has otherwise fully matching attributes. 
I know that\n> btree split code would benefit from this, as it wouldn't have to\n> construct a full new TupleDesc when it creates a suffix-truncated\n> tuple during page splits.\n\nNo, but it sounds easy to test as patch 0002 moves that out of line\nand does nothing else.\n\n> > 0004: Adjusts the attalign to change it from char to uint8. See below.\n> >\n> > The 0004 patch changes the TupleDescDeformAttr.attalign to a uint8\n> > rather than a char containing 'c', 's', 'i' or 'd'. This allows much\n> > more simple code in the att_align_nominal() macro. What's in master is\n> > quite a complex expression to evaluate every time we deform a column\n> > as it much translate: 'c' -> 1, 's' -> 2, 'i' -> 4, 'd' -> 8. If we\n> > just store that numeric value in the struct that macro can become a\n> > simple TYPEALIGN() so the operation becomes simple bit masking rather\n> > than a poorly branch predictable series of compare and jump.\n>\n> +1, that's something I'd missed in my patches, and is probably the\n> largest contributor to the speedup.\n\nI think so too and I did consider if we should try and do that to\npg_attribute, renaming the column to attalignby. I started but didn't\nfinish a patch for that.\n\n> > I'll stick this in the July CF. It would be good to get some feedback\n> > on the idea and feedback on whether more work on this is worthwhile.\n>\n> Do you plan to remove the ->attcacheoff catalog field from the\n> FormData_pg_attribute, now that (with your patch) it isn't used\n> anymore as a placeholder field for fast (de)forming of tuples?\n\nYes, I plan to do that once I get more confidence I'm on to a winner here.\n\nThanks for having a look at this.\n\nDavid\n\n\n", "msg_date": "Mon, 1 Jul 2024 22:49:07 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make tuple deformation faster" }, { "msg_contents": "On Mon, 1 Jul 2024 at 12:49, David Rowley <[email protected]> wrote:\n>\n> On Mon, 1 Jul 2024 at 22:07, Matthias van de Meent\n> <[email protected]> wrote:\n> > One thing I'm slightly concerned about is that this allocates another\n> > 8 bytes for each attribute in the tuple descriptor. While that's not a\n> > lot when compared with the ->attrs array, it's still quite a lot when\n> > we might not care at all about this data; e.g. in temporary tuple\n> > descriptors during execution, in intermediate planner nodes.\n>\n> I've not done it in the patch, but one way to get some of that back is\n> to ditch pg_attribute.attcacheoff. There's no need for it after this\n> patch. That's only 4 out of 8 bytes, however.\n\nFormData_pg_attribute as a C struct has 4-byte alignment; AFAIK it\ndoesn't have any fields that require 8-byte alignment? 
Only on 64-bit\nsystems we align the tuples on pages with 8-byte alignment, but\nin-memory arrays of the struct wouldn't have to deal with that, AFAIK.\n\n> I think in most cases\n> due to FormData_pg_attribute being so huge, the aset.c power-of-2\n> roundup behaviour will be unlikely to cross a power-of-2 boundary.\n\nASet isn't the only allocator, but default enough for this to make sense, yes.\n\n> The following demonstrates which column counts that actually makes a\n> difference with:\n>\n> select c as n_cols,old_bytes, new_bytes from (select c,24+104*c as\n> old_bytes, 32+100*c+8*c as new_bytes from generate_series(1,1024) c)\n> where position('1' in old_bytes::bit(32)::text) != position('1' in\n> new_bytes::bit(32)::text);\n>\n> That returns just 46 column counts out of 1024 where we cross a power\n> of 2 boundaries with the patched code that we didn't cross in master.\n> Of course, larger pallocs will result in a malloc() directly, so\n> perhaps that's not a good measure. At least for smaller column counts\n> it should be mainly the same amount of memory used. There are only 6\n> rows in there for column counts below 100. I think if we were worried\n> about memory there are likely 100 other things we could do to reclaim\n> some. It would only take some shuffling of fields in RelationData. I\n> count 50 bytes of holes in that struct out of the 488 bytes. There are\n> probably a few that could be moved without upsetting the\n> struct-field-order-lords too much.\n\nI'd love for RelationData to be split into IndexRelation,\nTableRelation, ForeignTableRelation, etc., as there's a lot of wastage\ncaused by exclusive fields, too.\n\n> > > 0004: Adjusts the attalign to change it from char to uint8. See below.\n> > >\n> > > The 0004 patch changes the TupleDescDeformAttr.attalign to a uint8\n> > > rather than a char containing 'c', 's', 'i' or 'd'. This allows much\n> > > more simple code in the att_align_nominal() macro. What's in master is\n> > > quite a complex expression to evaluate every time we deform a column\n> > > as it much translate: 'c' -> 1, 's' -> 2, 'i' -> 4, 'd' -> 8. If we\n> > > just store that numeric value in the struct that macro can become a\n> > > simple TYPEALIGN() so the operation becomes simple bit masking rather\n> > > than a poorly branch predictable series of compare and jump.\n> >\n> > +1, that's something I'd missed in my patches, and is probably the\n> > largest contributor to the speedup.\n>\n> I think so too and I did consider if we should try and do that to\n> pg_attribute, renaming the column to attalignby. I started but didn't\n> finish a patch for that.\n\nI'm not sure we have a pg_type entry that that supports numeric\nattalign values without increasing the size of the field, as the one\nsingle-byte integer-like type (char) is always used as a printable\ncharacter, and implied to always be printable through its usage in\ne.g. 
nodeToString infrastructure.\n\nI'd love to have a better option, but we don't seem to have one yet.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 1 Jul 2024 13:42:02 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make tuple deformation faster" }, { "msg_contents": "On Mon, 1 Jul 2024 at 23:42, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Mon, 1 Jul 2024 at 12:49, David Rowley <[email protected]> wrote:\n> >\n> > On Mon, 1 Jul 2024 at 22:07, Matthias van de Meent\n> > <[email protected]> wrote:\n> > > One thing I'm slightly concerned about is that this allocates another\n> > > 8 bytes for each attribute in the tuple descriptor. While that's not a\n> > > lot when compared with the ->attrs array, it's still quite a lot when\n> > > we might not care at all about this data; e.g. in temporary tuple\n> > > descriptors during execution, in intermediate planner nodes.\n> >\n> > I've not done it in the patch, but one way to get some of that back is\n> > to ditch pg_attribute.attcacheoff. There's no need for it after this\n> > patch. That's only 4 out of 8 bytes, however.\n>\n> FormData_pg_attribute as a C struct has 4-byte alignment; AFAIK it\n> doesn't have any fields that require 8-byte alignment? Only on 64-bit\n> systems we align the tuples on pages with 8-byte alignment, but\n> in-memory arrays of the struct wouldn't have to deal with that, AFAIK.\n\nYeah, 4-byte alignment. \"out of 8 bytes\" I was talking about is\nsizeof(TupleDescDeformAttr), which I believe is the same \"another 8\nbytes\" you had mentioned. What I meant was that deleting attcacheoff\nonly reduces FormData_pg_attribute by 4 bytes per column and adding\nTupleDescDeformAttr adds 8 per column, so we still use 4 more bytes\nper column with the patch.\n\nI really doubt the 4 bytes extra memory is a big concern here. It\nwould be more concerning for patch that wanted to do something like\nchange NAMEDATALEN to 128, but I think the main concern with that\nwould be even slower tuple deforming. Additional memory would also be\nconcerning, but I doubt that's more important than the issue of making\nall queries slower due to slower tuple deformation, which is what such\na patch would result in.\n\n> > I think in most cases\n> > due to FormData_pg_attribute being so huge, the aset.c power-of-2\n> > roundup behaviour will be unlikely to cross a power-of-2 boundary.\n>\n> ASet isn't the only allocator, but default enough for this to make sense, yes.\n\nIt's the only allocator we use for allocating TupleDescs, so other\ntypes and their behaviour are not relevant here.\n\n> I'm not sure we have a pg_type entry that that supports numeric\n> attalign values without increasing the size of the field, as the one\n> single-byte integer-like type (char) is always used as a printable\n> character, and implied to always be printable through its usage in\n> e.g. 
nodeToString infrastructure.\n>\n> I'd love to have a better option, but we don't seem to have one yet.\n\nyeah, select typname from pg_Type where typalign = 'c' and typlen=1;\nhas just bool and char.\n\nI'm happy to keep going with this version of the patch unless someone\npoints out some good reason that we should go with the alternative\ninstead.\n\nDavid\n\n\n", "msg_date": "Tue, 2 Jul 2024 12:23:39 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make tuple deformation faster" }, { "msg_contents": "David Rowley <[email protected]> writes:\n\n> You can see the branch predictor has done a *much* better job in the\n> patched code vs master with about 10x fewer misses. This should have\n> helped contribute to the \"insn per cycle\" increase. 4.29 is quite\n> good for postgres. I often see that around 0.5. According to [1]\n> (relating to Zen4), \"We get a ridiculous 12 NOPs per cycle out of the\n> micro-op cache\". I'm unsure how micro-ops translate to \"insn per\n> cycle\" that's shown in perf stat. I thought 4-5 was about the maximum\n> pipeline size from today's era of CPUs. Maybe someone else can explain\n> better than I can. In more simple terms, generally, the higher the\n> \"insn per cycle\", the better. Also, the lower all of the idle and\n> branch miss percentages are that's generally also better. However,\n> you'll notice that the patched version has more front and backend\n> stalls. I assume this is due to performing more instructions per cycle\n> from improved branch prediction causing memory and instruction stalls\n> to occur more frequently, effectively (I think) it's just hitting the\n> next bottleneck(s) - memory and instruction decoding. At least, modern\n> CPUs should be able to out-pace RAM in many workloads, so perhaps it's\n> not that surprising that \"backend cycles idle\" has gone up due to such\n> a large increase in instructions per cycle due to improved branch\n> prediction.\n\nThanks for the answer, just another area desvers to exploring.\n\n> It would be nice to see this tested on some modern Intel CPU. A 13th\n> series or 14th series, for example, or even any intel from the past 5\n> years would be better than nothing.\n\nI have two kind of CPUs.\n\na). Intel Xeon Processor (Icelake) for my ECS\nb). Intel(R) Core(TM) i5-8259U CPU @ 2.30GHz at Mac.\n\nMy ECS reports \"<not supported> branch-misses\", probabaly because it\nruns in virtualization software , and Mac doesn't support perf yet :( \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Tue, 02 Jul 2024 09:07:23 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make tuple deformation faster" }, { "msg_contents": "On Tue, 2 Jul 2024 at 02:23, David Rowley <[email protected]> wrote:\n>\n> On Mon, 1 Jul 2024 at 23:42, Matthias van de Meent\n> <[email protected]> wrote:\n> >\n> > On Mon, 1 Jul 2024 at 12:49, David Rowley <[email protected]> wrote:\n> > >\n> > > On Mon, 1 Jul 2024 at 22:07, Matthias van de Meent\n> > > <[email protected]> wrote:\n> > > > One thing I'm slightly concerned about is that this allocates another\n> > > > 8 bytes for each attribute in the tuple descriptor. While that's not a\n> > > > lot when compared with the ->attrs array, it's still quite a lot when\n> > > > we might not care at all about this data; e.g. 
in temporary tuple\n> > > > descriptors during execution, in intermediate planner nodes.\n> > >\n> > > I've not done it in the patch, but one way to get some of that back is\n> > > to ditch pg_attribute.attcacheoff. There's no need for it after this\n> > > patch. That's only 4 out of 8 bytes, however.\n> >\n> > FormData_pg_attribute as a C struct has 4-byte alignment; AFAIK it\n> > doesn't have any fields that require 8-byte alignment? Only on 64-bit\n> > systems we align the tuples on pages with 8-byte alignment, but\n> > in-memory arrays of the struct wouldn't have to deal with that, AFAIK.\n>\n> Yeah, 4-byte alignment. \"out of 8 bytes\" I was talking about is\n> sizeof(TupleDescDeformAttr), which I believe is the same \"another 8\n> bytes\" you had mentioned. What I meant was that deleting attcacheoff\n> only reduces FormData_pg_attribute by 4 bytes per column and adding\n> TupleDescDeformAttr adds 8 per column, so we still use 4 more bytes\n> per column with the patch.\n\nI see I was confused, thank you for clarifying. As I said, the\nconcerns were only small; 4 more bytes only in memory shouldn't matter\nmuch in the grand scheme of things.\n\n> I'm happy to keep going with this version of the patch\n\n+1, go for it.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 15 Jul 2024 14:12:54 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make tuple deformation faster" }, { "msg_contents": "On Mon, Jul 1, 2024 at 5:07 PM David Rowley <[email protected]> wrote:\n\n> cycles idle\n> 8505168 stalled-cycles-backend:u # 0.02% backend cycles idle\n> 165442142326 instructions:u # 3.35 insn per cycle\n> # 0.00 stalled\n> cycles per insn\n> 39409877343 branches:u # 3.945 G/sec\n> 146350275 branch-misses:u # 0.37% of all branches\n\n> patched\n\n> cycles idle\n> 24259785 stalled-cycles-backend:u # 0.05% backend cycles idle\n> 213688149862 instructions:u # 4.29 insn per cycle\n> # 0.00 stalled\n> cycles per insn\n> 44147675129 branches:u # 4.420 G/sec\n> 14282567 branch-misses:u # 0.03% of all branches\n\n> You can see the branch predictor has done a *much* better job in the\n> patched code vs master with about 10x fewer misses. This should have\n\nNice!\n\n> helped contribute to the \"insn per cycle\" increase. 4.29 is quite\n> good for postgres. I often see that around 0.5. According to [1]\n> (relating to Zen4), \"We get a ridiculous 12 NOPs per cycle out of the\n> micro-op cache\". I'm unsure how micro-ops translate to \"insn per\n> cycle\" that's shown in perf stat. I thought 4-5 was about the maximum\n> pipeline size from today's era of CPUs.\n\n\"ins per cycle\" is micro-ops retired (i.e. excludes those executed\nspeculatively on a mispredicted branch).\n\nThat article mentions that 6 micro-ops per cycle can enter the backend\nfrom the frontend, but that can happen only with internally cached\nops, since only 4 instructions per cycle can be decoded. In specific\ncases, CPUs can fuse multiple front-end instructions into a single\nmacro-op, which I think means a pair of micro-ops that can \"travel\ntogether\" as one. The authors concluded further down that \"Zen 4’s\nreorder buffer is also special, because each entry can hold up to 4\nNOPs. 
Pairs of NOPs are likely fused by the decoders, and pairs of\nfused NOPs are fused again at the rename stage.\"\n\n\n", "msg_date": "Thu, 25 Jul 2024 10:18:15 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make tuple deformation faster" }, { "msg_contents": "On Tue, 16 Jul 2024 at 00:13, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Tue, 2 Jul 2024 at 02:23, David Rowley <[email protected]> wrote:\n> > I'm happy to keep going with this version of the patch\n>\n> +1, go for it.\n\nI've attached an updated patch series which are a bit more polished\nthan the last set. I've attempted to document the distinction between\nFormData_pg_attribute and the abbreviated struct and tried to give an\nindication of which one should be used.\n\nApart from that, changes include:\n\n* I pushed the v1-0001 patch, so that's removed from the patch series.\n* Rename TupleDescDeformAttr struct. It's now called CompactAttribute.\n* Rename TupleDescDeformAttr() macro. It's now called TupleDescCompactAttr()\n* Other macro renaming. e.g. ATT_IS_PACKABLE_FAST to COMPACT_ATTR_IS_PACKABLE\n* In 0003, renamed CompactAttribute.attalign to attalignby to make it\neasier to understand the distinction between the align char and the\nnumber of bytes.\n* Added 0004 patch to remove pg_attribute.attcacheoff.\n\nThere are a few more things that could be done to optimise a few more\nthings. For example, a bunch of places still use att_align_nominal().\nWith a bit more work, these could use att_nominal_alignby(). I'm not\nyet sure of the cleanest way to do this. Adding alignby to the\ntypecache might be one way, or maybe just a function that converts the\nattalign to the number of bytes. This would be useful in all places\nwhere att_align_nominal() is used in loops, as converting the char to\nthe number of bytes would only be done once rather than once per loop.\nI feel like this patch series is probably big enough for now, so I'd\nlike to opt for those improvements to take place as follow-on work.\n\nI'll put this up for the CF bot to run with for a bit as the patch has\nneeded a rebase since I pushed the v1-0001 patch.\n\nDavid", "msg_date": "Tue, 6 Aug 2024 13:11:34 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make tuple deformation faster" }, { "msg_contents": "On Tue, 6 Aug 2024 at 13:11, David Rowley <[email protected]> wrote:\n> I'll put this up for the CF bot to run with for a bit as the patch has\n> needed a rebase since I pushed the v1-0001 patch.\n\nI've been doing more work on this patch set as I'd been concerned\nthere wasn't any validation to ensure the TupleDesc's\nFormData_pg_attribute and the CompactAttribute are kept in sync when\nthe TupleDesc is altered in various places around the codebase. To\nmake this more robust, in USE_ASSERT_CHECKING builds, I made it so the\nTupleDescCompactAttr() macro is turned into an inline function with an\nAssert to validate the stored CompactAttribute vs one freshly\npopulated from the FormData_pg_attribute. Doing this caused me to find\na missed call to populate_compact_attribute(), so was worth the\neffort. There's no apparent performance difference when running all\nthe tests with and without this extra checking.\n\nI also spent time doing performance tests using 3 different machines.\nI didn't document the previous performance tests, but I expect I ran\nthem on my AMD 7945hx laptop. 
On testing again today, I used that\nZen4 laptop plus an AMD 3990x (Zen2) and a 10-core Apple M2. I found\nthat it was only the 7945hx laptop that was showing any decent gains\nfrom this patch :(. After thinking for a bit, I decided to expand the\nCompactAttribute.attflags field, where I'd been bit packing 5 boolean\nfields from pg_attribute, into separate bool fields. This made\nthe performance much better. The 0005 patch contains this change\nindependently.\n\nPlease see the attached \"patches-0001-0005_results.png\". This shows\nthe test query running 25% faster on the 7945hx laptop with gcc. The\nleast gains were from the Apple M2 at about a 9-10% increase.\n\nThe \"patches-0001-0004_results.png\" shows the results with the smaller\nbit-packed CompactAttribute struct. You can see that without the 0005\npatch, there are some performance regressions, so I propose including\n0005 which widens CompactAttribute from 8 bytes to 16 bytes.\n\nDavid", "msg_date": "Tue, 3 Sep 2024 17:05:55 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make tuple deformation faster" } ]
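To make the attalign change discussed in the thread above easier to picture, here is a minimal, self-contained C sketch of the idea (not code from the patches themselves; the struct layout and all names below are made up for illustration). It contrasts re-translating the catalog characters 'c'/'s'/'i'/'d' on every column with storing the alignment requirement directly as a byte count, which reduces nominal alignment to a single TYPEALIGN-style mask.

/*
 * Minimal sketch of the attalign idea from the thread above.  Not code from
 * the patches: the struct layout and all names here are made-up placeholders.
 * Storing the alignment as a byte count (1, 2, 4 or 8) instead of the catalog
 * characters 'c'/'s'/'i'/'d' turns nominal alignment into plain bit masking.
 */
#include <stdint.h>
#include <stdio.h>

typedef struct CompactAttributeSketch
{
    int16_t attlen;      /* fixed length in bytes, or -1 for varlena */
    uint8_t attalignby;  /* alignment requirement in bytes: 1, 2, 4 or 8 */
} CompactAttributeSketch;

/* char-based: translate the catalog char every time a column is deformed */
static uintptr_t
align_by_char(char attalign, uintptr_t off)
{
    uintptr_t a = (attalign == 'i') ? 4 :
                  (attalign == 'c') ? 1 :
                  (attalign == 'd') ? 8 : 2;  /* otherwise 's' */
    return (off + a - 1) & ~(a - 1);
}

/* byte-count-based: the stored value makes it a single TYPEALIGN-like mask */
static uintptr_t
align_by_bytes(uint8_t alignby, uintptr_t off)
{
    return (off + alignby - 1) & ~((uintptr_t) alignby - 1);
}

int
main(void)
{
    CompactAttributeSketch att = {4, 4};  /* an int4-like column */

    /* both align offset 5 up to 8 for 4-byte alignment */
    printf("%zu %zu\n",
           (size_t) align_by_char('i', 5),
           (size_t) align_by_bytes(att.attalignby, 5));
    return 0;
}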
[ { "msg_contents": "Hi All,\nWhile reviewing Richard's patch for grouping sets, I stumbled upon\nfollowing explain output\n\nexplain (costs off)\nselect distinct on (a, b) a, b\nfrom (values (1, 1), (2, 2)) as t (a, b) where a = b\ngroup by grouping sets((a, b), (a))\norder by a, b;\n QUERY PLAN\n----------------------------------------------------------------\n Unique\n -> Sort\n Sort Key: \"*VALUES*\".column1, \"*VALUES*\".column2\n -> HashAggregate\n Hash Key: \"*VALUES*\".column1, \"*VALUES*\".column2\n Hash Key: \"*VALUES*\".column1\n -> Values Scan on \"*VALUES*\"\n Filter: (column1 = column2)\n(8 rows)\n\nThere is no VALUES.column1 and VALUES.column2 in the query. The alias t.a\nand t.b do not appear anywhere in the explain output. I think explain\noutput should look like\nexplain (costs off)\nselect distinct on (a, b) a, b\nfrom (values (1, 1), (2, 2)) as t (a, b) where a = b\ngroup by grouping sets((a, b), (a))\norder by a, b;\n QUERY PLAN\n----------------------------------------------------------------\n Unique\n -> Sort\n Sort Key: t.a, t.b\n -> HashAggregate\n Hash Key: t.a, t.b\n Hash Key: t.a\n -> Values Scan on \"*VALUES*\" t\n Filter: (a = b)\n(8 rows)\n\nI didn't get time to figure out the reason behind this, nor the history.\nBut I thought I would report it nonetheless.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nHi All,While reviewing Richard's patch for grouping sets, I stumbled upon following explain outputexplain (costs off)select distinct on (a, b) a, bfrom (values (1, 1), (2, 2)) as t (a, b) where a = bgroup by grouping sets((a, b), (a))order by a, b;                           QUERY PLAN                           ---------------------------------------------------------------- Unique   ->  Sort         Sort Key: \"*VALUES*\".column1, \"*VALUES*\".column2         ->  HashAggregate               Hash Key: \"*VALUES*\".column1, \"*VALUES*\".column2               Hash Key: \"*VALUES*\".column1               ->  Values Scan on \"*VALUES*\"                     Filter: (column1 = column2)(8 rows)There is no VALUES.column1 and VALUES.column2 in the query. The alias t.a and t.b do not appear anywhere in the explain output. I think explain output should look likeexplain (costs off)select distinct on (a, b) a, bfrom (values (1, 1), (2, 2)) as t (a, b) where a = bgroup by grouping sets((a, b), (a))order by a, b;                           QUERY PLAN                           ---------------------------------------------------------------- Unique   ->  Sort         Sort Key: t.a, t.b         ->  HashAggregate               Hash Key: t.a, t.b               Hash Key: t.a               ->  Values Scan on \"*VALUES*\" t                     Filter: (a = b)(8 rows)I didn't get time to figure out the reason behind this, nor the history. 
But I thought I would report it nonetheless.-- Best Wishes,Ashutosh Bapat", "msg_date": "Mon, 1 Jul 2024 15:47:03 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Alias of VALUES RTE in explain plan" }, { "msg_contents": "On Mon, Jul 1, 2024 at 3:17 PM Ashutosh Bapat <[email protected]>\nwrote:\n\n> Hi All,\n> While reviewing Richard's patch for grouping sets, I stumbled upon\n> following explain output\n>\n> explain (costs off)\n> select distinct on (a, b) a, b\n> from (values (1, 1), (2, 2)) as t (a, b) where a = b\n> group by grouping sets((a, b), (a))\n> order by a, b;\n> QUERY PLAN\n> ----------------------------------------------------------------\n> Unique\n> -> Sort\n> Sort Key: \"*VALUES*\".column1, \"*VALUES*\".column2\n> -> HashAggregate\n> Hash Key: \"*VALUES*\".column1, \"*VALUES*\".column2\n> Hash Key: \"*VALUES*\".column1\n> -> Values Scan on \"*VALUES*\"\n> Filter: (column1 = column2)\n> (8 rows)\n>\n> There is no VALUES.column1 and VALUES.column2 in the query. The alias t.a\n> and t.b do not appear anywhere in the explain output. I think explain\n> output should look like\n> explain (costs off)\n> select distinct on (a, b) a, b\n> from (values (1, 1), (2, 2)) as t (a, b) where a = b\n> group by grouping sets((a, b), (a))\n> order by a, b;\n> QUERY PLAN\n> ----------------------------------------------------------------\n> Unique\n> -> Sort\n> Sort Key: t.a, t.b\n> -> HashAggregate\n> Hash Key: t.a, t.b\n> Hash Key: t.a\n> -> Values Scan on \"*VALUES*\" t\n> Filter: (a = b)\n> (8 rows)\n>\n> I didn't get time to figure out the reason behind this, nor the history.\n> But I thought I would report it nonetheless.\n>\n\nI have looked into the issue and found that when subqueries are pulled up,\na modifiable copy of the subquery is created for modification in the\npull_up_simple_subquery() function. During this process,\nflatten_join_alias_vars() is called to flatten any join alias variables in\nthe subquery's target list. However at this point, we lose subquery's alias.\nIf you/hackers agree with my findings, I can provide a working patch soon.\n\n\n> --\n> Best Wishes,\n> Ashutosh Bapat\n>\n\nOn Mon, Jul 1, 2024 at 3:17 PM Ashutosh Bapat <[email protected]> wrote:Hi All,While reviewing Richard's patch for grouping sets, I stumbled upon following explain outputexplain (costs off)select distinct on (a, b) a, bfrom (values (1, 1), (2, 2)) as t (a, b) where a = bgroup by grouping sets((a, b), (a))order by a, b;                           QUERY PLAN                           ---------------------------------------------------------------- Unique   ->  Sort         Sort Key: \"*VALUES*\".column1, \"*VALUES*\".column2         ->  HashAggregate               Hash Key: \"*VALUES*\".column1, \"*VALUES*\".column2               Hash Key: \"*VALUES*\".column1               ->  Values Scan on \"*VALUES*\"                     Filter: (column1 = column2)(8 rows)There is no VALUES.column1 and VALUES.column2 in the query. The alias t.a and t.b do not appear anywhere in the explain output. 
I think explain output should look likeexplain (costs off)select distinct on (a, b) a, bfrom (values (1, 1), (2, 2)) as t (a, b) where a = bgroup by grouping sets((a, b), (a))order by a, b;                           QUERY PLAN                           ---------------------------------------------------------------- Unique   ->  Sort         Sort Key: t.a, t.b         ->  HashAggregate               Hash Key: t.a, t.b               Hash Key: t.a               ->  Values Scan on \"*VALUES*\" t                     Filter: (a = b)(8 rows)I didn't get time to figure out the reason behind this, nor the history. But I thought I would report it nonetheless.I have looked into the issue and found that when subqueries are pulled up, a modifiable copy of the subquery is created for modification in the pull_up_simple_subquery() function. During this process, flatten_join_alias_vars() is called to flatten any join alias variables in the subquery's target list. However at this point, we lose subquery's alias.If you/hackers agree with my findings, I can provide a working patch soon.-- Best Wishes,Ashutosh Bapat", "msg_date": "Thu, 15 Aug 2024 19:13:29 +0500", "msg_from": "Yasir <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Alias of VALUES RTE in explain plan" } ]
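A note for anyone trying to reproduce the report above: the grouping sets appear to be incidental, since the analysis in this thread attributes the lost alias to the pull-up of the simple subquery. A stripped-down query along the following lines (an illustrative sketch, not taken from the thread, and the exact plan text may vary by version) should be enough to observe the same naming.

explain (costs off)
select a, b
from (values (1, 1), (2, 2)) as t (a, b)
where a = b;

-- Expect the plan to refer to "*VALUES*" and column1/column2 in the
-- Filter line rather than to the alias t and its columns a and b.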
[ { "msg_contents": "Attached is a patch adding support for the gamma and log-gamma\nfunctions. See, for example:\n\nhttps://en.wikipedia.org/wiki/Gamma_function\n\nI think these are very useful general-purpose mathematical functions.\nThey're part of C99, and they're commonly included in other\nmathematical libraries, such as the python math module, so I think\nit's useful to make them available from SQL.\n\nThe error-handling for these functions seems to be a little trickier\nthan most, so that might need further discussion.\n\nRegards,\nDean", "msg_date": "Mon, 1 Jul 2024 11:33:35 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "gamma() and lgamma() functions" }, { "msg_contents": "On Mon, Jul 1, 2024 at 5:33 PM Dean Rasheed <[email protected]>\nwrote:\n\n> Attached is a patch adding support for the gamma and log-gamma\n> functions. See, for example:\n>\n> https://en.wikipedia.org/wiki/Gamma_function\n>\n> I think these are very useful general-purpose mathematical functions.\n> They're part of C99, and they're commonly included in other\n> mathematical libraries, such as the python math module, so I think\n> it's useful to make them available from SQL.\n>\n> The error-handling for these functions seems to be a little trickier\n> than most, so that might need further discussion.\n>\n> Regards,\n> Dean\n>\n\nHi! The patch file seems broken.\ngit apply gamma-and-lgamma.patch\nerror: git apply: bad git-diff — exptec /dev/null in line 2\nBest regards, Stepan Neretin.\n\nOn Mon, Jul 1, 2024 at 5:33 PM Dean Rasheed <[email protected]> wrote:Attached is a patch adding support for the gamma and log-gamma\nfunctions. See, for example:\n\nhttps://en.wikipedia.org/wiki/Gamma_function\n\nI think these are very useful general-purpose mathematical functions.\nThey're part of C99, and they're commonly included in other\nmathematical libraries, such as the python math module, so I think\nit's useful to make them available from SQL.\n\nThe error-handling for these functions seems to be a little trickier\nthan most, so that might need further discussion.\n\nRegards,\nDeanHi! The patch file seems broken.git apply gamma-and-lgamma.patch error: git apply: bad git-diff  — exptec /dev/null in line 2Best regards, Stepan Neretin.", "msg_date": "Mon, 1 Jul 2024 21:20:08 +0700", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gamma() and lgamma() functions" }, { "msg_contents": "> On 1 Jul 2024, at 16:20, Stepan Neretin <[email protected]> wrote:\n\n> The patch file seems broken.\n> git apply gamma-and-lgamma.patch error: git apply: bad git-diff — exptec /dev/null in line 2\n\nIt's a plain patch file, if you apply it with patch and not git it will work fine:\n\n$ patch -p1 < gamma-and-lgamma.patch\npatching file 'doc/src/sgml/func.sgml'\npatching file 'src/backend/utils/adt/float.c'\npatching file 'src/include/catalog/pg_proc.dat'\npatching file 'src/test/regress/expected/float8.out'\npatching file 'src/test/regress/sql/float8.sql'\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 1 Jul 2024 16:22:32 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gamma() and lgamma() functions" }, { "msg_contents": "On Mon, Jul 1, 2024 at 5:33 PM Dean Rasheed <[email protected]>\nwrote:\n\n> Attached is a patch adding support for the gamma and log-gamma\n> functions. 
See, for example:\n>\n> https://en.wikipedia.org/wiki/Gamma_function\n>\n> I think these are very useful general-purpose mathematical functions.\n> They're part of C99, and they're commonly included in other\n> mathematical libraries, such as the python math module, so I think\n> it's useful to make them available from SQL.\n>\n> The error-handling for these functions seems to be a little trickier\n> than most, so that might need further discussion.\n>\n> Regards,\n> Dean\n>\n\nI tried to review the patch without applying it. It looks good to me, but I\nhave one notice:\nERROR: value out of range: overflow. I think we need to add information\nabout the available ranges in the error message\n\nOn Mon, Jul 1, 2024 at 5:33 PM Dean Rasheed <[email protected]> wrote:Attached is a patch adding support for the gamma and log-gamma\nfunctions. See, for example:\n\nhttps://en.wikipedia.org/wiki/Gamma_function\n\nI think these are very useful general-purpose mathematical functions.\nThey're part of C99, and they're commonly included in other\nmathematical libraries, such as the python math module, so I think\nit's useful to make them available from SQL.\n\nThe error-handling for these functions seems to be a little trickier\nthan most, so that might need further discussion.\n\nRegards,\nDeanI tried to review the patch without applying it. It looks good to me, but I have one notice:ERROR:  value out of range: overflow. I think we need to add information about the available ranges in the error message", "msg_date": "Mon, 1 Jul 2024 21:31:59 +0700", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gamma() and lgamma() functions" }, { "msg_contents": "On 2024-Jul-01, Stepan Neretin wrote:\n\n> I have one notice:\n> ERROR: value out of range: overflow. I think we need to add information\n> about the available ranges in the error message\n\nI think this is a project of its own. The error comes from calling\nfloat_overflow_error(), which is a generic routine used in several\nfunctions which have different overflow conditions. It's not clear to\nme that throwing errors listing the specific range for each case is\nworth the effort.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Escucha y olvidarás; ve y recordarás; haz y entenderás\" (Confucio)\n\n\n", "msg_date": "Mon, 1 Jul 2024 17:04:16 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gamma() and lgamma() functions" }, { "msg_contents": "On Mon, 1 Jul 2024 at 16:04, Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Jul-01, Stepan Neretin wrote:\n>\n> > I have one notice:\n> > ERROR: value out of range: overflow. I think we need to add information\n> > about the available ranges in the error message\n>\n> I think this is a project of its own. The error comes from calling\n> float_overflow_error(), which is a generic routine used in several\n> functions which have different overflow conditions. It's not clear to\n> me that throwing errors listing the specific range for each case is\n> worth the effort.\n>\n\nRight. It's also worth noting that gamma() has several distinct ranges\nof validity for which it doesn't overflow, so it'd be hard to codify\nthat succinctly in an error message.\n\nSomething that bothers me about float.c is that there is precious\nlittle consistency as to whether functions raise overflow errors or\nreturn infinity. 
For example, exp() gives an overflow error for\nsufficiently large (finite) inputs, whereas sinh() and cosh() (very\nclosely related) return infinity. I think raising an error is the more\ntechnically correct thing to do, but returning infinity is sometimes\nperhaps more practically useful.\n\nHowever, given how much more quickly the result from gamma() explodes,\nI think it's more useful for it to raise an error so that you know you\nhave a problem, and that you probably want to use lgamma() instead.\n\n(As an aside, I've seen people (and ChatGPT) write algorithms to solve\nthe Birthday Problem using the gamma() function. That doesn't work\nbecause gamma(366) overflows, so you have to use lgamma().)\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 2 Jul 2024 08:57:40 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: gamma() and lgamma() functions" }, { "msg_contents": "On 01.07.24 12:33, Dean Rasheed wrote:\n> Attached is a patch adding support for the gamma and log-gamma\n> functions. See, for example:\n> \n> https://en.wikipedia.org/wiki/Gamma_function\n> \n> I think these are very useful general-purpose mathematical functions.\n> They're part of C99, and they're commonly included in other\n> mathematical libraries, such as the python math module, so I think\n> it's useful to make them available from SQL.\n\nWhat are examples of where this would be useful in a database context?\n\n> The error-handling for these functions seems to be a little trickier\n> than most, so that might need further discussion.\n\nYeah, this is quite something.\n\nI'm not sure why you are doing the testing for special values (NaN etc.) \nyourself when the C library function already does it. For example, if I \nremove the isnan(arg1) check in your dlgamma(), then it still behaves \nthe same in all tests. However, the same does not happen in your \ndgamma(). The overflow checks after the C library call are written \ndifferently for the two functions. dgamma() does not check errno for \nERANGE for example. It might also be good if dgamma() checked errno for \nEDOM, because other the result of gamma(-1) being \"overflow\" seems a bit \nwrong.\n\nI'm also not clear why you are turning a genuine result overflow into \ninfinity in lgamma().\n\nI think this could use some kind of chart or something about all the \npossible behaviors and how the C library reports them and what we intend \nto do with them.\n\nBtw., I'm reading that lgamma() in the C library is not necessarily \nthread-safe. Is that a possible problem?\n\n\n\n", "msg_date": "Fri, 23 Aug 2024 14:40:42 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gamma() and lgamma() functions" }, { "msg_contents": "On Fri, 23 Aug 2024 at 13:40, Peter Eisentraut <[email protected]> wrote:\n>\n> What are examples of where this would be useful in a database context?\n\ngamma() and lgamma() are the kinds of functions that are generally\nuseful for a variety of tasks like statistical analysis and\ncombinatorial computations, and having them in the database allows\nthose sort of computations to be performed without the need to export\nthe data to an external tool. 
We discussed adding them in a thread\nlast year [1], and there has been at least a little prior interest\n[2].\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGKJAcB8Q5qziKTTSnkA4Mnv_6f%2B7-_XUgbh9jFjSdEFQg%40mail.gmail.com\n[2] https://stackoverflow.com/questions/58884066/how-can-i-run-the-equivalent-of-excels-gammaln-function-in-postgres\n\nOf course, there's a somewhat fuzzy line between what is generally\nuseful enough, and what is too specialised for core Postgres, but I\nwould argue that these qualify, since they are part of C99, and\ncommonly appear in other general-purpose math libraries like the\nPython math module.\n\n> > The error-handling for these functions seems to be a little trickier\n> > than most, so that might need further discussion.\n>\n> Yeah, this is quite something.\n>\n> I'm not sure why you are doing the testing for special values (NaN etc.)\n> yourself when the C library function already does it. For example, if I\n> remove the isnan(arg1) check in your dlgamma(), then it still behaves\n> the same in all tests.\n\nIt's useful to do that so that we don't need to assume that every\nplatform conforms to the POSIX standard, and because it simplifies the\nlater overflow checks. This is consistent with the approach taken in\nother functions, such as dexp(), dsin(), dcos(), etc.\n\n> The overflow checks after the C library call are written\n> differently for the two functions. dgamma() does not check errno for\n> ERANGE for example. It might also be good if dgamma() checked errno for\n> EDOM, because other the result of gamma(-1) being \"overflow\" seems a bit\n> wrong.\n\nThey're intentionally different because the functions themselves are\ndifferent. In this case:\n\nselect gamma(-1);\nERROR: value out of range: overflow\n\nit is correct to throw an error, because gamma(-1) is undefined (it\ngoes to -Inf as x goes to -1 from above, and +Inf as x goes to -1 from\nbelow, so there is no well-defined limit).\n\nI've updated the patch to give a more specific error message for\nnegative integer inputs, as opposed to other overflow cases.\n\nRelying on errno being ERANGE or EDOM doesn't seem possible though,\nbecause the docs indicate that, while its behaviour is one thing\ntoday, per POSIX, that will change in the future.\n\nBy contrast, lgamma() does not raise an error for such inputs:\n\nselect lgamma(-1);\n lgamma\n----------\n Infinity\n\nThis is correct because lgamma() is the log of the absolute value of\nthe gamma function, so the limit is +Inf as x goes to -1 from both\nsides.\n\n> I'm also not clear why you are turning a genuine result overflow into\n> infinity in lgamma().\n\nOK, I've changed that to only return infinity if the input is\ninfinite, zero, or a negative integer. Otherwise, it now throws an\noverflow error.\n\n> Btw., I'm reading that lgamma() in the C library is not necessarily\n> thread-safe. Is that a possible problem?\n\nIt's not quite clear what to do about that. Some platforms may define\nthe lgamma_r() function, but not all. 
Do we have a standard pattern\nfor dealing with functions for which there is no thread-safe\nalternative?\n\nRegards,\nDean", "msg_date": "Wed, 4 Sep 2024 18:55:39 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: gamma() and lgamma() functions" }, { "msg_contents": "Dean Rasheed <[email protected]> writes:\n> On Fri, 23 Aug 2024 at 13:40, Peter Eisentraut <[email protected]> wrote:\n>> What are examples of where this would be useful in a database context?\n\n> Of course, there's a somewhat fuzzy line between what is generally\n> useful enough, and what is too specialised for core Postgres, but I\n> would argue that these qualify, since they are part of C99, and\n> commonly appear in other general-purpose math libraries like the\n> Python math module.\n\nYeah, I think any math function that's part of C99 or POSIX is\narguably of general interest.\n\n>> I'm not sure why you are doing the testing for special values (NaN etc.)\n>> yourself when the C library function already does it. For example, if I\n>> remove the isnan(arg1) check in your dlgamma(), then it still behaves\n>> the same in all tests.\n\n> It's useful to do that so that we don't need to assume that every\n> platform conforms to the POSIX standard, and because it simplifies the\n> later overflow checks. This is consistent with the approach taken in\n> other functions, such as dexp(), dsin(), dcos(), etc.\n\ndexp() and those other functions date from the late stone age, before\nit was safe to assume that libm's behavior matched the POSIX specs.\nToday I think we can assume that, at least till proven differently.\nThere's not necessarily anything wrong with what you've coded, but\nI don't buy this argument for it.\n\n>> Btw., I'm reading that lgamma() in the C library is not necessarily\n>> thread-safe. Is that a possible problem?\n\n> It's not quite clear what to do about that.\n\nPer the Linux man page, the reason lgamma() isn't thread-safe is\n\n The lgamma(), lgammaf(), and lgammal() functions return the natural\n logarithm of the absolute value of the Gamma function. The sign of the\n Gamma function is returned in the external integer signgam declared in\n <math.h>. It is 1 when the Gamma function is positive or zero, -1 when\n it is negative.\n\nAFAICS this patch doesn't inspect signgam, so whether it gets\noverwritten by a concurrent thread wouldn't matter. However,\nit'd be a good idea to add a comment noting the hazard.\n\n(Presumably, the reason POSIX says \"need not be thread-safe\"\nis that an implementation could make it thread-safe by\nmaking signgam thread-local, but the standard doesn't wish\nto mandate that.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 04 Sep 2024 14:21:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gamma() and lgamma() functions" }, { "msg_contents": "I wrote:\n> AFAICS this patch doesn't inspect signgam, so whether it gets\n> overwritten by a concurrent thread wouldn't matter. However,\n> it'd be a good idea to add a comment noting the hazard.\n\nFurther to that ... I looked at POSIX issue 8 (I had been reading 7)\nand found this illuminating discussion:\n\n Earlier versions of this standard did not require lgamma(),\n lgammaf(), and lgammal() to be thread-safe because signgam was a\n global variable. 
They are now required to be thread-safe to align\n with the ISO C standard (which, since the introduction of threads\n in 2011, requires that they avoid data races), with the exception\n that they need not avoid data races when storing a value in the\n signgam variable. Since signgam is not specified by the ISO C\n standard, this exception is not a conflict with that standard.\n\nSo the other reason to avoid using signgam is that it might\nnot exist at all in some libraries.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 04 Sep 2024 14:34:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gamma() and lgamma() functions" }, { "msg_contents": "On Wed, 4 Sept 2024 at 19:21, Tom Lane <[email protected]> wrote:\n>\n> >> I'm not sure why you are doing the testing for special values (NaN etc.)\n> >> yourself when the C library function already does it. For example, if I\n> >> remove the isnan(arg1) check in your dlgamma(), then it still behaves\n> >> the same in all tests.\n>\n> > It's useful to do that so that we don't need to assume that every\n> > platform conforms to the POSIX standard, and because it simplifies the\n> > later overflow checks. This is consistent with the approach taken in\n> > other functions, such as dexp(), dsin(), dcos(), etc.\n>\n> dexp() and those other functions date from the late stone age, before\n> it was safe to assume that libm's behavior matched the POSIX specs.\n> Today I think we can assume that, at least till proven differently.\n> There's not necessarily anything wrong with what you've coded, but\n> I don't buy this argument for it.\n>\n\nOK, thinking about this some more, I think we should reserve overflow\nerrors for genuine overflows, which I take to mean cases where the\nexact mathematical result should be finite, but is too large to be\nrepresented in a double.\n\nIn particular, this means that zero and negative integer inputs are\nnot genuine overflows, but should return NaN or +/-Inf, as described\nin the POSIX spec.\n\nDoing that, and assuming that tgamma() and lgamma() behave according\nto spec, leads to the attached, somewhat simpler patch.\n\nRegards,\nDean", "msg_date": "Fri, 6 Sep 2024 10:42:57 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: gamma() and lgamma() functions" }, { "msg_contents": "On Fri, 6 Sept 2024 at 10:42, Dean Rasheed <[email protected]> wrote:\n>\n> ... assuming that tgamma() and lgamma() behave according to spec ...\n\nNope, that was too much to hope for. Let's see if this fares any better.\n\nRegards,\nDean", "msg_date": "Fri, 6 Sep 2024 12:58:10 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: gamma() and lgamma() functions" } ]
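Since much of the thread above turns on how the C library reports gamma()/lgamma() special cases, here is a small self-contained probe program (an illustrative sketch, not part of the proposed patch; the file name and output format are arbitrary). Whether a pole such as x = -1 shows up as EDOM or ERANGE, and whether errno is set at all, depends on the platform's math_errhandling, which is exactly the portability question being discussed.

/*
 * Illustrative probe of tgamma()/lgamma() edge cases; errno behaviour is
 * platform-dependent.  Build with something like: cc probe_gamma.c -lm
 */
#include <errno.h>
#include <math.h>
#include <stdio.h>

static void
probe(const char *label, double x)
{
    double g, lg;
    int gerr, lerr;

    errno = 0;
    g = tgamma(x);
    gerr = errno;

    errno = 0;
    lg = lgamma(x);
    lerr = errno;

    printf("%-8s tgamma = %-12g (errno %d)  lgamma = %-12g (errno %d)\n",
           label, g, gerr, lg, lerr);
}

int
main(void)
{
    probe("5", 5.0);      /* gamma(5) = 4! = 24, lgamma(5) = log(24) */
    probe("0.5", 0.5);    /* gamma(0.5) = sqrt(pi) */
    probe("-1", -1.0);    /* pole: tgamma is undefined, lgamma goes to +inf */
    probe("200", 200.0);  /* tgamma overflows a double, lgamma stays finite */
    return 0;
}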
[ { "msg_contents": "Hi,\n\nI would like to propose to add a new field to psql's \\dAo+ meta-command\nto show whether the underlying function of an operator is leak-proof.\n\nThis idea is inspired from [1] that claims some indexes uses non-LEAKPROOF\nfunctions under the associated operators, as a result, it can not be selected\nfor queries with security_barrier views or row-level security policies.\nThe original proposal was to add a query over system catalogs for looking up\nnon-leakproof operators to the documentation, but I thought it is useful\nto improve \\dAo results rather than putting such query to the doc.\n\nThe attached patch adds the field to \\dAo+ and also a description that\nexplains the relation between indexes and security quals with referencing\n\\dAo+ meta-command.\n\n[1] https://www.postgresql.org/message-id/raw/5af3bf0c-5e0c-4128-81dc-084c5258b1af%40code406.com\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>", "msg_date": "Mon, 1 Jul 2024 22:08:17 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": true, "msg_subject": "psql: Add leakproof field to \\dAo+ meta-command results" }, { "msg_contents": "On 2024-07-01 15:08 +0200, Yugo NAGATA wrote:\n> I would like to propose to add a new field to psql's \\dAo+ meta-command\n> to show whether the underlying function of an operator is leak-proof.\n\n+1 for making that info easily accessible.\n\n> This idea is inspired from [1] that claims some indexes uses non-LEAKPROOF\n> functions under the associated operators, as a result, it can not be selected\n> for queries with security_barrier views or row-level security policies.\n> The original proposal was to add a query over system catalogs for looking up\n> non-leakproof operators to the documentation, but I thought it is useful\n> to improve \\dAo results rather than putting such query to the doc.\n> \n> The attached patch adds the field to \\dAo+ and also a description that\n> explains the relation between indexes and security quals with referencing\n> \\dAo+ meta-command.\n> \n> [1] https://www.postgresql.org/message-id/raw/5af3bf0c-5e0c-4128-81dc-084c5258b1af%40code406.com\n\n\\dAo+ output looks good.\n\nBut this patch fails regression tests in src/test/regress/sql/psql.sql\n(\\dAo+ btree float_ops) because of the new leak-proof column. I think\nthis could even be changed to \"\\dAo+ btree array_ops|float_ops\" to also\ncover operators that are not leak-proof.\n\n+<para>\n+ For example, an index scan can not be selected for queries with\n\nI check the docs and \"cannot\" is more commonly used than \"can not\".\n\n+ <literal>security_barrier</literal> views or row-level security policies if an\n+ operator used in the <literal>WHERE</literal> clause is associated with the\n+ operator family of the index, but its underlying function is not marked\n+ <literal>LEAKPROOF</literal>. The <xref linkend=\"app-psql\"/> program's\n+ <command>\\dAo+</command> meta-command is useful for listing the operators\n+ with associated operator families and whether it is leak-proof.\n+</para>\n\nI think the last sentence can be improved. How about: \"Use psql's \\dAo+\ncommand to list operator families and tell which of their operators are\nmarked as leak-proof.\"? 
Should something similar be added to [1] which\nalso talks about leak-proof operators?\n\nThe rest is just formatting nitpicks:\n\n+ \", ofs.opfname AS \\\"%s\\\"\\n,\"\n\nThe trailing comma should come before the newline.\n\n+ \" CASE\\n\"\n+ \" WHEN p.proleakproof THEN '%s'\\n\"\n+ \" ELSE '%s'\\n\"\n+ \" END AS \\\"%s\\\"\\n\",\n\nWHEN/ELSE/END should be intended with one additional space to be\nconsistent with the other CASE expressions in this query.\n\n[1] https://www.postgresql.org/docs/devel/planner-stats-security.html\n\n-- \nErik\n\n\n", "msg_date": "Tue, 30 Jul 2024 01:36:55 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql: Add leakproof field to \\dAo+ meta-command results" }, { "msg_contents": "Hi,\n\nOn Tue, 30 Jul 2024 01:36:55 +0200\nErik Wienhold <[email protected]> wrote:\n\n> On 2024-07-01 15:08 +0200, Yugo NAGATA wrote:\n> > I would like to propose to add a new field to psql's \\dAo+ meta-command\n> > to show whether the underlying function of an operator is leak-proof.\n> \n> +1 for making that info easily accessible.\n> \n> > This idea is inspired from [1] that claims some indexes uses non-LEAKPROOF\n> > functions under the associated operators, as a result, it can not be selected\n> > for queries with security_barrier views or row-level security policies.\n> > The original proposal was to add a query over system catalogs for looking up\n> > non-leakproof operators to the documentation, but I thought it is useful\n> > to improve \\dAo results rather than putting such query to the doc.\n> > \n> > The attached patch adds the field to \\dAo+ and also a description that\n> > explains the relation between indexes and security quals with referencing\n> > \\dAo+ meta-command.\n> > \n> > [1] https://www.postgresql.org/message-id/raw/5af3bf0c-5e0c-4128-81dc-084c5258b1af%40code406.com\n> \n> \\dAo+ output looks good.\n\nThank you for looking into this.\nI attached a patch updated with your suggestions.\n\n> \n> But this patch fails regression tests in src/test/regress/sql/psql.sql\n> (\\dAo+ btree float_ops) because of the new leak-proof column. I think\n> this could even be changed to \"\\dAo+ btree array_ops|float_ops\" to also\n> cover operators that are not leak-proof.\n\nThank you for pointing out this. I fixed it with you suggestion to cover\nnon leak-proof operators, too.\n\n> +<para>\n> + For example, an index scan can not be selected for queries with\n> \n> I check the docs and \"cannot\" is more commonly used than \"can not\".\n\nFixed.\n\n> \n> + <literal>security_barrier</literal> views or row-level security policies if an\n> + operator used in the <literal>WHERE</literal> clause is associated with the\n> + operator family of the index, but its underlying function is not marked\n> + <literal>LEAKPROOF</literal>. The <xref linkend=\"app-psql\"/> program's\n> + <command>\\dAo+</command> meta-command is useful for listing the operators\n> + with associated operator families and whether it is leak-proof.\n> +</para>\n> \n> I think the last sentence can be improved. How about: \"Use psql's \\dAo+\n> command to list operator families and tell which of their operators are\n> marked as leak-proof.\"? 
Should something similar be added to [1] which\n> also talks about leak-proof operators?\n\nI agree, so I fixed the sentence as your suggestion and also add the\nsame description to the planner-stats-security doc.\n\n> The rest is just formatting nitpicks:\n> \n> + \", ofs.opfname AS \\\"%s\\\"\\n,\"\n> \n> The trailing comma should come before the newline.\n> \n> + \" CASE\\n\"\n> + \" WHEN p.proleakproof THEN '%s'\\n\"\n> + \" ELSE '%s'\\n\"\n> + \" END AS \\\"%s\\\"\\n\",\n> \n> WHEN/ELSE/END should be intended with one additional space to be\n> consistent with the other CASE expressions in this query.\n\nFixed both.\n\nRegards,\nYugo Nagata\n\n> \n> [1] https://www.postgresql.org/docs/devel/planner-stats-security.html\n> \n> -- \n> Erik\n\n\n-- \nYugo NAGATA <[email protected]>", "msg_date": "Tue, 30 Jul 2024 15:30:57 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": true, "msg_subject": "Re: psql: Add leakproof field to \\dAo+ meta-command results" }, { "msg_contents": "On 2024-07-30 08:30 +0200, Yugo NAGATA wrote:\n> On Tue, 30 Jul 2024 01:36:55 +0200\n> Erik Wienhold <[email protected]> wrote:\n> \n> > On 2024-07-01 15:08 +0200, Yugo NAGATA wrote:\n> > > I would like to propose to add a new field to psql's \\dAo+ meta-command\n> > > to show whether the underlying function of an operator is leak-proof.\n> > \n> > +1 for making that info easily accessible.\n> > \n> > > This idea is inspired from [1] that claims some indexes uses non-LEAKPROOF\n> > > functions under the associated operators, as a result, it can not be selected\n> > > for queries with security_barrier views or row-level security policies.\n> > > The original proposal was to add a query over system catalogs for looking up\n> > > non-leakproof operators to the documentation, but I thought it is useful\n> > > to improve \\dAo results rather than putting such query to the doc.\n> > > \n> > > The attached patch adds the field to \\dAo+ and also a description that\n> > > explains the relation between indexes and security quals with referencing\n> > > \\dAo+ meta-command.\n> > > \n> > > [1] https://www.postgresql.org/message-id/raw/5af3bf0c-5e0c-4128-81dc-084c5258b1af%40code406.com\n> > \n> > \\dAo+ output looks good.\n> \n> Thank you for looking into this.\n> I attached a patch updated with your suggestions.\n\nLGTM, thanks.\n\n> > \n> > But this patch fails regression tests in src/test/regress/sql/psql.sql\n> > (\\dAo+ btree float_ops) because of the new leak-proof column. I think\n> > this could even be changed to \"\\dAo+ btree array_ops|float_ops\" to also\n> > cover operators that are not leak-proof.\n> \n> Thank you for pointing out this. I fixed it with you suggestion to cover\n> non leak-proof operators, too.\n> \n> > +<para>\n> > + For example, an index scan can not be selected for queries with\n> > \n> > I check the docs and \"cannot\" is more commonly used than \"can not\".\n> \n> Fixed.\n> \n> > \n> > + <literal>security_barrier</literal> views or row-level security policies if an\n> > + operator used in the <literal>WHERE</literal> clause is associated with the\n> > + operator family of the index, but its underlying function is not marked\n> > + <literal>LEAKPROOF</literal>. The <xref linkend=\"app-psql\"/> program's\n> > + <command>\\dAo+</command> meta-command is useful for listing the operators\n> > + with associated operator families and whether it is leak-proof.\n> > +</para>\n> > \n> > I think the last sentence can be improved. 
How about: \"Use psql's \\dAo+\n> > command to list operator families and tell which of their operators are\n> > marked as leak-proof.\"? Should something similar be added to [1] which\n> > also talks about leak-proof operators?\n> \n> I agree, so I fixed the sentence as your suggestion and also add the\n> same description to the planner-stats-security doc.\n> \n> > The rest is just formatting nitpicks:\n> > \n> > + \", ofs.opfname AS \\\"%s\\\"\\n,\"\n> > \n> > The trailing comma should come before the newline.\n> > \n> > + \" CASE\\n\"\n> > + \" WHEN p.proleakproof THEN '%s'\\n\"\n> > + \" ELSE '%s'\\n\"\n> > + \" END AS \\\"%s\\\"\\n\",\n> > \n> > WHEN/ELSE/END should be intended with one additional space to be\n> > consistent with the other CASE expressions in this query.\n> \n> Fixed both.\n> \n> Regards,\n> Yugo Nagata\n> \n> > \n> > [1] https://www.postgresql.org/docs/devel/planner-stats-security.html\n> > \n> > -- \n> > Erik\n> \n> \n> -- \n> Yugo NAGATA <[email protected]>\n\n-- \nErik\n\n\n", "msg_date": "Sun, 4 Aug 2024 23:23:03 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql: Add leakproof field to \\dAo+ meta-command results" } ]
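For reference alongside the thread above, the information that the proposed \dAo+ column exposes can also be pulled straight from the system catalogs. The query below is an illustrative sketch (not code taken from the patch); it lists the operators of one btree operator family together with the leak-proofness of each operator's underlying function:

```sql
-- Which operators of btree's float_ops family are backed by LEAKPROOF functions?
SELECT am.amname            AS access_method,
       opf.opfname          AS operator_family,
       o.oprname            AS operator,
       o.oprleft::regtype   AS left_type,
       o.oprright::regtype  AS right_type,
       p.proleakproof       AS leakproof
FROM pg_amop ao
JOIN pg_opfamily opf ON opf.oid = ao.amopfamily
JOIN pg_am am        ON am.oid  = opf.opfmethod
JOIN pg_operator o   ON o.oid   = ao.amopopr
JOIN pg_proc p       ON p.oid   = o.oprcode
WHERE am.amname = 'btree'
  AND opf.opfname = 'float_ops'
ORDER BY o.oprleft::regtype::text, o.oprright::regtype::text, o.oprname;
```

Rows with leakproof = false mark the operators that can keep an index from being chosen under a security_barrier view or row-level security policy, which is exactly what the new \dAo+ column is meant to make visible.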
[ { "msg_contents": "Hackers,\n\nThere’s an odd difference in the behavior of timestamp_tz() outputs. Running with America/New_York as my TZ, it looks fine for a full timestamptz, identical to how casting the types directly works:\n\ndavid=# set time zone 'America/New_York';\nSET\n\ndavid=# select '2024-08-15 12:34:56-04'::timestamptz;\n timestamptz \n------------------------\n 2024-08-15 12:34:56-04\n(1 row)\n\ndavid=# select jsonb_path_query_tz('\"2024-08-15 12:34:56-04\"', '$.timestamp_tz()');\n jsonb_path_query_tz \n-----------------------------\n \"2024-08-15T12:34:56-04:00\"\n\nBoth show the time in America/New_York, which is great. But when casting from a date, the behavior differs. Casting directly:\n\ndavid=# select '2024-08-15'::date::timestamptz;\n timestamptz \n------------------------\n 2024-08-15 00:00:00-04\n\nIt stringifies to the current zone setting again, as expected. But look at the output from a path query:\n\ndavid=# select jsonb_path_query_tz('\"2023-08-15\"', '$.timestamp_tz()');\n jsonb_path_query_tz \n-----------------------------\n \"2023-08-15T04:00:00+00:00\"\n\nIt’s using UTC for the display output! Shouldn’t it be using America/New_York?\n\nNote that I’m comparing a cast from date to timestamptz because that’s how the jsonpath parsing works[1]: it ultimately uses date2timestamptz_opt_overflow()[2] to make the conversion, which appears to set the offset from the time zone GUC, so I’m not sure where it’s converted to UTC before stringifying.\n\nMaybe an argument is missing from the stringification path?\n\nFWIW, explicitly calling the string() jsonpath method does produce a result in the current TZ:\n\ndavid=# select jsonb_path_query_tz('\"2023-08-15\"', '$.timestamp_tz().string()');\n jsonb_path_query_tz \n--------------------------\n \"2023-08-15 00:00:00-04\"\n\nThat bit uses timestamptz_out to format the output, but JSONB has its own stringification[4] (called here[5]), but I can’t tell what might be different between a timestamptz cast from a date and one not cast from a date.\n\nNote the same divergency in behavior occurs when the source value is a timestamp, too. Compare:\n\ndavid=# select '2024-08-15 12:34:56'::timestamp::timestamptz;\n timestamptz \n------------------------\n 2024-08-15 12:34:56-04\n(1 row)\n\ndavid=# select jsonb_path_query_tz('\"2023-08-15 12:34:56\"', '$.timestamp_tz()');\n jsonb_path_query_tz \n-----------------------------\n \"2023-08-15T16:34:56+00:00\"\n(1 row)\n\nAnyway, should the output of timestamptz JSONB values be made more consistent? I’m happy to make a patch to do so, but could use a hand figuring out where the behavior varies.\n\nBest,\n\nDavid\n\n[1]: https://github.com/postgres/postgres/blob/3497c87/src/backend/utils/adt/jsonpath_exec.c#L2708-L2718\n[2]: https://github.com/postgres/postgres/blob/3497c87/src/backend/utils/adt/date.c#L613-L698\n[3]: https://github.com/postgres/postgres/blob/3fb59e789dd9f21610101d1ec106ad58095e24f3/src/backend/utils/adt/jsonpath_exec.c#L1650-L1653\n[4]: https://github.com/postgres/postgres/blob/3fb59e789dd9f21610101d1ec106ad58095e24f3/src/backend/utils/adt/json.c#L369-L407\n[5]: https://github.com/postgres/postgres/blob/3fb59e7/src/backend/utils/adt/jsonb.c#L743-L748\n\n\n\n", "msg_date": "Mon, 1 Jul 2024 11:02:29 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "jsonpath: Inconsistency of timestamp_tz() Output" }, { "msg_contents": "On Jul 1, 2024, at 11:02, David E. 
Wheeler <[email protected]> wrote:\n\n> Anyway, should the output of timestamptz JSONB values be made more consistent? I’m happy to make a patch to do so, but could use a hand figuring out where the behavior varies.\n\nI think if the formatting was more consistent, the test output would be:\n\n\n``` patch\n--- a/src/test/regress/expected/jsonb_jsonpath.out\n+++ b/src/test/regress/expected/jsonb_jsonpath.out\n@@ -2914,7 +2914,7 @@ HINT: Use *_tz() function for time zone support.\n select jsonb_path_query_tz('\"2023-08-15\"', '$.timestamp_tz()'); -- should work\n jsonb_path_query_tz \n -----------------------------\n- \"2023-08-15T07:00:00+00:00\"\n+ \"2023-08-15T00:00:00-07:00\"\n (1 row)\n \n select jsonb_path_query('\"12:34:56\"', '$.timestamp_tz()');\n@@ -3003,7 +3003,7 @@ HINT: Use *_tz() function for time zone support.\n select jsonb_path_query_tz('\"2023-08-15 12:34:56\"', '$.timestamp_tz()'); -- should work\n jsonb_path_query_tz \n -----------------------------\n- \"2023-08-15T12:34:56+00:00\"\n+ \"2023-08-15T12:34:56+10:00\"\n (1 row)\n \n select jsonb_path_query('\"10-03-2017 12:34\"', '$.datetime(\"dd-mm-yyyy HH24:MI\")');\n```\n\nThat second example is a bit different than I noticed up-thread, not just a formatting issue but the offset is never applied!. That test run under tz +10, and this is how it works with the non-JSONB data types:\n\ndavid=# set time zone '+10';\nSET\nTime: 0.689 ms\ndavid=# select '2023-08-15 12:34:56'::timestamptz;\n timestamptz \n------------------------\n 2023-08-15 12:34:56+10\n(1 row)\n\nTime: 0.491 ms\ndavid=# select '2023-08-15 12:34:56'::timestamp::timestamptz;\n timestamptz \n------------------------\n 2023-08-15 12:34:56+10\n(1 row)\n\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Tue, 2 Jul 2024 10:53:24 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath: Inconsistency of timestamp_tz() Output" }, { "msg_contents": "On Jul 2, 2024, at 10:53, David E. 
Wheeler <[email protected]> wrote:\n\n> ``` patch\n> --- a/src/test/regress/expected/jsonb_jsonpath.out\n> +++ b/src/test/regress/expected/jsonb_jsonpath.out\n> @@ -2914,7 +2914,7 @@ HINT: Use *_tz() function for time zone support.\n> select jsonb_path_query_tz('\"2023-08-15\"', '$.timestamp_tz()'); -- should work\n> jsonb_path_query_tz \n> -----------------------------\n> - \"2023-08-15T07:00:00+00:00\"\n> + \"2023-08-15T00:00:00-07:00\"\n> (1 row)\n> \n> select jsonb_path_query('\"12:34:56\"', '$.timestamp_tz()');\n> @@ -3003,7 +3003,7 @@ HINT: Use *_tz() function for time zone support.\n> select jsonb_path_query_tz('\"2023-08-15 12:34:56\"', '$.timestamp_tz()'); -- should work\n> jsonb_path_query_tz \n> -----------------------------\n> - \"2023-08-15T12:34:56+00:00\"\n> + \"2023-08-15T12:34:56+10:00\"\n> (1 row)\n> \n> select jsonb_path_query('\"10-03-2017 12:34\"', '$.datetime(\"dd-mm-yyyy HH24:MI\")');\n> ```\n\nFWIW I fixed this issue in my jsonpath port, which I released over the weekend.[1] You can see what I think should be the proper output for these two examples in these Playground links, where the response will use your browser’s time zone: [2], [3].\n\nBest,\n\nDavid\n\n\n[1]: https://justatheory.com/2024/07/go-sqljson-path/\n[2]: https://theory.github.io/sqljson/playground/?p=%2524.timestamp_tz%28%29&j=%25222023-08-15%2522&a=&o=49\n[3]: https://theory.github.io/sqljson/playground/?p=%2524.timestamp_tz%28%29&j=%25222023-08-15%252012%253A34%253A56%2522&a=&o=49\n\n\n\n", "msg_date": "Mon, 8 Jul 2024 13:34:45 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath: Inconsistency of timestamp_tz() Output" }, { "msg_contents": "On Mon, Jul 1, 2024 at 11:02 PM David E. Wheeler <[email protected]> wrote:\n>\n> Hackers,\n>\n> There’s an odd difference in the behavior of timestamp_tz() outputs. Running with America/New_York as my TZ, it looks fine for a full timestamptz, identical to how casting the types directly works:\n>\n> david=# set time zone 'America/New_York';\n> SET\n>\n> david=# select '2024-08-15 12:34:56-04'::timestamptz;\n> timestamptz\n> ------------------------\n> 2024-08-15 12:34:56-04\n> (1 row)\n>\n> david=# select jsonb_path_query_tz('\"2024-08-15 12:34:56-04\"', '$.timestamp_tz()');\n> jsonb_path_query_tz\n> -----------------------------\n> \"2024-08-15T12:34:56-04:00\"\n\n# select jsonb_path_query_tz('\"2024-08-15 12:34:56-05\"', '$.timestamp_tz()');\n\nDo you also expect this to show the time in America/New_York?\n\nThis is what I get:\n\n[local] postgres@postgres:5432-28176=# select\njsonb_path_query_tz('\"2024-08-15 12:34:56-05\"', '$.timestamp_tz()');\n┌─────────────────────────────┐\n│ jsonb_path_query_tz │\n├─────────────────────────────┤\n│ \"2024-08-15T12:34:56-05:00\" │\n└─────────────────────────────┘\n(1 row)\n\nThe logic in executeDateTimeMethod seems to convert the input to a UTC\ntimestamp base on the session TZ,\nthe output seems not cast based on the TZ.\n\n>\n> Both show the time in America/New_York, which is great. But when casting from a date, the behavior differs. Casting directly:\n>\n> david=# select '2024-08-15'::date::timestamptz;\n> timestamptz\n> ------------------------\n> 2024-08-15 00:00:00-04\n>\n> It stringifies to the current zone setting again, as expected. 
But look at the output from a path query:\n>\n> david=# select jsonb_path_query_tz('\"2023-08-15\"', '$.timestamp_tz()');\n> jsonb_path_query_tz\n> -----------------------------\n> \"2023-08-15T04:00:00+00:00\"\n>\n> It’s using UTC for the display output! Shouldn’t it be using America/New_York?\n>\n> Note that I’m comparing a cast from date to timestamptz because that’s how the jsonpath parsing works[1]: it ultimately uses date2timestamptz_opt_overflow()[2] to make the conversion, which appears to set the offset from the time zone GUC, so I’m not sure where it’s converted to UTC before stringifying.\n>\n> Maybe an argument is missing from the stringification path?\n>\n> FWIW, explicitly calling the string() jsonpath method does produce a result in the current TZ:\n>\n> david=# select jsonb_path_query_tz('\"2023-08-15\"', '$.timestamp_tz().string()');\n> jsonb_path_query_tz\n> --------------------------\n> \"2023-08-15 00:00:00-04\"\n>\n> That bit uses timestamptz_out to format the output, but JSONB has its own stringification[4] (called here[5]), but I can’t tell what might be different between a timestamptz cast from a date and one not cast from a date.\n>\n> Note the same divergency in behavior occurs when the source value is a timestamp, too. Compare:\n>\n> david=# select '2024-08-15 12:34:56'::timestamp::timestamptz;\n> timestamptz\n> ------------------------\n> 2024-08-15 12:34:56-04\n> (1 row)\n>\n> david=# select jsonb_path_query_tz('\"2023-08-15 12:34:56\"', '$.timestamp_tz()');\n> jsonb_path_query_tz\n> -----------------------------\n> \"2023-08-15T16:34:56+00:00\"\n> (1 row)\n>\n> Anyway, should the output of timestamptz JSONB values be made more consistent? I’m happy to make a patch to do so, but could use a hand figuring out where the behavior varies.\n>\n> Best,\n>\n> David\n>\n> [1]: https://github.com/postgres/postgres/blob/3497c87/src/backend/utils/adt/jsonpath_exec.c#L2708-L2718\n> [2]: https://github.com/postgres/postgres/blob/3497c87/src/backend/utils/adt/date.c#L613-L698\n> [3]: https://github.com/postgres/postgres/blob/3fb59e789dd9f21610101d1ec106ad58095e24f3/src/backend/utils/adt/jsonpath_exec.c#L1650-L1653\n> [4]: https://github.com/postgres/postgres/blob/3fb59e789dd9f21610101d1ec106ad58095e24f3/src/backend/utils/adt/json.c#L369-L407\n> [5]: https://github.com/postgres/postgres/blob/3fb59e7/src/backend/utils/adt/jsonb.c#L743-L748\n>\n>\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Tue, 9 Jul 2024 09:44:00 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath: Inconsistency of timestamp_tz() Output" }, { "msg_contents": "\n\n> On Jul 8, 2024, at 21:44, Junwang Zhao <[email protected]> wrote:\n> \n> # select jsonb_path_query_tz('\"2024-08-15 12:34:56-05\"', '$.timestamp_tz()');\n> \n> Do you also expect this to show the time in America/New_York?\n> \n> This is what I get:\n> \n> [local] postgres@postgres:5432-28176=# select\n> jsonb_path_query_tz('\"2024-08-15 12:34:56-05\"', '$.timestamp_tz()');\n> ┌─────────────────────────────┐\n> │ jsonb_path_query_tz │\n> ├─────────────────────────────┤\n> │ \"2024-08-15T12:34:56-05:00\" │\n> └─────────────────────────────┘\n> (1 row)\n> \n> The logic in executeDateTimeMethod seems to convert the input to a UTC\n> timestamp base on the session TZ,\n> the output seems not cast based on the TZ.\n\nRight, which now that I think about it seems odd, because timestamptz does not actually store an offset. 
As you say, it converts the time to UTC and stores only that, then displays the offset relative to the current time zone setting.\n\nSo in plain SQL it always displays relative to the current TZ offset:\n\ndavid=# set time zone 'America/New_York';\nSET\ndavid=# select '2023-08-15 12:34:56-05'::timestamptz;\n timestamptz \n------------------------\n 2023-08-15 13:34:56-04\n(1 row)\n\ndavid=# select '2023-08-15 12:34:56'::timestamptz;\n timestamptz \n------------------------\n 2023-08-15 12:34:56-04\n(1 row)\n\nIn jsopath expressions, however, it does not, as your example demonstrates:\n\ndavid=# select jsonb_path_query_tz('\"2024-08-15 12:34:56-05\"', '$.timestamp_tz()');\n jsonb_path_query_tz \n-----------------------------\n \"2024-08-15T12:34:56-05:00\"\n\nHow is it retaining the offset? Should it?\n\nThe display is properly adjusted when using string():\n\ndavid=# select jsonb_path_query_tz('\"2024-08-15 12:34:56-05\"', '$.timestamp_tz().string()');\n jsonb_path_query_tz \n--------------------------\n \"2024-08-15 13:34:56-04\"\n(1 row)\n\nSo perhaps I had things reversed before. Maybe it’s actually doing the right then when it converts a timestamp to a timestamptz, but not when it the input contains an offset, as in your example.\n\nD\n\n\n\n", "msg_date": "Tue, 9 Jul 2024 10:07:48 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath: Inconsistency of timestamp_tz() Output" }, { "msg_contents": "On Jul 9, 2024, at 10:07, David E. Wheeler <[email protected]> wrote:\n\n> So perhaps I had things reversed before. Maybe it’s actually doing the right then when it converts a timestamp to a timestamptz, but not when it the input contains an offset, as in your example.\n\nTo clarify, there’s an inconsistency in the output of timestamp_tz() depending on whether the input has an offset or not. With offset:\n\ndavid=# select jsonb_path_query_tz('\"2024-08-15 12:34:56-05\"', '$.timestamp_tz()');\n jsonb_path_query_tz \n-----------------------------\n \"2024-08-15T12:34:56-05:00\"\n\nAnd without:\n\ndavid=# select jsonb_path_query_tz('\"2024-08-15 12:34:56\"', '$.timestamp_tz()');\n jsonb_path_query_tz \n-----------------------------\n \"2024-08-15T16:34:56+00:00\"\n\nI suspect the latter is correct, given that the timestamptz type appears to be an int64, presumably always in UTC. I don’t understand where the first example stores the offset.\n\nBest,\n\nDavid\n\n\n\n\n\n", "msg_date": "Tue, 9 Jul 2024 10:22:03 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath: Inconsistency of timestamp_tz() Output" }, { "msg_contents": "On Tue, Jul 9, 2024 at 10:22 PM David E. Wheeler <[email protected]> wrote:\n>\n> On Jul 9, 2024, at 10:07, David E. Wheeler <[email protected]> wrote:\n>\n> > So perhaps I had things reversed before. Maybe it’s actually doing the right then when it converts a timestamp to a timestamptz, but not when it the input contains an offset, as in your example.\n>\n> To clarify, there’s an inconsistency in the output of timestamp_tz() depending on whether the input has an offset or not. 
With offset:\n>\n> david=# select jsonb_path_query_tz('\"2024-08-15 12:34:56-05\"', '$.timestamp_tz()');\n> jsonb_path_query_tz\n> -----------------------------\n> \"2024-08-15T12:34:56-05:00\"\n>\n> And without:\n>\n> david=# select jsonb_path_query_tz('\"2024-08-15 12:34:56\"', '$.timestamp_tz()');\n> jsonb_path_query_tz\n> -----------------------------\n> \"2024-08-15T16:34:56+00:00\"\n>\n> I suspect the latter is correct, given that the timestamptz type appears to be an int64, presumably always in UTC. I don’t understand where the first example stores the offset.\n\nIn JsonbValue.val.datatime, there is a tz field, I think that's where\nthe offset stored, it is 18000 in the first example\n\nstruct\n{\n Datum value;\n Oid typid;\n int32 typmod;\n int tz; /* Numeric time zone, in seconds, for\n * TimestampTz data type */\n} datetime;\n\n>\n> Best,\n>\n> David\n>\n>\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Tue, 9 Jul 2024 23:08:52 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath: Inconsistency of timestamp_tz() Output" }, { "msg_contents": "On Jul 9, 2024, at 11:08, Junwang Zhao <[email protected]> wrote:\n\n> In JsonbValue.val.datatime, there is a tz field, I think that's where\n> the offset stored, it is 18000 in the first example\n> \n> struct\n> {\n> Datum value;\n> Oid typid;\n> int32 typmod;\n> int tz; /* Numeric time zone, in seconds, for\n> * TimestampTz data type */\n> } datetime;\n\nOooh, okay, so it’s a jsonb variant of the type. Interesting. Ah, and it’s assigned here[1]:\n\n jb->val.datetime.tz = tz;\n\nIt seems like JSONB timestamptz values want to display the recorded time zone, so I suspect we need to set it when the converting from a non-tz to a local tz setting, something like this:\n\n``` patch\ndiff --git a/src/backend/utils/adt/jsonpath_exec.c b/src/backend/utils/adt/jsonpath_exec.c\nindex d79c929822..f63b3b9330 100644\n--- a/src/backend/utils/adt/jsonpath_exec.c\n+++ b/src/backend/utils/adt/jsonpath_exec.c\n@@ -2707,12 +2707,16 @@ executeDateTimeMethod(JsonPathExecContext *cxt, JsonPathItem *jsp,\n\t\t\tbreak;\n\t\tcase jpiTimestampTz:\n\t\t\t{\n+\t\t\t\tstruct pg_tm *tm;\n\t\t\t\t/* Convert result type to timestamp with time zone */\n\t\t\t\tswitch (typid)\n\t\t\t\t{\n\t\t\t\t\tcase DATEOID:\n\t\t\t\t\t\tcheckTimezoneIsUsedForCast(cxt->useTz,\n\t\t\t\t\t\t\t\t\t\t\t\t \"date\", \"timestamptz\");\n+\t\t\t\t\t\tif (timestamp2tm(DatumGetTimestamp(value), NULL, tm, NULL, NULL, NULL) == 0) {\n+\t\t\t\t\t\t\ttz = DetermineTimeZoneOffset(tm, session_timezone);\n+\t\t\t\t\t\t}\n\t\t\t\t\t\tvalue = DirectFunctionCall1(date_timestamptz,\n\t\t\t\t\t\t\t\t\t\t\t\t\tvalue);\n\t\t\t\t\t\tbreak;\n@@ -2726,6 +2730,9 @@ executeDateTimeMethod(JsonPathExecContext *cxt, JsonPathItem *jsp,\n\t\t\t\t\tcase TIMESTAMPOID:\n\t\t\t\t\t\tcheckTimezoneIsUsedForCast(cxt->useTz,\n\t\t\t\t\t\t\t\t\t\t\t\t \"timestamp\", \"timestamptz\");\n+\t\t\t\t\t\tif (timestamp2tm(DatumGetTimestamp(value), NULL, tm, NULL, NULL, NULL) == 0) {\n+\t\t\t\t\t\t\ttz = DetermineTimeZoneOffset(tm, session_timezone);\n+\t\t\t\t\t\t}\n\t\t\t\t\t\tvalue = DirectFunctionCall1(timestamp_timestamptz,\n\t\t\t\t\t\t\t\t\t\t\t\t\tvalue);\n\t\t\t\t\t\tbreak;\n```\n\nOnly, you know, doesn’t crash the server.\n\nBest,\n\nDavid\n\n\n[1]: https://github.com/postgres/postgres/blob/629520be5f9da9d0192c7f6c8796bfddb4746760/src/backend/utils/adt/jsonpath_exec.c#L2784\n\n\n\n\n", "msg_date": "Tue, 9 Jul 2024 11:38:02 -0400", "msg_from": "\"David E. 
Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath: Inconsistency of timestamp_tz() Output" }, { "msg_contents": "On Tue, Jul 9, 2024 at 11:38 PM David E. Wheeler <[email protected]> wrote:\n>\n> On Jul 9, 2024, at 11:08, Junwang Zhao <[email protected]> wrote:\n>\n> > In JsonbValue.val.datatime, there is a tz field, I think that's where\n> > the offset stored, it is 18000 in the first example\n> >\n> > struct\n> > {\n> > Datum value;\n> > Oid typid;\n> > int32 typmod;\n> > int tz; /* Numeric time zone, in seconds, for\n> > * TimestampTz data type */\n> > } datetime;\n>\n> Oooh, okay, so it’s a jsonb variant of the type. Interesting. Ah, and it’s assigned here[1]:\n>\n> jb->val.datetime.tz = tz;\n>\n> It seems like JSONB timestamptz values want to display the recorded time zone, so I suspect we need to set it when the converting from a non-tz to a local tz setting, something like this:\n>\n> ``` patch\n> diff --git a/src/backend/utils/adt/jsonpath_exec.c b/src/backend/utils/adt/jsonpath_exec.c\n> index d79c929822..f63b3b9330 100644\n> --- a/src/backend/utils/adt/jsonpath_exec.c\n> +++ b/src/backend/utils/adt/jsonpath_exec.c\n> @@ -2707,12 +2707,16 @@ executeDateTimeMethod(JsonPathExecContext *cxt, JsonPathItem *jsp,\n> break;\n> case jpiTimestampTz:\n> {\n> + struct pg_tm *tm;\n> /* Convert result type to timestamp with time zone */\n> switch (typid)\n> {\n> case DATEOID:\n> checkTimezoneIsUsedForCast(cxt->useTz,\n> \"date\", \"timestamptz\");\n> + if (timestamp2tm(DatumGetTimestamp(value), NULL, tm, NULL, NULL, NULL) == 0) {\n> + tz = DetermineTimeZoneOffset(tm, session_timezone);\n> + }\n> value = DirectFunctionCall1(date_timestamptz,\n> value);\n> break;\n> @@ -2726,6 +2730,9 @@ executeDateTimeMethod(JsonPathExecContext *cxt, JsonPathItem *jsp,\n> case TIMESTAMPOID:\n> checkTimezoneIsUsedForCast(cxt->useTz,\n> \"timestamp\", \"timestamptz\");\n> + if (timestamp2tm(DatumGetTimestamp(value), NULL, tm, NULL, NULL, NULL) == 0) {\n> + tz = DetermineTimeZoneOffset(tm, session_timezone);\n> + }\n> value = DirectFunctionCall1(timestamp_timestamptz,\n> value);\n> break;\n> ```\n>\n> Only, you know, doesn’t crash the server.\n\nI apply your patch with some minor change(to make the server not crash):\n\ndiff --git a/src/backend/utils/adt/jsonpath_exec.c\nb/src/backend/utils/adt/jsonpath_exec.c\nindex d79c9298227..87a695ef633 100644\n--- a/src/backend/utils/adt/jsonpath_exec.c\n+++ b/src/backend/utils/adt/jsonpath_exec.c\n@@ -2708,6 +2708,8 @@ executeDateTimeMethod(JsonPathExecContext *cxt,\nJsonPathItem *jsp,\n case jpiTimestampTz:\n {\n /* Convert result type to timestamp\nwith time zone */\n+ struct pg_tm tm;\n+ fsec_t fsec;\n switch (typid)\n {\n case DATEOID:\n@@ -2726,6 +2728,9 @@ executeDateTimeMethod(JsonPathExecContext *cxt,\nJsonPathItem *jsp,\n case TIMESTAMPOID:\n\ncheckTimezoneIsUsedForCast(cxt->useTz,\n\n \"timestamp\", \"timestamptz\");\n+ if\n(timestamp2tm(DatumGetTimestamp(value), NULL, &tm, &fsec, NULL, NULL)\n== 0) {\n+ tz =\nDetermineTimeZoneOffset(&tm, session_timezone);\n+ }\n value =\nDirectFunctionCall1(timestamp_timestamptz,\n\n value);\n break;\n\nIt now gives the local tz:\n\n[local] postgres@postgres:5432-54960=# set time zone 'America/New_York';\nSET\nTime: 2.894 ms\n[local] postgres@postgres:5432-54960=# select\njsonb_path_query_tz('\"2024-08-15 12:34:56\"', '$.timestamp_tz()');\n┌─────────────────────────────┐\n│ jsonb_path_query_tz │\n├─────────────────────────────┤\n│ \"2024-08-15T12:34:56-04:00\" │\n└─────────────────────────────┘\n(1 
row)\n\nTime: 293813.022 ms (04:53.813)\n\nI'm not sure whether the SQL/JSON standard mentioned this, I searched a\nlittle bit, but found no clue :(\n\n>\n> Best,\n>\n> David\n>\n>\n> [1]: https://github.com/postgres/postgres/blob/629520be5f9da9d0192c7f6c8796bfddb4746760/src/backend/utils/adt/jsonpath_exec.c#L2784\n>\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Wed, 10 Jul 2024 13:48:17 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath: Inconsistency of timestamp_tz() Output" }, { "msg_contents": "On Jul 10, 2024, at 01:48, Junwang Zhao <[email protected]> wrote:\n\n> I apply your patch with some minor change(to make the server not crash):\n\nOh, thank you! Kicking myself for not catching the obvious.\n\n> It now gives the local tz:\n> \n> [local] postgres@postgres:5432-54960=# set time zone 'America/New_York';\n> SET\n> Time: 2.894 ms\n> [local] postgres@postgres:5432-54960=# select\n> jsonb_path_query_tz('\"2024-08-15 12:34:56\"', '$.timestamp_tz()');\n> ┌─────────────────────────────┐\n> │ jsonb_path_query_tz │\n> ├─────────────────────────────┤\n> │ \"2024-08-15T12:34:56-04:00\" │\n> └─────────────────────────────┘\n> (1 row)\n> \n> Time: 293813.022 ms (04:53.813)\n\nYes, and I think that’s what we want, since it preserves and displays the offset for strings that contain them:\n\n david=# set time zone 'America/New_York';\n SET\n david=# select jsonb_path_query_tz('\"2024-08-15 12:34:56+10\"', '$.timestamp_tz()');\n jsonb_path_query_tz\n -----------------------------\n \"2024-08-15T12:34:56+10:00\"\n\n> I'm not sure whether the SQL/JSON standard mentioned this, I searched a\n> little bit, but found no clue :(\n\nYeah I don’t know either, but now at least it’s consistent. I’ve attached a patch to fix it.\n\nIdeally, I think, we wouldn’t convert the value and determine the offset twice, but teach date_timestamptz and timestamp_timestamptz (or date2timestamptz and timestamp2timestamptz?) how to return the offset, or create alternate functions that do so. Not sure what calling style should be adopted here, but this at least addresses the issue. Happy to resubmit something more efficient upon function design feedback.\n\nBest,\n\nDavid", "msg_date": "Wed, 10 Jul 2024 10:33:05 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath: Inconsistency of timestamp_tz() Output" }, { "msg_contents": "On Jul 10, 2024, at 10:33, David E. Wheeler <[email protected]> wrote:\n\n> Yeah I don’t know either, but now at least it’s consistent. I’ve attached a patch to fix it.\n> \n> Ideally, I think, we wouldn’t convert the value and determine the offset twice, but teach date_timestamptz and timestamp_timestamptz (or date2timestamptz and timestamp2timestamptz?) how to return the offset, or create alternate functions that do so. Not sure what calling style should be adopted here, but this at least addresses the issue. Happy to resubmit something more efficient upon function design feedback.\n\nHere’s a September CommitFest item, though I think it should be fixed before the next beta.\n\n https://commitfest.postgresql.org/49/5119/\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Wed, 10 Jul 2024 10:35:17 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath: Inconsistency of timestamp_tz() Output" }, { "msg_contents": "On Jul 10, 2024, at 10:33, David E. 
Wheeler <[email protected]> wrote:\n\n> Yeah I don’t know either, but now at least it’s consistent. I’ve attached a patch to fix it.\n\nActually I think there’s a subtlety still missing here:\n\n@@ -2914,7 +2914,7 @@ HINT: Use *_tz() function for time zone support.\n select jsonb_path_query_tz('\"2023-08-15\"', '$.timestamp_tz()'); -- should work\n jsonb_path_query_tz -----------------------------\n- \"2023-08-15T07:00:00+00:00\"\n+ \"2023-08-14T23:00:00-08:00\"\n (1 row)\n\nThis test runs while the time zone is set to “PST8PDT”, but it’s got the PST offset when it should be PDT:\n\ndavid=# set time zone 'PST8PDT';\nSET\ndavid=# select '2023-08-15'::timestamptz;\n timestamptz \n------------------------\n 2023-08-15 00:00:00-07\n\nSo it should be -7, not -8. Not sure where to tell it to pay proper attention to daylight savings time.\n\nBest,\n\nDavid\n\n\n\n\n", "msg_date": "Wed, 10 Jul 2024 10:54:27 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath: Inconsistency of timestamp_tz() Output" }, { "msg_contents": "On Jul 10, 2024, at 10:54, David E. Wheeler <[email protected]> wrote:\n\n> So it should be -7, not -8. Not sure where to tell it to pay proper attention to daylight savings time.\n\nOh, and the time and date were wrong, too, because I blindly used the same conversion for dates as for timestamps. Fixed in v2.\n\nPR: https://github.com/theory/postgres/pull/7\nCF: https://commitfest.postgresql.org/49/5119/\n\nBest,\n\nDavid", "msg_date": "Wed, 10 Jul 2024 11:19:32 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath: Inconsistency of timestamp_tz() Output" }, { "msg_contents": "On Jul 10, 2024, at 11:19, David E. Wheeler <[email protected]> wrote:\n\n> Oh, and the time and date were wrong, too, because I blindly used the same conversion for dates as for timestamps. Fixed in v2.\n> \n> PR: https://github.com/theory/postgres/pull/7\n> CF: https://commitfest.postgresql.org/49/5119/\n\nRebase on 5784a49. No other changes. I would consider this a bug in features added for 17.\n\nBest,\n\nDavid", "msg_date": "Fri, 19 Jul 2024 10:05:25 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath: Inconsistency of timestamp_tz() Output" }, { "msg_contents": "On Fri, Jul 19, 2024 at 7:35 PM David E. Wheeler <[email protected]>\nwrote:\n\n> On Jul 10, 2024, at 11:19, David E. Wheeler <[email protected]> wrote:\n>\n> > Oh, and the time and date were wrong, too, because I blindly used the\n> same conversion for dates as for timestamps. Fixed in v2.\n> >\n> > PR: https://github.com/theory/postgres/pull/7\n> > CF: https://commitfest.postgresql.org/49/5119/\n>\n> Rebase on 5784a49. No other changes. 
I would consider this a bug in\n> features added for 17.\n>\n\nI agree with David that we need to set the tz explicitly as the JsonbValue\nstruct maintains that separately.\n\nHowever, in the attached version, I have added some comments and also,\nfixed some indentation.\n\nThanks\n\n\n>\n> Best,\n>\n> David\n>\n>\n\n-- \n\n\n\n*Jeevan Chalke*\n\n*Principal, ManagerProduct Development*\n\nenterprisedb.com <https://www.enterprisedb.com>", "msg_date": "Mon, 22 Jul 2024 12:42:21 +0530", "msg_from": "Jeevan Chalke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath: Inconsistency of timestamp_tz() Output" }, { "msg_contents": "On Jul 22, 2024, at 03:12, Jeevan Chalke <[email protected]> wrote:\n\n> I agree with David that we need to set the tz explicitly as the JsonbValue struct maintains that separately.\n> \n> However, in the attached version, I have added some comments and also, fixed some indentation.\n\nThank you for the review. I changed a single word in your comments (which are welcome). Thank you!\n\nJust to reiterate, this is not an ideal fix, as the `date_timestamptz` and `timestamp_timestamptz` perform the same calculations. It would be nice to do it only once.\n\nBest,\n\nDavid", "msg_date": "Mon, 22 Jul 2024 13:29:02 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath: Inconsistency of timestamp_tz() Output" }, { "msg_contents": "On 2024-07-22 Mo 3:12 AM, Jeevan Chalke wrote:\n>\n>\n> On Fri, Jul 19, 2024 at 7:35 PM David E. Wheeler \n> <[email protected]> wrote:\n>\n> On Jul 10, 2024, at 11:19, David E. Wheeler\n> <[email protected]> wrote:\n>\n> > Oh, and the time and date were wrong, too, because I blindly\n> used the same conversion for dates as for timestamps. Fixed in v2.\n> >\n> > PR: https://github.com/theory/postgres/pull/7\n> > CF: https://commitfest.postgresql.org/49/5119/\n>\n> Rebase on 5784a49. No other changes. I would consider this a bug\n> in features added for 17.\n>\n>\n> I agree with David that we need to set the tz explicitly as the \n> JsonbValue struct maintains that separately.\n>\n> However, in the attached version, I have added some comments and also, \n> fixed some indentation.\n>\n\nI have pushed this.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-07-22 Mo 3:12 AM, Jeevan Chalke\n wrote:\n\n\n\n\n\n\n\n\nOn Fri, Jul 19, 2024 at\n 7:35 PM David E. Wheeler <[email protected]>\n wrote:\n\nOn\n Jul 10, 2024, at 11:19, David E. Wheeler <[email protected]>\n wrote:\n\n > Oh, and the time and date were wrong, too, because I\n blindly used the same conversion for dates as for\n timestamps. Fixed in v2.\n > \n > PR: https://github.com/theory/postgres/pull/7\n > CF: https://commitfest.postgresql.org/49/5119/\n\n Rebase on 5784a49. No other changes. 
I would consider this a\n bug in features added for 17.\n\n\n\n I agree with David that we need to set the tz explicitly as\n the JsonbValue struct maintains that separately.\n\n However, in the attached version, I have added some comments\n and also, fixed some indentation.\n\n\n\n\n\n\n\nI have pushed this.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 30 Jul 2024 07:59:26 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: jsonpath: Inconsistency of timestamp_tz() Output" }, { "msg_contents": "On Jul 30, 2024, at 07:59, Andrew Dunstan <[email protected]> wrote:\n\n> I have pushed this.\n\nThank you, Andrew!\n\nD\n\n\n\n", "msg_date": "Tue, 30 Jul 2024 09:42:14 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: jsonpath: Inconsistency of timestamp_tz() Output" } ]
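To summarise the behaviour the committed fix discussed above is meant to guarantee, here is a small usage sketch. The expected outputs are taken from the examples in the thread and depend on the session time zone; they are shown as comments rather than real psql output.

```sql
SET time zone 'America/New_York';

-- No offset in the input: the session time zone is applied, and the
-- result is displayed with that zone's offset.
SELECT jsonb_path_query_tz('"2024-08-15 12:34:56"', '$.timestamp_tz()');
-- expected: "2024-08-15T12:34:56-04:00"

-- Explicit offset in the input: the offset is kept as given.
SELECT jsonb_path_query_tz('"2024-08-15 12:34:56+10"', '$.timestamp_tz()');
-- expected: "2024-08-15T12:34:56+10:00"

-- The _tz variant is required because the conversion depends on the
-- TimeZone setting; plain jsonb_path_query() rejects this case with a
-- hint to use the *_tz() function instead.
SELECT jsonb_path_query('"2024-08-15 12:34:56"', '$.timestamp_tz()');
-- expected: ERROR, with HINT: Use *_tz() function for time zone support.
```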
[ { "msg_contents": "Our docs currently state this regarding the perl requirement for \nbuilding on Windows:\n\n\nActiveState Perl\n\n ActiveState Perl is required to run the build generation scripts.\n MinGW or Cygwin Perl will not work. It must also be present in the\n PATH. Binaries can be downloaded from https://www.activestate.com\n <https://www.activestate.com> (Note: version 5.14 or later is\n required, the free Standard Distribution is sufficient).\n\n\nThis really hasn't been a true requirement for quite some time. I \nstopped using AS perl quite a few years ago due to possible licensing \nissues, and have been building with MSVC using Strawberry Perl ever \nsince. Andres recently complained that Strawberry was somewhat out of \ndate, but that's no longer really the case - it's on 5.38.2, which is \nthe latest in that series, and perl 5.40.0 was only releases a few weeks \nago.\n\nWe recommend AS Tcl/Tk, which I have not used, but which I am wary of \nfor the same reasons. There is a BSD licensed up to date windows binary \ninstallation called Magicsplat which I haven't tried but which might \nwell be suitable for our purposes.\n\nI suggest we remove these references to AS and replace them with \nreferences to Windows distros that are more appropriate.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOur docs currently state this regarding\n the perl requirement for building on Windows:\n\n\n\nActiveState Perl\n\nActiveState Perl is required to run the build generation\n scripts. MinGW or Cygwin Perl will not work. It must also be\n present in the PATH. Binaries can be downloaded from https://www.activestate.com\n (Note: version 5.14 or later is required, the free Standard\n Distribution is sufficient).\n\n\n\n\nThis really hasn't been a true requirement for quite some time. I\n stopped using AS perl quite a few years ago due to possible\n licensing issues, and have been building with MSVC using\n Strawberry Perl ever since. Andres recently complained that\n Strawberry was somewhat out of date, but that's no longer really\n the case - it's on 5.38.2, which is the latest in that series, and\n perl 5.40.0 was only releases a few weeks ago.\nWe recommend AS Tcl/Tk, which I have not used, but which I am\n wary of for the same reasons. There is a BSD licensed up to date\n windows binary installation called Magicsplat which I haven't\n tried but which might well be suitable for our purposes.\n\nI suggest we remove these references to AS and replace them with\n references to Windows distros that are more appropriate.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 1 Jul 2024 11:27:26 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Windows perl/tcl requirement documentation" }, { "msg_contents": "On Mon, Jul 01, 2024 at 11:27:26AM -0400, Andrew Dunstan wrote:\n> Our docs currently state this regarding the perl requirement for building on\n> Windows:\n> \n> \n> ActiveState Perl\n> \n> ActiveState Perl is required to run the build generation scripts.\n> MinGW or Cygwin Perl will not work. It must also be present in the\n> PATH. Binaries can be downloaded from https://www.activestate.com\n> <https://www.activestate.com> (Note: version 5.14 or later is\n> required, the free Standard Distribution is sufficient).\n> \n> \n> This really hasn't been a true requirement for quite some time. 
I stopped\n> using AS perl quite a few years ago due to possible licensing issues, and\n> have been building with MSVC using Strawberry Perl ever since. Andres\n> recently complained that Strawberry was somewhat out of date, but that's no\n> longer really the case - it's on 5.38.2, which is the latest in that series,\n> and perl 5.40.0 was only releases a few weeks ago.\n\nThis is an area where I have proposed a set of changes in the last\ncommit fest of March, but it led nowehere as, at least it seems to me,\nthere was no strong consensus about what to mention as main\nreferences:\nhttps://commitfest.postgresql.org/47/4745/\nhttps://www.postgresql.org/message-id/flat/[email protected]\n\nNot everything should be gone, but I was wondering back on this thread\nif it would make most sense to replace all these references to point\nto the popular packaging systems used in the buildfarm.\n\n> We recommend AS Tcl/Tk, which I have not used, but which I am wary of for\n> the same reasons. There is a BSD licensed up to date windows binary\n> installation called Magicsplat which I haven't tried but which might well be\n> suitable for our purposes.\n\nInteresting.\n\n> I suggest we remove these references to AS and replace them with references\n> to Windows distros that are more appropriate.\n\nAgreed. \n--\nMichael", "msg_date": "Wed, 17 Jul 2024 08:21:34 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows perl/tcl requirement documentation" }, { "msg_contents": "\nOn 2024-07-16 Tu 7:21 PM, Michael Paquier wrote:\n> On Mon, Jul 01, 2024 at 11:27:26AM -0400, Andrew Dunstan wrote:\n>> Our docs currently state this regarding the perl requirement for building on\n>> Windows:\n>>\n>>\n>> ActiveState Perl\n>>\n>> ActiveState Perl is required to run the build generation scripts.\n>> MinGW or Cygwin Perl will not work. It must also be present in the\n>> PATH. Binaries can be downloaded from https://www.activestate.com\n>> <https://www.activestate.com> (Note: version 5.14 or later is\n>> required, the free Standard Distribution is sufficient).\n>>\n>>\n>> This really hasn't been a true requirement for quite some time. I stopped\n>> using AS perl quite a few years ago due to possible licensing issues, and\n>> have been building with MSVC using Strawberry Perl ever since. Andres\n>> recently complained that Strawberry was somewhat out of date, but that's no\n>> longer really the case - it's on 5.38.2, which is the latest in that series,\n>> and perl 5.40.0 was only releases a few weeks ago.\n> This is an area where I have proposed a set of changes in the last\n> commit fest of March, but it led nowehere as, at least it seems to me,\n> there was no strong consensus about what to mention as main\n> references:\n> https://commitfest.postgresql.org/47/4745/\n> https://www.postgresql.org/message-id/flat/[email protected]\n>\n> Not everything should be gone, but I was wondering back on this thread\n> if it would make most sense to replace all these references to point\n> to the popular packaging systems used in the buildfarm.\n\n\nAt the very least we should stop recommending things we don't use or \ntest with. Our default position of doing nothing is getting increasingly \nuntenable here. 
We're actively misleading people IMNSHO.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 17 Jul 2024 10:53:36 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Windows perl/tcl requirement documentation" } ]
[ { "msg_contents": "Hello everyone!\n\nAs Michael noted in e26810d01d4 [0] hacking for Postgres 17 has begun.\nI’ve skimmed through hackers@ list. And it seems that so far role of the commitfest manager is vacant.\n\nIs anyone up for doing this community work? Make things moving :)\n\n\nBest regards, Andrey Borodin.\n\n[0] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=e26810d01d4\n[1] https://commitfest.postgresql.org/48/\n\n", "msg_date": "Mon, 1 Jul 2024 22:40:37 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Commitfest manager for July 2024" }, { "msg_contents": "> Postgres 17\n18\n\n\n", "msg_date": "Tue, 2 Jul 2024 01:16:31 +0500", "msg_from": "Kirill Reshke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commitfest manager for July 2024" }, { "msg_contents": "I'll give it a shot.\n\nOn Mon, Jul 1, 2024 at 4:16 PM Kirill Reshke <[email protected]> wrote:\n\n> > Postgres 17\n> 18\n>\n>\n>\n\nI'll give it a shot.On Mon, Jul 1, 2024 at 4:16 PM Kirill Reshke <[email protected]> wrote:> Postgres 17\n18", "msg_date": "Tue, 2 Jul 2024 16:08:53 -0400", "msg_from": "Corey Huinker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commitfest manager for July 2024" }, { "msg_contents": "Hi,\n\nThis reminded me that one of the changes proposed at pgconf.dev was\nhaving multiple CF managers, each responsible for a subset of the CF\nentries. Do we want to do try that?\n\nIIRC the idea was that it's not really feasible to shepherd ~400 patches\n(the current count for 2024-07), especially if the person has other\nthings to do too, and spreading this over multiple people would make it\nmore manageable - perhaps even allowing more people to participate.\n\nIf we wanted to try this, would that require some improvements in the CF\napp, e.g. to track which CF manager is responsible for each entry?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 3 Jul 2024 17:21:23 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commitfest manager for July 2024" }, { "msg_contents": "\n\n> On 3 Jul 2024, at 01:08, Corey Huinker <[email protected]> wrote:\n> \n> I'll give it a shot.\n\nGreat, thank you! Do you have extended access to CF? Like activity log and mass-mail functions?\nIf no I think someone from PG_INFRA can grant you necessary access.\n\n\n\n> On 3 Jul 2024, at 20:21, Tomas Vondra <[email protected]> wrote:\n> \n> This reminded me that one of the changes proposed at pgconf.dev was\n> having multiple CF managers, each responsible for a subset of the CF\n> entries. Do we want to do try that?\n> \n> IIRC the idea was that it's not really feasible to shepherd ~400 patches\n> (the current count for 2024-07), especially if the person has other\n> things to do too, and spreading this over multiple people would make it\n> more manageable - perhaps even allowing more people to participate.\n> \n> If we wanted to try this, would that require some improvements in the CF\n> app, e.g. to track which CF manager is responsible for each entry?\n> \n\nMaybe can we just coordinate here? If Corey does not mind, I'd happily help with 2-3 sections. I particularly like \"Replication and recovery\" and \"Server features\", but would be glad to take any.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Wed, 3 Jul 2024 21:51:23 +0500", "msg_from": "\"Andrey M. 
Borodin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Commitfest manager for July 2024" }, { "msg_contents": "On 7/3/24 18:51, Andrey M. Borodin wrote:\n> \n> \n>> On 3 Jul 2024, at 01:08, Corey Huinker <[email protected]>\n>> wrote:\n>> \n>> I'll give it a shot.\n> \n> Great, thank you! Do you have extended access to CF? Like activity\n> log and mass-mail functions? If no I think someone from PG_INFRA can\n> grant you necessary access.\n> \n> \n> \n>> On 3 Jul 2024, at 20:21, Tomas Vondra\n>> <[email protected]> wrote:\n>> \n>> This reminded me that one of the changes proposed at pgconf.dev\n>> was having multiple CF managers, each responsible for a subset of\n>> the CF entries. Do we want to do try that?\n>> \n>> IIRC the idea was that it's not really feasible to shepherd ~400\n>> patches (the current count for 2024-07), especially if the person\n>> has other things to do too, and spreading this over multiple people\n>> would make it more manageable - perhaps even allowing more people\n>> to participate.\n>> \n>> If we wanted to try this, would that require some improvements in\n>> the CF app, e.g. to track which CF manager is responsible for each\n>> entry?\n>> \n> \n> Maybe can we just coordinate here? If Corey does not mind, I'd\n> happily help with 2-3 sections. I particularly like \"Replication and\n> recovery\" and \"Server features\", but would be glad to take any.\n> \n\nSure, why not? If you're both OK with splitting the CFM work, feel free\nto decide how to divide the patches. I have imagined would be randomized\nin some way, to make it more likely the amount of work is more even\n(maybe some of the sections are much larger / more active?), but it's up\nto the people doing the work really - no one has the authority to tell\nyou how to do things ;-)\n\nMaybe there are more people who'd like to help with this - more people\nmeans less daunting amount of work per person ...\n\n\nAnother thing suggested in the \"commitfest thread\" nearby was that maybe\nwe should organize a \"status update\" pass over the entries, where the\nmore senior developers go over the patches and write some short summary\nof what they think needs to happen for each patch to move it forward\n\nI think the conclusion was that it needs to happen early in the cycle,\nbut that summer won't work because people tend to take vacation etc. So\nI guess we shall leave that for the September CF.\n\nBut if either of you (as CF managers) thinks some patch is stuck and\nwould benefit from such feedback, maybe ping me and I'll take a look and\nsee if I can suggest something.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 3 Jul 2024 19:31:54 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commitfest manager for July 2024" }, { "msg_contents": "On 7/3/24 12:51, Andrey M. Borodin wrote:\n>> On 3 Jul 2024, at 01:08, Corey Huinker <[email protected]> wrote:\n>> \n>> I'll give it a shot.\n> \n> Great, thank you! Do you have extended access to CF? 
Like activity log and mass-mail functions?\n> If no I think someone from PG_INFRA can grant you necessary access.\n\nI can do that, although given that Corey and I are colleagues, it might \nbe better if someone else on pginfra did.\n\nOr at least if a few other hackers tell me to \"just do it\"...\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Mon, 8 Jul 2024 11:38:54 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commitfest manager for July 2024" }, { "msg_contents": "On 7/8/24 11:38, Joe Conway wrote:\n> On 7/3/24 12:51, Andrey M. Borodin wrote:\n>>> On 3 Jul 2024, at 01:08, Corey Huinker <[email protected]> wrote:\n>>> \n>>> I'll give it a shot.\n>> \n>> Great, thank you! Do you have extended access to CF? Like activity log and mass-mail functions?\n>> If no I think someone from PG_INFRA can grant you necessary access.\n> \n> I can do that, although given that Corey and I are colleagues, it might\n> be better if someone else on pginfra did.\n> \n> Or at least if a few other hackers tell me to \"just do it\"...\n\n\nBased on off-list (sysadmin telegram channel) confirmation from Magnus \nand Dave Page -- done\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Tue, 9 Jul 2024 15:24:51 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commitfest manager for July 2024" } ]
[ { "msg_contents": "I like to add CREATE OR REPLACE MATERIALIZED VIEW with the attached\npatches.\n\nPatch 0001 adds CREATE OR REPLACE MATERIALIZED VIEW similar to CREATE OR\nREPLACE VIEW. It also includes regression tests and changes to docs.\n\nPatch 0002 deprecates CREATE MATERIALIZED VIEW IF NOT EXISTS because it\nno longer seems necessary with patch 0001. Tom Lane commented[1] about\nthe general dislike of IF NOT EXISTS, to which I agree, but maybe this\nwas meant only in response to adding new commands. Anyway, my idea is\nto deprecate that usage in PG18 and eventually remove it in PG19, if\nthere's consensus for it. We can drop that clause without violating any\nstandard because matviews are a Postgres extension. I'm not married to\nthe idea, just want to put it on the table for discussion.\n\nMotivation\n----------\n\nAt $JOB we use materialized views for caching a couple of expensive\nviews. But every now and then those views have to be changed, e.g., new\nlogic, new columns, etc. The matviews have to be dropped and re-created\nto include new columns. (Just changing the underlying view logic\nwithout adding new columns is trivial because the matviews are just thin\nwrappers that just have to be refreshed.)\n\nWe also have several views that depend on those matviews. The views\nmust also be dropped in order to re-create the matviews. We've already\nautomated this with two procedures that stash and re-create dependent\nview definitions.\n\nNative support for replacing matviews would simplify our setup and it\nwould make CREATE MATERIALIZED VIEW more complete when compared to\nCREATE VIEW.\n\nI searched the lists for previous discussions on this topic but couldn't\nfind any. So, I don't know if this was ever tried, but rejected for\nsome reason. I've found slides[2] from 2013 (when matviews landed in\n9.3) which have OR REPLACE on the roadmap:\n\n> Materialised Views roadmap\n>\n> * CREATE **OR REPLACE** MATERIALIZED VIEW\n> * Just an oversight that it wasn't added\n> [...]\n\nReplacing Matviews\n------------------\n\nWith patch 0001, a matview can be replaced without having to drop it and\nits dependent objects. In our use case it is no longer necessary to\ndefine the actual query in a separate view. Replacing a matview works\nanalogous to CREATE OR REPLACE VIEW:\n\n* the new query may change SELECT list expressions of existing columns\n* new columns can be added to the end of the SELECT list\n* existing columns cannot be renamed\n* the data type of existing columns cannot be changed\n\nIn addition to that, CREATE OR REPLACE MATERIALIZED VIEW also replaces\naccess method, tablespace, and storage parameters if specified. The\nclause WITH [NO] DATA works as expected: it either populates the matview\nor leaves it in an unscannable state.\n\nIt is an error to specify both OR REPLACE and IF NOT EXISTS.\n\nExample\n-------\n\npostgres=# CREATE MATERIALIZED VIEW test AS SELECT 1 AS a;\nSELECT 1\npostgres=# SELECT * FROM test;\n a\n---\n 1\n(1 row)\n\npostgres=# CREATE OR REPLACE MATERIALIZED VIEW test AS SELECT 2 AS a, 3 AS b;\nCREATE MATERIALIZED VIEW\npostgres=# SELECT * FROM test;\n a | b\n---+---\n 2 | 3\n(1 row)\n\nImplementation Details\n----------------------\n\nPatch 0001 extends create_ctas_internal in order to adapt an existing\nmatview to the new tuple descriptor, access method, tablespace, and\nstorage parameters. 
This logic is mostly based on DefineViewRelation.\nThis also reuses checkViewColumns, but adds argument is_matview in order\nto tell if we want error messages for a matview (true) or view (false).\nI'm not sure if that flag is the correct way to do that, or if I should\njust create a separate function just for matviews with the same logic.\nDo we even need to distinguish between view and matview in those error\nmessages?\n\nThe patch also adds tab completion in psql for CREATE OR REPLACE\nMATERIALIZED VIEW.\n\n[1] https://www.postgresql.org/message-id/226806.1693430777%40sss.pgh.pa.us\n[2] https://wiki.postgresql.org/images/a/ad/Materialised_views_now_and_the_future-pgconfeu_2013.pdf#page=23\n\n-- \nErik", "msg_date": "Tue, 2 Jul 2024 03:22:00 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": true, "msg_subject": "CREATE OR REPLACE MATERIALIZED VIEW" }, { "msg_contents": "Hi,\n\n> Patch 0002 deprecates CREATE MATERIALIZED VIEW IF NOT EXISTS because it\n> no longer seems necessary with patch 0001. Tom Lane commented[1] about\n> the general dislike of IF NOT EXISTS, to which I agree, but maybe this\n> was meant only in response to adding new commands. Anyway, my idea is\n> to deprecate that usage in PG18 and eventually remove it in PG19, if\n> there's consensus for it. We can drop that clause without violating any\n> standard because matviews are a Postgres extension. I'm not married to\n> the idea, just want to put it on the table for discussion.\n\nI can imagine how this may impact many applications and upset many\nsoftware developers worldwide. Was there even a precedent (in the\nrecent decade or so) when PostgreSQL broke the SQL syntax?\n\nTo clarify, I'm not opposed to this idea. If we are fine with breaking\nbackward compatibility on the SQL level, this would allow dropping the\nsupport of inherited tables some day, a feature that in my humble\nopinion shouldn't exist (I realize this is another and very debatable\nquestion though). I just don't think this is something we ever do in\nthis project. But I admit that this information may be incorrect\nand/or outdated.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 2 Jul 2024 13:46:21 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE OR REPLACE MATERIALIZED VIEW" }, { "msg_contents": "> On 2 Jul 2024, at 03:22, Erik Wienhold <[email protected]> wrote:\n\n> Patch 0002 deprecates CREATE MATERIALIZED VIEW IF NOT EXISTS because it\n> no longer seems necessary with patch 0001. Tom Lane commented[1] about\n> the general dislike of IF NOT EXISTS, to which I agree, but maybe this\n> was meant only in response to adding new commands. Anyway, my idea is\n> to deprecate that usage in PG18 and eventually remove it in PG19, if\n> there's consensus for it.\n\nConsidering the runway we typically give for deprecations, that seems like a\nfairly short timeframe for a SQL level command which isn't unlikely to exist\nin application code.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 2 Jul 2024 14:27:27 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE OR REPLACE MATERIALIZED VIEW" }, { "msg_contents": "I wrote:\n> Patch 0002 deprecates CREATE MATERIALIZED VIEW IF NOT EXISTS because it\n> no longer seems necessary with patch 0001. 
Tom Lane commented[1] about\n> the general dislike of IF NOT EXISTS, to which I agree, but maybe this\n> was meant only in response to adding new commands.\n\nOne could also argue that since matviews are a hybrid of tables and\nviews, that CREATE MATERIALIZED VIEW should accept both OR REPLACE (as\nin CREATE VIEW) and IF NOT EXISTS (as in CREATE TABLE). But not in the\nsame invocation of course.\n\nOn 2024-07-02 12:46 +0200, Aleksander Alekseev wrote:\n> > Anyway, my idea is to deprecate that usage in PG18 and eventually\n> > remove it in PG19, if there's consensus for it. We can drop that\n> > clause without violating any standard because matviews are a\n> > Postgres extension. I'm not married to the idea, just want to put\n> > it on the table for discussion.\n> \n> I can imagine how this may impact many applications and upset many\n> software developers worldwide. Was there even a precedent (in the\n> recent decade or so) when PostgreSQL broke the SQL syntax?\n\nA quick spelunking through the changelog with\n\n git log --grep deprecat -i --since '10 years ago'\n\nturned up two commits:\n\n578b229718 \"Remove WITH OIDS support, change oid catalog column visibility.\"\ne8d016d819 \"Remove deprecated COMMENT ON RULE syntax\"\n\nBoth were committed more than 10 years after deprecating the respective\nfeature. My proposed one-year window seems a bit harsh in comparison.\n\nOn 2024-07-02 14:27 +0200, Daniel Gustafsson wrote:\n> Considering the runway we typically give for deprecations, that seems like a\n> fairly short timeframe for a SQL level command which isn't unlikely to exist\n> in application code.\n\nIs there some general agreed upon timeframe, or is decided on a\ncase-by-case basis? I can imagine waiting at least until the last\nrelease without the deprecation reaches EOL. That would be 5 years with\nthe current versioning policy.\n\n-- \nErik\n\n\n", "msg_date": "Tue, 2 Jul 2024 15:58:06 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE OR REPLACE MATERIALIZED VIEW" }, { "msg_contents": "> On 2 Jul 2024, at 15:58, Erik Wienhold <[email protected]> wrote:\n> On 2024-07-02 14:27 +0200, Daniel Gustafsson wrote:\n>> Considering the runway we typically give for deprecations, that seems like a\n>> fairly short timeframe for a SQL level command which isn't unlikely to exist\n>> in application code.\n> \n> Is there some general agreed upon timeframe, or is decided on a\n> case-by-case basis? I can imagine waiting at least until the last\n> release without the deprecation reaches EOL. That would be 5 years with\n> the current versioning policy.\n\nAFAIK it's all decided on a case-by-case basis depending on impact. There are\nfor example the removals you listed, and there are functions in libpq which\nwere deprecated in the postgres 6.x days which are still around to avoid\nbreaking ABI.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 2 Jul 2024 16:01:45 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE OR REPLACE MATERIALIZED VIEW" }, { "msg_contents": "Hi,\n\n\n+1 for this feature.\n\n> Replacing Matviews\n> ------------------\n>\n> With patch 0001, a matview can be replaced without having to drop it and\n> its dependent objects. In our use case it is no longer necessary to\n> define the actual query in a separate view. 
Replacing a matview works\n> analogous to CREATE OR REPLACE VIEW:\n>\n> * the new query may change SELECT list expressions of existing columns\n> * new columns can be added to the end of the SELECT list\n> * existing columns cannot be renamed\n> * the data type of existing columns cannot be changed\n>\n> In addition to that, CREATE OR REPLACE MATERIALIZED VIEW also replaces\n> access method, tablespace, and storage parameters if specified. The\n> clause WITH [NO] DATA works as expected: it either populates the matview\n> or leaves it in an unscannable state.\n>\n> It is an error to specify both OR REPLACE and IF NOT EXISTS.\n\nI noticed replacing the materialized view is blocking all reads. Is that \nexpected ? Even if there is a unique index ?\n\n\nBest,\nSa_ïd_\n\n\n\n\n\n\nHi,\n\n\n+1 for this feature.\n\n\n\nReplacing Matviews\n------------------\n\nWith patch 0001, a matview can be replaced without having to drop it and\nits dependent objects. In our use case it is no longer necessary to\ndefine the actual query in a separate view. Replacing a matview works\nanalogous to CREATE OR REPLACE VIEW:\n\n* the new query may change SELECT list expressions of existing columns\n* new columns can be added to the end of the SELECT list\n* existing columns cannot be renamed\n* the data type of existing columns cannot be changed\n\nIn addition to that, CREATE OR REPLACE MATERIALIZED VIEW also replaces\naccess method, tablespace, and storage parameters if specified. The\nclause WITH [NO] DATA works as expected: it either populates the matview\nor leaves it in an unscannable state.\n\nIt is an error to specify both OR REPLACE and IF NOT EXISTS.\n\n\nI noticed replacing the materialized view is blocking all reads.\n Is that expected ? Even if there is a unique index ?\n\n\n\nBest,\n Saïd", "msg_date": "Thu, 4 Jul 2024 16:18:11 -0400", "msg_from": "Said Assemlal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE OR REPLACE MATERIALIZED VIEW" }, { "msg_contents": "On 2024-07-04 22:18 +0200, Said Assemlal wrote:\n> +1 for this feature.\n\nThanks!\n\n> I noticed replacing the materialized view is blocking all reads. Is that\n> expected ? Even if there is a unique index ?\n\nThat is expected because AccessExclusiveLock is acquired on the existing\nmatview. This is also the case for CREATE OR REPLACE VIEW.\n\nMy initial idea, while writing the patch, was that one could replace the\nmatview without populating it and then run the concurrent refresh, like\nthis:\n\n CREATE OR REPLACE MATERIALIZED VIEW foo AS ... WITH NO DATA;\n REFRESH MATERIALIZED VIEW CONCURRENTLY foo;\n\nBut that won't work because concurrent refresh requires an already\npopulated matview.\n\nRight now the patch either populates the replaced matview or leaves it\nin an unscannable state. Technically, it's also possible to skip the\nrefresh and leave the old data in place, perhaps by specifying\nWITH *OLD* DATA. New columns would just be null. Of course you can't\ntell if you got stale data without knowing how the matview was replaced.\nThoughts?\n\n-- \nErik\n\n\n", "msg_date": "Sat, 6 Jul 2024 01:42:48 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE OR REPLACE MATERIALIZED VIEW" }, { "msg_contents": "\n> That is expected because AccessExclusiveLock is acquired on the existing\n> matview. 
This is also the case for CREATE OR REPLACE VIEW.\n\nRight, had this case many times.\n\n>\n> My initial idea, while writing the patch, was that one could replace the\n> matview without populating it and then run the concurrent refresh, like\n> this:\n>\n> CREATE OR REPLACE MATERIALIZED VIEW foo AS ... WITH NO DATA;\n> REFRESH MATERIALIZED VIEW CONCURRENTLY foo;\n>\n> But that won't work because concurrent refresh requires an already\n> populated matview.\n>\n> Right now the patch either populates the replaced matview or leaves it\n> in an unscannable state. Technically, it's also possible to skip the\n> refresh and leave the old data in place, perhaps by specifying\n> WITH *OLD* DATA. New columns would just be null. Of course you can't\n> tell if you got stale data without knowing how the matview was replaced.\n> Thoughts?\n\n\nI believe the expectation is to get materialized views updated whenever \nit gets replaced so likely to confuse users ?\n\n\n\n\n\n\n", "msg_date": "Fri, 12 Jul 2024 10:49:14 -0400", "msg_from": "Said Assemlal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE OR REPLACE MATERIALIZED VIEW" }, { "msg_contents": "On 2024-07-12 16:49 +0200, Said Assemlal wrote:\n> > My initial idea, while writing the patch, was that one could replace the\n> > matview without populating it and then run the concurrent refresh, like\n> > this:\n> > \n> > CREATE OR REPLACE MATERIALIZED VIEW foo AS ... WITH NO DATA;\n> > REFRESH MATERIALIZED VIEW CONCURRENTLY foo;\n> > \n> > But that won't work because concurrent refresh requires an already\n> > populated matview.\n> > \n> > Right now the patch either populates the replaced matview or leaves it\n> > in an unscannable state. Technically, it's also possible to skip the\n> > refresh and leave the old data in place, perhaps by specifying\n> > WITH *OLD* DATA. New columns would just be null. Of course you can't\n> > tell if you got stale data without knowing how the matview was replaced.\n> > Thoughts?\n> \n> I believe the expectation is to get materialized views updated whenever it\n> gets replaced so likely to confuse users ?\n\nI agree, that could be confusing -- unless it's well documented. The\nattached 0003 implements WITH OLD DATA and states in the docs that this\nis intended to be used before a concurrent refresh.\n\nPatch 0001 now covers all matview cases in psql's tab completion. I\nmissed some of them with v1.\n\n-- \nErik", "msg_date": "Sat, 27 Jul 2024 02:45:15 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE OR REPLACE MATERIALIZED VIEW" }, { "msg_contents": "On 2024-07-27 02:45 +0200, Erik Wienhold wrote:\n> On 2024-07-12 16:49 +0200, Said Assemlal wrote:\n> > > My initial idea, while writing the patch, was that one could replace the\n> > > matview without populating it and then run the concurrent refresh, like\n> > > this:\n> > > \n> > > CREATE OR REPLACE MATERIALIZED VIEW foo AS ... WITH NO DATA;\n> > > REFRESH MATERIALIZED VIEW CONCURRENTLY foo;\n> > > \n> > > But that won't work because concurrent refresh requires an already\n> > > populated matview.\n> > > \n> > > Right now the patch either populates the replaced matview or leaves it\n> > > in an unscannable state. Technically, it's also possible to skip the\n> > > refresh and leave the old data in place, perhaps by specifying\n> > > WITH *OLD* DATA. New columns would just be null. 
Of course you can't\n> > > tell if you got stale data without knowing how the matview was replaced.\n> > > Thoughts?\n> > \n> > I believe the expectation is to get materialized views updated whenever it\n> > gets replaced so likely to confuse users ?\n> \n> I agree, that could be confusing -- unless it's well documented. The\n> attached 0003 implements WITH OLD DATA and states in the docs that this\n> is intended to be used before a concurrent refresh.\n> \n> Patch 0001 now covers all matview cases in psql's tab completion. I\n> missed some of them with v1.\n\nHere's a rebased version due to conflicts with f683d3a4ca and\n1e35951e71. No other changes since v2.\n\n-- \nErik", "msg_date": "Thu, 5 Sep 2024 22:33:51 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE OR REPLACE MATERIALIZED VIEW" } ]
[ { "msg_contents": "Hi,\n\nCurrently, there are many has_*_privilege functions for table, column,\nfunction, type, role, database, schema, language, server, foreign data\nwrapper, parameter, and so on. However, large object is not supported yet.\n\nI can find a way to check the privilege on a large object in the regression\ntest, in which whether a function call such as lo_open(lowrite(..)) raises\nan error or not is checked. However, I think it is not good that we need to\ntry to write to a large object to check we can write it, and also the\ntransaction will be aborted due to a permission error when the user doesn't\nhave the privilege. So, I would like to propose to add\nhas_large_object_function for checking if a user has the privilege on a large\nobject.\n\nI attached two files of patches. \n\n0001 makes a bit refactoring on large object codes. To check if a large\nobject exists, myLargeObjectExists() function has to be used rather than\npublic LargeObjectExists(), because we need to use different snapshots between\nread and write cases to make the behavior compatible to lo_open. However,\nmyLargeObjectExists() was static function, so I made it public and renamed it\nto LargeObjectExistsWIthSnapshot(). Also, since these two functions are almost\nsame except to whether snapshot can be specified, I rewrote LargeObjectExists to\ncall LargeObjectExistsWIthSnapshot internally. I am not sure why these\nduplicated codes have been left for long time, and there might be some reasons.\nHowever, otherwise, I think this deduplication also could reduce possible\nmaintenance cost in future.\n\n0002 adds has_large_object_privilege function.There are three variations whose\narguments are combinations of large object OID with user name, user OID, or\nimplicit user (current_user). It returns NULL if not-existing large object id is\nspecified, and false if non-existing user id is specified, and raises an error if\nnon-existing user name is specified. These behavior is similar with has_table_privilege. \nThe regression test is also included.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>", "msg_date": "Tue, 2 Jul 2024 16:34:44 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": true, "msg_subject": "Add has_large_object_privilege function" }, { "msg_contents": "\n\nOn 2024/07/02 16:34, Yugo NAGATA wrote:\n> So, I would like to propose to add\n> has_large_object_function for checking if a user has the privilege on a large\n> object.\n\n+1\n\nBTW since we already have pg_largeobject, using has_largeobject_privilege might\noffer better consistency. 
However, I'm okay with the current name for now.\nEven after committing the patch, we can rename it if others prefer has_largeobject_privilege.\n\n\n> I am not sure why these\n> duplicated codes have been left for long time, and there might be some reasons.\n> However, otherwise, I think this deduplication also could reduce possible\n> maintenance cost in future.\n\nI couldn't find the discussion that mentioned that reason either,\nbut I agree with simplifying the code.\n\nAs for 0001.patch, should we also remove the inclusion of \"access/genam.h\" and\n\"access/htup_details.h\" since they're no longer needed?\n\n\n> 0002 adds has_large_object_privilege function.There are three variations whose\n> arguments are combinations of large object OID with user name, user OID, or\n> implicit user (current_user).\n\nAs for 0002.patch, as the code in these three functions is mostly the same,\nit might be more efficient to create a common internal function and have\nthe three functions call it for simplicity.\n\nHere are other review comments for 0002.patch.\n\n+ <row>\n+ <entry role=\"func_table_entry\"><para role=\"func_signature\">\n+ <indexterm>\n+ <primary>has_large_object_privilege</primary>\n\nIn the documentation, access privilege inquiry functions are listed\nalphabetically. So, this function's description should come right\nafter has_language_privilege.\n\n\n+ * has_large_objec_privilege variants\n\nTypo: s/objec/object\n\n\n+ *\t\tThe result is a boolean value: true if user has been granted\n+ *\t\tthe indicated privilege or false if not.\n\nThe comment should clarify that NULL is returned if the specified\nlarge object doesn’t exist. For example,\n--------------\nThe result is a boolean value: true if user has the indicated\nprivilege, false if not, or NULL if object doesn't exist.\n--------------\n\n\n+convert_large_object_priv_string(text *priv_text)\n\nIt would be better to use \"priv_type_text\" instead of \"priv_text\"\nfor consistency with similar functions.\n\n\n+\tstatic const priv_map parameter_priv_map[] = {\n+\t\t{\"SELECT\", ACL_SELECT},\n+\t\t{\"UPDATE\", ACL_UPDATE},\n\nparameter_priv_map should be large_object_priv_map.\n\n\nAdditionally, the entries for \"WITH GRANT OPTION\" should be included here.\n\n\n+-- not-existing user\n+SELECT has_large_object_privilege(-99999, 1001, 'SELECT');\t-- false\n+ has_large_object_privilege\n+----------------------------\n+ t\n+(1 row)\n\n\nThe comment states the result should be false, but the actual result is true.\nOne of them seems incorrect.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n", "msg_date": "Mon, 9 Sep 2024 22:59:34 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add has_large_object_privilege function" }, { "msg_contents": "On Mon, 9 Sep 2024 22:59:34 +0900\nFujii Masao <[email protected]> wrote:\n\n> \n> \n> On 2024/07/02 16:34, Yugo NAGATA wrote:\n> > So, I would like to propose to add\n> > has_large_object_function for checking if a user has the privilege on a large\n> > object.\n> \n> +1\n\nThank you for your looking into this!\nI've attached a updated patch.\n\n> \n> BTW since we already have pg_largeobject, using has_largeobject_privilege might\n> offer better consistency. 
However, I'm okay with the current name for now.\n> Even after committing the patch, we can rename it if others prefer has_largeobject_privilege.\n\nI was also wandering which is better, so I renamed it because it seems at least one person,\nyou, have an idea that has_largeobject_privilege might be better. If it is found that majority\nprefer the previous name, I'll get it back.\n\n> \n> As for 0001.patch, should we also remove the inclusion of \"access/genam.h\" and\n> \"access/htup_details.h\" since they're no longer needed?\n\nRemoved.\n \n> \n> > 0002 adds has_large_object_privilege function.There are three variations whose\n> > arguments are combinations of large object OID with user name, user OID, or\n> > implicit user (current_user).\n> \n> As for 0002.patch, as the code in these three functions is mostly the same,\n> it might be more efficient to create a common internal function and have\n> the three functions call it for simplicity.\n\nI made a new internal function \"has_lo_priv_byid\" that is called from these\nfunctions.\n \n> Here are other review comments for 0002.patch.\n> \n> + <row>\n> + <entry role=\"func_table_entry\"><para role=\"func_signature\">\n> + <indexterm>\n> + <primary>has_large_object_privilege</primary>\n> \n> In the documentation, access privilege inquiry functions are listed\n> alphabetically. So, this function's description should come right\n> after has_language_privilege.\n\nFixed.\n\n> \n> + * has_large_objec_privilege variants\n> \n> Typo: s/objec/object\n\nFixed.\n\n> \n> + *\t\tThe result is a boolean value: true if user has been granted\n> + *\t\tthe indicated privilege or false if not.\n> \n> The comment should clarify that NULL is returned if the specified\n> large object doesn’t exist. For example,\n> --------------\n> The result is a boolean value: true if user has the indicated\n> privilege, false if not, or NULL if object doesn't exist.\n> --------------\n\nFixed.\n\n> \n> +convert_large_object_priv_string(text *priv_text)\n> \n> It would be better to use \"priv_type_text\" instead of \"priv_text\"\n> for consistency with similar functions.\n> \n> \n> +\tstatic const priv_map parameter_priv_map[] = {\n> +\t\t{\"SELECT\", ACL_SELECT},\n> +\t\t{\"UPDATE\", ACL_UPDATE},\n> \n> parameter_priv_map should be large_object_priv_map.\n\nFixed.\n\n> Additionally, the entries for \"WITH GRANT OPTION\" should be included here.\n\nFixed.\n\n> \n> +-- not-existing user\n> +SELECT has_large_object_privilege(-99999, 1001, 'SELECT');\t-- false\n> + has_large_object_privilege\n> +----------------------------\n> + t\n> +(1 row)\n> \n> \n> The comment states the result should be false, but the actual result is true.\n> One of them seems incorrect.\n\nI misunderstood that has_table_privilege always returns false for not-existing user, \nbut it was not correct. Actually, it returns true if the privilege is granted to public. 
\nI removed this check from the test at last because I don't think it is important.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>", "msg_date": "Tue, 10 Sep 2024 17:45:57 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add has_large_object_privilege function" }, { "msg_contents": "\n\nOn 2024/09/10 17:45, Yugo NAGATA wrote:\n> On Mon, 9 Sep 2024 22:59:34 +0900\n> Fujii Masao <[email protected]> wrote:\n> \n>>\n>>\n>> On 2024/07/02 16:34, Yugo NAGATA wrote:\n>>> So, I would like to propose to add\n>>> has_large_object_function for checking if a user has the privilege on a large\n>>> object.\n>>\n>> +1\n> \n> Thank you for your looking into this!\n> I've attached a updated patch.\n\nThanks for updating the patches! LGTM, except for a couple of minor things:\n\n+okui chiba * has_largeobject_privilege_id\n\n\"okui chiba\" seems to be a garbage text accidentally added.\n\n+ *\t\trole by Oid, large object by id, and privileges as AclMode.\n\n\"Oid\" seems better than \"id\" in \"large object by id\".\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n", "msg_date": "Thu, 12 Sep 2024 19:07:22 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add has_large_object_privilege function" }, { "msg_contents": "On Thu, 12 Sep 2024 19:07:22 +0900\nFujii Masao <[email protected]> wrote:\n\n> \n> \n> On 2024/09/10 17:45, Yugo NAGATA wrote:\n> > On Mon, 9 Sep 2024 22:59:34 +0900\n> > Fujii Masao <[email protected]> wrote:\n> > \n> >>\n> >>\n> >> On 2024/07/02 16:34, Yugo NAGATA wrote:\n> >>> So, I would like to propose to add\n> >>> has_large_object_function for checking if a user has the privilege on a large\n> >>> object.\n> >>\n> >> +1\n> > \n> > Thank you for your looking into this!\n> > I've attached a updated patch.\n> \n> Thanks for updating the patches! LGTM, except for a couple of minor things:\n> \n> +okui chiba * has_largeobject_privilege_id\n> \n> \"okui chiba\" seems to be a garbage text accidentally added.\n> \n> + *\t\trole by Oid, large object by id, and privileges as AclMode.\n> \n> \"Oid\" seems better than \"id\" in \"large object by id\".\n\nThank you for pointing out it.\nI've fixed them and attached the updated patch, v3.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>", "msg_date": "Thu, 12 Sep 2024 19:51:33 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add has_large_object_privilege function" }, { "msg_contents": "On Thu, 12 Sep 2024 19:51:33 +0900\nYugo NAGATA <[email protected]> wrote:\n\n> On Thu, 12 Sep 2024 19:07:22 +0900\n> Fujii Masao <[email protected]> wrote:\n> \n> > \n> > \n> > On 2024/09/10 17:45, Yugo NAGATA wrote:\n> > > On Mon, 9 Sep 2024 22:59:34 +0900\n> > > Fujii Masao <[email protected]> wrote:\n> > > \n> > >>\n> > >>\n> > >> On 2024/07/02 16:34, Yugo NAGATA wrote:\n> > >>> So, I would like to propose to add\n> > >>> has_large_object_function for checking if a user has the privilege on a large\n> > >>> object.\n> > >>\n> > >> +1\n> > > \n> > > Thank you for your looking into this!\n> > > I've attached a updated patch.\n> > \n> > Thanks for updating the patches! 
LGTM, except for a couple of minor things:\n> > \n> > +okui chiba * has_largeobject_privilege_id\n> > \n> > \"okui chiba\" seems to be a garbage text accidentally added.\n> > \n> > + *\t\trole by Oid, large object by id, and privileges as AclMode.\n> > \n> > \"Oid\" seems better than \"id\" in \"large object by id\".\n> \n> Thank you for pointing out it.\n> I've fixed them and attached the updated patch, v3.\n\nI confirmed the patches are committed in the master branch.\nThank you!\n\nI've updated the commitfest status to \"committed\".\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <[email protected]>\n\n\n", "msg_date": "Fri, 13 Sep 2024 15:56:11 +0900", "msg_from": "Yugo Nagata <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add has_large_object_privilege function" }, { "msg_contents": "On Fri, Sep 13, 2024 at 03:56:11PM +0900, Yugo Nagata wrote:\n> I confirmed the patches are committed in the master branch.\n> Thank you!\n> \n> I've updated the commitfest status to \"committed\".\n\nThis patch has been committed as of 4eada203a5a8, and introduced this\nblock in pg_proc.dat:\n\n{ oid => '4551', descr => 'user privilege on large objct by username, large object oid',\n proname => 'has_largeobject_privilege', procost => '10', provolatile => 's',\n prorettype => 'bool', proargtypes => 'name oid text',\n prosrc => 'has_largeobject_privilege_name_id' },\n{ oid => '4552', descr => 'current privilege on large objct by large object oid',\n proname => 'has_largeobject_privilege', procost => '10', provolatile => 's',\n prorettype => 'bool', proargtypes => 'oid text',\n prosrc => 'has_largeobject_privilege_id' },\n{ oid => '4553', descr => 'user privilege on large objct by user oid, large object oid',\n proname => 'has_largeobject_privilege', procost => '10', provolatile => 's',\n prorettype => 'bool', proargtypes => 'oid oid text',\n prosrc => 'has_largeobject_privilege_id_id' },\n\nThis has a couple of mistakes:\n- New functions introduced during a development cycle should use OIDs in\nthe range 8000-9999. See 98eab30b93d5, consisting of running\n./unused_oids to get a random range.\n- The function descriptions are inconsistent and have the three same\ntypos:\n-- s/large objct/large object/.\n-- s/current privilege/current user privilege/ for the second entry.\n\nAnd while that's not mandatory for committers, I would apply a\nreformat-dat-files while on it, to reduce some diffs I'm seeing.\n\nThis results in the attached that I'd like to apply to fix all that. 
It\nneeds a catalog version bump, of course.\n--\nMichael", "msg_date": "Thu, 26 Sep 2024 17:16:07 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add has_large_object_privilege function" }, { "msg_contents": "\n\nOn 2024/09/26 17:16, Michael Paquier wrote:\n> On Fri, Sep 13, 2024 at 03:56:11PM +0900, Yugo Nagata wrote:\n>> I confirmed the patches are committed in the master branch.\n>> Thank you!\n>>\n>> I've updated the commitfest status to \"committed\".\n> \n> This patch has been committed as of 4eada203a5a8, and introduced this\n> block in pg_proc.dat:\n> \n> { oid => '4551', descr => 'user privilege on large objct by username, large object oid',\n> proname => 'has_largeobject_privilege', procost => '10', provolatile => 's',\n> prorettype => 'bool', proargtypes => 'name oid text',\n> prosrc => 'has_largeobject_privilege_name_id' },\n> { oid => '4552', descr => 'current privilege on large objct by large object oid',\n> proname => 'has_largeobject_privilege', procost => '10', provolatile => 's',\n> prorettype => 'bool', proargtypes => 'oid text',\n> prosrc => 'has_largeobject_privilege_id' },\n> { oid => '4553', descr => 'user privilege on large objct by user oid, large object oid',\n> proname => 'has_largeobject_privilege', procost => '10', provolatile => 's',\n> prorettype => 'bool', proargtypes => 'oid oid text',\n> prosrc => 'has_largeobject_privilege_id_id' },\n> \n> This has a couple of mistakes:\n> - New functions introduced during a development cycle should use OIDs in\n> the range 8000-9999. See 98eab30b93d5, consisting of running\n> ./unused_oids to get a random range.\n> - The function descriptions are inconsistent and have the three same\n> typos:\n> -- s/large objct/large object/.\n> -- s/current privilege/current user privilege/ for the second entry.\n\nI agree these issues need to be fixed. Thanks for the patch!\n\n\n> And while that's not mandatory for committers, I would apply a\n> reformat-dat-files while on it, to reduce some diffs I'm seeing.\n\nYes, that sounds better.\n\n\n> This results in the attached that I'd like to apply to fix all that. It\n> needs a catalog version bump, of course.\n\nSo, are you planning to commit the patch? If not, I'm happy to handle it!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n", "msg_date": "Thu, 26 Sep 2024 19:51:06 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add has_large_object_privilege function" }, { "msg_contents": "On Thu, Sep 26, 2024 at 07:51:06PM +0900, Fujii Masao wrote:\n> So, are you planning to commit the patch? If not, I'm happy to handle it!\n\nI have some time to look at the buildfarm today, so I'll just go do it\nnow. 
Thanks for checking the patch.\n--\nMichael", "msg_date": "Fri, 27 Sep 2024 07:06:16 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add has_large_object_privilege function" }, { "msg_contents": "On Thu, 26 Sep 2024 17:16:07 +0900\nMichael Paquier <[email protected]> wrote:\n\n> On Fri, Sep 13, 2024 at 03:56:11PM +0900, Yugo Nagata wrote:\n> > I confirmed the patches are committed in the master branch.\n> > Thank you!\n> > \n> > I've updated the commitfest status to \"committed\".\n> \n> This patch has been committed as of 4eada203a5a8, and introduced this\n> block in pg_proc.dat:\n> \n> { oid => '4551', descr => 'user privilege on large objct by username, large object oid',\n> proname => 'has_largeobject_privilege', procost => '10', provolatile => 's',\n> prorettype => 'bool', proargtypes => 'name oid text',\n> prosrc => 'has_largeobject_privilege_name_id' },\n> { oid => '4552', descr => 'current privilege on large objct by large object oid',\n> proname => 'has_largeobject_privilege', procost => '10', provolatile => 's',\n> prorettype => 'bool', proargtypes => 'oid text',\n> prosrc => 'has_largeobject_privilege_id' },\n> { oid => '4553', descr => 'user privilege on large objct by user oid, large object oid',\n> proname => 'has_largeobject_privilege', procost => '10', provolatile => 's',\n> prorettype => 'bool', proargtypes => 'oid oid text',\n> prosrc => 'has_largeobject_privilege_id_id' },\n> \n> This has a couple of mistakes:\n> - New functions introduced during a development cycle should use OIDs in\n> the range 8000-9999. See 98eab30b93d5, consisting of running\n> ./unused_oids to get a random range.\n> - The function descriptions are inconsistent and have the three same\n> typos:\n> -- s/large objct/large object/.\n> -- s/current privilege/current user privilege/ for the second entry.\n\nThank you for pointing out them.\nI used unused_oids to look for available oids, but I missed to\nfollow the best practice suggested by it. I'll be careful.\n\nRegards,\nYugo Nagata\n\n> \n> And while that's not mandatory for committers, I would apply a\n> reformat-dat-files while on it, to reduce some diffs I'm seeing.\n> \n> This results in the attached that I'd like to apply to fix all that. It\n> needs a catalog version bump, of course.\n> --\n> Michael\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n", "msg_date": "Fri, 27 Sep 2024 12:26:22 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add has_large_object_privilege function" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n\n> - New functions introduced during a development cycle should use OIDs in\n> the range 8000-9999. See 98eab30b93d5, consisting of running\n> ./unused_oids to get a random range.\n\nThere's been seen several fixups of this kind recently. Should we add a\nnote about this to the comment at the top of all of the pg_*.dat files\nthat have explicit oid assignments? 
People might be more likely to\nnotice that than the the section over in bki.sgml.\n\n- ilmari\n\n\n", "msg_date": "Fri, 27 Sep 2024 11:44:25 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add has_large_object_privilege function" }, { "msg_contents": "On Fri, 27 Sep 2024 11:44:25 +0100\nDagfinn Ilmari Mannsåker <[email protected]> wrote:\n\n> Michael Paquier <[email protected]> writes:\n> \n> > - New functions introduced during a development cycle should use OIDs in\n> > the range 8000-9999. See 98eab30b93d5, consisting of running\n> > ./unused_oids to get a random range.\n> \n> There's been seen several fixups of this kind recently. Should we add a\n> note about this to the comment at the top of all of the pg_*.dat files\n> that have explicit oid assignments? People might be more likely to\n> notice that than the the section over in bki.sgml.\n\nHow about adding more to unused_oids output to explain the reason why\npatches should use consecutive OIDs in the range 8000-9999 and low OIDs\nshould not be used in patches(that is, this minimizes the risk of OID\ncollisions with other patches) instead of just saying it is the best practise.\nI think patch authors looking for OIDs they can use will run unused_oids,\nso more likely notice this.\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <[email protected]>\n\n\n", "msg_date": "Sun, 29 Sep 2024 12:46:35 +0900", "msg_from": "Yugo Nagata <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add has_large_object_privilege function" } ]
[ { "msg_contents": "Hello there,\n\nThe test src/test/modules/test_extensions/sql/test_extensions.sql don't \npass during make installcheck on the builds with ICU support (configure \n--with-icu) and with database created with this one's locale provider \n(initdb --locale-provider=icu --icu-locale=en-US).\n\nA root cause is unstable sorting order of the output of the psql command \n\\dx+, which is sorted according to the current locale rules, and then \ncompared to expected etalon in the test, which was sorted with different \nprovider rules.\n\nSince the characters ')' and ',' have different orders when sorted by \nthe ICU provider and ASCII sorted (--no-locale), the following output \nlines change their positions relative to each other:\n\n \"function varbitrange(bit varying,bit varying)\"\n\n \"function varbitrange(bit varying,bit varying,text)\"\n\nI tried to use the \\dx+ replacement function in this test with locale \nprovider-independent sorting, and it solves the problem:\n\nCREATE OR REPLACE FUNCTION dx_plus(schema_name char varying) RETURNS \nTABLE(\"Object description\" char varying) AS\n\n$$\n\nBEGIN\n\n RETURN QUERY SELECT obj_descr::varchar AS \"Object \ndescription\" FROM (SELECT regexp_replace(pg_describe_object(classid, \nobjid, objsubid)\n\n , 'pg_temp_\\d+'\n\n , 'pg_temp', 'g') AS obj_descr\n\n FROM pg_depend\n\n WHERE refclassid = 'pg_extension'::regclass\n\n AND deptype = 'e'\n\n AND refobjid = (SELECT oid FROM pg_extension WHERE \nextname = schema_name))\n\n ORDER BY length(obj_descr), obj_descr;\n\nEND;\n\n$$\n\n__\n\nPS:\n\nPatches for a master branch are attached.", "msg_date": "Tue, 02 Jul 2024 14:38:33 +0400", "msg_from": "Aleksei Fakeev <[email protected]>", "msg_from_op": true, "msg_subject": "Test_extensions installcheck fails with ICU provider, workaround" }, { "msg_contents": "UPD:\n\nWith the proposed changes there is no need to set NO_LOCALE to configure \nthe test build, so the attached patches remove this option from the \nrecipes.\n\n From: Aleksei Fakeev <[email protected]>\nSent: Tuesday, July 2, 2024 2:39 PM\nTo: [email protected]\nSubject: Test_extensions installcheck fails with ICU provider, \nworkaround\n\nHello there,\n\nThe test src/test/modules/test_extensions/sql/test_extensions.sql don't \npass during make installcheck on the builds with ICU support (configure \n--with-icu) and with database created with this one's locale provider \n(initdb --locale-provider=icu --icu-locale=en-US).\n\nA root cause is unstable sorting order of the output of the psql command \n\\dx+, which is sorted according to the current locale rules, and then \ncompared to expected etalon in the test, which was sorted with different \nprovider rules.\n\nSince the characters ')' and ',' have different orders when sorted by \nthe ICU provider and ASCII sorted (--no-locale), the following output \nlines change their positions relative to each other:\n\n\"function varbitrange(bit varying,bit varying)\"\n\n\"function varbitrange(bit varying,bit varying,text)\"\n\nI tried to use the \\dx+ replacement function in this test with locale \nprovider-independent sorting, and it solves the problem:\n\nCREATE OR REPLACE FUNCTION dx_plus(schema_name char varying) RETURNS \nTABLE(\"Object description\" char varying) AS\n\n$$\n\nBEGIN\n\n RETURN QUERY SELECT obj_descr::varchar AS \"Object \ndescription\" FROM (SELECT regexp_replace(pg_describe_object(classid, \nobjid, objsubid)\n\n , 'pg_temp_\\d+'\n\n , 'pg_temp', 'g') AS obj_descr\n\n FROM pg_depend\n\n WHERE refclassid = 
'pg_extension'::regclass\n\n AND deptype = 'e'\n\n AND refobjid = (SELECT oid FROM pg_extension WHERE \nextname = schema_name))\n\n ORDER BY length(obj_descr), obj_descr;\n\nEND;\n\n$$\n\n__\n\nPS:\n\nPatches for a master branch are attached.", "msg_date": "Thu, 04 Jul 2024 15:16:07 +0400", "msg_from": "Aleksei Fakeev <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Test_extensions installcheck fails with ICU provider, workaround" } ]
[ { "msg_contents": "Hi,\n\nThis is a PoC that implements XMLCast (SQL/XML X025), which enables\nconversions between SQL and XML data type.\n\nIt basically does the following:\n\n* When casting an XML value to a SQL data type, XML values containing\nXSD literals will be converted to their equivalent SQL data type.\n* When casting from a SQL data type to XML, the cast operand will be\ntranslated to its corresponding XSD data type.\n\nSELECT xmlcast(now() AS xml);\n             xmlcast              \n----------------------------------\n 2024-07-02T17:03:11.189073+02:00\n(1 row)\n\nSELECT xmlcast('2024-07-02T17:03:11.189073+02:00'::xml AS timestamp with\ntime zone);\n            xmlcast            \n-------------------------------\n 2024-07-02 17:03:11.189073+02\n(1 row)\n\nSELECT xmlcast('P1Y2M3DT4H5M6S'::xml AS interval);\n            xmlcast            \n-------------------------------\n 1 year 2 mons 3 days 04:05:06\n(1 row)\n\nSELECT xmlcast('&lt;foo&amp;bar&gt;'::xml AS text);\n  xmlcast  \n-----------\n <foo&bar>\n(1 row)\n\nSELECT xmlcast('1 year 2 months 3 days 4 hours 5 minutes 6\nseconds'::interval AS xml) ;\n    xmlcast     \n----------------\n P1Y2M3DT4H5M6S\n(1 row)\n\nSELECT xmlcast('42.73'::xml AS numeric);\n xmlcast\n---------\n   42.73\n(1 row)\n\nSELECT xmlcast(42730102030405 AS xml);\n    xmlcast     \n----------------\n 42730102030405\n(1 row)\n\n\nIs it starting in the right direction? Any feedback would be much\nappreciated.\n\nBest,\nJim", "msg_date": "Tue, 2 Jul 2024 18:02:38 +0200", "msg_from": "Jim Jones <[email protected]>", "msg_from_op": true, "msg_subject": "[PoC] XMLCast (SQL/XML X025)" }, { "msg_contents": "On 02.07.24 18:02, Jim Jones wrote:\n> It basically does the following:\n>\n> * When casting an XML value to a SQL data type, XML values containing\n> XSD literals will be converted to their equivalent SQL data type.\n> * When casting from a SQL data type to XML, the cast operand will be\n> translated to its corresponding XSD data type.\n>\nv2 attached adds missing return for NO_XML_SUPPORT control path in\nunescape_xml\n\n-- \nJim", "msg_date": "Fri, 5 Jul 2024 16:18:17 +0200", "msg_from": "Jim Jones <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PoC] XMLCast (SQL/XML X025)" }, { "msg_contents": "On 05.07.24 16:18, Jim Jones wrote:\n> On 02.07.24 18:02, Jim Jones wrote:\n>> It basically does the following:\n>>\n>> * When casting an XML value to a SQL data type, XML values containing\n>> XSD literals will be converted to their equivalent SQL data type.\n>> * When casting from a SQL data type to XML, the cast operand will be\n>> translated to its corresponding XSD data type.\n>>\n> v2 attached adds missing return for NO_XML_SUPPORT control path in\n> unescape_xml\n>\nv3 adds the missing XML passing mechanism BY VALUE and BY REF, as\ndescribed in the  XMLCast specification:\n\nXMLCAST (<XML cast operand> AS <XML cast target> [ <XML passing\nmechanism> ])\n\nTests and documentation were updated accordingly.\n\n-- \nJim", "msg_date": "Thu, 15 Aug 2024 23:02:23 +0200", "msg_from": "Jim Jones <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PoC] XMLCast (SQL/XML X025)" } ]
[ { "msg_contents": "Hackers,\n\nIn fuzing around trying to work out what’s going on with the formatting of timestamptz values cast by the timestamp_tz() jsonpath method[1], I noticed that the formatting of the string() method applied to date and time objects was not fully tested, or how the output is determined by the DateStyle method.\n\nThe attached path aims to rectify this situation by adding tests that chain string() after the jsonpath date/time methods, both with the default testing “PostreSQL” DateStyle and “ISO”. It also mentions the impact of the DateStyle parameter in the string() documentation.\n\nAlso available to review as a pull request[2].\n\nBest,\n\nDavid\n\n[1]: https://www.postgresql.org/message-id/7DE080CE-6D8C-4794-9BD1-7D9699172FAB%40justatheory.com\n[2]: https://github.com/theory/postgres/pull/7/files", "msg_date": "Tue, 2 Jul 2024 12:51:19 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Document DateStyle effect on jsonpath string()" }, { "msg_contents": "On Wed, Jul 3, 2024 at 12:51 AM David E. Wheeler <[email protected]> wrote:\n>\n> Hackers,\n>\n> In fuzing around trying to work out what’s going on with the formatting of timestamptz values cast by the timestamp_tz() jsonpath method[1], I noticed that the formatting of the string() method applied to date and time objects was not fully tested, or how the output is determined by the DateStyle method.\n>\n> The attached path aims to rectify this situation by adding tests that chain string() after the jsonpath date/time methods, both with the default testing “PostreSQL” DateStyle and “ISO”. It also mentions the impact of the DateStyle parameter in the string() documentation.\n>\n> Also available to review as a pull request[2].\n>\n> Best,\n>\n> David\n>\n> [1]: https://www.postgresql.org/message-id/7DE080CE-6D8C-4794-9BD1-7D9699172FAB%40justatheory.com\n> [2]: https://github.com/theory/postgres/pull/7/files\n>\n>\n\n\n\n\n+set datestyle = 'ISO';\n+select jsonb_path_query_tz('\"2023-08-15 12:34:56\"',\n'$.timestamp_tz().string()'); -- should work\n+ jsonb_path_query_tz\n+--------------------------\n+ \"2023-08-15 12:34:56-07\"\n+(1 row)\n\nDo you need to reset the datestyle?\nalso the above query is time zone sensitive, maybe the time zone is\nset in another place, but that's not explicit?\n\n\n <para>\n- String value converted from a JSON boolean, number, string, or datetime\n+ String value converted from a JSON boolean, number, string, or\n+ datetime. Note that the string output of datetimes is determined by\n+ the <xref linkend=\"guc-datestyle\"/> parameter.\n </para>\nimho, your patch has just too many examples.\nfor explaining the above sentence, the following example should be enough.\n\nbegin;\nset local time zone +1;\nset local datestyle to postgres;\nselect jsonb_path_query_tz('\"2023-08-15 12:34:56\"',\n'$.timestamp_tz().string()');\nset local datestyle to iso;\nselect jsonb_path_query_tz('\"2023-08-15 12:34:56\"',\n'$.timestamp_tz().string()');\ncommit;\n\n\n", "msg_date": "Thu, 4 Jul 2024 16:28:42 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "On Jul 4, 2024, at 04:28, jian he <[email protected]> wrote:\n\n> Do you need to reset the datestyle?\n\nWouldn’t hurt but it’s not necessary, no. 
It’s set only for the execution of this file, and there are no more calls that rely on it.\n\n> also the above query is time zone sensitive, maybe the time zone is\n> set in another place, but that's not explicit?\n\nIt’s implicit in how PostgreSQL runs its test suite; other tests later change it.\n\n> <para>\n> - String value converted from a JSON boolean, number, string, or datetime\n> + String value converted from a JSON boolean, number, string, or\n> + datetime. Note that the string output of datetimes is determined by\n> + the <xref linkend=\"guc-datestyle\"/> parameter.\n> </para>\n> imho, your patch has just too many examples.\n\nI’m confused. There are no examples in my patch, or this bit you cite. \n\n> for explaining the above sentence, the following example should be enough.\n\nAre you referring to the tests? I made them comprehensive so that we reliably demonstrate the behavior of the string() method on all the date/time data types. They are not examples, not in the documentation sense at least.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Thu, 4 Jul 2024 10:45:27 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "On Thu, Jul 4, 2024 at 10:45 PM David E. Wheeler <[email protected]> wrote:\n>\n> On Jul 4, 2024, at 04:28, jian he <[email protected]> wrote:\n>\n> > Do you need to reset the datestyle?\n>\n> Wouldn’t hurt but it’s not necessary, no. It’s set only for the execution of this file, and there are no more calls that rely on it.\n>\n> > also the above query is time zone sensitive, maybe the time zone is\n> > set in another place, but that's not explicit?\n>\n> It’s implicit in how PostgreSQL runs its test suite; other tests later change it.\n\nI inserted these two commands into it.\nshow time zone;\nshow datestyle;\n\nturns out in the test suite, the default data style is \"Postgres,\nMDY\", and the default time zone is \"PST8PDT\".\nThese two implicit settings weren't mentioned anywhere.\n\nyour tests look ok to me.\none tiny complaint would be maybe we need `reset datestyle`.\nBecause we are in line 600 of src/test/regress/sql/jsonb_jsonpath.sql,\nWe want to make sure the following test is not influenced by guc datestyle.\n\n\n> > <para>\n> > - String value converted from a JSON boolean, number, string, or datetime\n> > + String value converted from a JSON boolean, number, string, or\n> > + datetime. Note that the string output of datetimes is determined by\n> > + the <xref linkend=\"guc-datestyle\"/> parameter.\n> > </para>\n> > imho, your patch has just too many examples.\n>\n> I’m confused. There are no examples in my patch, or this bit you cite.\n>\n> > for explaining the above sentence, the following example should be enough.\n>\n> Are you referring to the tests? I made them comprehensive so that we reliably demonstrate the behavior of the string() method on all the date/time data types. They are not examples, not in the documentation sense at least.\n>\nI mean tests, sorry for the confusion. added several more tests should be fine.\n\noverall looks good to me.\n\n\n", "msg_date": "Tue, 9 Jul 2024 22:35:09 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "On Jul 9, 2024, at 10:35, jian he <[email protected]> wrote:\n\n> one tiny complaint would be maybe we need `reset datestyle`.\n\nThat’s fair. 
Done.\n\nD", "msg_date": "Tue, 9 Jul 2024 10:45:37 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "On Jul 9, 2024, at 10:45, David E. Wheeler <[email protected]> wrote:\n\n>> one tiny complaint would be maybe we need `reset datestyle`.\n> \n> That’s fair. Done.\n\nHere’s a rebase on 5784a49. I also updated the commitfest item[1] to link to a new pull request[2], since I seem to have turned the other one into the tz conversion bug fix.\n\nBest,\n\nDavid\n\n[1]: https://commitfest.postgresql.org/49/5101/\n[2]: https://github.com/theory/postgres/pull/8", "msg_date": "Fri, 19 Jul 2024 10:22:18 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "On Jul 19, 2024, at 10:22, David E. Wheeler <[email protected]> wrote:\n> \n> Here’s a rebase on 5784a49. I also updated the commitfest item[1] to link to a new pull request[2], since I seem to have turned the other one into the tz conversion bug fix.\n> \n> [1]: https://commitfest.postgresql.org/49/5101/\n> [2]: https://github.com/theory/postgres/pull/8\n\nRebase on 47c9803. I also changed the commitfest item[1] to “ready for committer”, since jian reviewed it, though I couldn’t see a way to add jian as a reviewer in the app. Hope that makes sense.\n\nBest,\n\nDavid", "msg_date": "Tue, 30 Jul 2024 09:47:22 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "\"David E. Wheeler\" <[email protected]> writes:\n> Rebase on 47c9803. I also changed the commitfest item[1] to “ready for committer”, since jian reviewed it, though I couldn’t see a way to add jian as a reviewer in the app. Hope that makes sense.\n\nPushed with a little additional polishing.\n\nI thought the best way to address jian's complaint about DateStyle not\nbeing clearly locked down was to change horology.sql to verify the\nprevailing setting, as it has long done for TimeZone. That's the\nlead test script for related stuff, so it makes the most sense to\ndo it there. Having done that, I don't feel a need to duplicate\nthat elsewhere.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Sep 2024 14:51:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "On Sep 10, 2024, at 14:51, Tom Lane <[email protected]> wrote:\n\n> Pushed with a little additional polishing.\n\nThank you! Do you think it’d be worthwhile to back port to 17?\n\n> I thought the best way to address jian's complaint about DateStyle not\n> being clearly locked down was to change horology.sql to verify the\n> prevailing setting, as it has long done for TimeZone. That's the\n> lead test script for related stuff, so it makes the most sense to\n> do it there. Having done that, I don't feel a need to duplicate\n> that elsewhere.\n\nYeah, that will help, but I still bet next time I go to figure out what it is I’ll stick that line in some test to make it fail with clear output for what it’s set to 😂.\n\nD\n\n\n\n\n\n", "msg_date": "Tue, 10 Sep 2024 15:43:09 -0400", "msg_from": "\"David E. 
Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "Isn't this behavior actually a bug that should be fixed rather than \ndocumented?\n\nThese JSON path functions are specified by the SQL standard, so they \nshouldn't depend on PostgreSQL-specific settings. At least in new \nfunctionality we should avoid that, no?\n\n\nOn 10.09.24 21:43, David E. Wheeler wrote:\n> On Sep 10, 2024, at 14:51, Tom Lane <[email protected]> wrote:\n> \n>> Pushed with a little additional polishing.\n> \n> Thank you! Do you think it’d be worthwhile to back port to 17?\n> \n>> I thought the best way to address jian's complaint about DateStyle not\n>> being clearly locked down was to change horology.sql to verify the\n>> prevailing setting, as it has long done for TimeZone. That's the\n>> lead test script for related stuff, so it makes the most sense to\n>> do it there. Having done that, I don't feel a need to duplicate\n>> that elsewhere.\n> \n> Yeah, that will help, but I still bet next time I go to figure out what it is I’ll stick that line in some test to make it fail with clear output for what it’s set to 😂.\n\n\n\n", "msg_date": "Tue, 10 Sep 2024 22:10:47 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> These JSON path functions are specified by the SQL standard, so they \n> shouldn't depend on PostgreSQL-specific settings. At least in new \n> functionality we should avoid that, no?\n\nHmm ... but does the standard precisely define the output format?\n\nSince these conversions are built on our own timestamp I/O code,\nI rather imagine there is quite a lot of behavior there that's\nnot to be found in the standard. That doesn't really trouble\nme as long as the spec's behavior is a subset of it (i.e.,\nreachable as long as you've got the right parameter settings).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Sep 2024 16:16:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "\"David E. Wheeler\" <[email protected]> writes:\n> On Sep 10, 2024, at 14:51, Tom Lane <[email protected]> wrote:\n>> Pushed with a little additional polishing.\n\n> Thank you! Do you think it’d be worthwhile to back port to 17?\n\nNot as things stand. If we adopt Peter's nearby position that\nthe current behavior is actually buggy, then probably back-patching\na corrected version would be worthwhile as a part of fixing it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Sep 2024 16:17:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "On Sep 10, 2024, at 16:10, Peter Eisentraut <[email protected]> wrote:\n\n> These JSON path functions are specified by the SQL standard, so they shouldn't depend on PostgreSQL-specific settings. At least in new functionality we should avoid that, no?\n\nDoes that also apply to `datetime(template)`, where it uses the `to_timestamp()` templates? From the docs[1]:\n\n> The datetime() and datetime(template) methods use the same parsing rules as the to_timestamp SQL function does (see Section 9.8[2]), with three exceptions. First, these methods don't allow unmatched template patterns. 
Second, only the following separators are allowed in the template string: minus sign, period, solidus (slash), comma, apostrophe, semicolon, colon and space. Third, separators in the template string must exactly match the input string.\n\nDoes the standard specify a formatting language?\n\nBest,\n\nDavid\n\n[1]: https://www.postgresql.org/docs/devel/functions-json.html\n[2]: https://www.postgresql.org/docs/devel/functions-formatting.html\n\n", "msg_date": "Tue, 10 Sep 2024 16:48:27 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "On Sep 10, 2024, at 16:17, Tom Lane <[email protected]> wrote:\n\n> Not as things stand. If we adopt Peter's nearby position that\n> the current behavior is actually buggy, then probably back-patching\n> a corrected version would be worthwhile as a part of fixing it.\n\nOh, I see now that my reply to him points out the same issue as yours.\n\nSo annoying that the standard is not publicly available for any one of us to go look.\n\nD\n\n\n\n", "msg_date": "Tue, 10 Sep 2024 16:50:06 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "On 10.09.24 22:16, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> These JSON path functions are specified by the SQL standard, so they\n>> shouldn't depend on PostgreSQL-specific settings. At least in new\n>> functionality we should avoid that, no?\n> \n> Hmm ... but does the standard precisely define the output format?\n> \n> Since these conversions are built on our own timestamp I/O code,\n> I rather imagine there is quite a lot of behavior there that's\n> not to be found in the standard. That doesn't really trouble\n> me as long as the spec's behavior is a subset of it (i.e.,\n> reachable as long as you've got the right parameter settings).\n\nActually, the standard prohibits this call:\n\n\"\"\"\nXV) If JM specifies string, then:\n\n1) Forallj,1(one)≤j≤n,\nCase:\n\na) If Ij is not a character string, number, or Boolean value,\nthen let ST be data exception — non-string SQL/JSON item (2202X).\n\nb) Otherwise, let X be an SQL variable whose value is Ij. Let ML be an \nimplementation-defined (IL006) maximum\nlength of variable-length character strings. Let Vj be the result of\n\nCAST (X AS CHARACTER VARYING(ML)\n\nIf this conversion results in an exception condition, then\nlet ST be that exception condition.\n\"\"\"\n\nSo I guess we have extended this and the current behavior is consistent \nwith item b).\n\nWhat I'm concerned about is that this makes the behavior of JSON_QUERY \nnon-immutable. Maybe there are other reasons for it to be \nnon-immutable, in which case this isn't important. But it might be \nworth avoiding that?\n\n\n\n", "msg_date": "Wed, 11 Sep 2024 12:20:41 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> What I'm concerned about is that this makes the behavior of JSON_QUERY \n> non-immutable. Maybe there are other reasons for it to be \n> non-immutable, in which case this isn't important. But it might be \n> worth avoiding that?\n\nFair point, but haven't we already bit that bullet with respect\nto timezones?\n\n[ looks... 
] Hmm, it looks like jsonb_path_exists_tz is marked\nstable while jsonb_path_exists is claimed to be immutable.\nSo yeah, there's a problem here. I'm not 100% convinced that\njsonb_path_exists was truly immutable before, but for sure it\nis not now, and that's bad.\n\nregression=# select jsonb_path_query('\"2023-08-15 12:34:56\"', '$.timestamp().string()');\n jsonb_path_query \n-----------------------\n \"2023-08-15 12:34:56\"\n(1 row)\n\nregression=# set datestyle = postgres;\nSET\nregression=# select jsonb_path_query('\"2023-08-15 12:34:56\"', '$.timestamp().string()');\n jsonb_path_query \n----------------------------\n \"Tue Aug 15 12:34:56 2023\"\n(1 row)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Sep 2024 10:11:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "On Sep 11, 2024, at 10:11, Tom Lane <[email protected]> wrote:\n\n> [ looks... ] Hmm, it looks like jsonb_path_exists_tz is marked\n> stable while jsonb_path_exists is claimed to be immutable.\n> So yeah, there's a problem here. I'm not 100% convinced that\n> jsonb_path_exists was truly immutable before, but for sure it\n> is not now, and that's bad.\n> \n> regression=# select jsonb_path_query('\"2023-08-15 12:34:56\"', '$.timestamp().string()');\n> jsonb_path_query \n> -----------------------\n> \"2023-08-15 12:34:56\"\n> (1 row)\n> \n> regression=# set datestyle = postgres;\n> SET\n> regression=# select jsonb_path_query('\"2023-08-15 12:34:56\"', '$.timestamp().string()');\n> jsonb_path_query \n> ----------------------------\n> \"Tue Aug 15 12:34:56 2023\"\n> (1 row)\n\nI wonder, then, whether .string() should be modified to use the ISO format in UTC, and therefore be immutable. That’s the format you get if you omit .string() and let result be stringified from a date/time/timestamp.\n\nFWIW, that’s how my Go port works, since I didn’t bother to replicate the DateStyle GUC (example[1]).\n\nBest,\n\nDavid\n\n[1]: https://theory.github.io/sqljson/playground/?p=%2524.timestamp%28%29.string%28%29&j=%25222023-08-15%252012%253A34%253A56%2522&a=&o=1\n\n\n\n", "msg_date": "Wed, 11 Sep 2024 11:00:09 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "\"David E. Wheeler\" <[email protected]> writes:\n> I wonder, then, whether .string() should be modified to use the ISO format in UTC, and therefore be immutable. That’s the format you get if you omit .string() and let result be stringified from a date/time/timestamp.\n\nWhat \"let result be stringified\" behavior are you thinking of,\nexactly? AFAICS there's not sensitivity to timezone unless you\nuse the _tz variant, otherwise it just regurgitates the input.\n\nI agree that we should force ISO datestyle, but I'm not quite sure\nabout whether we're in the clear with timezone handling. We already\nhad a bunch of specialized rules about timezone handling in the _tz\nand not-_tz variants of these functions. 
It seems to me that simply\nforcing UTC would not be consistent with that pre-existing behavior.\nHowever, I may not have absorbed enough caffeine yet.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Sep 2024 11:11:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "On Sep 11, 2024, at 11:11, Tom Lane <[email protected]> wrote:\n\n> What \"let result be stringified\" behavior are you thinking of,\n> exactly? AFAICS there's not sensitivity to timezone unless you\n> use the _tz variant, otherwise it just regurgitates the input.\n\nThere is stringification of a time, date, or timestamp value, which has no TZ, but is still affected by DateStyle. Then there is stringification of timetz or timestamptz, which can be created by the .time_tz() and .timstamp_tz() functions, and therefore are impacted by both the DateStyle and TimeZone configs, even when not using the _tz variant:\n\ndavid=# set timezone = 'America/New_York';\nSET\ndavid=# select jsonb_path_query('\"2023-08-15 12:34:56-09\"', '$.timestamp_tz().string()');\n jsonb_path_query \n--------------------------\n \"2023-08-15 17:34:56-04\"\n\ndavid=# set timezone = 'America/Los_Angeles';\nSET\ndavid=# select jsonb_path_query('\"2023-08-15 12:34:56-09\"', '$.timestamp_tz().string()');\n jsonb_path_query \n--------------------------\n \"2023-08-15 14:34:56-07\"\n(1 row)\n\n> I agree that we should force ISO datestyle, but I'm not quite sure\n> about whether we're in the clear with timezone handling. We already\n> had a bunch of specialized rules about timezone handling in the _tz\n> and not-_tz variants of these functions. It seems to me that simply\n> forcing UTC would not be consistent with that pre-existing behavior.\n> However, I may not have absorbed enough caffeine yet.\n\nTrue, it would not be consistent with the existing behaviors, but I believe these are all new in Postgres 17.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Wed, 11 Sep 2024 11:52:31 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "\"David E. Wheeler\" <[email protected]> writes:\n> On Sep 11, 2024, at 11:11, Tom Lane <[email protected]> wrote:\n>> What \"let result be stringified\" behavior are you thinking of,\n>> exactly? 
AFAICS there's not sensitivity to timezone unless you\n>> use the _tz variant, otherwise it just regurgitates the input.\n\n> There is stringification of a time, date, or timestamp value, which\n> has no TZ, but is still affected by DateStyle.\n\nWhat I understood you to be referencing is what happens without\nstring(), which AFAICS does not result in any timezone rotation:\n\nregression=# set timezone = 'America/New_York';\nSET\nregression=# select jsonb_path_query('\"2023-08-15 12:34:56-09\"', '$.timestamp_tz()');\n jsonb_path_query \n-----------------------------\n \"2023-08-15T12:34:56-09:00\"\n(1 row)\n\nI think I'd be content to have string() duplicate that behavior\n--- in fact, it seems like it'd be odd if it doesn't match.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Sep 2024 12:06:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "I wrote:\n> I think I'd be content to have string() duplicate that behavior\n> --- in fact, it seems like it'd be odd if it doesn't match.\n\nBuilding on that thought, maybe we could fix it as attached?\nThis changes the just-committed test cases of course, and I did\nnot look at whether there are documentation changes to make.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 11 Sep 2024 12:26:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "On Sep 11, 2024, at 12:26, Tom Lane <[email protected]> wrote:\n\n> Building on that thought, maybe we could fix it as attached?\n> This changes the just-committed test cases of course, and I did\n> not look at whether there are documentation changes to make.\n\nIt looks like that’s what datum_to_json_internal() in json.c does, which IIUC is the default stringification for date and time values.\n\nDavid\n\n\n\n", "msg_date": "Wed, 11 Sep 2024 13:06:00 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "\"David E. Wheeler\" <[email protected]> writes:\n> On Sep 11, 2024, at 12:26, Tom Lane <[email protected]> wrote:\n>> Building on that thought, maybe we could fix it as attached?\n\n> It looks like that’s what datum_to_json_internal() in json.c does, which IIUC is the default stringification for date and time values.\n\nRight. I actually lifted the code from convertJsonbScalar in\njsonb_util.c.\n\nHere's a more fleshed-out patch with docs and regression test\nfixes. I figured we could shorten the tests a bit now that\nthe point is just to verify that datestyle *doesn't* affect it.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 11 Sep 2024 15:08:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "On Sep 11, 2024, at 15:08, Tom Lane <[email protected]> wrote:\n\n> Right. I actually lifted the code from convertJsonbScalar in\n> jsonb_util.c.\n> \n> Here's a more fleshed-out patch with docs and regression test\n> fixes. I figured we could shorten the tests a bit now that\n> the point is just to verify that datestyle *doesn't* affect it.\n\nLooks good. Although…\n\nShould it use the database-native stringification standard or the jsonpath stringification standard? 
In the case of the former, output should omit the “T” time separator and simplify the time zone `07:00` to `07`. But if it’s the latter case, then it’s good as is.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Wed, 11 Sep 2024 15:20:50 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "\"David E. Wheeler\" <[email protected]> writes:\n> Should it use the database-native stringification standard or the jsonpath stringification standard? In the case of the former, output should omit the “T” time separator and simplify the time zone `07:00` to `07`. But if it’s the latter case, then it’s good as is.\n\nSeems to me it should be the jsonpath convention. If the spec\ndoes require any specific spelling, surely it must be that one.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Sep 2024 15:43:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "On Sep 11, 2024, at 15:43, Tom Lane <[email protected]> wrote:\n\n> Seems to me it should be the jsonpath convention. If the spec\n> does require any specific spelling, surely it must be that one.\n\nWFM, though now I’ll have to go change my port 😂.\n\nD\n\n\n\n", "msg_date": "Wed, 11 Sep 2024 15:52:26 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "On Sep 11, 2024, at 15:52, David E. Wheeler <[email protected]> wrote:\n\n> WFM, though now I’ll have to go change my port 😂.\n\nI saw this was committed in cb599b9. Thank you!\n\nBTW, will the back-patch to 17 (cc4fdfa) be included in 17.0 or 17.1?\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Mon, 16 Sep 2024 13:25:17 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "\"David E. Wheeler\" <[email protected]> writes:\n> BTW, will the back-patch to 17 (cc4fdfa) be included in 17.0 or 17.1?\n\n17.0. If we were already past 17.0 I'd have a lot more angst\nabout changing this behavior.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 16 Sep 2024 13:29:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" }, { "msg_contents": "On Sep 16, 2024, at 13:29, Tom Lane <[email protected]> wrote:\n\n> 17.0. If we were already past 17.0 I'd have a lot more angst\n> about changing this behavior.\n\nGreat, very glad it made it in.\n\nD\n\n\n\n", "msg_date": "Mon, 16 Sep 2024 14:43:12 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Document DateStyle effect on jsonpath string()" } ]
[ { "msg_contents": "Hi.\n\nThis is a series of patches to change styles, in assorted sources.\nIMO, this improves a tiny bit and is worth trying.\n\n1. Avoid dereference iss_SortSupport if it has nulls.\n2. Avoid dereference plan_node_id if no dsm area.\n3. Avoid dereference spill partitions if zero ntuples.\n4. Avoid possible useless palloc call with zero size.\n5. Avoid redundant variable initialization in foreign.\n6. Check the cheap test first in ExecMain.\n7. Check the cheap test first in pathkeys.\n8. Delay possible expensive bms_is_empty call in sub_select.\n9. Reduce some scope in execPartition.\n10. Reduce some scope for TupleDescAttr array dereference.\n11. Remove useless duplicate test in ruleutils.\nThis is already checked at line 4566.\n\n12. Remove useless duplicate test in string_utils.\nThis is already checked at line 982.\n\nbest regards,\nRanier Vilela", "msg_date": "Tue, 2 Jul 2024 14:39:20 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Assorted style changes with a tiny improvement" } ]
[ { "msg_contents": "Hi,\nThe 'cur_skey' parameter in `IndexScanOK` funciton doesn't seem to be useful.\n\nThe function does not use cur_skey for any operation. Is there any other consideration\nfor retaining the cur_skey parameter here?\n\nBest wishes\nHugo zhang\n\n\n\n\n\n\n\n\n\nHi,\nThe 'cur_skey' parameter in `IndexScanOK` funciton doesn't seem to be useful.\n \nThe function does not use cur_skey for any operation. Is there any other consideration\nfor retaining the cur_skey parameter here?\n \nBest wishes\nHugo zhang", "msg_date": "Wed, 3 Jul 2024 02:48:37 +0000", "msg_from": "Hugo Zhang <[email protected]>", "msg_from_op": true, "msg_subject": "Useless parameter 'cur_skey' in IndexScanOK" }, { "msg_contents": "Hi,\n\n> The 'cur_skey' parameter in `IndexScanOK` funciton doesn't seem to be useful.\n\nGood catch. As I understand it is not used for anything since\na78fcfb51243 (dated 2006) and this is a static function, so we\nshouldn't worry about third-party extensions.\n\nI wonder why none of the compilers complained about this.\n\nHere is the patch.\n\n--\nBest regards,\nAleksander Alekseev", "msg_date": "Wed, 3 Jul 2024 16:41:21 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Useless parameter 'cur_skey' in IndexScanOK" }, { "msg_contents": "> On 3 Jul 2024, at 15:41, Aleksander Alekseev <[email protected]> wrote:\n>> The 'cur_skey' parameter in `IndexScanOK` funciton doesn't seem to be useful.\n\n> Good catch. As I understand it is not used for anything since\n> a78fcfb51243 (dated 2006) and this is a static function, so we\n> shouldn't worry about third-party extensions.\n\nAgreed, it seems reasonable to clean this up.\n\n> I wonder why none of the compilers complained about this.\n\nNot to mention static analyzers.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 3 Jul 2024 15:46:56 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Useless parameter 'cur_skey' in IndexScanOK" }, { "msg_contents": "On 03/07/2024 16:46, Daniel Gustafsson wrote:\n>> On 3 Jul 2024, at 15:41, Aleksander Alekseev <[email protected]> wrote:\n>>> The 'cur_skey' parameter in `IndexScanOK` funciton doesn't seem to be useful.\n> \n>> Good catch. As I understand it is not used for anything since\n>> a78fcfb51243 (dated 2006) and this is a static function, so we\n>> shouldn't worry about third-party extensions.\n> \n> Agreed, it seems reasonable to clean this up.\n> \n>> I wonder why none of the compilers complained about this.\n> \n> Not to mention static analyzers.\n\nCommitted, thanks.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 16 Aug 2024 13:28:57 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Useless parameter 'cur_skey' in IndexScanOK" } ]
[ { "msg_contents": "After some off-list discussion about the desirability of this feature, where\nseveral hackers opined that it's something that we should have, I've decided to\nrebase this patch and submit it one more time. There are several (long)\nthreads covering the history of this patch [0][1], related work stemming from\nthis [2] as well as earlier attempts and discussions [3][4]. Below I try to\nrespond to a summary of points raised in those threads.\n\nThe mechanics of the patch hasn't changed since the last posted version, it has\nmainly been polished slightly. A high-level overview of the processing is:\nIt's using a launcher/worker model where the launcher will spawn a worker per\ndatabase which will traverse all pages and dirty them in order to calculate and\nset the checksum on them. During this inprogress state all backends calculated\nand write checksums but don't verify them on read. Once all pages have been\nchecksummed the state of the cluster will switch over to \"on\" synchronized\nacross all backends with a procsignalbarrier. At this point checksums are\nverified and processing is equal to checksums having been enabled initdb. When\na user disables checksums the cluster enters a state where all backends still\nwrite checksums until all backends have acknowledged that they have stopped\nverifying checksums (again using a procsignalbarrier). At this point the\ncluster switches to \"off\" and checksums are neither written nor verified. In\ncase the cluster is restarted, voluntarily or via a crash, processing will have\nto be restarted (more on that further down).\n\nThe user facing controls for this are two SQL level functions, for enabling and\ndisabling. The existing data_checksums GUC remains but is expanded with more\npossible states (with on/off retained).\n\n\nComplaints against earlier versions\n===================================\nSeasoned hackers might remember that this patch has been on -hackers before.\nThere has been a lot of review, and AFAICT all specific comments have been\naddressed. There are however a few larger more generic complaints:\n\n* Restartability - the initial version of the patch did not support stateful\nrestarts, a shutdown performed (or crash) before checksums were enabled would\nresult in a need to start over from the beginning. This was deemed the safe\norchestration method. The lack of this feature was seen as serious drawback,\nso it was added. Subsequent review instead found the patch to be too\ncomplicated with a too large featureset. I thihk there is merit to both of\nthese arguments: being able to restart is a great feature; and being able to\nreason about the correctness of a smaller patch is also great. As of this\nsubmission I have removed the ability to restart to keep the scope of the patch\nsmall (which is where the previous version was, which received no review after\nthe removal). The way I prefer to frame this is to first add scaffolding and\ninfrastructure (this patch) and leave refinements and add-on features\n(restartability, but also others like parallel workers, optimizing rare cases,\netc) for follow-up patches.\n\n* Complexity - it was brought up that this is a very complex patch for a niche\nfeature, and there is a lot of truth to that. It is inherently complex to\nchange a pg_control level state of a running cluster. There might be ways to\nmake the current patch less complex, while not sacrificing stability, and if so\nthat would be great. 
A lot of of the complexity came from being able to\nrestart processing, and that's not removed for this version, but it's clearly\nnot close to a one-line-diff even without it.\n\nOther complaints were addressed, in part by the invention of procsignalbarriers\nwhich makes this synchronization possible. In re-reading the threads I might\nhave missed something which is still left open, and if so I do apologize for\nthat.\n\n\nOpen TODO items:\n================\n* Immediate checkpoints - the code is currently using CHECKPOINT_IMMEDIATE in\norder to be able to run the tests in a timely manner on it. This is overly\naggressive and dialling it back while still being able to run fast tests is a\nTODO. Not sure what the best option is there.\n\n* Monitoring - an insightful off-list reviewer asked how the current progress\nof the operation is monitored. So far I've been using pg_stat_activity but I\ndon't disagree that it's not a very sharp tool for this. Maybe we need a\nspecific function or view or something? There clearly needs to be a way for a\nuser to query state and progress of a transition.\n\n* Throttling - right now the patch uses the vacuum access strategy, with the\nsame cost options as vacuum, in order to implement throttling. This is in part\ndue to the patch starting out modelled around autovacuum as a worker, but it\nmay not be the right match for throttling checksums.\n\n* Naming - the in-between states when data checksums are enabled or disabled\nare called inprogress-on and inprogress-off. The reason for this is simply\nthat early on there were only three states: inprogress, on and off, and the\nprocess of disabling wasn't labeled with a state. When this transition state\nwas added it seemed like a good idea to tack the end-goal onto the transition.\nThese state names make the code easily greppable but might not be the most\nobvious choices for anything user facing. Is \"Enabling\" and \"Disabling\" better\nterms to use (across the board or just user facing) or should we stick to the\ncurrent?\n\nThere are ways in which this processing can be optimized to achieve better\nperformance, but in order to keep goalposts in sight and patchsize down they\nare left as future work.\n\n--\nDaniel Gustafsson\n\n[0] https://www.postgresql.org/message-id/flat/CABUevExz9hUUOLnJVr2kpw9Cx%3Do4MCr1SVKwbupzuxP7ckNutA%40mail.gmail.com\n[1] https://www.postgresql.org/message-id/flat/CABUevEwE3urLtwxxqdgd5O2oQz9J717ZzMbh%2BziCSa5YLLU_BA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/20181030051643.elbxjww5jjgnjaxg%40alap3.anarazel.de\n[3] https://www.postgresql.org/message-id/flat/FF393672-5608-46D6-9224-6620EC532693%40endpoint.com\n[4] https://www.postgresql.org/message-id/flat/CABUevEx8KWhZE_XkZQpzEkZypZmBp3GbM9W90JLp%3D-7OJWBbcg%40mail.gmail.com", "msg_date": "Wed, 3 Jul 2024 08:41:01 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Changing the state of data checksums in a running cluster" }, { "msg_contents": "Hi Daniel,\n\nThanks for rebasing the patch and submitting it again!\n\nOn 7/3/24 08:41, Daniel Gustafsson wrote:\n> After some off-list discussion about the desirability of this feature, where\n> several hackers opined that it's something that we should have, I've decided to\n> rebase this patch and submit it one more time. There are several (long)\n> threads covering the history of this patch [0][1], related work stemming from\n> this [2] as well as earlier attempts and discussions [3][4]. 
Below I try to\n> respond to a summary of points raised in those threads.\n> \n> The mechanics of the patch hasn't changed since the last posted version, it has\n> mainly been polished slightly. A high-level overview of the processing is:\n> It's using a launcher/worker model where the launcher will spawn a worker per\n> database which will traverse all pages and dirty them in order to calculate and\n> set the checksum on them. During this inprogress state all backends calculated\n> and write checksums but don't verify them on read. Once all pages have been\n> checksummed the state of the cluster will switch over to \"on\" synchronized\n> across all backends with a procsignalbarrier. At this point checksums are\n> verified and processing is equal to checksums having been enabled initdb. When\n> a user disables checksums the cluster enters a state where all backends still\n> write checksums until all backends have acknowledged that they have stopped\n> verifying checksums (again using a procsignalbarrier). At this point the\n> cluster switches to \"off\" and checksums are neither written nor verified. In\n> case the cluster is restarted, voluntarily or via a crash, processing will have\n> to be restarted (more on that further down).\n> \n> The user facing controls for this are two SQL level functions, for enabling and\n> disabling. The existing data_checksums GUC remains but is expanded with more\n> possible states (with on/off retained).\n> \n> \n> Complaints against earlier versions\n> ===================================\n> Seasoned hackers might remember that this patch has been on -hackers before.\n> There has been a lot of review, and AFAICT all specific comments have been\n> addressed. There are however a few larger more generic complaints:\n> \n> * Restartability - the initial version of the patch did not support stateful\n> restarts, a shutdown performed (or crash) before checksums were enabled would\n> result in a need to start over from the beginning. This was deemed the safe\n> orchestration method. The lack of this feature was seen as serious drawback,\n> so it was added. Subsequent review instead found the patch to be too\n> complicated with a too large featureset. I thihk there is merit to both of\n> these arguments: being able to restart is a great feature; and being able to\n> reason about the correctness of a smaller patch is also great. As of this\n> submission I have removed the ability to restart to keep the scope of the patch\n> small (which is where the previous version was, which received no review after\n> the removal). The way I prefer to frame this is to first add scaffolding and\n> infrastructure (this patch) and leave refinements and add-on features\n> (restartability, but also others like parallel workers, optimizing rare cases,\n> etc) for follow-up patches.\n> \n\nI 100% support this approach.\n\nSure, I'd like to have a restartable tool, but clearly that didn't go\nparticularly well, and we still have nothing to enable checksums online.\nThat doesn't seem to benefit anyone - to me it seems reasonable to get\nthe non-restartable tool in, and then maybe later someone can improve\nthis to make it restartable. Thanks to the earlier work we know it's\ndoable, even if it was too complex.\n\nThis way it's at least possible to enable checksums online with some\nadditional care (e.g. to make sure no one restarts the cluster etc.).\nI'd bet for vast majority of systems this will work just fine. 
Huge\nsystems with some occasional / forced restarts may not be able to make\nthis work - but then again, that's no worse than now.\n\n> * Complexity - it was brought up that this is a very complex patch for a niche\n> feature, and there is a lot of truth to that. It is inherently complex to\n> change a pg_control level state of a running cluster. There might be ways to\n> make the current patch less complex, while not sacrificing stability, and if so\n> that would be great. A lot of of the complexity came from being able to\n> restart processing, and that's not removed for this version, but it's clearly\n> not close to a one-line-diff even without it.\n> \n\nI'd push back on this a little bit - the patch looks like this:\n\n 50 files changed, 2691 insertions(+), 48 deletions(-)\n\nand if we ignore the docs / perl tests, then the two parts that stand\nout are\n\n src/backend/access/transam/xlog.c | 455 +++++-\n src/backend/postmaster/datachecksumsworker.c | 1353 +++++++++++++++++\n\nI don't think the worker code is exceptionally complex. Yes, it's not\ntrivial, but a lot of the 1353 inserts is comments (which is good) or\ngeneric infrastructure to start the worker etc.\n\n> Other complaints were addressed, in part by the invention of procsignalbarriers\n> which makes this synchronization possible. In re-reading the threads I might\n> have missed something which is still left open, and if so I do apologize for\n> that.\n> \n> \n> Open TODO items:\n> ================\n> * Immediate checkpoints - the code is currently using CHECKPOINT_IMMEDIATE in\n> order to be able to run the tests in a timely manner on it. This is overly\n> aggressive and dialling it back while still being able to run fast tests is a\n> TODO. Not sure what the best option is there.\n> \n\nWhy not to add a parameter to pg_enable_data_checksums(), specifying\nwhether to do immediate checkpoint or wait for the next one? AFAIK\nthat's what we do in pg_backup_start, for example.\n\n> * Monitoring - an insightful off-list reviewer asked how the current progress\n> of the operation is monitored. So far I've been using pg_stat_activity but I\n> don't disagree that it's not a very sharp tool for this. Maybe we need a\n> specific function or view or something? There clearly needs to be a way for a\n> user to query state and progress of a transition.\n> \n\nYeah, I think a view like pg_stat_progress_checksums would work.\n\n> * Throttling - right now the patch uses the vacuum access strategy, with the\n> same cost options as vacuum, in order to implement throttling. This is in part\n> due to the patch starting out modelled around autovacuum as a worker, but it\n> may not be the right match for throttling checksums.\n> \n\nIMHO it's reasonable to reuse the vacuum throttling. Even if it's not\nperfect, it does not seem great to invent something new and end up with\ntwo different ways to throttle stuff.\n\n> * Naming - the in-between states when data checksums are enabled or disabled\n> are called inprogress-on and inprogress-off. The reason for this is simply\n> that early on there were only three states: inprogress, on and off, and the\n> process of disabling wasn't labeled with a state. When this transition state\n> was added it seemed like a good idea to tack the end-goal onto the transition.\n> These state names make the code easily greppable but might not be the most\n> obvious choices for anything user facing. 
Is \"Enabling\" and \"Disabling\" better\n> terms to use (across the board or just user facing) or should we stick to the\n> current?\n> \n\nI think the naming is fine. In the worst case we can rename that later,\nseems more like a detail.\n\n> There are ways in which this processing can be optimized to achieve better\n> performance, but in order to keep goalposts in sight and patchsize down they\n> are left as future work.\n> \n\n+1\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 3 Jul 2024 13:20:10 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changing the state of data checksums in a running cluster" }, { "msg_contents": "On Wed, Jul 3, 2024 at 01:20:10PM +0200, Tomas Vondra wrote:\n> > * Restartability - the initial version of the patch did not support stateful\n> > restarts, a shutdown performed (or crash) before checksums were enabled would\n> > result in a need to start over from the beginning. This was deemed the safe\n> > orchestration method. The lack of this feature was seen as serious drawback,\n> > so it was added. Subsequent review instead found the patch to be too\n> > complicated with a too large featureset. I thihk there is merit to both of\n> > these arguments: being able to restart is a great feature; and being able to\n> > reason about the correctness of a smaller patch is also great. As of this\n> > submission I have removed the ability to restart to keep the scope of the patch\n> > small (which is where the previous version was, which received no review after\n> > the removal). The way I prefer to frame this is to first add scaffolding and\n> > infrastructure (this patch) and leave refinements and add-on features\n> > (restartability, but also others like parallel workers, optimizing rare cases,\n> > etc) for follow-up patches.\n> > \n> \n> I 100% support this approach.\n\nYes, I was very disappointed when restartability sunk the patch, and I\nsaw this as another case where saying \"yes\" to every feature improvement\ncan lead to failure.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 5 Jul 2024 22:23:35 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changing the state of data checksums in a running cluster" }, { "msg_contents": "> On 3 Jul 2024, at 13:20, Tomas Vondra <[email protected]> wrote:\n\n> Thanks for rebasing the patch and submitting it again!\n\nThanks for review, sorry for being so slow to pick this up again.\n\nThe attached version is a rebase with some level of cleanup and polish all\naround, and most importantly it adresses the two points raised below.\n\n>> * Immediate checkpoints - the code is currently using CHECKPOINT_IMMEDIATE in\n>> order to be able to run the tests in a timely manner on it. This is overly\n>> aggressive and dialling it back while still being able to run fast tests is a\n>> TODO. Not sure what the best option is there.\n> \n> Why not to add a parameter to pg_enable_data_checksums(), specifying\n> whether to do immediate checkpoint or wait for the next one? 
AFAIK\n> that's what we do in pg_backup_start, for example.\n\nThat's a good idea, pg_enable_data_checksums now accepts a third parameter\n\"fast\" (defaults to false) which will enable immediate checkpoints when true.\n\n>> * Monitoring - an insightful off-list reviewer asked how the current progress\n>> of the operation is monitored. So far I've been using pg_stat_activity but I\n>> don't disagree that it's not a very sharp tool for this. Maybe we need a\n>> specific function or view or something? There clearly needs to be a way for a\n>> user to query state and progress of a transition.\n> \n> Yeah, I think a view like pg_stat_progress_checksums would work.\n\nAdded in the attached version. It probably needs some polish (the docs for\nsure do) but it's at least a start.\n\n--\nDaniel Gustafsson", "msg_date": "Mon, 30 Sep 2024 23:21:30 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Changing the state of data checksums in a running cluster" }, { "msg_contents": "Hi,\n\nOn Mon, Sep 30, 2024 at 11:21:30PM +0200, Daniel Gustafsson wrote:\n> > Yeah, I think a view like pg_stat_progress_checksums would work.\n> \n> Added in the attached version. It probably needs some polish (the docs for\n> sure do) but it's at least a start.\n\nJust a nitpick, but we call it data_checksums about everywhere, but the\nnew view is called pg_stat_progress_datachecksums - I think\npg_stat_progress_data_checksums would look better even if it gets quite\nlong.\n\n\nMichael\n\n\n", "msg_date": "Tue, 1 Oct 2024 00:43:49 +0200", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Changing the state of data checksums in a running cluster" } ]
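A rough sketch of the procsignalbarrier-based switchover described in this thread, not the patch's actual code. EmitProcSignalBarrier() and WaitForProcSignalBarrier() are the existing primitives in src/backend/storage/ipc/procsignal.c; the PROCSIGNAL_BARRIER_DATA_CHECKSUMS barrier type and the function name are assumptions made up for illustration, since the real patch has to add its own barrier type and absorb callback:

#include "postgres.h"
#include "storage/procsignal.h"

/* illustration only: barrier type and function name are hypothetical */
static void
SetDataChecksumsOn(void)
{
	uint64		barrier;

	/* after updating the shared checksum state to "on" ... */

	/* ... wait until every live backend has absorbed the new state */
	barrier = EmitProcSignalBarrier(PROCSIGNAL_BARRIER_DATA_CHECKSUMS);
	WaitForProcSignalBarrier(barrier);
}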
[ { "msg_contents": "Hello hackers,\n\nI have discovered a peculiar behavior in mul_var() when it is called with\nrscale=0, but the input variables have many decimal digits, resulting in a\nproduct with a .5 decimal part. Given that no decimals are requested by the\ncaller, I would expect the result to be rounded up. However, it is instead\ntruncated or rounded down.\n\nTo investigate this further, I added an SQL-callable function,\nnumeric_mul_patched(), which takes rscale_adjustment as a third input parameter.\nThis allows calling mul_var() with varying rscale values.\n\nHere is an example demonstrating the issue:\n\nSELECT 5.51574061794 * 0.99715659165;\n5.5000571150105152442010 -- exact result\n\nSELECT numeric_mul_patched(5.51574061794,0.99715659165,-22);\n\nThe output debug information before and after modifying MUL_GUARD_DIGITS\nfrom 2 to 3 is as follows:\n\n-#define MUL_GUARD_DIGITS 2\n+#define MUL_GUARD_DIGITS 3\n\n-make_result(): NUMERIC w=0 d=11 POS 0005 5157 4061 7940\n+make_result(): NUMERIC w=0 d=11 POS 0005 5157 4061 7940\n-make_result(): NUMERIC w=-1 d=11 POS 9971 5659 1650\n+make_result(): NUMERIC w=-1 d=11 POS 9971 5659 1650\n-before round_var: VAR w=1 d=0 POS 0000 0005 4999 8742\n+before round_var: VAR w=1 d=0 POS 0000 0005 5000 5710 3944\n-after round_var: VAR w=1 d=0 POS 0000 0005\n+after round_var: VAR w=1 d=0 POS 0000 0006\n-make_result(): NUMERIC w=0 d=0 POS 0005\n+make_result(): NUMERIC w=0 d=0 POS 0006\n numeric_mul_patched\n---------------------\n- 5\n+ 6\n(1 row)\n\nAs shown above, changing MUL_GUARD_DIGITS from 2 to 3 corrects the rounding\nerror, ensuring the correct result of 6 instead of 5.\n\nAlthough this is likely only a potential issue as current callers of mul_var()\ndon't use it in this specific way, it would be beneficial to fix it for\ncorrectness and ensure mul_var() adheres to its contract.\n\nI encountered this issue while working on an optimization of mul_var() in a\ndifferent thread. Initially, I thought there was an error in my code due to\nthe different result, but I later realized it was a rounding error in the\noriginal code.\n\nNot submitting a patch yet, since I might have misunderstood something here.\nMaybe this is all fine after all. Guidance appreciated.\n\nRegards,\nJoel\n\n\n", "msg_date": "Wed, 03 Jul 2024 09:35:07 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "numeric.c: Should MUL_GUARD_DIGITS be increased from 2 to 3?" }, { "msg_contents": "On Wed, 3 Jul 2024 at 08:36, Joel Jacobson <[email protected]> wrote:\n>\n> Hello hackers,\n>\n> I have discovered a peculiar behavior in mul_var() when it is called with\n> rscale=0, but the input variables have many decimal digits, resulting in a\n> product with a .5 decimal part. Given that no decimals are requested by the\n> caller, I would expect the result to be rounded up. However, it is instead\n> truncated or rounded down.\n>\n> To investigate this further, I added an SQL-callable function,\n> numeric_mul_patched(), which takes rscale_adjustment as a third input parameter.\n> This allows calling mul_var() with varying rscale values.\n>\n> Here is an example demonstrating the issue:\n>\n> SELECT 5.51574061794 * 0.99715659165;\n> 5.5000571150105152442010 -- exact result\n>\n\nNo, as I mentioned on the other thread, that's not a bug, but perhaps\nit's worth mentioning in the function's comment that it doesn't\nguarantee correctly rounded results when rscale < var1->dscale +\nvar2->dscale. 
What it does do is \"good enough\" for the transcendental\nfunctions that use it in this way, which are themselves inexact.\n\nNote that increasing MUL_GUARD_DIGITS won't guarantee correctly\nrounded results, no matter how much you increase it by. The only way\nto guarantee that the result is correctly rounded in all cases is to\ncompute the full result and then round, which would be a lot slower.\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 3 Jul 2024 12:40:00 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: numeric.c: Should MUL_GUARD_DIGITS be increased from 2 to 3?" } ]
[ { "msg_contents": "Re-reading Nathans recent 8213df9effaf I noticed a few more small things which\ncan be cleaned up. In two of the get<Object> functions we lack a fast-path for\nwhen no tuples are found which leads to pg_malloc(0) calls. Another thing is\nthat we in one place reset the PQExpBuffer immediately after creating it which\nisn't required.\n\n--\nDaniel Gustafsson", "msg_date": "Wed, 3 Jul 2024 09:37:32 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Additional minor pg_dump cleanups" }, { "msg_contents": "Em qua., 3 de jul. de 2024 às 04:37, Daniel Gustafsson <[email protected]>\nescreveu:\n\n> Re-reading Nathans recent 8213df9effaf I noticed a few more small things\n> which\n> can be cleaned up. In two of the get<Object> functions we lack a\n> fast-path for\n> when no tuples are found which leads to pg_malloc(0) calls. Another thing\n> is\n> that we in one place reset the PQExpBuffer immediately after creating it\n> which\n> isn't required.\n>\n0001 Looks good to me.\n\n0002:\nWith the function *getPublications* I think it would be good to free up the\nallocated memory?\n\n }\n+ pg_free(pubinfo);\n+cleanup:\n PQclear(res);\n\nWith the function *getExtensions* I think it would be good to return NULL\nin case ntups = 0?\nOtherwise we may end up with an uninitialized variable.\n\n- ExtensionInfo *extinfo;\n+ ExtensionInfo *extinfo = NULL;\n\nFunny, the function *getExtensionMembership* does not use the parameter\nExtensionInfo extinfo.\ngetExtensions does not have another caller, Is it really necessary?\n\nbest regards,\nRanier Vilela\n\nEm qua., 3 de jul. de 2024 às 04:37, Daniel Gustafsson <[email protected]> escreveu:Re-reading Nathans recent 8213df9effaf I noticed a few more small things which\ncan be cleaned up.  In two of the get<Object> functions we lack a fast-path for\nwhen no tuples are found which leads to pg_malloc(0) calls.  Another thing is\nthat we in one place reset the PQExpBuffer immediately after creating it which\nisn't required.0001 Looks good to me.0002:With the function *getPublications* I think it would be good to free up the allocated memory?      }+     pg_free(pubinfo);+cleanup:     \tPQclear(res);With the function *getExtensions* I think it would be good to return NULL in case ntups = 0?Otherwise we may end up with an uninitialized variable.-\tExtensionInfo *extinfo;+ \tExtensionInfo *extinfo = NULL;Funny, the function *getExtensionMembership* does not use the parameter ExtensionInfo extinfo.getExtensions does not have another caller, Is it really necessary?best regards,Ranier Vilela", "msg_date": "Wed, 3 Jul 2024 08:29:15 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Additional minor pg_dump cleanups" }, { "msg_contents": "> On 3 Jul 2024, at 13:29, Ranier Vilela <[email protected]> wrote:\n\n> With the function *getPublications* I think it would be good to free up the allocated memory? 
\n> \n> }\n> + pg_free(pubinfo);\n> +cleanup:\n> PQclear(res);\n\nSince the pubinfo is recorded in the DumpableObject and is responsible for\nkeeping track of which publications to dump, it would be quite incorrect to\nfree it here.\n\n> With the function *getExtensions* I think it would be good to return NULL in case ntups = 0?\n> Otherwise we may end up with an uninitialized variable.\n> \n> - ExtensionInfo *extinfo;\n> + ExtensionInfo *extinfo = NULL;\n\nI guess that won't hurt, though any code inspecting extinfo when numExtensions\nis returned as zero is flat-out wrong. It may however silence a static\nanalyzer so there is that.\n\n> Funny, the function *getExtensionMembership* does not use the parameter ExtensionInfo extinfo.\n> getExtensions does not have another caller, Is it really necessary?\n\nYes, see processExtensionTables().\n\n--\nDaniel Gustafsson", "msg_date": "Thu, 4 Jul 2024 10:17:49 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Additional minor pg_dump cleanups" }, { "msg_contents": "Em qui., 4 de jul. de 2024 às 05:18, Daniel Gustafsson <[email protected]>\nescreveu:\n\n> > On 3 Jul 2024, at 13:29, Ranier Vilela <[email protected]> wrote:\n>\n> > With the function *getPublications* I think it would be good to free up\n> the allocated memory?\n> >\n> > }\n> > + pg_free(pubinfo);\n> > +cleanup:\n> > PQclear(res);\n>\n> Since the pubinfo is recorded in the DumpableObject and is responsible for\n> keeping track of which publications to dump, it would be quite incorrect to\n> free it here.\n>\n> > With the function *getExtensions* I think it would be good to return\n> NULL in case ntups = 0?\n> > Otherwise we may end up with an uninitialized variable.\n> >\n> > - ExtensionInfo *extinfo;\n> > + ExtensionInfo *extinfo = NULL;\n>\n> I guess that won't hurt, though any code inspecting extinfo when\n> numExtensions\n> is returned as zero is flat-out wrong. It may however silence a static\n> analyzer so there is that.\n>\n> > Funny, the function *getExtensionMembership* does not use the parameter\n> ExtensionInfo extinfo.\n> > getExtensions does not have another caller, Is it really necessary?\n>\n> Yes, see processExtensionTables().\n>\nI saw, thanks.\n\nLGTM.\n\nbest regards,\nRanier Vilela\n\nEm qui., 4 de jul. de 2024 às 05:18, Daniel Gustafsson <[email protected]> escreveu:> On 3 Jul 2024, at 13:29, Ranier Vilela <[email protected]> wrote:\n\n> With the function *getPublications* I think it would be good to free up the allocated memory? \n> \n>      }\n> +     pg_free(pubinfo);\n> +cleanup:\n>       PQclear(res);\n\nSince the pubinfo is recorded in the DumpableObject and is responsible for\nkeeping track of which publications to dump, it would be quite incorrect to\nfree it here.\n\n> With the function *getExtensions* I think it would be good to return NULL in case ntups = 0?\n> Otherwise we may end up with an uninitialized variable.\n> \n> - ExtensionInfo *extinfo;\n> + ExtensionInfo *extinfo = NULL;\n\nI guess that won't hurt, though any code inspecting extinfo when numExtensions\nis returned as zero is flat-out wrong.  
It may however silence a static\nanalyzer so there is that.\n\n> Funny, the function *getExtensionMembership* does not use the parameter ExtensionInfo extinfo.\n> getExtensions does not have another caller, Is it really necessary?\n\nYes, see processExtensionTables().I saw, thanks.LGTM.best regards,Ranier Vilela", "msg_date": "Thu, 4 Jul 2024 09:49:53 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Additional minor pg_dump cleanups" } ]
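For readers without the patch at hand, the getPublications() fast path under discussion has roughly this shape. Names follow pg_dump.c, but the exact code lives in the attached patch, so treat this fragment as an approximation only:

	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);

	ntups = PQntuples(res);
	if (ntups == 0)
		goto cleanup;		/* nothing to dump: skip the pg_malloc(0) call */

	pubinfo = pg_malloc(ntups * sizeof(PublicationInfo));

	for (i = 0; i < ntups; i++)
	{
		/* fill in pubinfo[i] and register the DumpableObject as before */
	}

cleanup:
	PQclear(res);
	destroyPQExpBuffer(query);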
[ { "msg_contents": "Hi,\n\nI found that pg_wal_summary_contents() may miss some results that pg_walsummary returns for the same WAL summary file. Here are the steps to reproduce the issue:\n\n-----------------------------\ninitdb -D data\necho \"summarize_wal = on\" >> data/postgresql.conf\npg_ctl -D data start\npsql <<EOF\nCREATE TABLE t AS SELECT n i, n j FROM generate_series(1, 1000) n;\nDELETE FROM t;\nCHECKPOINT;\nVACUUM t;\nCHECKPOINT;\nSELECT foo.* FROM (SELECT * FROM pg_available_wal_summaries() ORDER BY start_lsn DESC LIMIT 1) JOIN LATERAL pg_wal_summary_contents(tli, start_lsn, end_lsn) foo ON true;\nEOF\npg_walsummary -i data/pg_wal/summaries/$(ls -1 data/pg_wal/summaries/ | tail -1)\n-----------------------------\n\nIn my test, pg_walsummary returned three records:\n\nTS 1663, DB 5, REL 1259, FORK main: block 0\nTS 1663, DB 5, REL 16384, FORK main: limit 0\nTS 1663, DB 5, REL 16384, FORK vm: limit 0\n\nHowever, pg_wal_summary_contents() returned only one record:\n\n relfilenode | reltablespace | reldatabase | relforknumber | relblocknumber | is_limit_block\n-------------+---------------+-------------+---------------+----------------+----------------\n 1259 | 1663 | 5 | 0 | 0 | f\n\npg_wal_summary_contents() seems to miss the summary information with \"limit\" that pg_walsummary reports. This appears to be a bug. The attached patch fixes this.\n\nBy the way, pg_wal_summary_contents() and pg_walsummary perform nearly the same task but are implemented in different functions. This could be the root of issues like this. In the future, it would be better to have a common function for outputting the WAL summary file that both can use.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 3 Jul 2024 18:33:44 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "pg_wal_summary_contents() and pg_walsummary may return different\n results on the same WAL summary file" }, { "msg_contents": "On Wed, Jul 3, 2024 at 5:34 AM Fujii Masao <[email protected]> wrote:\n> pg_wal_summary_contents() seems to miss the summary information with \"limit\" that pg_walsummary reports. This appears to be a bug. The attached patch fixes this.\n\nOops. It looks like pg_wal_summary_contents() forgets to emit the\nlimit block when that's the only data for a particular relation fork.\nAnd maybe you can make it emit the limit block multiple times if the\nlist of block numbers is long enough.\n\nThanks for the patch. I think you can commit and back-patch this, but\nI don't think the commit message is quite right, because it's not like\nthis code just NEVER executes where it is located currently. Or am I\nmissing something?\n\n> By the way, pg_wal_summary_contents() and pg_walsummary perform nearly the same task but are implemented in different functions. This could be the root of issues like this. In the future, it would be better to have a common function for outputting the WAL summary file that both can use.\n\nIt's entirely possible that, with some refactoring, more code could be\nshared. I tried to make all of the blkreftable stuff reusable, but I\ndidn't pay as much attention to synchronizing up the various users of\nit. However, the fact that pg_walsummary is frontend code and\npg_wal_summary_contents() is backend code does make it hard to get\nperfect reuse. 
I think if you go through pg_wal_summary_contents(),\nyou'll find that almost every line of that function contains something\nbackend-specific.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Jul 2024 09:42:33 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_wal_summary_contents() and pg_walsummary may return different\n results on the same WAL summary file" }, { "msg_contents": "On 2024/07/03 22:42, Robert Haas wrote:\n> On Wed, Jul 3, 2024 at 5:34 AM Fujii Masao <[email protected]> wrote:\n>> pg_wal_summary_contents() seems to miss the summary information with \"limit\" that pg_walsummary reports. This appears to be a bug. The attached patch fixes this.\n> \n> Oops. It looks like pg_wal_summary_contents() forgets to emit the\n> limit block when that's the only data for a particular relation fork.\n> And maybe you can make it emit the limit block multiple times if the\n> list of block numbers is long enough.\n> \n> Thanks for the patch. I think you can commit and back-patch this, but\n> I don't think the commit message is quite right, because it's not like\n> this code just NEVER executes where it is located currently. Or am I\n> missing something?\n\nYes, so I updated the commit message. I borrowed your description and used it in the message. Attached is the revised version of the patch.\n\nIf there are no objections, I will commit and backpatch it.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 4 Jul 2024 19:16:16 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_wal_summary_contents() and pg_walsummary may return different\n results on the same WAL summary file" }, { "msg_contents": "On Thu, Jul 4, 2024 at 6:16 AM Fujii Masao <[email protected]> wrote:\n> Yes, so I updated the commit message. I borrowed your description and used it in the message. Attached is the revised version of the patch.\n>\n> If there are no objections, I will commit and backpatch it.\n\n+1. Maybe change \"Fix bugs in pg_wal_summary_contents()\" to \"Fix limit\nblock handling in pg_wal_summary_contents()\".\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Jul 2024 09:50:52 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_wal_summary_contents() and pg_walsummary may return different\n results on the same WAL summary file" }, { "msg_contents": "\n\nOn 2024/07/08 22:50, Robert Haas wrote:\n> On Thu, Jul 4, 2024 at 6:16 AM Fujii Masao <[email protected]> wrote:\n>> Yes, so I updated the commit message. I borrowed your description and used it in the message. Attached is the revised version of the patch.\n>>\n>> If there are no objections, I will commit and backpatch it.\n> \n> +1. Maybe change \"Fix bugs in pg_wal_summary_contents()\" to \"Fix limit\n> block handling in pg_wal_summary_contents()\".\n\nThanks! I've pushed the patch and used your wording in the commit message.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 9 Jul 2024 09:32:47 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_wal_summary_contents() and pg_walsummary may return different\n results on the same WAL summary file" } ]
[ { "msg_contents": "Hello. BYTEA type has the ability to use comparison operations. But it\nis absent of min/max aggregate functions. They are nice to have to\nprovide consistency with the TEXT type.\n\n\n--\n\nWith best regards,\nMarat Bukharov", "msg_date": "Wed, 3 Jul 2024 16:03:38 +0300", "msg_from": "Marat Buharov <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH] Add min/max aggregate functions to BYTEA" }, { "msg_contents": "V2 patch with fixed tests\n\n--\n\nWith best regards,\nMarat Bukharov\n\nср, 3 июл. 2024 г. в 16:03, Marat Buharov <[email protected]>:\n>\n> Hello. BYTEA type has the ability to use comparison operations. But it\n> is absent of min/max aggregate functions. They are nice to have to\n> provide consistency with the TEXT type.\n>\n>\n> --\n>\n> With best regards,\n> Marat Bukharov", "msg_date": "Wed, 3 Jul 2024 17:17:39 +0300", "msg_from": "Marat Buharov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add min/max aggregate functions to BYTEA" }, { "msg_contents": "V3 patch with fixed length comparison\n\n--\n\nWith best regards,\nMarat Bukharov\n\n>\n> V2 patch with fixed tests\n>\n> >\n> > Hello. BYTEA type has the ability to use comparison operations. But it\n> > is absent of min/max aggregate functions. They are nice to have to\n> > provide consistency with the TEXT type.\n> >\n\n\n", "msg_date": "Wed, 3 Jul 2024 17:54:24 +0300", "msg_from": "Marat Bukharov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add min/max aggregate functions to BYTEA" }, { "msg_contents": "V3 patch with fixed length comparison\n\n\n--\n\nWith best regards,\nMarat Bukharov\n\n>\n> V2 patch with fixed tests\n>\n> >\n> > Hello. BYTEA type has the ability to use comparison operations. But it\n> > is absent of min/max aggregate functions. They are nice to have to\n> > provide consistency with the TEXT type.\n> >", "msg_date": "Wed, 3 Jul 2024 17:55:54 +0300", "msg_from": "Marat Bukharov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add min/max aggregate functions to BYTEA" }, { "msg_contents": "V4 path with fixed usage PG_GETARG_BYTEA_PP instead of PG_GETARG_TEXT_PP\n\n--\n\nWith best regards,\nMarat Bukharov\n\n>\n> V3 patch with fixed length comparison\n>\n> >\n> > V2 patch with fixed tests\n> >\n> > >\n> > > Hello. BYTEA type has the ability to use comparison operations. But it\n> > > is absent of min/max aggregate functions. They are nice to have to\n> > > provide consistency with the TEXT type.\n> > >", "msg_date": "Wed, 3 Jul 2024 19:04:40 +0300", "msg_from": "Marat Bukharov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add min/max aggregate functions to BYTEA" }, { "msg_contents": "Hi Marat,\n\n> V4 path with fixed usage PG_GETARG_BYTEA_PP instead of PG_GETARG_TEXT_PP\n\nThanks for the patch.\n\nPlease add it to the nearest open commitfest [1].\n\n```\n+select min(v) from bytea_test_table;\n+ min\n+------\n+ \\xaa\n+(1 row)\n+\n+select max(v) from bytea_test_table;\n+ max\n+------\n+ \\xff\n+(1 row)\n```\n\nIf I understand correctly, all the v's are of the same size. 
If this\nis the case you should add more test cases.\n\n[1]: https://commitfest.postgresql.org/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 4 Jul 2024 15:29:09 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add min/max aggregate functions to BYTEA" }, { "msg_contents": "What part of commitfest should I put the current patch to: \"SQL\nCommands\", \"Miscellaneous\" or something else? I can't figure it out.\n\n--\n\nWith best regards,\nMarat Bukharov\n\n> Hi Marat,\n>\n> > V4 path with fixed usage PG_GETARG_BYTEA_PP instead of PG_GETARG_TEXT_PP\n>\n> Thanks for the patch.\n>\n> Please add it to the nearest open commitfest [1].\n>\n> ```\n> +select min(v) from bytea_test_table;\n> + min\n> +------\n> + \\xaa\n> +(1 row)\n> +\n> +select max(v) from bytea_test_table;\n> + max\n> +------\n> + \\xff\n> +(1 row)\n> ```\n>\n> If I understand correctly, all the v's are of the same size. If this\n> is the case you should add more test cases.\n>\n> [1]: https://commitfest.postgresql.org/\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n\n\n", "msg_date": "Fri, 5 Jul 2024 22:43:56 +0300", "msg_from": "Marat Bukharov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add min/max aggregate functions to BYTEA" }, { "msg_contents": "Hi,\n\n> What part of commitfest should I put the current patch to: \"SQL\n> Commands\", \"Miscellaneous\" or something else? I can't figure it out.\n\nPersonally I qualified a similar patch [1] as \"Server Features\",\nalthough I'm not 100% sure if this was the best choice.\n\n[1]: https://commitfest.postgresql.org/48/4905/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 8 Jul 2024 12:06:29 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add min/max aggregate functions to BYTEA" }, { "msg_contents": "V5 patch. I've added more tests with different bytea sizes\n\n--\n\nWith best regards,\nMarat Bukharov\n\nчт, 4 июл. 2024 г. в 15:29, Aleksander Alekseev <[email protected]>:\n>\n> Hi Marat,\n>\n> > V4 path with fixed usage PG_GETARG_BYTEA_PP instead of PG_GETARG_TEXT_PP\n>\n> Thanks for the patch.\n>\n> Please add it to the nearest open commitfest [1].\n>\n> ```\n> +select min(v) from bytea_test_table;\n> + min\n> +------\n> + \\xaa\n> +(1 row)\n> +\n> +select max(v) from bytea_test_table;\n> + max\n> +------\n> + \\xff\n> +(1 row)\n> ```\n>\n> If I understand correctly, all the v's are of the same size. If this\n> is the case you should add more test cases.\n>\n> [1]: https://commitfest.postgresql.org/\n>\n> --\n> Best regards,\n> Aleksander Alekseev", "msg_date": "Wed, 24 Jul 2024 17:42:11 +0300", "msg_from": "Marat Bukharov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add min/max aggregate functions to BYTEA" }, { "msg_contents": "Added patch to commitfest https://commitfest.postgresql.org/49/5138/\n\n--\n\nWith best regards,\nMarat Bukharov\n\n>\n> Hi,\n>\n> > What part of commitfest should I put the current patch to: \"SQL\n> > Commands\", \"Miscellaneous\" or something else? 
I can't figure it out.\n>\n> Personally I qualified a similar patch [1] as \"Server Features\",\n> although I'm not 100% sure if this was the best choice.\n>\n> [1]: https://commitfest.postgresql.org/48/4905/\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n\n\n", "msg_date": "Wed, 24 Jul 2024 17:47:01 +0300", "msg_from": "Marat Bukharov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add min/max aggregate functions to BYTEA" }, { "msg_contents": "\n\n> On 24 Jul 2024, at 17:42, Marat Bukharov <[email protected]> wrote:\n> \n> V5 patch. I've added more tests with different bytea sizes\n\nHi Marat!\n\nThis looks like a nice feature to have.\n\nI’ve took a look into the patch and have few suggestions:\n0. Please write more descriptive commit message akin to [0]\n1. Use oids from development range 8000-9999\n2. Replace VARDATA_ANY\\memcmp dance with a call to varstrfastcmp_c().\n\nThank you!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://github.com/postgres/postgres/commit/a0f1fce80c03\n\n", "msg_date": "Fri, 2 Aug 2024 12:20:04 +0300", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add min/max aggregate functions to BYTEA" }, { "msg_contents": "\"Andrey M. Borodin\" <[email protected]> writes:\n> 0. Please write more descriptive commit message akin to [0]\n> 1. Use oids from development range 8000-9999\n\nYeah, we don't try anymore to manually select permanent oids [1].\n\n> 2. Replace VARDATA_ANY\\memcmp dance with a call to varstrfastcmp_c().\n\nI don't agree with that recommendation in the slightest: it's a\nfundamental type pun for bytea to piggyback on text/varchar functions.\nIt risks bugs in bytea due to somebody inserting collation dependencies\ninto those functions. It also creates special cases that those\nfunctions shouldn't have to cope with, see e.g. comment for\nvarstr_sortsupport about how we have to allow NUL bytes there but\nonly if it's C locale. That's a laughably rickety bit of coding.\n\nI see that somebody already made such a pun in bytea_sortsupport,\nbut that was a bad idea that we should undo not double down on.\n\nI wonder if we shouldn't pull all the bytea support functions out\nof varlena.c (say into a new file bytea.c), to discourage such\ngamesmanship in the future.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/devel/system-catalog-initial-data.html#SYSTEM-CATALOG-OID-ASSIGNMENT\n\n\n", "msg_date": "Tue, 03 Sep 2024 17:19:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add min/max aggregate functions to BYTEA" }, { "msg_contents": "On Thu, 25 Jul 2024 at 02:42, Marat Bukharov <[email protected]> wrote:\n> V5 patch. I've added more tests with different bytea sizes\n\nI just glanced over this patch. Are you still planning on working on\nit? There's been no adjustments made since the last feedback you got\nin early August.\n\nCan you address Andrey's feedback on point #1?\n\nAlso, for bytea_larger() and bytea_smaller(), I suggest copying what's\nbeen done in record_larger() and record_smaller() except use\nbyteacmp(). That'll remove all the duplicated code.\n\nIf you fix those up, I see no reason not to commit the patch.\n\nDavid\n\n\n", "msg_date": "Mon, 23 Sep 2024 13:05:37 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add min/max aggregate functions to BYTEA" } ]
[ { "msg_contents": "Hi,\n\nThe documentation states that \"WAL summarization cannot be enabled when wal_level is set to minimal.\" Therefore, at startup, the postmaster checks these settings and exits with an error if they are not configured properly.\n\nHowever, I found that summarize_wal can still be enabled while the server is running with wal_level=minimal. Please see the following example to cause this situation. I think this is a bug.\n\n\n=# SHOW wal_level;\n wal_level\n-----------\n minimal\n(1 row)\n\n=# SELECT * FROM pg_get_wal_summarizer_state();\n summarized_tli | summarized_lsn | pending_lsn | summarizer_pid\n----------------+----------------+-------------+----------------\n 0 | 0/0 | 0/0 | (null)\n(1 row)\n\n=# ALTER SYSTEM SET summarize_wal TO on;\nALTER SYSTEM\n\n=# SELECT pg_reload_conf();\n pg_reload_conf\n----------------\n t\n(1 row)\n\n=# SELECT * FROM pg_get_wal_summarizer_state();\n summarized_tli | summarized_lsn | pending_lsn | summarizer_pid\n----------------+----------------+-------------+----------------\n 1 | 0/1492D80 | 0/1492DF8 | 12228\n(1 row)\n\n\nThe attached patch adds a GUC check hook to ensure summarize_wal cannot be enabled when wal_level is minimal, fixing the issue.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 3 Jul 2024 23:08:48 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Add a GUC check hook to ensure summarize_wal cannot be enabled when\n wal_level is minimal" }, { "msg_contents": "On Wed, Jul 03, 2024 at 11:08:48PM +0900, Fujii Masao wrote:\n> +/*\n> + * GUC check_hook for summarize_wal\n> + */\n> +bool\n> +check_summarize_wal(bool *newval, void **extra, GucSource source)\n> +{\n> +\tif (*newval && wal_level == WAL_LEVEL_MINIMAL)\n> +\t{\n> +\t\tGUC_check_errmsg(\"WAL cannot be summarized when \\\"wal_level\\\" is \\\"minimal\\\"\");\n> +\t\treturn false;\n> +\t}\n> +\treturn true;\n> +}\n\nIME these sorts of GUC hooks that depend on the value of other GUCs tend to\nbe quite fragile. This one might be okay because wal_level defaults to\n'replica' and because there is an additional check in postmaster.c, but\nthat at least deserves a comment.\n\nThis sort of thing comes up enough that perhaps we should add a\nbetter-supported way to deal with GUCs that depend on each other...\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 3 Jul 2024 09:29:50 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "> On 3 Jul 2024, at 16:29, Nathan Bossart <[email protected]> wrote:\n\n> This sort of thing comes up enough that perhaps we should add a\n> better-supported way to deal with GUCs that depend on each other...\n\n+1\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 3 Jul 2024 16:41:40 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Wed, 3 Jul 2024 at 16:30, Nathan Bossart <[email protected]> wrote:\n> IME these sorts of GUC hooks that depend on the value of other GUCs tend to\n> be quite fragile. 
This one might be okay because wal_level defaults to\n> 'replica' and because there is an additional check in postmaster.c, but\n> that at least deserves a comment.\n\nYeah, this hook only works because wal_level isn't PGC_SIGHUP and\nindeed because there's a check in postmaster.c. It now depends on the\nordering of these values in your config which place causes the error\nmessage on startup.\n\nThis hits the already existing check:\nsummarize_wal = 'true'\nwal_sumarizer = 'minimal'\n\nThis hits the new check:\nsummarize_wal = 'true'\nwal_sumarizer = 'minimal'\n\nAnd actually this would throw an error from the new check even though\nthe config is fine:\n\nwal_sumarizer = 'minimal'\nsummarize_wal = 'true'\nwal_sumarizer = 'logical'\n\n> This sort of thing comes up enough that perhaps we should add a\n> better-supported way to deal with GUCs that depend on each other...\n\n+1. Sounds like we need a global GUC consistency check\n\n\n", "msg_date": "Wed, 3 Jul 2024 16:45:10 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "Ugh copy paste mistake, this is what I meant\n\nOn Wed, 3 Jul 2024 at 16:45, Jelte Fennema-Nio <[email protected]> wrote:\n> This hits the already existing check:\n> summarize_wal = 'true'\n> wal_level = 'minimal'\n>\n> This hits the new check:\n> summarize_wal = 'true'\n> wal_level = 'minimal'\n>\n> And actually this would throw an error from the new check even though\n> the config is fine:\n>\n> wal_level = 'minimal'\n> summarize_wal = 'true'\n> wal_level = 'logical'\n\n\n", "msg_date": "Wed, 3 Jul 2024 16:46:21 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Wed, 3 Jul 2024 at 16:46, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> Ugh copy paste mistake, this is what I meant\n>\n> On Wed, 3 Jul 2024 at 16:45, Jelte Fennema-Nio <[email protected]> wrote:\n> > This hits the already existing check:\n> > summarize_wal = 'true'\n> > wal_level = 'minimal'\n> >\n> > This hits the new check:\n> > wal_level = 'minimal'\n> > summarize_wal = 'true'\n> >\n> > And actually this would throw an error from the new check even though\n> > the config is fine:\n> >\n> > wal_level = 'minimal'\n> > summarize_wal = 'true'\n> > wal_level = 'logical'\n\nOkay... fixed one more copy paste mistake... (I blame end-of-day brain)\n\n\n", "msg_date": "Wed, 3 Jul 2024 17:01:40 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> This sort of thing comes up enough that perhaps we should add a\n> better-supported way to deal with GUCs that depend on each other...\n\nYeah, a GUC check hook that tries to inspect the value of some\nother GUC is generally going to create more problems than it\nsolves; we've learned that the hard way in the past. 
We have\nyour patch to remove one instance of that on the CF queue:\n\nhttps://www.postgresql.org/message-id/flat/ZnMr2k-Nk5vj7T7H@nathan\n\nBut that fix only works because those GUCs are PGC_POSTMASTER\nand so we can perform a consistency check on them after GUC setup.\n\nI'm not sure what a more general consistency check mechanism would\nlook like, but it would have to act at some other point than the\ncheck_hooks do.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jul 2024 11:16:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Wed, Jul 3, 2024 at 10:09 AM Fujii Masao <[email protected]> wrote:\n> The documentation states that \"WAL summarization cannot be enabled when wal_level is set to minimal.\" Therefore, at startup, the postmaster checks these settings and exits with an error if they are not configured properly.\n>\n> However, I found that summarize_wal can still be enabled while the server is running with wal_level=minimal. Please see the following example to cause this situation. I think this is a bug.\n\nWell, that's unfortunate. I suppose I got confused about whether\nsummarize_wal could be changed without a server restart.\n\nI think the fix is probably not to cross-check the GUC values, but to\nput something in the summarizer that prevents it from generating a\nsummary file if wal_level==minimal. Because an incremental backup\nbased on such summaries would be no good. I won't be working the next\ncouple of days due to the US holiday tomorrow, but I've made a note to\nlook into this more next week.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Jul 2024 15:13:37 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "\n\nOn 2024/07/03 23:29, Nathan Bossart wrote:\n> On Wed, Jul 03, 2024 at 11:08:48PM +0900, Fujii Masao wrote:\n>> +/*\n>> + * GUC check_hook for summarize_wal\n>> + */\n>> +bool\n>> +check_summarize_wal(bool *newval, void **extra, GucSource source)\n>> +{\n>> +\tif (*newval && wal_level == WAL_LEVEL_MINIMAL)\n>> +\t{\n>> +\t\tGUC_check_errmsg(\"WAL cannot be summarized when \\\"wal_level\\\" is \\\"minimal\\\"\");\n>> +\t\treturn false;\n>> +\t}\n>> +\treturn true;\n>> +}\n> \n> IME these sorts of GUC hooks that depend on the value of other GUCs tend to\n> be quite fragile. This one might be okay because wal_level defaults to\n> 'replica' and because there is an additional check in postmaster.c, but\n> that at least deserves a comment.\n\nYes, I didn't add a cross-check for wal_level and summarize_wal intentionally\nbecause wal_level is a PGC_POSTMASTER option, and incorrect settings of them\nare already checked at server startup. However, I agree that adding a comment\nabout this would be helpful.\n\nBTW, another concern with summarize_wal is that it might not be enough to\njust check the summarize_wal and wal_level settings. 
We also need to\nensure that WAL data generated with wal_level=minimal is not summarized.\n\nFor example, if wal_level is changed from minimal to replica or logical,\nsome old WAL data generated with wal_level=minimal might still exist in pg_wal.\nIn this case, if summarize_wal is enabled, the settings of wal_level and\nsummarize_wal is valid, but the started WAL summarizer might summarize\nthis old WAL data unexpectedly.\n\nI haven't reviewed the code regarding how the WAL summarizer chooses\nits starting point, so I'm not sure if there's a real risk of summarizing\nWAL data generated with wal_level=minimal.\n\n\n> This sort of thing comes up enough that perhaps we should add a\n> better-supported way to deal with GUCs that depend on each other...\n\n+1 for v18 or later. However, since the reported issue is in v17,\nit needs to be addressed without such a improved check mechanism.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 4 Jul 2024 23:35:00 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Thu, Jul 4, 2024 at 10:35 AM Fujii Masao <[email protected]> wrote:\n> +1 for v18 or later. However, since the reported issue is in v17,\n> it needs to be addressed without such a improved check mechanism.\n\nHere is a draft patch for that. This is only lightly tested at this\npoint, so further testing would be greatly appreciated, as would code\nreview.\n\nNOTE THAT this seems to require bumping XLOG_PAGE_MAGIC and\nPG_CONTROL_VERSION, which I imagine won't be super-popular considering\nwe've already shipped beta2. However, I don't see another principled\nway to fix the reported problem. We do currently log an\nXLOG_PARAMETER_CHANGE message at startup when wal_level has changed,\nbut that's useless for this purpose. Consider that WAL summarization\non a standby can start from any XLOG_CHECKPOINT_SHUTDOWN or\nXLOG_CHECKPOINT_REDO record, and the most recent XLOG_PARAMETER_CHANGE\nmay be arbitrarily far behind that point, and replay may be\narbitrarily far ahead of that point by which things may have changed\nagain. The only way to know for certain what wal_level was in effect\nat the time one of those records was generated is if the record itself\ncontains that information, so that's what I did. If I'm reading the\ncode correctly, this doesn't increase the size of the WAL, because\nXLOG_CHECKPOINT_REDO was previously storing a dummy integer, and\nCheckPoint looks to have an alignment padding hole where I inserted\nthe new member. Even if the size did increase, I don't think it would\nmatter: these records aren't emitted frequently enough for it to be a\nproblem. But I think it doesn't. However, it does change the format of\nWAL, and because the checkpoint data is also stored in the control\nfile, it changes the format of that, too.\n\nIf we really, really don't want to let those values change at this\npoint in the release cycle, then we could alternatively just document\nthe kinds of scenarios that this protects against as unsupported and\nadmonish people sternly not to do that kind of thing. 
I think it's\nactually sort of rare, because you have to do something like: enable\nWAL summarization, do some stuff, restart with wal_level=minimal and\nsummarize_wal=off, do some more stuff but not so much stuff that any\nof the WAL you generate gets removed, then restart again with\nwal_level=replica/logical and summarize_wal=on again, so that the WAL\ngenerated at the lower WAL level gets summarized before it gets\nremoved. Then, an incremental backup that you took after doing that\ndance against a prior backup taken before doing all that might get\nconfused about what to do. Although that does not sound like a\nterribly likely or advisable sequence of steps, I bet some people will\ndo it and then it will be hard to figure out what went wrong (and even\nafter we do, it won't make those people whole). And on the flip side I\ndon't think that bumping XLOG_PAGE_MAGIC or PG_CONTROL_VERSION will\ncause huge inconvenience, because people don't tend to put beta2 into\nproduction -- the early adopters tend to go for .0 or .1, not betaX.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 8 Jul 2024 15:37:49 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "Here's v2.\n\nJakub Wartak pointed out to me off-list that this broke the case where\na chain of incrementals crosses a timeline switch. That made me\nrealize that I also need to add the WAL level to XLOG_END_OF_RECOVERY,\nso this version does that.\n\nI also forgot to mention that this patch changes behavior in the case\nwhere you've been running with summarize_wal=off for a while and then\nyou turned it on. Previously, we'd start summarizing from the oldest\nWAL record we could still read from pg_xlog. Now, we'll start\nsummarizing from the first checkpoint (or timeline switch) after that.\nThat's necessary, because when we read the oldest record available, we\ncan't know for sure what WAL level was used to generate it, so we have\nto assume the worst case, i.e. minimal, and thus skip summarizing that\nWAL. However, it's also harmless, because a WAL summary that covers\npart of a checkpoint cycle is useless to us anyway. We need completely\nWAL summaries from the start of the prior backup to the start of the\ncurrent one to be able to do an incremental backup, and the previous\nbackup and the current backup must have each started with a\ncheckpoint, so a summary covering part of a checkpoint cycle can never\nmake an incremental backup possible where it would not otherwise have\nbeen possible.\n\nOne more thing I forgot to mention is that we can't fix this problem\nby making summarize_wal PGC_POSTMASTER. That doesn't work because of\nwhat is mentioned in the previous paragraph: when summarize_wal is\nturned on it will go back and try to summarize any older WAL that is\nstill around: we need this infrastructure to know whether or not that\nolder WAL is safe to summarize. And I don't think we can remove the\nbehavior where we back up and try to summarize old WAL, either,\nbecause then after a crash you'd always have a gap in your summary\nfiles and you would have to take a new full backup afterwards, which\nwould suck. 
I continue to think that a lot of the value of this\nfeature is in making sure that it *always* works -- when you start to\nadd cases where full backups are required, this becomes a lot less\nuseful to the target audience for the feature, namely, people whose\ndatabases are so large that full backups take an unreasonably long\ntime to complete.\n\n...Robert", "msg_date": "Tue, 9 Jul 2024 13:57:49 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "\n\nOn 2024/07/10 2:57, Robert Haas wrote:\n> Here's v2.\n\nThanks for the patch!\n\nWith v2 patch, I found a case where WAL data generated with\nwal_level = minimal is summarized. In the steps below,\nthe hoge2_minimal table is created under wal_level = minimal,\nbut its block modification information is included in\nthe WAL summary files. I confirmed this by checking\nthe contents of WAL summary files using pg_wal_summary_contents().\n\nAdditionally, the hoge3_replica table created under\nwal_level = replica is not summarized.\n\n-------------------------------------------\ninitdb -D data\necho \"wal_keep_size = '16GB'\" >> data/postgresql.conf\n\npg_ctl -D data start\npsql <<EOF\nSHOW wal_level;\nCREATE TABLE hoge1_replica AS SELECT n FROM generate_series(1, 100) n;\nALTER SYSTEM SET max_wal_senders TO 0;\nALTER SYSTEM SET wal_level TO 'minimal';\nEOF\n\npg_ctl -D data restart\npsql <<EOF\nSHOW wal_level;\nCREATE TABLE hoge2_minimal AS SELECT n FROM generate_series(1, 100) n;\nALTER SYSTEM SET wal_level TO 'replica';\nEOF\n\npg_ctl -D data restart\npsql <<EOF\nSHOW wal_level;\nCREATE TABLE hoge3_replica AS SELECT n FROM generate_series(1, 100) n;\nCHECKPOINT;\nCREATE TABLE hoge4_replica AS SELECT n FROM generate_series(1, 100) n;\nCHECKPOINT;\nALTER SYSTEM SET summarize_wal TO on;\nSELECT pg_reload_conf();\nSELECT pg_sleep(5);\nSELECT wsc.*, c.relname FROM pg_available_wal_summaries() JOIN LATERAL pg_wal_summary_contents(tli, start_lsn, end_lsn) wsc ON true JOIN pg_class c ON wsc.relfilenode = c.relfilenode WHERE c.relname LIKE 'hoge%' ORDER BY c.relname;\nEOF\n-------------------------------------------\n\nI believe this issue occurs when the server is shut down cleanly.\nThe shutdown-checkpoint record retains the wal_level value used\nbefore the shutdown. If wal_level is changed after this,\nthe wal_level that indicated by the shutdown-checkpoint record\nand that the WAL data generated afterwards depends on may not match.\n\n\nI'm sure this patch is necessary as a safeguard for WAL summarization.\nOTOH, I also think we should apply the patch I proposed earlier\nin this thread, which prevents summarize_wal from being enabled\nwhen wal_level is set to minimal. This way, if there's\na misconfiguration, users will see an error message and\ncan quickly identify and fix the issue. 
Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 10 Jul 2024 14:56:13 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Wed, Jul 10, 2024 at 1:56 AM Fujii Masao <[email protected]> wrote:\n> I believe this issue occurs when the server is shut down cleanly.\n> The shutdown-checkpoint record retains the wal_level value used\n> before the shutdown. If wal_level is changed after this,\n> the wal_level that indicated by the shutdown-checkpoint record\n> and that the WAL data generated afterwards depends on may not match.\n\nOh, that's a problem. I'll have to think about that.\n\n> I'm sure this patch is necessary as a safeguard for WAL summarization.\n> OTOH, I also think we should apply the patch I proposed earlier\n> in this thread, which prevents summarize_wal from being enabled\n> when wal_level is set to minimal. This way, if there's\n> a misconfiguration, users will see an error message and\n> can quickly identify and fix the issue. Thought?\n\nI interpreted these emails as meaning that we should not proceed with\nthat approach:\n\nhttps://www.postgresql.org/message-id/CAGECzQR2r-rHFLQr5AonFehVP8DiFH+==R2yqdBvunYnwxsXNA@mail.gmail.com\nhttp://postgr.es/m/[email protected]\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 10 Jul 2024 10:10:30 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Wed, Jul 10, 2024 at 10:10:30AM -0400, Robert Haas wrote:\n> On Wed, Jul 10, 2024 at 1:56 AM Fujii Masao <[email protected]> wrote:\n>> I'm sure this patch is necessary as a safeguard for WAL summarization.\n>> OTOH, I also think we should apply the patch I proposed earlier\n>> in this thread, which prevents summarize_wal from being enabled\n>> when wal_level is set to minimal. This way, if there's\n>> a misconfiguration, users will see an error message and\n>> can quickly identify and fix the issue. Thought?\n> \n> I interpreted these emails as meaning that we should not proceed with\n> that approach:\n> \n> https://www.postgresql.org/message-id/CAGECzQR2r-rHFLQr5AonFehVP8DiFH+==R2yqdBvunYnwxsXNA@mail.gmail.com\n> http://postgr.es/m/[email protected]\n\nYeah. I initially thought this patch might be okay, at least as a stopgap,\nbut Jelte pointed out a case where it doesn't work, namely when you have\nsomething like the following in the config file:\n\n\twal_level = 'minimal'\n\tsummarize_wal = 'true'\n\twal_level = 'logical'\n\nI'm not sure that's a dealbreaker for v17 if we can't come up with anything\nelse, but IMHO it at least deserves a loud comment and a plan for a better\nsolution in v18.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 10 Jul 2024 09:18:38 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Wed, 10 Jul 2024 at 16:18, Nathan Bossart <[email protected]> wrote:\n> Yeah. 
I initially thought this patch might be okay, at least as a stopgap,\n> but Jelte pointed out a case where it doesn't work, namely when you have\n> something like the following in the config file:\n>\n> wal_level = 'minimal'\n> summarize_wal = 'true'\n> wal_level = 'logical'\n\nI think that issue can be solved fairly easily by making the guc\ncheck_hook always pass during postmaster startup (by e.g. checking\npmState), and relying on the previous startup check instead during\nstartup.\n\n\n", "msg_date": "Wed, 10 Jul 2024 16:29:14 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Wed, Jul 10, 2024 at 04:29:14PM +0200, Jelte Fennema-Nio wrote:\n> On Wed, 10 Jul 2024 at 16:18, Nathan Bossart <[email protected]> wrote:\n>> Yeah. I initially thought this patch might be okay, at least as a stopgap,\n>> but Jelte pointed out a case where it doesn't work, namely when you have\n>> something like the following in the config file:\n>>\n>> wal_level = 'minimal'\n>> summarize_wal = 'true'\n>> wal_level = 'logical'\n> \n> I think that issue can be solved fairly easily by making the guc\n> check_hook always pass during postmaster startup (by e.g. checking\n> pmState), and relying on the previous startup check instead during\n> startup.\n\nI was actually just thinking about doing something similar in a different\nthread [0]. Do we actually need to look at pmState? Or could we just skip\nit if the context is <= PGC_S_ARGV?\n\n[0] https://postgr.es/m/Zow-DBaDY2IzAzA2%40nathan\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 10 Jul 2024 09:46:41 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Wed, 10 Jul 2024 at 16:46, Nathan Bossart <[email protected]> wrote:\n> Do we actually need to look at pmState? Or could we just skip\n> it if the context is <= PGC_S_ARGV?\n\nI'm not 100% sure, but I think PGC_S_FILE would still be used when\npostgresql.conf changes and on SIGHUP is sent. And we would want the\ncheck_hook to be used then.\n\n\n", "msg_date": "Wed, 10 Jul 2024 17:08:05 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "Jelte Fennema-Nio <[email protected]> writes:\n> On Wed, 10 Jul 2024 at 16:18, Nathan Bossart <[email protected]> wrote:\n>> Yeah. I initially thought this patch might be okay, at least as a stopgap,\n>> but Jelte pointed out a case where it doesn't work, namely when you have\n>> something like the following in the config file:\n>> \n>> wal_level = 'minimal'\n>> summarize_wal = 'true'\n>> wal_level = 'logical'\n\n> I think that issue can be solved fairly easily by making the guc\n> check_hook always pass during postmaster startup (by e.g. checking\n> pmState), and relying on the previous startup check instead during\n> startup.\n\nPlease, no. We went through a ton of permutations of that kind of\nidea years ago, when it first became totally clear that cross-checks\nbetween GUCs do not work nicely if implemented in check_hooks.\n(You can find all the things we tried in the commit log, although\nI don't recall exactly when.) 
A counter-example for what you just\nsaid is when a configuration file like the above is loaded after\npostmaster start.\n\nIf we want to solve this, let's actually solve it, perhaps by\ninventing a \"consistency check\" mechanism that GUC applies after\nit thinks it's reached a final set of GUC values. I'm not very\nclear on how outside checking code would be able to look at the\ntentative rather than active values of the variables, but that\nshould be solvable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Jul 2024 11:11:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Wed, Jul 10, 2024 at 11:11:13AM -0400, Tom Lane wrote:\n> Please, no. We went through a ton of permutations of that kind of\n> idea years ago, when it first became totally clear that cross-checks\n> between GUCs do not work nicely if implemented in check_hooks.\n> (You can find all the things we tried in the commit log, although\n> I don't recall exactly when.)\n\nUnderstood.\n\n> A counter-example for what you just\n> said is when a configuration file like the above is loaded after\n> postmaster start.\n\nI haven't tested it, but from skimming around the code, it looks like\nProcessConfigFileInternal() would deduplicate any previous entries in the\nfile prior to applying the values and running the check hooks. Else,\nreloading a configuration file with multiple startup-only GUC entries could\nfail, even without bogus GUC check hooks.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 10 Jul 2024 10:44:14 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> I haven't tested it, but from skimming around the code, it looks like\n> ProcessConfigFileInternal() would deduplicate any previous entries in the\n> file prior to applying the values and running the check hooks. Else,\n> reloading a configuration file with multiple startup-only GUC entries could\n> fail, even without bogus GUC check hooks.\n\nWhile it's been a little while since I actually traced the logic,\nI believe the reason that case doesn't fail is this bit in\nset_config_with_handle, about line 3477 as of HEAD:\n\n case PGC_POSTMASTER:\n if (context == PGC_SIGHUP)\n {\n /*\n * We are re-reading a PGC_POSTMASTER variable from\n * postgresql.conf. We can't change the setting, so we should\n * give a warning if the DBA tries to change it. However,\n * because of variant formats, canonicalization by check\n * hooks, etc, we can't just compare the given string directly\n * to what's stored. 
Set a flag to check below after we have\n * the final storable value.\n */\n prohibitValueChange = true;\n }\n else if (context != PGC_POSTMASTER)\n // throw \"cannot be changed now\" error\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Jul 2024 11:54:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "\n\nOn 2024/07/10 23:18, Nathan Bossart wrote:\n> On Wed, Jul 10, 2024 at 10:10:30AM -0400, Robert Haas wrote:\n>> On Wed, Jul 10, 2024 at 1:56 AM Fujii Masao <[email protected]> wrote:\n>>> I'm sure this patch is necessary as a safeguard for WAL summarization.\n>>> OTOH, I also think we should apply the patch I proposed earlier\n>>> in this thread, which prevents summarize_wal from being enabled\n>>> when wal_level is set to minimal. This way, if there's\n>>> a misconfiguration, users will see an error message and\n>>> can quickly identify and fix the issue. Thought?\n>>\n>> I interpreted these emails as meaning that we should not proceed with\n>> that approach:\n>>\n>> https://www.postgresql.org/message-id/CAGECzQR2r-rHFLQr5AonFehVP8DiFH+==R2yqdBvunYnwxsXNA@mail.gmail.com\n>> http://postgr.es/m/[email protected]\n> \n> Yeah. I initially thought this patch might be okay, at least as a stopgap,\n> but Jelte pointed out a case where it doesn't work, namely when you have\n> something like the following in the config file:\n> \n> \twal_level = 'minimal'\n> \tsummarize_wal = 'true'\n> \twal_level = 'logical'\n\nUnless I'm mistaken, the patch works fine in this case. If the check_hook\ntriggered every time a parameter appears in the configuration file,\nit would mistakenly detect wal_level=minimal and summarize_wal=on together\nand raise an error. However, this isn't the case. The check_hook is\ndesigned to trigger after duplicate parameters are deduplicated.\nAm I missing something?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 11 Jul 2024 01:02:25 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "\n\nOn 2024/07/11 0:44, Nathan Bossart wrote:\n> On Wed, Jul 10, 2024 at 11:11:13AM -0400, Tom Lane wrote:\n>> Please, no. We went through a ton of permutations of that kind of\n>> idea years ago, when it first became totally clear that cross-checks\n>> between GUCs do not work nicely if implemented in check_hooks.\n>> (You can find all the things we tried in the commit log, although\n>> I don't recall exactly when.)\n> \n> Understood.\n> \n>> A counter-example for what you just\n>> said is when a configuration file like the above is loaded after\n>> postmaster start.\n> \n> I haven't tested it, but from skimming around the code, it looks like\n> ProcessConfigFileInternal() would deduplicate any previous entries in the\n> file prior to applying the values and running the check hooks. 
Else,\n> reloading a configuration file with multiple startup-only GUC entries could\n> fail, even without bogus GUC check hooks.\n\nYeah, I'm thinking the same.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 11 Jul 2024 01:07:51 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Wed, Jul 10, 2024 at 11:54:38AM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> I haven't tested it, but from skimming around the code, it looks like\n>> ProcessConfigFileInternal() would deduplicate any previous entries in the\n>> file prior to applying the values and running the check hooks. Else,\n>> reloading a configuration file with multiple startup-only GUC entries could\n>> fail, even without bogus GUC check hooks.\n> \n> While it's been a little while since I actually traced the logic,\n> I believe the reason that case doesn't fail is this bit in\n> set_config_with_handle, about line 3477 as of HEAD:\n> \n> case PGC_POSTMASTER:\n> if (context == PGC_SIGHUP)\n> {\n> /*\n> * We are re-reading a PGC_POSTMASTER variable from\n> * postgresql.conf. We can't change the setting, so we should\n> * give a warning if the DBA tries to change it. However,\n> * because of variant formats, canonicalization by check\n> * hooks, etc, we can't just compare the given string directly\n> * to what's stored. Set a flag to check below after we have\n> * the final storable value.\n> */\n> prohibitValueChange = true;\n> }\n> else if (context != PGC_POSTMASTER)\n> // throw \"cannot be changed now\" error\n\nThat's what I thought at first, but then I saw this in\nProcessConfigFileInternal():\n\n\t\t\t/* If it's already marked, then this is a duplicate entry */\n\t\t\tif (record->status & GUC_IS_IN_FILE)\n\t\t\t{\n\t\t\t\t/*\n\t\t\t\t * Mark the earlier occurrence(s) as dead/ignorable. We could\n\t\t\t\t * avoid the O(N^2) behavior here with some additional state,\n\t\t\t\t * but it seems unlikely to be worth the trouble.\n\t\t\t\t */\n\t\t\t\tConfigVariable *pitem;\n\n\t\t\t\tfor (pitem = head; pitem != item; pitem = pitem->next)\n\t\t\t\t{\n\t\t\t\t\tif (!pitem->ignore &&\n\t\t\t\t\t\tstrcmp(pitem->name, item->name) == 0)\n\t\t\t\t\t\tpitem->ignore = true;\n\t\t\t\t}\n\t\t\t}\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 10 Jul 2024 11:15:59 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Thu, Jul 11, 2024 at 01:02:25AM +0900, Fujii Masao wrote:\n> On 2024/07/10 23:18, Nathan Bossart wrote:\n>> Yeah. I initially thought this patch might be okay, at least as a stopgap,\n>> but Jelte pointed out a case where it doesn't work, namely when you have\n>> something like the following in the config file:\n>> \n>> \twal_level = 'minimal'\n>> \tsummarize_wal = 'true'\n>> \twal_level = 'logical'\n> \n> Unless I'm mistaken, the patch works fine in this case. If the check_hook\n> triggered every time a parameter appears in the configuration file,\n> it would mistakenly detect wal_level=minimal and summarize_wal=on together\n> and raise an error. However, this isn't the case. 
The check_hook is\n> designed to trigger after duplicate parameters are deduplicated.\n> Am I missing something?\n\nAfter further research, I think you are right about that.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 10 Jul 2024 11:29:46 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Wed, Jul 10, 2024 at 10:10 AM Robert Haas <[email protected]> wrote:\n> On Wed, Jul 10, 2024 at 1:56 AM Fujii Masao <[email protected]> wrote:\n> > I believe this issue occurs when the server is shut down cleanly.\n> > The shutdown-checkpoint record retains the wal_level value used\n> > before the shutdown. If wal_level is changed after this,\n> > the wal_level that indicated by the shutdown-checkpoint record\n> > and that the WAL data generated afterwards depends on may not match.\n>\n> Oh, that's a problem. I'll have to think about that.\n\nHere is an attempt at fixing this problem.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 10 Jul 2024 12:35:31 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Wed, Jul 10, 2024 at 11:11:13AM -0400, Tom Lane wrote:\n> We went through a ton of permutations of that kind of\n> idea years ago, when it first became totally clear that cross-checks\n> between GUCs do not work nicely if implemented in check_hooks.\n> (You can find all the things we tried in the commit log, although\n> I don't recall exactly when.)\n\nDo you remember the general timeframe or any of the GUCs involved? I spent\nsome time searching through the commit log and mailing lists, but I've thus\nfar only found allusions to past bad experiences with such hooks.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 10 Jul 2024 13:41:41 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "\n\nOn 2024/07/11 1:35, Robert Haas wrote:\n> On Wed, Jul 10, 2024 at 10:10 AM Robert Haas <[email protected]> wrote:\n>> On Wed, Jul 10, 2024 at 1:56 AM Fujii Masao <[email protected]> wrote:\n>>> I believe this issue occurs when the server is shut down cleanly.\n>>> The shutdown-checkpoint record retains the wal_level value used\n>>> before the shutdown. If wal_level is changed after this,\n>>> the wal_level that indicated by the shutdown-checkpoint record\n>>> and that the WAL data generated afterwards depends on may not match.\n>>\n>> Oh, that's a problem. I'll have to think about that.\n> \n> Here is an attempt at fixing this problem.\n\nThanks for updating the patch!\n\n+\t * fast_forward is normally false, but is true when we have encountered\n+\t * WAL generated with wal_level=minimal and are skipping over it without\n+\t * emitting summary files. 
In this case, summarized_tli and summarized_lsn\n+\t * will advance even though nothing is being written to disk, until we\n+\t * again reach a point where wal_level>minimal.\n+\t *\n \t * summarizer_pgprocno is the proc number of the summarizer process, if\n \t * one is running, or else INVALID_PROC_NUMBER.\n \t *\n@@ -83,6 +89,7 @@ typedef struct\n \tTimeLineID\tsummarized_tli;\n \tXLogRecPtr\tsummarized_lsn;\n \tbool\t\tlsn_is_exact;\n+\tbool\t\tfast_forward;\n\nIt looks like the fast_forward field in WalSummarizerData is no longer necessary.\n\nSo far, I haven't found any other issues with the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 11 Jul 2024 19:51:20 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Thu, Jul 11, 2024 at 6:51 AM Fujii Masao <[email protected]> wrote:\n> It looks like the fast_forward field in WalSummarizerData is no longer necessary.\n>\n> So far, I haven't found any other issues with the patch.\n\nThanks for reviewing. Regarding fast_forward, I think I had the idea\nin mind that perhaps it should be exposed by\npg_get_wal_summarizer_state(), but I didn't actually implement that.\nThinking about it again, I think maybe it's fine to just remove it\nfrom the shared memory state, as this should be a rare scenario in\npractice. What is your opinion?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Jul 2024 12:16:33 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Thu, Jul 11, 2024 at 6:51 AM Fujii Masao <[email protected]> wrote:\n> So far, I haven't found any other issues with the patch.\n\nHere is a new version that removes the hunks you highlighted and also\nadds a test case.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 11 Jul 2024 14:00:09 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Wed, Jul 10, 2024 at 01:41:41PM -0500, Nathan Bossart wrote:\n> On Wed, Jul 10, 2024 at 11:11:13AM -0400, Tom Lane wrote:\n>> We went through a ton of permutations of that kind of\n>> idea years ago, when it first became totally clear that cross-checks\n>> between GUCs do not work nicely if implemented in check_hooks.\n>> (You can find all the things we tried in the commit log, although\n>> I don't recall exactly when.)\n> \n> Do you remember the general timeframe or any of the GUCs involved? 
I spent\n> some time searching through the commit log and mailing lists, but I've thus\n> far only found allusions to past bad experiences with such hooks.\n\nCould it be the effective_cache_size work from 2013-2014?\n\n\thttps://www.postgresql.org/message-id/flat/CAHyXU0weDECnab1pypNum-dWGwjso_XMTY8-NvvzRphzM2Hv5A%40mail.gmail.com\n\thttps://www.postgresql.org/message-id/flat/CAMkU%3D1zTMNZsnUV6L7aMvfJZfzjKbzAtuML3N35wyYaia9MJAw%40mail.gmail.com\n\thttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=ee1e5662d8d8330726eaef7d3110cb7add24d058\n\thttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=2850896961994aa0993b9e2ed79a209750181b8a\n\thttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=af930e606a3217db3909029c6c3f8d003ba70920\n\thttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=a16d421ca4fc639929bc964b2585e8382cf16e33;hp=08c8e8962f56c23c6799178d52d3b31350a0708f\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 11 Jul 2024 15:42:26 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Wed, Jul 10, 2024 at 01:41:41PM -0500, Nathan Bossart wrote:\n>> On Wed, Jul 10, 2024 at 11:11:13AM -0400, Tom Lane wrote:\n>>> We went through a ton of permutations of that kind of\n>>> idea years ago, when it first became totally clear that cross-checks\n>>> between GUCs do not work nicely if implemented in check_hooks.\n\n>> Do you remember the general timeframe or any of the GUCs involved? I spent\n>> some time searching through the commit log and mailing lists, but I've thus\n>> far only found allusions to past bad experiences with such hooks.\n\n> Could it be the effective_cache_size work from 2013-2014?\n\nYeah, that's what I was remembering. It looks like my memory was\nslightly faulty, in that what ee1e5662d tried to do was make the\ndefault value of one GUC depend on the actual value of another one,\nnot implement a consistency check per se. But the underlying problem\nis the same: a check_hook can't assume it is seeing the appropriate\nvalue of some other GUC, since a change of that one may be pending.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Jul 2024 17:46:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "\nOn 2024/07/12 1:16, Robert Haas wrote:\n> On Thu, Jul 11, 2024 at 6:51 AM Fujii Masao <[email protected]> wrote:\n>> It looks like the fast_forward field in WalSummarizerData is no longer necessary.\n>>\n>> So far, I haven't found any other issues with the patch.\n> \n> Thanks for reviewing. Regarding fast_forward, I think I had the idea\n> in mind that perhaps it should be exposed by\n> pg_get_wal_summarizer_state(),\n\nUnderstood.\n\n\n> but I didn't actually implement that.\n> Thinking about it again, I think maybe it's fine to just remove it\n> from the shared memory state, as this should be a rare scenario in\n> practice. What is your opinion?\n\nI don't think it's a rare scenario since summarize_wal can be enabled\nafter starting the server with wal_level=minimal. Therefore, I believe\nsuch a configuration should be prohibited using a GUC check hook,\nas my patch does. 
Alternatively, we should at least report or\nlog something when summarize_wal is enabled but fast_forward is also\nenabled, so users can easily detect or investigate this unexpected situation.\nI'm not sure if exposing fast_forward is necessary for that or not...\n\nRegarding pg_get_wal_summarizer_state(), it is documented that\nsummarized_lsn indicates the ending LSN of the last WAL summary file\nwritten to disk. However, with the patch, summarized_lsn advances\neven when fast_forward is enabled. The documentation should be updated,\nor summarized_lsn should be changed so it doesn't advance while\nfast_forward is enabled.\n\n\nOn 2024/07/12 3:00, Robert Haas wrote:\n> On Thu, Jul 11, 2024 at 6:51 AM Fujii Masao <[email protected]> wrote:\n>> So far, I haven't found any other issues with the patch.\n> \n> Here is a new version that removes the hunks you highlighted and also\n> adds a test case.\n\nThanks for updating the patch! LGTM.\n\nI have one small comment:\n\n+# This test aims to validate that takeing an incremental backup fails when\n\n\"takeing\" should be \"taking\"?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 15 Jul 2024 11:56:42 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Sun, Jul 14, 2024 at 10:56 PM Fujii Masao\n<[email protected]> wrote:\n> I don't think it's a rare scenario since summarize_wal can be enabled\n> after starting the server with wal_level=minimal. Therefore, I believe\n> such a configuration should be prohibited using a GUC check hook,\n> as my patch does.\n\nI guess I'm in the group of people who doesn't understand how this can\npossibly work. There's no guarantee about the order in which GUC check\nhooks are called, so you don't know if the value of the other variable\nhas already been set to the final value or not, which seems like a\nfatal problem even if the code happens to work correctly as of today.\nEven if you have such a guarantee, you can't prohibit a configuration\nchange at pg_ctl reload time: the server can refuse to start in case\nof an invalid configuration, but a running server can't decide to shut\ndown or stop working at reload time.\n\n> Alternatively, we should at least report or\n> log something when summarize_wal is enabled but fast_forward is also\n> enabled, so users can easily detect or investigate this unexpected situation.\n> I'm not sure if exposing fast_forward is necessary for that or not...\n\nTo be honest, I'm uncomfortable with how much time is passing while we\ndebate these details. I feel like we need to get these WAL format\nchanges done sooner rather than later considering beta2 is already out\nthe door. Other changes like logging messages or not can be debated\nonce the basic fix is in. Even if they don't happen, nobody will have\na corrupted backup once the basic fix is done. At most they will be\nconfused about why some backup is failing.\n\n> Regarding pg_get_wal_summarizer_state(), it is documented that\n> summarized_lsn indicates the ending LSN of the last WAL summary file\n> written to disk. However, with the patch, summarized_lsn advances\n> even when fast_forward is enabled. 
The documentation should be updated,\n> or summarized_lsn should be changed so it doesn't advance while\n> fast_forward is enabled.\n\nI think we need to advance summarized_lsn to get the proper behavior.\nI can update the documentation.\n\n> I have one small comment:\n>\n> +# This test aims to validate that takeing an incremental backup fails when\n>\n> \"takeing\" should be \"taking\"?\n\nWill fix, thanks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Jul 2024 14:30:42 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Mon, Jul 15, 2024 at 02:30:42PM -0400, Robert Haas wrote:\n> On Sun, Jul 14, 2024 at 10:56 PM Fujii Masao\n> <[email protected]> wrote:\n>> I don't think it's a rare scenario since summarize_wal can be enabled\n>> after starting the server with wal_level=minimal. Therefore, I believe\n>> such a configuration should be prohibited using a GUC check hook,\n>> as my patch does.\n> \n> I guess I'm in the group of people who doesn't understand how this can\n> possibly work. There's no guarantee about the order in which GUC check\n> hooks are called, so you don't know if the value of the other variable\n> has already been set to the final value or not, which seems like a\n> fatal problem even if the code happens to work correctly as of today.\n> Even if you have such a guarantee, you can't prohibit a configuration\n> change at pg_ctl reload time: the server can refuse to start in case\n> of an invalid configuration, but a running server can't decide to shut\n> down or stop working at reload time.\n\nMy understanding is that the correctness of this GUC check hook depends on\nwal_level being a PGC_POSTMASTER GUC. The check hook would always return\ntrue during startup, and there'd be an additional cross-check in\nPostmasterMain() that would fail startup if necessary. After that point,\nwe know that wal_level cannot change, so the GUC check hook for\nsummarize_wal can depend on wal_level. If it fails, my expectation would\nbe that the server would just ignore that change and continue.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 15 Jul 2024 13:47:14 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Mon, Jul 15, 2024 at 01:47:14PM -0500, Nathan Bossart wrote:\n> On Mon, Jul 15, 2024 at 02:30:42PM -0400, Robert Haas wrote:\n>> I guess I'm in the group of people who doesn't understand how this can\n>> possibly work. There's no guarantee about the order in which GUC check\n>> hooks are called, so you don't know if the value of the other variable\n>> has already been set to the final value or not, which seems like a\n>> fatal problem even if the code happens to work correctly as of today.\n>> Even if you have such a guarantee, you can't prohibit a configuration\n>> change at pg_ctl reload time: the server can refuse to start in case\n>> of an invalid configuration, but a running server can't decide to shut\n>> down or stop working at reload time.\n> \n> My understanding is that the correctness of this GUC check hook depends on\n> wal_level being a PGC_POSTMASTER GUC. The check hook would always return\n> true during startup, and there'd be an additional cross-check in\n> PostmasterMain() that would fail startup if necessary. 
After that point,\n> we know that wal_level cannot change, so the GUC check hook for\n> summarize_wal can depend on wal_level. If it fails, my expectation would\n> be that the server would just ignore that change and continue.\n\nI should also note that since wal_level defaults to \"replica\", I don't\nthink we even need any extra \"always return true on startup\" logic. If\nwal_level is set prior to summarize_wal, the check hook will fail startup\nas needed. If summarize_wal is set first, the check hook will return true,\nand we'll fall back on the PostmasterMain() check (that already exists).\n\nIn short, the original patch [0] seems like it should work in this\nparticular scenario, barring some corner case I haven't discovered. That\nbeing said, it's admittedly fragile and probably not a great precedent to\nset. I've been thinking about some ideas for more generic GUC dependency\ntooling, but I don't have anything to share yet.\n\n[0] https://postgr.es/m/attachment/161852/v1-0001-Prevent-summarize_wal-from-enabling-when-wal_leve.patch\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 15 Jul 2024 14:14:06 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Mon, Jul 15, 2024 at 2:47 PM Nathan Bossart <[email protected]> wrote:\n> My understanding is that the correctness of this GUC check hook depends on\n> wal_level being a PGC_POSTMASTER GUC. The check hook would always return\n> true during startup, and there'd be an additional cross-check in\n> PostmasterMain() that would fail startup if necessary. After that point,\n> we know that wal_level cannot change, so the GUC check hook for\n> summarize_wal can depend on wal_level. If it fails, my expectation would\n> be that the server would just ignore that change and continue.\n\nBut how do you know that, during startup, the setting for\nsummarize_wal is processed after the setting for wal_level?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Jul 2024 16:03:13 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Mon, Jul 15, 2024 at 04:03:13PM -0400, Robert Haas wrote:\n> On Mon, Jul 15, 2024 at 2:47 PM Nathan Bossart <[email protected]> wrote:\n>> My understanding is that the correctness of this GUC check hook depends on\n>> wal_level being a PGC_POSTMASTER GUC. The check hook would always return\n>> true during startup, and there'd be an additional cross-check in\n>> PostmasterMain() that would fail startup if necessary. After that point,\n>> we know that wal_level cannot change, so the GUC check hook for\n>> summarize_wal can depend on wal_level. If it fails, my expectation would\n>> be that the server would just ignore that change and continue.\n> \n> But how do you know that, during startup, the setting for\n> summarize_wal is processed after the setting for wal_level?\n\nYou don't, but the GUC check hook should always return true when\nsummarize_wal is processed first. 
We'd rely on the PostmasterMain() check\nto fail in that case.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 15 Jul 2024 15:10:15 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Mon, Jul 15, 2024 at 4:10 PM Nathan Bossart <[email protected]> wrote:\n> You don't, but the GUC check hook should always return true when\n> summarize_wal is processed first. We'd rely on the PostmasterMain() check\n> to fail in that case.\n\nOK, I see. So at startup time, the check hook might or might not catch\na configuration mismatch, but if it doesn't, then the PostmasterMain()\ncheck will fire. Later, we'd have an unsolvable problem if both\nwal_level and summarize_wal could change, but since wal_level can't\nchange after startup time, we only need to check summarize_wal, and it\ncan rely on the value of wal_level being correct.\n\nTBH, I don't want to do that. I think it's too fragile. It's the sort\nof thing that just barely works given the exact behavior of these\nparticular GUCs, but it relies on a bunch of subtle assumptions which\nwon't be evident to future readers of the code. People will very\npossibly copy this barely-working code into other contexts where it\ndoesn't work at all, or they'll think the code implementing this is\nbuggy even if it isn't.\n\nWe don't really need that check for correctness here, and it arguably\neven prohibits more than necessary - e.g. suppose you crash without\nsummarizing all the WAL and then restart with wal_level=minimal. It's\nperfectly fine to run the summarizer and have it catch up with all\npossible pre-crash summarization, but the proposed change would\nprohibit it. Granted, the current check would require you to start\nwith summarize_wal=off and then enable it later to get that done, but\nwe could remove that check. Or maybe we should leave it alone, but my\npoint is: if we have a reasonable option to decouple the values of two\nGUCs so that the legal values of one don't depend on each other, I\nthink that is always going to be better and simpler than trying to\nimplement cross-checks on the values for which we don't have proper\ninfrastructure.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 16 Jul 2024 12:23:19 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Tue, Jul 16, 2024 at 12:23:19PM -0400, Robert Haas wrote:\n> TBH, I don't want to do that. I think it's too fragile. It's the sort\n> of thing that just barely works given the exact behavior of these\n> particular GUCs, but it relies on a bunch of subtle assumptions which\n> won't be evident to future readers of the code. People will very\n> possibly copy this barely-working code into other contexts where it\n> doesn't work at all, or they'll think the code implementing this is\n> buggy even if it isn't.\n\nAgreed. If there was really no other option, it would at the very least\nneed a humongous comment that explained why it worked in this specific case\nand is unlikely to work in others. But it sounds like we have another\nchoice... 
\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 16 Jul 2024 11:30:17 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "\n\nOn 2024/07/17 1:30, Nathan Bossart wrote:\n> On Tue, Jul 16, 2024 at 12:23:19PM -0400, Robert Haas wrote:\n>> TBH, I don't want to do that. I think it's too fragile. It's the sort\n>> of thing that just barely works given the exact behavior of these\n>> particular GUCs, but it relies on a bunch of subtle assumptions which\n>> won't be evident to future readers of the code. People will very\n>> possibly copy this barely-working code into other contexts where it\n>> doesn't work at all, or they'll think the code implementing this is\n>> buggy even if it isn't.\n> \n> Agreed. If there was really no other option, it would at the very least\n> need a humongous comment that explained why it worked in this specific case\n> and is unlikely to work in others. But it sounds like we have another\n> choice...\n\nI don't have another solution that can be pushed into v17. I understand\nthe risks raised so far, so I'm okay with just pushing the \"fast_forward\" patch.\nIt might be helpful to add a note to the summarize_wal documentation,\nfor example, \"summarize_wal can be enabled after startup with wal_level = minimal,\nbut WAL generated at this level won't be summarized.\"?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 17 Jul 2024 02:16:27 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Tue, Jul 16, 2024 at 1:16 PM Fujii Masao <[email protected]> wrote:\n> I don't have another solution that can be pushed into v17. I understand\n> the risks raised so far, so I'm okay with just pushing the \"fast_forward\" patch.\n> It might be helpful to add a note to the summarize_wal documentation,\n> for example, \"summarize_wal can be enabled after startup with wal_level = minimal,\n> but WAL generated at this level won't be summarized.\"?\n\nHere is v5. This version differs from v4 in that it updates the\ndocumentation for summarize_wal and pg_get_wal_summarizer_state(). The\nprevious version made no documentation changes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 17 Jul 2024 09:44:01 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "\n\nOn 2024/07/17 22:44, Robert Haas wrote:\n> On Tue, Jul 16, 2024 at 1:16 PM Fujii Masao <[email protected]> wrote:\n>> I don't have another solution that can be pushed into v17. I understand\n>> the risks raised so far, so I'm okay with just pushing the \"fast_forward\" patch.\n>> It might be helpful to add a note to the summarize_wal documentation,\n>> for example, \"summarize_wal can be enabled after startup with wal_level = minimal,\n>> but WAL generated at this level won't be summarized.\"?\n> \n> Here is v5. This version differs from v4 in that it updates the\n> documentation for summarize_wal and pg_get_wal_summarizer_state(). The\n> previous version made no documentation changes.\n\nThanks for updating the patch! 
It looks good to me, except for one minor detail.\n\n> + reaches WAL not generated under <literal>wal_level=minimal</literal>,\n> + it will resume writing summaries to disk.\n\n\"WAL not generated under wal_level=minimal\" sounds a bit confusing??\nHow about \"WAL generated while wal_level is replica or higher\" instead?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 18 Jul 2024 22:47:16 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "On Thu, Jul 18, 2024 at 9:47 AM Fujii Masao <[email protected]> wrote:\n> \"WAL not generated under wal_level=minimal\" sounds a bit confusing??\n> How about \"WAL generated while wal_level is replica or higher\" instead?\n\nCommitted and back-patched to 17 with approximately that change, but\nreworded slightly to make the tenses work.\n\nI just realized that I should've created a CF entry for this so that\nit went through CI before actually committing it. Let's see how long\nit takes for me to get punished for that oversight...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Jul 2024 12:27:27 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" }, { "msg_contents": "\n\nOn 2024/07/19 1:27, Robert Haas wrote:\n> On Thu, Jul 18, 2024 at 9:47 AM Fujii Masao <[email protected]> wrote:\n>> \"WAL not generated under wal_level=minimal\" sounds a bit confusing??\n>> How about \"WAL generated while wal_level is replica or higher\" instead?\n> \n> Committed and back-patched to 17 with approximately that change, but\n> reworded slightly to make the tenses work.\n\nMany thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 19 Jul 2024 02:52:10 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add a GUC check hook to ensure summarize_wal cannot be enabled\n when wal_level is minimal" } ]
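A minimal sketch, for illustration only, of the kind of cross-GUC check hook debated in the thread above. The hook name and message text are assumptions (the proposed patch was not committed); only the standard GUC check-hook signature, GUC_check_errmsg(), and the exported wal_level / WAL_LEVEL_MINIMAL symbols are existing PostgreSQL APIs. As the thread concludes, this ordering-dependent approach was rejected as fragile in favor of the summarizer's "fast_forward" behavior, so this is not the committed fix.

#include "postgres.h"

#include "access/xlog.h"		/* wal_level, WAL_LEVEL_MINIMAL */
#include "utils/guc_hooks.h"

/*
 * Sketch of the rejected approach: refuse to turn summarize_wal on while
 * wal_level is "minimal".  This only has a chance of working because
 * wal_level is PGC_POSTMASTER and therefore cannot change after startup.
 */
bool
check_summarize_wal(bool *newval, void **extra, GucSource source)
{
	if (*newval && wal_level == WAL_LEVEL_MINIMAL)
	{
		GUC_check_errmsg("WAL cannot be summarized when \"wal_level\" is \"minimal\"");
		return false;
	}
	return true;
}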
[ { "msg_contents": "Is there a guidebook, any guidelines for writing grammar in Postgres, or\nany suggestions to keep in mind?\nDo we have a set of guidelines to write production rules in gram.y (for the\nBison Parser Generator) to make the grammar conflict-free and extendible in\nthe future?\n\nIs there a guidebook, any guidelines for writing grammar in Postgres, or any suggestions to keep in mind?Do we have a set of guidelines to write production rules in gram.y (for the Bison Parser Generator) to make the grammar conflict-free and extendible in the future?", "msg_date": "Wed, 3 Jul 2024 11:55:07 -0700", "msg_from": "Harjyot Bagga <[email protected]>", "msg_from_op": true, "msg_subject": "Grammar guidelines in Postgres" }, { "msg_contents": "Hi,\n\n> Is there a guidebook, any guidelines for writing grammar in Postgres, or any suggestions to keep in mind?\n> Do we have a set of guidelines to write production rules in gram.y (for the Bison Parser Generator) to make the grammar conflict-free and extendible in the future?\n\nI'm far from being an expert in compilers but my dilettante\nunderstanding is that there are two types of conflicts. Shift/reduce\nconflicts are resolved with operator priorities (e.g. does 1+2*3 mean\n1+(2*3) or (1+2)*3). Reduce/reduce conflict means a \"real\" conflict\nand Flex/Bison should (?) warn you about those. They will not happen\nthough as long as you use common sense. On top of that note that\ngenerally PostgreSQL follows SQL standard.\n\nIf you look for guidelines \"flex & bison\" by John Levine [1] and\n\"Compilers: Principles, Techniques, and Tools\" by Aho, Ullman et al\n[2] are good reads.\n\nI'm not entirely sure if it answers your question but I hope that it's helpful.\n\n[1]: https://www.amazon.com/Flex-Bison-John-R-Levine/dp/0596155972/\n[2]: https://www.amazon.com/Compilers-Principles-Techniques-Tools-2nd/dp/0321486811/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 4 Jul 2024 12:47:08 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Grammar guidelines in Postgres" }, { "msg_contents": "Hi,\n\n> Thank you for your reply. I am aware about these conflicts, but thank you for the explanation.\n> My question is specific to Postgres. Do we have a set of guidelines we keep in mind while writing grammar rules while introducing new features to postgres?\n>\n> One such suggestion or rule for example is the Postgres does not support Postfix operators. So whenever a new feature is introduced developers make sure that they do not add a postfix operators in their grammar. Just like that are there any other further rules or suggestions compiled by post hackers and maintainers?\n\nI believe you wanted to reply to the mailing list, not to me directly.\nPlease use the \"Reply to All\" button.\n\nDo the postfix operators you mention exist in the SQL standard?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 4 Jul 2024 13:10:35 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Grammar guidelines in Postgres" }, { "msg_contents": "Greetings,\nThank you for your reply. I am aware about these conflicts, but thank you\nfor the explanation.\nMy question is specific to Postgres. Do we have a set of guidelines we keep\nin mind while writing grammar rules while introducing new features to\npostgres?\n\nOne such suggestion or rule for example is the Postgres does not support\nPostfix operators. 
So whenever a new feature is introduced developers make\nsure that they do not add a postfix operators in their grammar. Just like\nthat are there any other further rules or suggestions compiled by post\nhackers and maintainers?\n\n\nOn Thu, Jul 4, 2024 at 2:47 AM Aleksander Alekseev <[email protected]>\nwrote:\n\n> Hi,\n>\n> > Is there a guidebook, any guidelines for writing grammar in Postgres, or\n> any suggestions to keep in mind?\n> > Do we have a set of guidelines to write production rules in gram.y (for\n> the Bison Parser Generator) to make the grammar conflict-free and\n> extendible in the future?\n>\n> I'm far from being an expert in compilers but my dilettante\n> understanding is that there are two types of conflicts. Shift/reduce\n> conflicts are resolved with operator priorities (e.g. does 1+2*3 mean\n> 1+(2*3) or (1+2)*3). Reduce/reduce conflict means a \"real\" conflict\n> and Flex/Bison should (?) warn you about those. They will not happen\n> though as long as you use common sense. On top of that note that\n> generally PostgreSQL follows SQL standard.\n>\n> If you look for guidelines \"flex & bison\" by John Levine [1] and\n> \"Compilers: Principles, Techniques, and Tools\" by Aho, Ullman et al\n> [2] are good reads.\n>\n> I'm not entirely sure if it answers your question but I hope that it's\n> helpful.\n>\n> [1]: https://www.amazon.com/Flex-Bison-John-R-Levine/dp/0596155972/\n> [2]:\n> https://www.amazon.com/Compilers-Principles-Techniques-Tools-2nd/dp/0321486811/\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\nGreetings,Thank you for your reply. I am aware about these conflicts, but thank you for the explanation. My question is specific to Postgres. Do we have a set of guidelines we keep in mind while writing grammar rules while introducing new features to postgres?One such suggestion or rule for example is the Postgres does not support Postfix operators. So whenever a new feature is introduced developers make sure that they do not add a postfix operators in their grammar. Just like that are there any other further rules or suggestions compiled by post hackers and maintainers?On Thu, Jul 4, 2024 at 2:47 AM Aleksander Alekseev <[email protected]> wrote:Hi,\n\n> Is there a guidebook, any guidelines for writing grammar in Postgres, or any suggestions to keep in mind?\n> Do we have a set of guidelines to write production rules in gram.y (for the Bison Parser Generator) to make the grammar conflict-free and extendible in the future?\n\nI'm far from being an expert in compilers but my dilettante\nunderstanding is that there are two types of conflicts. Shift/reduce\nconflicts are resolved with operator priorities (e.g. does 1+2*3 mean\n1+(2*3) or (1+2)*3). Reduce/reduce conflict means a \"real\" conflict\nand Flex/Bison should (?) warn you about those. They will not happen\nthough as long as you use common sense. 
On top of that note that\ngenerally PostgreSQL follows SQL standard.\n\nIf you look for guidelines \"flex & bison\" by John Levine [1] and\n\"Compilers: Principles, Techniques, and Tools\" by Aho, Ullman et al\n[2] are good reads.\n\nI'm not entirely sure if it answers your question but I hope that it's helpful.\n\n[1]: https://www.amazon.com/Flex-Bison-John-R-Levine/dp/0596155972/\n[2]: https://www.amazon.com/Compilers-Principles-Techniques-Tools-2nd/dp/0321486811/\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 4 Jul 2024 03:56:30 -0700", "msg_from": "Harjyot Bagga <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Grammar guidelines in Postgres" }, { "msg_contents": "Harjyot Bagga <[email protected]> writes:\n> One such suggestion or rule for example is the Postgres does not support\n> Postfix operators. So whenever a new feature is introduced developers make\n> sure that they do not add a postfix operators in their grammar. Just like\n> that are there any other further rules or suggestions compiled by post\n> hackers and maintainers?\n\n[ shrug... ] If you try to re-introduce postfix operators you'll get\na ton of shift-reduce conflicts. We have a hard rule that such\nconflicts are not allowed, even though Bison can be told to ignore\nthem. Beyond that, there's not much.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2024 12:01:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Grammar guidelines in Postgres" } ]
[ { "msg_contents": "pgproc.h has this:\n\n> struct PGPROC\n> {\n> \t/* proc->links MUST BE FIRST IN STRUCT (see ProcSleep,ProcWakeup,etc) */\n> \tdlist_node\tlinks;\t\t\t/* list link if process is in a list */\n> \tdlist_head *procgloballist; /* procglobal list that owns this PGPROC */\n> ...\n\nI don't see any particular reason for 'links' to be the first field. We \nused to do things like \"proc = (PGPROC *) waitQueue->links.next\", but \nsince commit 5764f611e1, this has been a \"dlist\", and dlist_container() \ncan handle the list link being anywhere in the struct.\n\nI tried moving it and ran the regression tests. That revealed one place \nwhere we still don't use dlist_container:\n\n> \tif (!dlist_is_empty(procgloballist))\n> \t{\n> \t\tMyProc = (PGPROC *) dlist_pop_head_node(procgloballist);\n> ...\n\nI believe that was just an oversight. Trivial patch attached.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Thu, 4 Jul 2024 01:54:18 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Cleanup: PGProc->links doesn't need to be the first field anymore" }, { "msg_contents": "Hi Heikki,\n\n> I tried moving it and ran the regression tests. That revealed one place\n> where we still don't use dlist_container:\n>\n> > if (!dlist_is_empty(procgloballist))\n> > {\n> > MyProc = (PGPROC *) dlist_pop_head_node(procgloballist);\n> > ...\n>\n> I believe that was just an oversight. Trivial patch attached.\n\nI tested your patch. LGTM.\n\nPGPROC is exposed to third-party code, but since we don't change the\nstructure this time, the extensions will not be affected.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 4 Jul 2024 15:09:15 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cleanup: PGProc->links doesn't need to be the first field anymore" }, { "msg_contents": "Hi,\n\nOn 2024-07-04 01:54:18 +0300, Heikki Linnakangas wrote:\n> pgproc.h has this:\n> \n> > struct PGPROC\n> > {\n> > \t/* proc->links MUST BE FIRST IN STRUCT (see ProcSleep,ProcWakeup,etc) */\n> > \tdlist_node\tlinks;\t\t\t/* list link if process is in a list */\n> > \tdlist_head *procgloballist; /* procglobal list that owns this PGPROC */\n> > ...\n> \n> I don't see any particular reason for 'links' to be the first field. We used\n> to do things like \"proc = (PGPROC *) waitQueue->links.next\", but since\n> commit 5764f611e1, this has been a \"dlist\", and dlist_container() can handle\n> the list link being anywhere in the struct.\n\nIndeed.\n\n\n> I tried moving it and ran the regression tests. That revealed one place\n> where we still don't use dlist_container:\n> \n> > \tif (!dlist_is_empty(procgloballist))\n> > \t{\n> > \t\tMyProc = (PGPROC *) dlist_pop_head_node(procgloballist);\n> > ...\n> \n> I believe that was just an oversight. Trivial patch attached.\n\nOops. Yes, I clearly should have used dlist_container() here.\n\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Jul 2024 13:20:46 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cleanup: PGProc->links doesn't need to be the first field anymore" }, { "msg_contents": "On 04/07/2024 23:20, Andres Freund wrote:\n> On 2024-07-04 01:54:18 +0300, Heikki Linnakangas wrote:\n>> I believe that was just an oversight. Trivial patch attached.\n> \n> Oops. 
Yes, I clearly should have used dlist_container() here.\nCommitted, thanks.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\t\n\n\n", "msg_date": "Fri, 5 Jul 2024 11:34:06 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cleanup: PGProc->links doesn't need to be the first field anymore" } ]
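As a self-contained illustration of the pattern the committed fix uses: dlist_container() recovers the struct that embeds a dlist_node, so the raw "(PGPROC *) dlist_pop_head_node(...)" cast becomes unnecessary and the link member no longer has to sit at the start of the struct. The MyItem / pop_head_item names below are invented for the example; only the lib/ilist.h API is real.

#include "postgres.h"
#include "lib/ilist.h"

/* Toy element type: the dlist_node does not have to be the first field. */
typedef struct MyItem
{
	int			value;
	dlist_node	links;
} MyItem;

static MyItem *
pop_head_item(dlist_head *list)
{
	/*
	 * dlist_container(type, membername, node) computes the address of the
	 * containing struct from the embedded list node, wherever that member
	 * is placed, so no cast of the node pointer is needed.
	 */
	return dlist_container(MyItem, links, dlist_pop_head_node(list));
}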
[ { "msg_contents": "Hi, Postgres hackers!\r\n\r\nWhile reading the source code, I noticed comments like \"-cim 9/10/89\". I think this might be an annotation by a developer to indicate the commit time, but from the commit history (using git), they does not seem to match.\r\n\r\nSpecifically:\r\n\r\n 1. What does “cim” mean here?\r\n 2. Are these annotations still used in modern code practices within the PostgreSQL project, or have they been replaced by version control commit histories? I guess they are not used anymore as there are only 11 annotations in the source code.\r\n\r\nI appreciate any information or historical context you can provide regarding these annotations.\r\n\r\nThank you!\r\n\r\nBest regards, Steve Lau.\r\n\n\n\n\n\n\r\nHi, Postgres hackers!\r\n\n\nWhile reading the source code, I noticed comments like \"-cim 9/10/89\". I think this might be an annotation by a developer to indicate the commit time, but from the commit history (using git), they does not seem to match.\n\nSpecifically:\n\nWhat does “cim” mean here?Are these annotations still used in modern code practices within the PostgreSQL project, or have they been replaced by version control commit histories? I guess they are not used anymore as there are only 11 annotations in the source code.\nI appreciate any information or historical context you can provide regarding these annotations.\nThank you!\nBest regards, Steve Lau.", "msg_date": "Thu, 4 Jul 2024 03:46:09 +0000", "msg_from": "Steve Lau <[email protected]>", "msg_from_op": true, "msg_subject": "Unknown annotation '-cim' in source code" }, { "msg_contents": "On Wed, Jul 3, 2024 at 8:46 PM Steve Lau <[email protected]> wrote:\n\n>\n> While reading the source code, I noticed comments like \"-cim 9/10/89\". I\n> think this might be an annotation by a developer to indicate the commit\n> time, but from the commit history (using git), they does not seem to match.\n>\n\nIt's the initials of the person who, back in 1989, wrote the preceding\ncomments\n\n * Note: most of these are \"incomplete\" because I didn't\n * need the ones not defined. More should be added\n * only as necessary -cim 10/26/89\n\n\"I\" == cim == ???\n\nPostgreSQL inherited the code which is when our git history begins. This\ncomment was part of the original source.\n\nI do not know the full name of the person those initials refer to.\n\nDavid J.\n\nOn Wed, Jul 3, 2024 at 8:46 PM Steve Lau <[email protected]> wrote:\n\n\nWhile reading the source code, I noticed comments like \"-cim 9/10/89\". I think this might be an annotation by a developer to indicate the commit time, but from the commit history (using git), they does not seem to match.It's the initials of the person who, back in 1989, wrote the preceding comments * Note: most of these are \"incomplete\" because I didn't *\t\t\t  need the ones not defined.  More should be added *\t\t\t  only as necessary -cim 10/26/89\"I\" == cim == ???PostgreSQL inherited the code which is when our git history begins.  This comment was part of the original source.I do not know the full name of the person those initials refer to.David J.", "msg_date": "Wed, 3 Jul 2024 21:16:38 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unknown annotation '-cim' in source code" }, { "msg_contents": "On Jul 4, 2024, at 12:16 PM, David G. 
Johnston <[email protected]> wrote:\r\n\r\nOn Wed, Jul 3, 2024 at 8:46 PM Steve Lau <[email protected]<mailto:[email protected]>> wrote:\r\n\r\nWhile reading the source code, I noticed comments like \"-cim 9/10/89\". I think this might be an annotation by a developer to indicate the commit time, but from the commit history (using git), they does not seem to match.\r\n\r\nIt's the initials of the person who, back in 1989, wrote the preceding comments\r\n\r\n * Note: most of these are \"incomplete\" because I didn't\r\n * need the ones not defined. More should be added\r\n * only as necessary -cim 10/26/89\r\n\r\n\"I\" == cim == ???\r\n\r\nPostgreSQL inherited the code which is when our git history begins. This comment was part of the original source.\r\n\r\nI do not know the full name of the person those initials refer to.\r\n\r\nDavid J.\r\n\r\n\r\n\r\nHi, thanks for the reply and that helpful information!\r\n\r\nBest regards, Steve Lau.\r\n\n\n\n\n\n\n\n\n\nOn Jul 4, 2024, at 12:16 PM, David G. Johnston <[email protected]> wrote:\n\n\n\n\nOn Wed, Jul 3, 2024 at 8:46 PM Steve Lau <[email protected]> wrote:\n\n\n\n\n\n\n\nWhile reading the source code, I noticed comments like \"-cim 9/10/89\". I think this might be an annotation by a developer to indicate the commit time, but from the commit history (using git), they does not seem to match.\n\n\n\n\nIt's the initials of the person who, back in 1989, wrote the preceding comments\n\n\n * \r\nNote: most of these are \"incomplete\" because I didn't\r\n *  need the ones not defined.  More should be added\r\n *  only as necessary -cim 10/26/89\n\n\n\n\"I\" == cim == ???\n\nPostgreSQL inherited the code which is when our git history begins.  This comment was part of the original source.\n\n\n\nI do not know the full name of the person those initials refer to.\n\n\nDavid J.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi, thanks for the reply and that helpful information!\n\n\nBest regards, Steve Lau.", "msg_date": "Thu, 4 Jul 2024 04:27:04 +0000", "msg_from": "Steve Lau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unknown annotation '-cim' in source code" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Wed, Jul 3, 2024 at 8:46 PM Steve Lau <[email protected]> wrote:\n>> While reading the source code, I noticed comments like \"-cim 9/10/89\".\n\n> It's the initials of the person who, back in 1989, wrote the preceding\n> comments\n\nRight.\n\n> PostgreSQL inherited the code which is when our git history begins. This\n> comment was part of the original source.\n\nWe lack any source-code-control history before 1996, so there's no\nway to be sure who wrote that, unless you can identify some Berkeley\nPostgres person with those initials.\n\nThere are other cases in the code with other initials. The practice\nfell out of favor among the open-source PG community in the late 90s,\npossibly because Tom Lockhart and I share the same initials so it\nbecame completely impossible to avoid confusion :-(. I think the\nsurviving \"tgl\" comments in the code are mostly his, but I've not\ncounted carefully.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2024 00:49:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unknown annotation '-cim' in source code" }, { "msg_contents": "On 2024-Jul-04, Tom Lane wrote:\n\n> \"David G. 
Johnston\" <[email protected]> writes:\n> > On Wed, Jul 3, 2024 at 8:46 PM Steve Lau <[email protected]> wrote:\n> >> While reading the source code, I noticed comments like \"-cim 9/10/89\".\n> \n> > It's the initials of the person who, back in 1989, wrote the preceding\n> > comments\n> \n> Right.\n> \n> > PostgreSQL inherited the code which is when our git history begins. This\n> > comment was part of the original source.\n> \n> We lack any source-code-control history before 1996, so there's no\n> way to be sure who wrote that, unless you can identify some Berkeley\n> Postgres person with those initials.\n\nActually, somebody (thanks, Stas) set up a Github repo of the old\nhistory here:\nhttps://github.com/kelvich/postgres_pre95\nThere you can find commits like this\nhttps://github.com/kelvich/postgres_pre95/commit/0bf22e7dbb09b68b6e4c34dccc1440ebe98f8049\nwhere tons of \"- cim\" comments were introduced. Unix account name was\n\"cimarron\". You can go on from there if you want, but why?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"But static content is just dynamic content that isn't moving!\"\n http://smylers.hates-software.com/2007/08/15/fe244d0c.html\n\n\n", "msg_date": "Thu, 4 Jul 2024 10:33:52 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unknown annotation '-cim' in source code" }, { "msg_contents": "On Jul 4, 2024, at 4:33 PM, Alvaro Herrera <[email protected]> wrote:\r\n\r\nOn 2024-Jul-04, Tom Lane wrote:\r\n\r\n\"David G. Johnston\" <[email protected]> writes:\r\nOn Wed, Jul 3, 2024 at 8:46 PM Steve Lau <[email protected]> wrote:\r\nWhile reading the source code, I noticed comments like \"-cim 9/10/89\".\r\n\r\nIt's the initials of the person who, back in 1989, wrote the preceding\r\ncomments\r\n\r\nRight.\r\n\r\nPostgreSQL inherited the code which is when our git history begins. This\r\ncomment was part of the original source.\r\n\r\nWe lack any source-code-control history before 1996, so there's no\r\nway to be sure who wrote that, unless you can identify some Berkeley\r\nPostgres person with those initials.\r\n\r\nActually, somebody (thanks, Stas) set up a Github repo of the old\r\nhistory here:\r\nhttps://github.com/kelvich/postgres_pre95\r\nThere you can find commits like this\r\nhttps://github.com/kelvich/postgres_pre95/commit/0bf22e7dbb09b68b6e4c34dccc1440ebe98f8049\r\nwhere tons of \"- cim\" comments were introduced. Unix account name was\r\n\"cimarron\". You can go on from there if you want, but why?\r\n\r\n--\r\nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/<https://www.enterprisedb.com/>\r\n\"But static content is just dynamic content that isn't moving!\"\r\n http://smylers.hates-software.com/2007/08/15/fe244d0c.html\r\n\r\nThanks for the reply from both you guys!\r\n\r\nI really appreciate the link to that pre95 repo, and\r\n\r\n> You can go on from there if you want, but why?\r\n\r\nI would say I love history stories, but yeah, I agree that it does not mean too much nowadays.\r\n\r\nBest regards, Steve Lau.\r\n\n\n\n\n\n\n\n\n\nOn Jul 4, 2024, at 4:33 PM, Alvaro Herrera <[email protected]> wrote:\n\nOn\r\n 2024-Jul-04, Tom Lane wrote:\n\n\r\n\"David G. 
Johnston\" <[email protected]> writes:\nOn Wed, Jul 3, 2024 at 8:46 PM Steve Lau <[email protected]> wrote:\nWhile reading the source code, I noticed comments like \"-cim 9/10/89\".\n\n\n\nIt's the initials of the person who, back in 1989, wrote the preceding\r\ncomments\n\n\r\nRight.\n\nPostgreSQL inherited the code which is when our git history begins.  This\r\ncomment was part of the original source.\n\n\r\nWe lack any source-code-control history before 1996, so there's no\r\nway to be sure who wrote that, unless you can identify some Berkeley\r\nPostgres person with those initials.\n\n\nActually,\r\n somebody (thanks, Stas) set up a Github repo of the old\nhistory\r\n here:\nhttps://github.com/kelvich/postgres_pre95\nThere\r\n you can find commits like this\nhttps://github.com/kelvich/postgres_pre95/commit/0bf22e7dbb09b68b6e4c34dccc1440ebe98f8049\nwhere\r\n tons of \"- cim\" comments were introduced.  Unix account name was\n\"cimarron\".\r\n  You can go on from there if you want, but why?\n\n-- \nÁlvaro\r\n Herrera        Breisgau, Deutschland  —  https://www.EnterpriseDB.com/\n\"But\r\n static content is just dynamic content that isn't moving!\"\n               http://smylers.hates-software.com/2007/08/15/fe244d0c.html\n\n\n\nThanks for the reply from both you guys!\n\n\nI really appreciate the link to that pre95 repo, and\n\n\n>  You can go on from there if you want, but why?\n\n\nI would say I love history stories, but yeah, I agree that it does not mean too much nowadays.\n\n\nBest regards, Steve Lau.", "msg_date": "Thu, 4 Jul 2024 09:08:46 +0000", "msg_from": "Steve Lau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unknown annotation '-cim' in source code" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n\n> On 2024-Jul-04, Tom Lane wrote:\n>\n>> \"David G. Johnston\" <[email protected]> writes:\n>> > On Wed, Jul 3, 2024 at 8:46 PM Steve Lau <[email protected]> wrote:\n>> >> While reading the source code, I noticed comments like \"-cim 9/10/89\".\n>> \n>> > It's the initials of the person who, back in 1989, wrote the preceding\n>> > comments\n>> \n>> Right.\n>> \n>> > PostgreSQL inherited the code which is when our git history begins. This\n>> > comment was part of the original source.\n>> \n>> We lack any source-code-control history before 1996, so there's no\n>> way to be sure who wrote that, unless you can identify some Berkeley\n>> Postgres person with those initials.\n>\n> Actually, somebody (thanks, Stas) set up a Github repo of the old\n> history here:\n> https://github.com/kelvich/postgres_pre95\n> There you can find commits like this\n> https://github.com/kelvich/postgres_pre95/commit/0bf22e7dbb09b68b6e4c34dccc1440ebe98f8049\n> where tons of \"- cim\" comments were introduced. Unix account name was\n> \"cimarron\". You can go on from there if you want, but why?\n\nSearching for \"cimarron postgres\" returns\nhttps://en.wikipedia.org/wiki/Illustra, which mentions a Cimarron Taylor\nas one of Stonebraker's students, but I can't find anything else\nrelevant in a few minutes of searching.\n\n- ilmari\n\n\n", "msg_date": "Thu, 04 Jul 2024 13:13:19 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unknown annotation '-cim' in source code" }, { "msg_contents": "\n\n> On 4 Jul 2024, at 14:13, Dagfinn Ilmari Mannsåker <[email protected]> wrote:\n> \n> Alvaro Herrera <[email protected]> writes:\n> \n>> On 2024-Jul-04, Tom Lane wrote:\n>> \n>>> \"David G. 
Johnston\" <[email protected]> writes:\n>>>> On Wed, Jul 3, 2024 at 8:46 PM Steve Lau <[email protected]> wrote:\n>>>>> While reading the source code, I noticed comments like \"-cim 9/10/89\".\n>>> \n>>>> It's the initials of the person who, back in 1989, wrote the preceding\n>>>> comments\n>>> \n>>> Right.\n>>> \n>>>> PostgreSQL inherited the code which is when our git history begins. This\n>>>> comment was part of the original source.\n>>> \n>>> We lack any source-code-control history before 1996, so there's no\n>>> way to be sure who wrote that, unless you can identify some Berkeley\n>>> Postgres person with those initials.\n>> \n>> Actually, somebody (thanks, Stas) set up a Github repo of the old\n>> history here:\n>> https://github.com/kelvich/postgres_pre95\n>> There you can find commits like this\n>> https://github.com/kelvich/postgres_pre95/commit/0bf22e7dbb09b68b6e4c34dccc1440ebe98f8049\n>> where tons of \"- cim\" comments were introduced. Unix account name was\n>> \"cimarron\". You can go on from there if you want, but why?\n> \n> Searching for \"cimarron postgres\" returns\n> https://en.wikipedia.org/wiki/Illustra, which mentions a Cimarron Taylor\n> as one of Stonebraker's students, but I can't find anything else\n> relevant in a few minutes of searching.\n\nThat seems to match up. There is a Cimarron Taylor on LinkedIN who was\n\"Programmer/Analyst\" at U.C. Berkeley Database Research Group in January 1978\nthrough January 1990.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 4 Jul 2024 16:18:43 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unknown annotation '-cim' in source code" }, { "msg_contents": "On Thu, Jul 4, 2024 at 04:18:43PM +0200, Daniel Gustafsson wrote:\n> > Searching for \"cimarron postgres\" returns\n> > https://en.wikipedia.org/wiki/Illustra, which mentions a Cimarron Taylor\n> > as one of Stonebraker's students, but I can't find anything else\n> > relevant in a few minutes of searching.\n> \n> That seems to match up. There is a Cimarron Taylor on LinkedIN who was\n> \"Programmer/Analyst\" at U.C. Berkeley Database Research Group in January 1978\n> through January 1990.\n\nAnd the Stonebraker video lists Cimarron Taylor as one of the 39 Berkely\ncontributors:\n\n\thttps://momjian.us/main/blogs/pgblog/2015.html#August_5_2015\n\thttps://www.youtube.com/watch?v=BbGeKi6T6QI&t=4269s\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 5 Jul 2024 22:34:49 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unknown annotation '-cim' in source code" } ]
[ { "msg_contents": "Hi,\n\nAttached is a small patch to fix a comment on PQcancelErrorMessage.\n\nIt was accidentally \"Get the socket of the cancel connection.\"\nI rewrote it to \"Returns the error message most recently generated\nby an operation on the cancel connection.\"\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>", "msg_date": "Thu, 4 Jul 2024 13:46:38 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": true, "msg_subject": "Fix a comment on PQcancelErrorMessage" }, { "msg_contents": "On Thu, 4 Jul 2024 at 06:46, Yugo NAGATA <[email protected]> wrote:\n> Attached is a small patch to fix a comment on PQcancelErrorMessage.\n\nOops, copy paste mistake on my part I guess. New comment LGTM\n\n\n", "msg_date": "Thu, 4 Jul 2024 11:06:03 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix a comment on PQcancelErrorMessage" }, { "msg_contents": "On Thu, 4 Jul 2024 11:06:03 +0200\nJelte Fennema-Nio <[email protected]> wrote:\n\n> On Thu, 4 Jul 2024 at 06:46, Yugo NAGATA <[email protected]> wrote:\n> > Attached is a small patch to fix a comment on PQcancelErrorMessage.\n> \n> Oops, copy paste mistake on my part I guess. New comment LGTM\n\nThank you for your comments.\n\nI made a trivial change that fixes the line break position in keeping with\nthe surrounding comments.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <[email protected]>", "msg_date": "Thu, 4 Jul 2024 19:39:31 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix a comment on PQcancelErrorMessage" }, { "msg_contents": "On 2024-Jul-04, Yugo NAGATA wrote:\n\n> On Thu, 4 Jul 2024 11:06:03 +0200\n> Jelte Fennema-Nio <[email protected]> wrote:\n> \n> > On Thu, 4 Jul 2024 at 06:46, Yugo NAGATA <[email protected]> wrote:\n> > > Attached is a small patch to fix a comment on PQcancelErrorMessage.\n> > \n> > Oops, copy paste mistake on my part I guess. New comment LGTM\n> \n> Thank you for your comments.\n> \n> I made a trivial change that fixes the line break position in keeping with\n> the surrounding comments.\n\nUgh, mea culpa for failing to notice -- thank you, pushed.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Learn about compilers. Then everything looks like either a compiler or\na database, and now you have two problems but one of them is fun.\"\n https://twitter.com/thingskatedid/status/1456027786158776329\n\n\n", "msg_date": "Thu, 4 Jul 2024 14:00:18 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix a comment on PQcancelErrorMessage" } ]
[ { "msg_contents": "> Hi,\n> \n> Attached is a small patch to fix a comment on PQcancelErrorMessage.\n> \n> It was accidentally \"Get the socket of the cancel connection.\"\n> I rewrote it to \"Returns the error message most recently generated\n> by an operation on the cancel connection.\"\n\nGood catch. The proposed message matches the PQCancelErrorMessage()\ndocumentation and looks good.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Thu, 04 Jul 2024 14:07:47 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix a comment on PQcancelErrorMessage" } ]
[ { "msg_contents": "Hi hackers,\n\nWe got trusted extensions since version 13, and some of them are\nrelocatable:\npostgres=> select distinct name from pg_available_extension_versions\n where trusted and relocatable;\n name\n-----------------\n tsm_system_time\n pgcrypto\n dict_int\n tablefunc\n uuid-ossp\n seg\n hstore\n ltree\n intarray\n tcn\n unaccent\n btree_gist\n citext\n pg_trgm\n isn\n btree_gin\n tsm_system_rows\n cube\n fuzzystrmatch\n lo\n(20 rows)\n\nBut, when a non-superuser is trying to use ALTER EXTENSION ... SET SCHEMA,\nit always fails.\nExample:\ntest=> create schema s1;\nCREATE SCHEMA\ntest=> create schema s2;\nCREATE SCHEMA\ntest=> create extension seg with schema s1;\nCREATE EXTENSION\ntest=> alter extension seg set schema s2;\nERROR: must be owner of type s1.seg\n\nIs it an expected behaviour or oversight?\n\nAnother thing is that we allow CREATE EXTENSION ... WITH SCHEMA even if an\nunprivileged user doesn't have CREATE permissions on it.\nFor example, the following works:\ntest=> create extension seg with schema pg_catalog;\nCREATE EXTENSION\n\nBut, at the same time relocation to pg_catalog will fail with the error\nthat the user doesn't have permissions:\ntest=> create extension seg;\nCREATE EXTENSION\ntest=> alter extension seg set schema pg_catalog;\nERROR: permission denied for schema pg_catalog\n\nThis looks like inconsistent behaviour.\n\nShould it be fixed?\n\nThere are two main points:\n1. Either CREATE EXTENSION ... WITH SCHEMA should also check that the user\nhas CREATE permissions on a schema, or we need to remove this check from\nALTER EXTENSION ... SET SCHEMA.\n2. Independently of that we need to switch to a BOOTSTRAP_SUPERUSERID in\nthe AlterExtensionNamespace()\n\nWhat do you think?\n\nRegards,\n--\nAlexander Kukushkin\n\nHi hackers,We got trusted extensions since version 13, and some of them are relocatable:postgres=> select distinct name from pg_available_extension_versions where trusted and relocatable;      name       ----------------- tsm_system_time pgcrypto dict_int tablefunc uuid-ossp seg hstore ltree intarray tcn unaccent btree_gist citext pg_trgm isn btree_gin tsm_system_rows cube fuzzystrmatch lo(20 rows)But, when a non-superuser is trying to use ALTER EXTENSION ... SET SCHEMA, it always fails.Example:test=> create schema s1;CREATE SCHEMAtest=> create schema s2;CREATE SCHEMAtest=> create extension seg with schema s1;CREATE EXTENSIONtest=> alter extension seg set schema s2;ERROR:  must be owner of type s1.segIs it an expected behaviour or oversight?Another thing is that we allow CREATE EXTENSION ... WITH SCHEMA even if an unprivileged user doesn't have CREATE permissions on it.For example, the following works:test=> create extension seg with schema pg_catalog;CREATE EXTENSIONBut, at the same time relocation to pg_catalog will fail with the error that the user doesn't have permissions:test=> create extension seg;CREATE EXTENSIONtest=> alter extension seg set schema pg_catalog;ERROR:  permission denied for schema pg_catalogThis looks like inconsistent behaviour.Should it be fixed?There are two main points:1. Either CREATE EXTENSION ... WITH SCHEMA should also check that the user has CREATE permissions on a schema, or we need to remove this check from ALTER EXTENSION ... SET SCHEMA.2. 
Independently of that we need to switch to a BOOTSTRAP_SUPERUSERID in the AlterExtensionNamespace()What do you think?Regards,--Alexander Kukushkin", "msg_date": "Thu, 4 Jul 2024 09:15:46 +0200", "msg_from": "Alexander Kukushkin <[email protected]>", "msg_from_op": true, "msg_subject": "Non-superuser can't relocated its own trusted extensions" } ]
[ { "msg_contents": "Add pg_get_acl() to get the ACL for a database object\n\nThis function returns the ACL for a database object, specified by\ncatalog OID and object OID. This is useful to be able to\nretrieve the ACL associated to an object specified with a\n(class_id,objid) couple, similarly to the other functions for object\nidentification, when joined with pg_depend or pg_shdepend.\n\nOriginal idea by Álvaro Herrera.\n\nBump catalog version.\n\nAuthor: Joel Jacobson\nReviewed-by: Isaac Morland, Michael Paquier, Ranier Vilela\nDiscussion: https://postgr.es/m/[email protected]\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/4564f1cebd437d93590027c9ff46ef60bc3286ae\n\nModified Files\n--------------\ndoc/src/sgml/func.sgml | 41 +++++++++++++++++++++++++++\nsrc/backend/catalog/objectaddress.c | 48 ++++++++++++++++++++++++++++++++\nsrc/include/catalog/catversion.h | 2 +-\nsrc/include/catalog/pg_proc.dat | 5 ++++\nsrc/test/regress/expected/privileges.out | 29 +++++++++++++++++++\nsrc/test/regress/sql/privileges.sql | 6 ++++\n6 files changed, 130 insertions(+), 1 deletion(-)", "msg_date": "Thu, 04 Jul 2024 08:09:25 +0000", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql: Add pg_get_acl() to get the ACL for a database object" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> Add pg_get_acl() to get the ACL for a database object\n> This function returns the ACL for a database object, specified by\n> catalog OID and object OID.\n\nUh, why is it defined like that rather than allowing a subobject?\nThis definition is unable to fetch column-specific ACLs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2024 11:44:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add pg_get_acl() to get the ACL for a database object" }, { "msg_contents": "On Thu, Jul 4, 2024, at 17:44, Tom Lane wrote:\n> Michael Paquier <[email protected]> writes:\n>> Add pg_get_acl() to get the ACL for a database object\n>> This function returns the ACL for a database object, specified by\n>> catalog OID and object OID.\n>\n> Uh, why is it defined like that rather than allowing a subobject?\n> This definition is unable to fetch column-specific ACLs.\n\nGood point, that's surely an important missing feature,\nthat I hadn't thought about up until now.\nProbably because all object classes, except columns, don't have subobjects.\n\nI wonder if it would be motivated to provide overloads for this function,\nand perhaps even for pg_get_object_address and pg_identify_object_as_address?\n\nThat is, two param versions (class OID and object OID),\nand three param versions that in addition also take subobject ID.\n\nWhy I think this could be motivated, is since during discussion,\nsome even wanted reg* overloads, to avoid having to pass the class OID.\n\nAs a middle ground, maybe users would appreciate if they at least\ndidn't have pass in the extra 0, since it's meaningless anyway,\nmost of the times (for all classes except columns)?\n\nAnyway, that's just an idea. We still need support for subobject,\nso I had a look on how to implement it.\n\nUnfortunately, the AlterObjectOwner_internal function in alter.c,\nwhich pg_get_acl in objectaddress.c is based upon,\ndoesn't deal with subobjects.\n\nI found some code in aclchk.c on line 4452-4468 that seems useful,\nbut not sure. 
Maybe there is some other existing code that is better\nas an inspiration?\n\nI guess we need to handle the RelationRelationId separately,\nand handle all other classes using the current code in pg_get_acl()?\n\nRegards,\nJoel\n\n\n", "msg_date": "Thu, 04 Jul 2024 22:53:49 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add pg_get_acl() to get the ACL for a database object" }, { "msg_contents": "(Moving to -hackers)\n\nOn Thu, Jul 04, 2024 at 10:53:49PM +0200, Joel Jacobson wrote:\n> On Thu, Jul 4, 2024, at 17:44, Tom Lane wrote:\n>> Uh, why is it defined like that rather than allowing a subobject?\n>> This definition is unable to fetch column-specific ACLs.\n\nYes, I was wondering about that as well yesterday when looking at the\npatch, and saw that this was already a pretty good cut as it covers\nmost of the objects types and does 90% of the job, so just moved on.\n\n> I wonder if it would be motivated to provide overloads for this function,\n> and perhaps even for pg_get_object_address and pg_identify_object_as_address?\n>\n> That is, two param versions (class OID and object OID),\n> and three param versions that in addition also take subobject ID.\n\nI would still stick to only one function, with arguments coming from\nscanning pg_[sh]depend.\n\n> I found some code in aclchk.c on line 4452-4468 that seems useful,\n> but not sure. Maybe there is some other existing code that is better\n> as an inspiration?\n\nMy first feelings about that was that subobjids are only used for\npg_attribute, so if we use them as a base, it would look like:\n- Adding a new AttrNumber in ObjectProperty to track to which column\nthe subobjid should apply and its catcache number (ATTNUM for the\nattribute number for fast lookup).\n- Extend get_catalog_object_by_oid() with more data: the attribute\ncolumn for the subobjid scan stored in ObjectProperty, the subobjid\nvalue given by the caller. If get_catalog_object_by_oid()'s caller\ndefines a subobjid, use get_object_catcache_oid() on it over the main\nOID one. If it cannot be found in the cache, use a scan key based on\nthe class OID and the subobjid.\n\n> I guess we need to handle the RelationRelationId separately,\n> and handle all other classes using the current code in pg_get_acl()?\n\nBut most of that simply blows away because we track the dependency to\nthe attribute ACLs in pg_shdepend based on pg_class, not pg_attribute.\nget_attname() itself relies on ATTNUM, so perhaps that's OK to just\nadd an exception based on RelationRelationId in the code path of\npg_get_acl(). That makes me a bit uncomfortable to derive more from\nthe other routines for the object descriptions, but we've been living\nwith that in getObjectDescription() for years now, as well.\n--\nMichael", "msg_date": "Fri, 5 Jul 2024 08:18:33 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add pg_get_acl() to get the ACL for a database object" }, { "msg_contents": "On Fri, Jul 5, 2024, at 01:18, Michael Paquier wrote:\n> I would still stick to only one function, with arguments coming from\n> scanning pg_[sh]depend.\n>\n>> I found some code in aclchk.c on line 4452-4468 that seems useful,\n>> but not sure. 
Maybe there is some other existing code that is better\n>> as an inspiration?\n>\n> My first feelings about that was that subobjids are only used for\n> pg_attribute, so if we use them as a base, it would look like:\n> - Adding a new AttrNumber in ObjectProperty to track to which column\n> the subobjid should apply and its catcache number (ATTNUM for the\n> attribute number for fast lookup).\n> - Extend get_catalog_object_by_oid() with more data: the attribute\n> column for the subobjid scan stored in ObjectProperty, the subobjid\n> value given by the caller. If get_catalog_object_by_oid()'s caller\n> defines a subobjid, use get_object_catcache_oid() on it over the main\n> OID one. If it cannot be found in the cache, use a scan key based on\n> the class OID and the subobjid.\n>\n>> I guess we need to handle the RelationRelationId separately,\n>> and handle all other classes using the current code in pg_get_acl()?\n>\n> But most of that simply blows away because we track the dependency to\n> the attribute ACLs in pg_shdepend based on pg_class, not pg_attribute.\n> get_attname() itself relies on ATTNUM, so perhaps that's OK to just\n> add an exception based on RelationRelationId in the code path of\n> pg_get_acl(). That makes me a bit uncomfortable to derive more from\n> the other routines for the object descriptions, but we've been living\n> with that in getObjectDescription() for years now, as well.\n\nOK, I made an attempt to implement this, based on adapting code from\nrecordExtObjInitPriv() in aclchk.c, which retrieves ACL based on ATTNUM.\n\nThere are now two different code paths for columns and non-columns.\n\nIt sounds like a bigger refactoring in this area would be nice,\nwhich would enable cleaning up code in other functions as well,\nbut maybe that's better to do as a separate project.\n\nI've also updated func.sgml, adjusted pg_proc.dat, and added\nregression tests for column privileges.\n\nRegards,\nJoel", "msg_date": "Fri, 05 Jul 2024 10:40:39 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add pg_get_acl() to get the ACL for a database object" }, { "msg_contents": "On Fri, Jul 05, 2024 at 10:40:39AM +0200, Joel Jacobson wrote:\n> OK, I made an attempt to implement this, based on adapting code from\n> recordExtObjInitPriv() in aclchk.c, which retrieves ACL based on ATTNUM.\n> \n> There are now two different code paths for columns and non-columns.\n> \n> It sounds like a bigger refactoring in this area would be nice,\n> which would enable cleaning up code in other functions as well,\n> but maybe that's better to do as a separate project.\n\nThanks for the patch. I have been looking at it for a few hours,\neyeing a bit on the ObjectProperty parts a bit if we were to extend it\nfor sub-object IDs, and did not like the complexity this introduces,\nso I'd be OK to live with the extra handling in pg_get_acl() itself.\n\n+ /* ignore dropped columns */\n+ if (atttup->attisdropped)\n+ {\n+ ReleaseSysCache(tup);\n+ PG_RETURN_NULL();\n+ }\n\nHmm. This is an important bit and did not consider it first. That\nmakes the use of ObjectProperty less flexible because we want to look\nat the data in the pg_attribute tuple to be able to return NULL in\nthis case.\n\nI've tweaked a bit what you are proposing, simplifying the code and\nremoving the first batch of queries in the tests as these were less\ninteresting. 
How does that look?\n--\nMichael", "msg_date": "Mon, 8 Jul 2024 17:34:18 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add pg_get_acl() to get the ACL for a database object" }, { "msg_contents": "On Mon, Jul 8, 2024, at 10:34, Michael Paquier wrote:\n> Thanks for the patch. I have been looking at it for a few hours,\n> eyeing a bit on the ObjectProperty parts a bit if we were to extend it\n> for sub-object IDs, and did not like the complexity this introduces,\n> so I'd be OK to live with the extra handling in pg_get_acl() itself.\n>\n> + /* ignore dropped columns */\n> + if (atttup->attisdropped)\n> + {\n> + ReleaseSysCache(tup);\n> + PG_RETURN_NULL();\n> + }\n>\n> Hmm. This is an important bit and did not consider it first. That\n> makes the use of ObjectProperty less flexible because we want to look\n> at the data in the pg_attribute tuple to be able to return NULL in\n> this case.\n>\n> I've tweaked a bit what you are proposing, simplifying the code and\n> removing the first batch of queries in the tests as these were less\n> interesting. How does that look?\n\nThanks, nice simplifications.\nI agree the tests you removed are not that interesting.\n\nLooks good to me.\n\nPatch didn't apply to HEAD nor on top of any of the previous commits either,\nbut I couldn't figure out why based on the .rej files, strange.\nI copy/pasted the parts from the patch to test it. Let me know if you need a\nrebased version of it, in case you will need to do the same, to save some work.\n\nAlso noted the below in your last commit:\n\n a6417078c414 has introduced as project policy that new features\n committed during the development cycle should use new OIDs in the\n [8000,9999] range.\n\n 4564f1cebd43 did not respect that rule, so let's renumber pg_get_acl()\n to use an OID in the correct range.\n\nThanks for the info!\nWill make sure to use the `src/include/catalog/renumber_oids.pl` tool\nfor future patches.\n\nRegards,\nJoel\n\n\n", "msg_date": "Mon, 08 Jul 2024 11:55:28 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add pg_get_acl() to get the ACL for a database object" }, { "msg_contents": "On Mon, Jul 08, 2024 at 11:55:28AM +0200, Joel Jacobson wrote:\n> Patch didn't apply to HEAD nor on top of any of the previous commits either,\n> but I couldn't figure out why based on the .rej files, strange.\n> I copy/pasted the parts from the patch to test it. Let me know if you need a\n> rebased version of it, in case you will need to do the same, to save some work.\n\nStrange. I have just downloaded v2 again. `patch -p1` and `git am`\nare both working here.\n\n> Also noted the below in your last commit:\n> \n> a6417078c414 has introduced as project policy that new features\n> committed during the development cycle should use new OIDs in the\n> [8000,9999] range.\n> \n> 4564f1cebd43 did not respect that rule, so let's renumber pg_get_acl()\n> to use an OID in the correct range.\n> \n> Thanks for the info!\n> Will make sure to use the `src/include/catalog/renumber_oids.pl` tool\n> for future patches.\n\nNo problem. 
This is the kind of tweaks that are really easy to\nforget.\n--\nMichael", "msg_date": "Tue, 9 Jul 2024 08:52:25 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add pg_get_acl() to get the ACL for a database object" }, { "msg_contents": "On Mon, Jul 08, 2024 at 11:55:28AM +0200, Joel Jacobson wrote:\n> Thanks, nice simplifications.\n> I agree the tests you removed are not that interesting.\n> \n> Looks good to me.\n\nOn second review, I have spotted a use-after-free for the case of an\nattribute ACL in both v1 and v2, as we would finish by releasing the\nsyscache entry before returning the ACL datum. For builds with the\nflags -DRELCACHE_FORCE_RELEASE -DCATCACHE_FORCE_RELEASE, like the\nbuildfarm member prion, this would cause lookup failures in the\nfunction when building the result because the datum is borked.\n\nAt the end, I have settled down on using SearchSysCacheCopyAttNum().\nIt requires a heap_freetuple(), which we already don't care much about\nin the existing callers of get_catalog_object_by_oid() because these\nare short-lived, with bonus points because this routine uses\nSearchSysCacheAttNum() that has a check for attisdropped, making the \ncode of pg_get_acl() even simpler.\n\nThe usual trick I use to check a fix with this configuration is to use\ntwo builds because initdb is insanely slow if caches are released: one\nwith the flags and one without them. I have used the build without\nthe flags to initialize a cluster with the objects and their ACLs.\nThen, the second build is used only to execute all the scenarios of\npg_get_acl(), which are much cheaper to execute.\n--\nMichael", "msg_date": "Wed, 10 Jul 2024 10:25:49 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add pg_get_acl() to get the ACL for a database object" } ]
[ { "msg_contents": "While I was trying to install PostgreSQL from the git repository to start\ncontributing I faced this issue. When I try to type ./configure it gives me\nthis error\n\nchecking for a thread-safe mkdir -p... /usr/bin/mkdir -p\nchecking for bison... /c/GnuWin32/bin/bison\nconfigure: using bison (GNU Bison) 2.4.1\nchecking for flex... configure: error:\n*** The installed version of Flex, /c/GnuWin32/bin/flex, is too old to use\nwith PostgreSQL.\n*** Flex version 2.5.35 or later is required, but this is\nC:\\GnuWin32\\bin\\flex.exe version 2.5.4.\n\nLook at the last two lines, the error says that the installed version of\nflex is too old and is 2.4 which is correct and not too old and should be\nvalid but actually I can't proceed beyond this point. And I double checked\nthe version of flex\n\n$ flex --version\nC:\\GnuWin32\\bin\\flex.exe version 2.5.4\n\nand made sure that it is properly included in PATH\n\n$ which flex\n/c/GnuWin32/bin/flex\n\nWhile I was trying to install PostgreSQL from the git repository to start contributing I faced this issue. When I try to type ./configure it gives me this error checking for a thread-safe mkdir -p... /usr/bin/mkdir -pchecking for bison... /c/GnuWin32/bin/bisonconfigure: using bison (GNU Bison) 2.4.1checking for flex... configure: error: *** The installed version of Flex, /c/GnuWin32/bin/flex, is too old to use with PostgreSQL.*** Flex version 2.5.35 or later is required, but this is C:\\GnuWin32\\bin\\flex.exe version 2.5.4.Look at the last two lines, the error says that the installed version of flex is too old and is 2.4 which is correct and not too old and should be valid but actually I can't proceed beyond this point. And I double checked the version of flex $ flex --versionC:\\GnuWin32\\bin\\flex.exe version 2.5.4and made sure that it is properly included in PATH$ which flex/c/GnuWin32/bin/flex", "msg_date": "Thu, 4 Jul 2024 13:16:17 +0300", "msg_from": "Mohab Yaser <[email protected]>", "msg_from_op": true, "msg_subject": "Problem while installing PostgreSQL using make" }, { "msg_contents": "> On 4 Jul 2024, at 12:16, Mohab Yaser <[email protected]> wrote:\n\n> *** Flex version 2.5.35 or later is required, but this is C:\\GnuWin32\\bin\\flex.exe version 2.5.4.\n\n> $ flex --version\n> C:\\GnuWin32\\bin\\flex.exe version 2.5.4\n\nYou have all the information you need right there, your Flex is 31 minor\nreleases too old.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 4 Jul 2024 12:19:57 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem while installing PostgreSQL using make" }, { "msg_contents": "Hi,\n\n> While I was trying to install PostgreSQL from the git repository to start contributing I faced this issue. When I try to type ./configure it gives me this error\n>\n> checking for a thread-safe mkdir -p... /usr/bin/mkdir -p\n> checking for bison... /c/GnuWin32/bin/bison\n> configure: using bison (GNU Bison) 2.4.1\n> checking for flex... configure: error:\n> *** The installed version of Flex, /c/GnuWin32/bin/flex, is too old to use with PostgreSQL.\n> *** Flex version 2.5.35 or later is required, but this is C:\\GnuWin32\\bin\\flex.exe version 2.5.4.\n>\n> Look at the last two lines, the error says that the installed version of flex is too old and is 2.4 which is correct and not too old and should be valid but actually I can't proceed beyond this point. 
And I double checked the version of flex\n>\n> $ flex --version\n> C:\\GnuWin32\\bin\\flex.exe version 2.5.4\n>\n> and made sure that it is properly included in PATH\n>\n> $ which flex\n> /c/GnuWin32/bin/flex\n\nFlex 2.5.4 is ancient. Version 2.5.39 was released in 2020 and I\ndidn't look further to figure out the exact release year of 2.5.4\n\nYou need something like flex 2.6.4 and bison >= 2.3. That's what I use.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 4 Jul 2024 13:20:50 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem while installing PostgreSQL using make" }, { "msg_contents": "can you send me a link to download this version on windows as I didn't find\nanything other than the one I already have downloaded\n\nOn Thu, Jul 4, 2024 at 1:21 PM Aleksander Alekseev <[email protected]>\nwrote:\n\n> Hi,\n>\n> > While I was trying to install PostgreSQL from the git repository to\n> start contributing I faced this issue. When I try to type ./configure it\n> gives me this error\n> >\n> > checking for a thread-safe mkdir -p... /usr/bin/mkdir -p\n> > checking for bison... /c/GnuWin32/bin/bison\n> > configure: using bison (GNU Bison) 2.4.1\n> > checking for flex... configure: error:\n> > *** The installed version of Flex, /c/GnuWin32/bin/flex, is too old to\n> use with PostgreSQL.\n> > *** Flex version 2.5.35 or later is required, but this is\n> C:\\GnuWin32\\bin\\flex.exe version 2.5.4.\n> >\n> > Look at the last two lines, the error says that the installed version of\n> flex is too old and is 2.4 which is correct and not too old and should be\n> valid but actually I can't proceed beyond this point. And I double checked\n> the version of flex\n> >\n> > $ flex --version\n> > C:\\GnuWin32\\bin\\flex.exe version 2.5.4\n> >\n> > and made sure that it is properly included in PATH\n> >\n> > $ which flex\n> > /c/GnuWin32/bin/flex\n>\n> Flex 2.5.4 is ancient. Version 2.5.39 was released in 2020 and I\n> didn't look further to figure out the exact release year of 2.5.4\n>\n> You need something like flex 2.6.4 and bison >= 2.3. That's what I use.\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\ncan you send me a link to download this version on windows as I didn't find anything other than the one I already have downloadedOn Thu, Jul 4, 2024 at 1:21 PM Aleksander Alekseev <[email protected]> wrote:Hi,\n\n> While I was trying to install PostgreSQL from the git repository to start contributing I faced this issue. When I try to type ./configure it gives me this error\n>\n> checking for a thread-safe mkdir -p... /usr/bin/mkdir -p\n> checking for bison... /c/GnuWin32/bin/bison\n> configure: using bison (GNU Bison) 2.4.1\n> checking for flex... configure: error:\n> *** The installed version of Flex, /c/GnuWin32/bin/flex, is too old to use with PostgreSQL.\n> *** Flex version 2.5.35 or later is required, but this is C:\\GnuWin32\\bin\\flex.exe version 2.5.4.\n>\n> Look at the last two lines, the error says that the installed version of flex is too old and is 2.4 which is correct and not too old and should be valid but actually I can't proceed beyond this point. And I double checked the version of flex\n>\n> $ flex --version\n> C:\\GnuWin32\\bin\\flex.exe version 2.5.4\n>\n> and made sure that it is properly included in PATH\n>\n> $ which flex\n> /c/GnuWin32/bin/flex\n\nFlex 2.5.4 is ancient. 
Version 2.5.39 was released in 2020 and I\ndidn't look further to figure out the exact release year of 2.5.4\n\nYou need something like flex 2.6.4 and bison >= 2.3. That's what I use.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 4 Jul 2024 13:22:22 +0300", "msg_from": "Mohab Yaser <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problem while installing PostgreSQL using make" }, { "msg_contents": "Hi,\n\n>> Flex 2.5.4 is ancient. Version 2.5.39 was released in 2020 and I\n>> didn't look further to figure out the exact release year of 2.5.4\n>>\n>> You need something like flex 2.6.4 and bison >= 2.3. That's what I use.\n>\n> can you send me a link to download this version on windows as I didn't find anything other than the one I already have downloaded\n\nWe don't use top posing in this mailing list [1].\n\nSorry, I only have Linux and MacOS. Here are the scripts I use [2].\nMaybe someone who develops on Windows will answer your questions.\nHowever IMO your learning curve will be less steep with a Linux\nvirtual machine.\n\n[1]: https://wiki.postgresql.org/wiki/Mailing_Lists\n[2]: https://github.com/afiskon/pgscripts/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 4 Jul 2024 13:27:51 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem while installing PostgreSQL using make" }, { "msg_contents": "> On 4 Jul 2024, at 12:27, Aleksander Alekseev <[email protected]> wrote:\n> \n> Hi,\n> \n>>> Flex 2.5.4 is ancient. Version 2.5.39 was released in 2020 and I\n>>> didn't look further to figure out the exact release year of 2.5.4\n>>> \n>>> You need something like flex 2.6.4 and bison >= 2.3. That's what I use.\n>> \n>> can you send me a link to download this version on windows as I didn't find anything other than the one I already have downloaded\n> \n> We don't use top posing in this mailing list [1].\n> \n> Sorry, I only have Linux and MacOS. Here are the scripts I use [2].\n> Maybe someone who develops on Windows will answer your questions.\n> However IMO your learning curve will be less steep with a Linux\n> virtual machine.\n\nFlex/Bison are only required when building from a Git tree, downloading a\nsource archive and building from there might be easier to get started.\n\n\thttps://www.postgresql.org/ftp/source/\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 4 Jul 2024 12:32:17 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem while installing PostgreSQL using make" }, { "msg_contents": "Hi,\n\n> > We don't use top posing in this mailing list [1].\n> >\n> > Sorry, I only have Linux and MacOS. 
Here are the scripts I use [2].\n> > Maybe someone who develops on Windows will answer your questions.\n> > However IMO your learning curve will be less steep with a Linux\n> > virtual machine.\n>\n> Flex/Bison are only required when building from a Git tree, downloading a\n> source archive and building from there might be easier to get started.\n>\n> https://www.postgresql.org/ftp/source/\n\nIt could work but personally I wouldn't recommend this path for\nsomeone who wants to contribute.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 4 Jul 2024 13:34:21 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem while installing PostgreSQL using make" }, { "msg_contents": "Hi,\n\n>> > Flex/Bison are only required when building from a Git tree, downloading a\n>> > source archive and building from there might be easier to get started.\n>> >\n>> > https://www.postgresql.org/ftp/source/\n>>\n>> It could work but personally I wouldn't recommend this path for\n>> someone who wants to contribute.\n>\n> Why that?\n\nI believe you wanted to reply to the mailing list, not to me directly.\nPlease use the \"Reply to All\" button.\n\nBecause formatting/rebasing patches is most convenient when you have\ngit and `git format-path`. And in my humble experience this is\nsomething done often.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 4 Jul 2024 13:38:44 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem while installing PostgreSQL using make" }, { "msg_contents": "> On 4 Jul 2024, at 12:34, Aleksander Alekseev <[email protected]> wrote:\n\n>>> Sorry, I only have Linux and MacOS. Here are the scripts I use [2].\n>>> Maybe someone who develops on Windows will answer your questions.\n>>> However IMO your learning curve will be less steep with a Linux\n>>> virtual machine.\n>> \n>> Flex/Bison are only required when building from a Git tree, downloading a\n>> source archive and building from there might be easier to get started.\n>> \n>> https://www.postgresql.org/ftp/source/\n> \n> It could work but personally I wouldn't recommend this path for\n> someone who wants to contribute.\n\nAbsolutely, I don't disagree that, but there is value in not getting stuck\ndirectly as well.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 4 Jul 2024 16:30:33 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem while installing PostgreSQL using make" }, { "msg_contents": "On 2024-Jul-04, Mohab Yaser wrote:\n\n> can you send me a link to download this version on windows as I didn't find\n> anything other than the one I already have downloaded\n\nWell, \nhttps://packages.msys2.org/package/flex\nhas 2.6.4. I don't know what GnuWin32 is, but it looks abandoned.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Nunca se desea ardientemente lo que solo se desea por razón\" (F. 
Alexandre)\n\n\n", "msg_date": "Thu, 4 Jul 2024 17:19:22 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem while installing PostgreSQL using make" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> Flex/Bison are only required when building from a Git tree, downloading a\n> source archive and building from there might be easier to get started.\n> \thttps://www.postgresql.org/ftp/source/\n\nThat's no longer true I think - as of v17 the source tarballs won't\ncontain any generated files, so you do need flex and bison if you want\nto do any sort of development. I am mildly astonished to hear that\ntoo-old versions of those tools aren't extinct in the wild, though.\n\nIn any case I agree with the recommendation to install and use git\nrather than downloading source tarballs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jul 2024 11:37:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem while installing PostgreSQL using make" }, { "msg_contents": "On 2024-07-04 Th 6:22 AM, Mohab Yaser wrote:\n> can you send me a link to download this version on windows as I didn't \n> find anything other than the one I already have downloaded\n>\n> On Thu, Jul 4, 2024 at 1:21 PM Aleksander Alekseev \n> <[email protected]> wrote:\n>\n> Hi,\n>\n> > While I was trying to install PostgreSQL from the git repository\n> to start contributing I faced this issue. When I try to type\n> ./configure it gives me this error\n> >\n> > checking for a thread-safe mkdir -p... /usr/bin/mkdir -p\n> > checking for bison... /c/GnuWin32/bin/bison\n> > configure: using bison (GNU Bison) 2.4.1\n> > checking for flex... configure: error:\n> > *** The installed version of Flex, /c/GnuWin32/bin/flex, is too\n> old to use with PostgreSQL.\n> > *** Flex version 2.5.35 or later is required, but this is\n> C:\\GnuWin32\\bin\\flex.exe version 2.5.4.\n> >\n> > Look at the last two lines, the error says that the installed\n> version of flex is too old and is 2.4 which is correct and not too\n> old and should be valid but actually I can't proceed beyond this\n> point. And I double checked the version of flex\n> >\n> > $ flex --version\n> > C:\\GnuWin32\\bin\\flex.exe version 2.5.4\n> >\n> > and made sure that it is properly included in PATH\n> >\n> > $ which flex\n> > /c/GnuWin32/bin/flex\n>\n> Flex 2.5.4 is ancient. Version 2.5.39 was released in 2020 and I\n> didn't look further to figure out the exact release year of 2.5.4\n>\n> You need something like flex 2.6.4 and bison >= 2.3. That's what I\n> use.\n>\n\nI assume this configure script is running under Msys2? Just install its \nflex and bison (and remove these ancient versions):\n\n\n    pacman -S bison flex\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-07-04 Th 6:22 AM, Mohab Yaser\n wrote:\n\n\n\ncan you send me a link to download this version on\n windows as I didn't find anything other than the one I\n already have downloaded\n\n\nOn Thu, Jul 4, 2024 at 1:21 PM\n Aleksander Alekseev <[email protected]>\n wrote:\n\nHi,\n\n > While I was trying to install PostgreSQL from the git\n repository to start contributing I faced this issue. When I\n try to type ./configure it gives me this error\n >\n > checking for a thread-safe mkdir -p... /usr/bin/mkdir -p\n > checking for bison... /c/GnuWin32/bin/bison\n > configure: using bison (GNU Bison) 2.4.1\n > checking for flex... 
configure: error:\n > *** The installed version of Flex, /c/GnuWin32/bin/flex,\n is too old to use with PostgreSQL.\n > *** Flex version 2.5.35 or later is required, but this is\n C:\\GnuWin32\\bin\\flex.exe version 2.5.4.\n >\n > Look at the last two lines, the error says that the\n installed version of flex is too old and is 2.4 which is\n correct and not too old and should be valid but actually I\n can't proceed beyond this point. And I double checked the\n version of flex\n >\n > $ flex --version\n > C:\\GnuWin32\\bin\\flex.exe version 2.5.4\n >\n > and made sure that it is properly included in PATH\n >\n > $ which flex\n > /c/GnuWin32/bin/flex\n\n Flex 2.5.4 is ancient. Version 2.5.39 was released in 2020 and\n I\n didn't look further to figure out the exact release year of\n 2.5.4\n\n You need something like flex 2.6.4 and bison >= 2.3. That's\n what I use.\n\n\n\n\n\nI assume this configure script is running under Msys2? Just\n install its flex and bison (and remove these ancient versions):\n\n\n   pacman -S bison flex\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Fri, 5 Jul 2024 08:53:40 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem while installing PostgreSQL using make" } ]
[ { "msg_contents": "Hi,\n\nWith the current file_fdw, if even one line of data conversion fails, \nthe contents of the file cannot be referenced at all:\n\n =# \\! cat data/test.data\n 1,a\n 2,b\n a,c\n =# create foreign table f_fdw_test_1 (i int, t text) server f_fdw \noptions (filename 'test.data', format 'csv');\n CREATE FOREIGN TABLE\n\n =# table f_fdw_test_1;\n ERROR: invalid input syntax for type integer: \"a\"\n CONTEXT: COPY f_fdw_test, line 3, column i: \"a\"\n\nSince we'll support ON_ERROR option which tolerates data conversion \nerrors in COPY FROM and LOG_VERBOSITY option at v17[1], how about \nsupporting them on file_fdw?\n\nThis idea comes from Fujii-san[2], and I think it'd be useful when \nreading a bit dirty data.\n\nAttached PoC patch works like below:\n\n =# create foreign table f_fdw_test_2 (i int, t text) server f_fdw \noptions (filename 'test.data', format 'csv', on_error 'ignore');\n CREATE FOREIGN TABLE\n\n =# table f_fdw_test_2;\n NOTICE: 1 row was skipped due to data type incompatibility\n i | t\n ---+---\n 1 | a\n 2 | b\n (2 rows)\n\n\n =# create foreign table f_fdw_test_3 (i int, t text) server f_fdw \noptions (filename 'test.data', format 'csv', on_error 'ignore', \nlog_verbosity 'verbose');\nCREATE FOREIGN TABLE\n\n =# table f_fdw_test_3 ;\n NOTICE: skipping row due to data type incompatibility at line 3 for \ncolumn i: \"a\"\n NOTICE: 1 row was skipped due to data type incompatibility\n i | t\n ---+---\n 1 | a\n 2 | b\n (2 rows)\n\n\nI'm going to continue developing the patch(e.g. add doc, measure \nperformance degradation) when people also think this feature is worth \nadding.\n\n\nWhat do you think?\n\n\n[1] https://www.postgresql.org/docs/devel/sql-copy.html\n[2] https://x.com/fujii_masao/status/1808178032219509041\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation", "msg_date": "Fri, 05 Jul 2024 00:27:30 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Add on_error and log_verbosity options to file_fdw" }, { "msg_contents": "On 2024-07-05 00:27, torikoshia wrote:\n> Hi,\n> \n> With the current file_fdw, if even one line of data conversion fails,\n> the contents of the file cannot be referenced at all:\n> \n> =# \\! 
cat data/test.data\n> 1,a\n> 2,b\n> a,c\n> =# create foreign table f_fdw_test_1 (i int, t text) server f_fdw\n> options (filename 'test.data', format 'csv');\n> CREATE FOREIGN TABLE\n> \n> =# table f_fdw_test_1;\n> ERROR: invalid input syntax for type integer: \"a\"\n> CONTEXT: COPY f_fdw_test, line 3, column i: \"a\"\n> \n> Since we'll support ON_ERROR option which tolerates data conversion\n> errors in COPY FROM and LOG_VERBOSITY option at v17[1], how about\n> supporting them on file_fdw?\n> \n> This idea comes from Fujii-san[2], and I think it'd be useful when\n> reading a bit dirty data.\n> \n> Attached PoC patch works like below:\n> \n> =# create foreign table f_fdw_test_2 (i int, t text) server f_fdw\n> options (filename 'test.data', format 'csv', on_error 'ignore');\n> CREATE FOREIGN TABLE\n> \n> =# table f_fdw_test_2;\n> NOTICE: 1 row was skipped due to data type incompatibility\n> i | t\n> ---+---\n> 1 | a\n> 2 | b\n> (2 rows)\n> \n> \n> =# create foreign table f_fdw_test_3 (i int, t text) server f_fdw\n> options (filename 'test.data', format 'csv', on_error 'ignore',\n> log_verbosity 'verbose');\n> CREATE FOREIGN TABLE\n> \n> =# table f_fdw_test_3 ;\n> NOTICE: skipping row due to data type incompatibility at line 3 for\n> column i: \"a\"\n> NOTICE: 1 row was skipped due to data type incompatibility\n> i | t\n> ---+---\n> 1 | a\n> 2 | b\n> (2 rows)\n> \n> \n> I'm going to continue developing the patch(e.g. add doc, measure\n> performance degradation) when people also think this feature is worth\n> adding.\n> \n> \n> What do you think?\n> \n> \n> [1] https://www.postgresql.org/docs/devel/sql-copy.html\n> [2] https://x.com/fujii_masao/status/1808178032219509041\n\nUpdate the patch since v1 patch caused compiler warning.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation", "msg_date": "Fri, 19 Jul 2024 10:37:47 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add on_error and log_verbosity options to file_fdw" }, { "msg_contents": "Hi,\n\nOn Thu, Jul 18, 2024 at 6:38 PM torikoshia <[email protected]> wrote:\n>\n> On 2024-07-05 00:27, torikoshia wrote:\n> > Hi,\n> >\n> > With the current file_fdw, if even one line of data conversion fails,\n> > the contents of the file cannot be referenced at all:\n> >\n> > =# \\! cat data/test.data\n> > 1,a\n> > 2,b\n> > a,c\n> > =# create foreign table f_fdw_test_1 (i int, t text) server f_fdw\n> > options (filename 'test.data', format 'csv');\n> > CREATE FOREIGN TABLE\n> >\n> > =# table f_fdw_test_1;\n> > ERROR: invalid input syntax for type integer: \"a\"\n> > CONTEXT: COPY f_fdw_test, line 3, column i: \"a\"\n> >\n> > Since we'll support ON_ERROR option which tolerates data conversion\n> > errors in COPY FROM and LOG_VERBOSITY option at v17[1], how about\n> > supporting them on file_fdw?\n\n+1\n\n> >\n> > This idea comes from Fujii-san[2], and I think it'd be useful when\n> > reading a bit dirty data.\n> >\n> > Attached PoC patch works like below:\n> >\n> > =# create foreign table f_fdw_test_2 (i int, t text) server f_fdw\n> > options (filename 'test.data', format 'csv', on_error 'ignore');\n> > CREATE FOREIGN TABLE\n> >\n> > =# table f_fdw_test_2;\n> > NOTICE: 1 row was skipped due to data type incompatibility\n> > i | t\n> > ---+---\n> > 1 | a\n> > 2 | b\n> > (2 rows)\n\nI'm slightly concerned that users might not want to see the NOTICE\nmessage for every scan. Unlike COPY FROM, scanning a file via file_fdw\ncould be frequent. 
An alternative idea of place to write the\ninformation of the number of malformed rows would be the EXPLAIN\ncommand as follow:\n\n QUERY PLAN\n----------------------------------------------------------------\n Foreign Scan on public.test (cost=0.00..1.10 rows=1 width=12)\n Output: a, b, c\n Foreign File: test.csv\n Foreign File Size: 12 b\n Skipped Rows: 10\n\n> >\n> >\n> > =# create foreign table f_fdw_test_3 (i int, t text) server f_fdw\n> > options (filename 'test.data', format 'csv', on_error 'ignore',\n> > log_verbosity 'verbose');\n> > CREATE FOREIGN TABLE\n> >\n> > =# table f_fdw_test_3 ;\n> > NOTICE: skipping row due to data type incompatibility at line 3 for\n> > column i: \"a\"\n> > NOTICE: 1 row was skipped due to data type incompatibility\n> > i | t\n> > ---+---\n> > 1 | a\n> > 2 | b\n> > (2 rows)\n\nIIUC we have to execute ALTER FOREIGN TABLE to change the\nlog_verbosity value and which requires to be the owner. Which seems\nnot to be user-friendly.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 22 Jul 2024 15:07:46 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add on_error and log_verbosity options to file_fdw" }, { "msg_contents": "On Mon, Jul 22, 2024 at 03:07:46PM -0700, Masahiko Sawada wrote:\n> I'm slightly concerned that users might not want to see the NOTICE\n> message for every scan. Unlike COPY FROM, scanning a file via file_fdw\n> could be frequent. An alternative idea of place to write the\n> information of the number of malformed rows would be the EXPLAIN\n> command as follow:\n\nYeah, I also have some concerns regarding the noise that this could\nproduce if called on a foreign table on a regular basis. The verbose\nmode is disabled by default so I don't see why we should not allow it\nif the relation owner wants to show it.\n\nPerhaps we should first do a silence mode for log_verbosity to skip\nthe NOTICE produced at the end of the COPY FROM summarizing the whole?\nIt would be confusing to have different defaults between COPY and\nfile_fdw, but having the option to silence that completely is also\nappealing from the user point of view.\n\n> QUERY PLAN\n> ----------------------------------------------------------------\n> Foreign Scan on public.test (cost=0.00..1.10 rows=1 width=12)\n> Output: a, b, c\n> Foreign File: test.csv\n> Foreign File Size: 12 b\n> Skipped Rows: 10\n\nInteresting idea linked to the idea of pushing the error state to\nsomething else than the logs. Sounds like a separate feature.\n\n> IIUC we have to execute ALTER FOREIGN TABLE to change the\n> log_verbosity value and which requires to be the owner. Which seems\n> not to be user-friendly.\n\nI am not sure about allowing scans to force an option to be a\ndifferent thing at runtime vs what's been defined in the relation\nitself with CREATE/ALTER.\n--\nMichael", "msg_date": "Tue, 23 Jul 2024 08:57:37 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add on_error and log_verbosity options to file_fdw" }, { "msg_contents": "On 2024-07-23 08:57, Michael Paquier wrote:\n> On Mon, Jul 22, 2024 at 03:07:46PM -0700, Masahiko Sawada wrote:\n>> I'm slightly concerned that users might not want to see the NOTICE\n>> message for every scan. 
Unlike COPY FROM, scanning a file via file_fdw\n>> could be frequent.\n\nAgreed.\n\n> Yeah, I also have some concerns regarding the noise that this could\n> produce if called on a foreign table on a regular basis. The verbose\n> mode is disabled by default so I don't see why we should not allow it\n> if the relation owner wants to show it.\n> \n> Perhaps we should first do a silence mode for log_verbosity to skip\n> the NOTICE produced at the end of the COPY FROM summarizing the whole?\n\nI like this idea.\nIf there are no objections, I'm going to make a patch for this.\n\n> It would be confusing to have different defaults between COPY and\n> file_fdw, but having the option to silence that completely is also\n> appealing from the user point of view.\n\nI'm not sure we should change the defaults.\nIf the default of file_fdw is silence mode, I am a little concerned that \nthere may be cases where people think they have no errors, but in fact \nthey have.\n\n>> QUERY PLAN\n>> ----------------------------------------------------------------\n>> Foreign Scan on public.test (cost=0.00..1.10 rows=1 width=12)\n>> Output: a, b, c\n>> Foreign File: test.csv\n>> Foreign File Size: 12 b\n>> Skipped Rows: 10\n> \n> Interesting idea linked to the idea of pushing the error state to\n> something else than the logs. Sounds like a separate feature.\n\n+1\n\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n", "msg_date": "Wed, 24 Jul 2024 19:43:37 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add on_error and log_verbosity options to file_fdw" }, { "msg_contents": "On 2024-07-24 19:43, torikoshia wrote:\n> On 2024-07-23 08:57, Michael Paquier wrote:\n>> On Mon, Jul 22, 2024 at 03:07:46PM -0700, Masahiko Sawada wrote:\n>>> I'm slightly concerned that users might not want to see the NOTICE\n>>> message for every scan. Unlike COPY FROM, scanning a file via \n>>> file_fdw\n>>> could be frequent.\n> \n> Agreed.\n> \n>> Yeah, I also have some concerns regarding the noise that this could\n>> produce if called on a foreign table on a regular basis. The verbose\n>> mode is disabled by default so I don't see why we should not allow it\n>> if the relation owner wants to show it.\n>> \n>> Perhaps we should first do a silence mode for log_verbosity to skip\n>> the NOTICE produced at the end of the COPY FROM summarizing the whole?\n> \n> I like this idea.\n> If there are no objections, I'm going to make a patch for this.\n\nAttached patches.\n0001 adds new option 'silent' to log_verbosity and 0002 adds on_error \nand log_verbosity options to file_fdw.\n\n\n> I'm going to continue developing the patch(e.g. 
add doc, measure\nperformance degradation) when people also think this feature is worth \nadding.\n\nHere is a quick performance test result.\n\nI loaded 1,000,000 rows using pgbench_accounts to a file and counted the \nnumber of rows using file_fdw on different conditions and compared the \nexecution times on my laptop.\n\nThe changed conditions are below:\n- source code: HEAD/patch applied\n- data: no error data/all row occur data conversion error at the 1st \ncolumn\n- file_fdw options: on_error=stop/on_error=ignore\n\nThere seems no significant difference in performance between HEAD and \nthe patch applied with on_error option specified as either ignore/stop \nwhen data has no error.\nOTOH when all rows occur data conversion error, it is significantly \nfaster than other cases:\n\n# HAED(e950fe58bd0)\n## data:no error\n\n=# create foreign table t1 (a int, b int, c int, t text) server f_fdw \noptions (filename 'pgb_ac', format 'csv');\n=# select count(*) from t1;\n\n1567.569 ms\n1675.112 ms\n1555.782 ms\n1547.676 ms\n1660.221 ms\n\n# patch applied\n## data:no error, on_error:stop\n\n=# create foreign table t1 (a int, b int, c int, t text) server f_fdw \noptions (filename 'pgb_ac', format 'csv', on_error 'stop');\n=# select count(*) from t1;\n\n1580.656 ms\n1623.784 ms\n1596.947 ms\n1652.307 ms\n1613.607 ms\n\n## data:no error, on_error:ignore\n\n=# create foreign table t1 (a int, b int, c int, t text) server f_fdw \noptions (filename 'pgb_ac', format 'csv', on_error 'ignore');\n=# select count(*) from t1;\n\n1575.718 ms\n1597.464 ms\n1596.540 ms\n1665.818 ms\n1595.453 ms\n\n#### data:all rows contain error, on_error:ignore\n\n=# create foreign table t1 (a int, b int, c int, t text) server f_fdw \noptions (filename 'pgb_ac', format 'csv', on_error 'ignore');\n=# select count(*) from t1;\n\n914.537 ms\n907.506 ms\n912.768 ms\n913.769 ms\n914.327 ms\n\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation", "msg_date": "Thu, 08 Aug 2024 16:36:02 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add on_error and log_verbosity options to file_fdw" }, { "msg_contents": "\n\nOn 2024/08/08 16:36, torikoshia wrote:\n> Attached patches.\n> 0001 adds new option 'silent' to log_verbosity and 0002 adds on_error and log_verbosity options to file_fdw.\n\nThanks for the patches!\n\nHere are the review comments for 0001 patch.\n\n+ <literal>silent</literal> excludes verbose messages.\n\nThis should clarify that in silent mode, not only verbose messages but also\ndefault ones are suppressed?\n\n+\t\tcstate->opts.log_verbosity != COPY_LOG_VERBOSITY_SILENT)\n\nI think using \"cstate->opts.log_verbosity >= COPY_LOG_VERBOSITY_DEFAULT\" instead\nmight improve readability.\n\n-\tCOPY_LOG_VERBOSITY_DEFAULT = 0, /* logs no additional messages, default */\n-\tCOPY_LOG_VERBOSITY_VERBOSE, /* logs additional messages */\n+\tCOPY_LOG_VERBOSITY_SILENT = -1,\t/* logs none */\n+\tCOPY_LOG_VERBOSITY_DEFAULT = 0,\t/* logs no additional messages, default */\n+\tCOPY_LOG_VERBOSITY_VERBOSE,\t/* logs additional messages */\n\nWhy do we need to assign specific numbers like -1 or 0 in this enum definition?\n\n\nHere are the review comments for 0002 patch.\n\n+\t\t\tpgstat_progress_update_param(PROGRESS_COPY_TUPLES_SKIPPED,\n+\t\t\t\t\t\t\t\t\t\t ++skipped);\n\nThe skipped tuple count isn't accurate because fileIterateForeignScan() resets\n\"skipped\" to 0.\n\n+\t\tif (cstate->opts.on_error != COPY_ON_ERROR_STOP 
&&\n+\t\t\tcstate->escontext->error_occurred)\n+\t\t{\n+\t\t\t/*\n+\t\t\t * Soft error occurred, skip this tuple and deal with error\n+\t\t\t * information according to ON_ERROR.\n+\t\t\t */\n+\t\t\tif (cstate->opts.on_error == COPY_ON_ERROR_IGNORE)\n\nIf COPY_ON_ERROR_IGNORE indicates tuple skipping, shouldn’t we not only reset\nerror_occurred but also call \"pgstat_progress_update_param\" and continue within\nthis block?\n\n+\tfor(;;)\n+\t{\n\nUsing \"goto\" here might improve readability instead of using a \"for\" loop.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n", "msg_date": "Wed, 11 Sep 2024 23:50:25 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add on_error and log_verbosity options to file_fdw" }, { "msg_contents": "On 2024-09-11 23:50, Fujii Masao wrote:\n\nThanks for your comments!\n\n> Here are the review comments for 0001 patch.\n> \n> + <literal>silent</literal> excludes verbose messages.\n> \n> This should clarify that in silent mode, not only verbose messages but \n> also\n> default ones are suppressed?\n\nAgreed.\n\n> + cstate->opts.log_verbosity != \n> COPY_LOG_VERBOSITY_SILENT)\n> \n> I think using \"cstate->opts.log_verbosity >= \n> COPY_LOG_VERBOSITY_DEFAULT\" instead\n> might improve readability.\n\nAgreed.\nI've also modified a similar code in fileEndForeignScan() in 0002 patch.\n\n> - COPY_LOG_VERBOSITY_DEFAULT = 0, /* logs no additional messages, \n> default */\n> - COPY_LOG_VERBOSITY_VERBOSE, /* logs additional messages */\n> + COPY_LOG_VERBOSITY_SILENT = -1, /* logs none */\n> + COPY_LOG_VERBOSITY_DEFAULT = 0, /* logs no additional messages, \n> default */\n> + COPY_LOG_VERBOSITY_VERBOSE, /* logs additional messages */\n> \n> Why do we need to assign specific numbers like -1 or 0 in this enum \n> definition?\n\nCopyFormatOptions is initialized by palloc0() at the beginning of \nProcessCopyOptions().\nThe reason to assign specific numbers here is to assign \nCOPY_LOG_VERBOSITY_DEFAULT to 0 as default value and sort elements of \nenum according to the amount of logging.\n\nShould we assign 0 to COPY_LOG_VERBOSITY_SILENT and change \nopts_out->log_verbosity after the initialization?\n\n> Here are the review comments for 0002 patch.\n> \n> + \n> pgstat_progress_update_param(PROGRESS_COPY_TUPLES_SKIPPED,\n> + \n> ++skipped);\n> \n> The skipped tuple count isn't accurate because fileIterateForeignScan() \n> resets\n> \"skipped\" to 0.\n\nUgh. Fixed to use cstate->num_errors.\nBTW CopyFrom() also uses local variable skipped. 
It isn't reset like \nfile_fdw, but using local variable seems not necessary since we can use \ncstate->num_errors here as well.\nI'm going to discuss it in another thread.\n\n> + if (cstate->opts.on_error != COPY_ON_ERROR_STOP &&\n> + cstate->escontext->error_occurred)\n> + {\n> + /*\n> + * Soft error occurred, skip this tuple and \n> deal with error\n> + * information according to ON_ERROR.\n> + */\n> + if (cstate->opts.on_error == \n> COPY_ON_ERROR_IGNORE)\n> \n> If COPY_ON_ERROR_IGNORE indicates tuple skipping, shouldn’t we not only \n> reset\n> error_occurred but also call \"pgstat_progress_update_param\" and \n> continue within\n> this block?\n\nI may misunderstand your comment, but I thought it to behave as you \nexpect in the below codes:\n\n```\n+ /* Report that this tuple was skipped by the ON_ERROR clause \n*/\n+ pgstat_progress_update_param(PROGRESS_COPY_TUPLES_SKIPPED,\n+ ++skipped);\n+\n+ /* Repeat NextCopyFrom() until no soft error occurs */\n+ continue;\n```\n\n> + for(;;)\n> + {\n> Using \"goto\" here might improve readability instead of using a \"for\" \n> loop.\n\nHmm, AFAIU we need to return a slot here before the end of file is \nreached.\n\n```\n--src/backend/executor/execMain.c [ExecutePlan()]\n /*\n * if the tuple is null, then we assume there is nothing more \nto\n * process so we just end the loop...\n */\n if (TupIsNull(slot))\n break;\n```\n\nWhen ignoring errors, we have to keep calling NextCopyFrom() until we \nfind a non error tuple or EOF and to do so calling NextCopyFrom() in for \nloop seems straight forward.\n\nReplacing the \"for\" loop using \"goto\" as follows is possible, but seems \nnot so readable because of the upward \"goto\":\n\n```\nstart_of_getting_tuple:\n\n if (!NextCopyFrom(cstate, econtext,\n slot->tts_values, slot->tts_isnull))\n goto end_of_fileread;\n\n if (cstate->opts.on_error != COPY_ON_ERROR_STOP &&\n cstate->escontext->error_occurred)\n {\n /*\n * Soft error occurred, skip this tuple and deal with error\n * information according to ON_ERROR.\n */\n if (cstate->opts.on_error == COPY_ON_ERROR_IGNORE)\n\n /*\n * Just make ErrorSaveContext ready for the next \nNextCopyFrom.\n * Since we don't set details_wanted and error_data is not \nto\n * be filled, just resetting error_occurred is enough.\n */\n cstate->escontext->error_occurred = false;\n\n /* Report that this tuple was skipped by the ON_ERROR clause */\n pgstat_progress_update_param(PROGRESS_COPY_TUPLES_SKIPPED,\n cstate->num_errors);\n goto start_of_fileread;\n }\n ExecStoreVirtualTuple(slot);\n\nend_of_getting_tuple:\n```\n\nAttached v4 patches reflected these comments.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation", "msg_date": "Thu, 19 Sep 2024 23:16:47 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add on_error and log_verbosity options to file_fdw" }, { "msg_contents": "\n\nOn 2024/09/19 23:16, torikoshia wrote:\n>> -       COPY_LOG_VERBOSITY_DEFAULT = 0, /* logs no additional messages, default */\n>> -       COPY_LOG_VERBOSITY_VERBOSE, /* logs additional messages */\n>> +       COPY_LOG_VERBOSITY_SILENT = -1, /* logs none */\n>> +       COPY_LOG_VERBOSITY_DEFAULT = 0, /* logs no additional messages, default */\n>> +       COPY_LOG_VERBOSITY_VERBOSE,     /* logs additional messages */\n>>\n>> Why do we need to assign specific numbers like -1 or 0 in this enum definition?\n> \n> CopyFormatOptions is initialized by palloc0() at the beginning of ProcessCopyOptions().\n> The reason to assign specific numbers 
here is to assign COPY_LOG_VERBOSITY_DEFAULT to 0 as default value and sort elements of enum according to the amount of logging.\n\nUnderstood.\n\n\n> BTW CopyFrom() also uses local variable skipped. It isn't reset like file_fdw, but using local variable seems not necessary since we can use cstate->num_errors here as well.\n\nSounds reasonable to me.\n\n\n>> +               if (cstate->opts.on_error != COPY_ON_ERROR_STOP &&\n>> +                       cstate->escontext->error_occurred)\n>> +               {\n>> +                       /*\n>> +                        * Soft error occurred, skip this tuple and deal with error\n>> +                        * information according to ON_ERROR.\n>> +                        */\n>> +                       if (cstate->opts.on_error == COPY_ON_ERROR_IGNORE)\n>>\n>> If COPY_ON_ERROR_IGNORE indicates tuple skipping, shouldn’t we not only reset\n>> error_occurred but also call \"pgstat_progress_update_param\" and continue within\n>> this block?\n> \n> I may misunderstand your comment, but I thought it to behave as you expect in the below codes:\n\nThe \"on_error == COPY_ON_ERROR_IGNORE\" condition isn't needed since\n\"on_error != COPY_ON_ERROR_STOP\" is already checked, and on_error accepts\nonly two values \"ignore\" and \"stop.\" I assume you added it with\na future option in mind, like \"set_to_null\" (as discussed in another thread).\nHowever, I’m not sure how much this helps such future changes.\nSo, how about simplifying the code by replacing \"on_error != COPY_ON_ERROR_STOP\"\nwith \"on_error == COPY_ON_ERROR_IGNORE\" at the top and removing\nthe \"on_error == COPY_ON_ERROR_IGNORE\" check in the middle?\nIt could improve readability.\n\n\n>> +       for(;;)\n>> +       {\n>> Using \"goto\" here might improve readability instead of using a \"for\" loop.\n> \n> Hmm, AFAIU we need to return a slot here before the end of file is reached.\n> \n> ```\n> --src/backend/executor/execMain.c [ExecutePlan()]\n>            /*\n>             * if the tuple is null, then we assume there is nothing more to\n>             * process so we just end the loop...\n>             */\n>            if (TupIsNull(slot))\n>                break;\n> ```\n> \n> When ignoring errors, we have to keep calling NextCopyFrom() until we find a non error tuple or EOF and to do so calling NextCopyFrom() in for loop seems straight forward.\n> \n> Replacing the \"for\" loop using \"goto\" as follows is possible, but seems not so readable because of the upward \"goto\":\n\nUnderstood.\n\n\n> Attached v4 patches reflected these comments.\n\nThanks for updating the patches!\n\nThe tab-completion needs to be updated to support the \"silent\" option?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n", "msg_date": "Fri, 20 Sep 2024 11:27:42 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add on_error and log_verbosity options to file_fdw" }, { "msg_contents": "On 2024-09-20 11:27, Fujii Masao wrote:\n\nThanks for your comments!\n\n> On 2024/09/19 23:16, torikoshia wrote:\n>>> -       COPY_LOG_VERBOSITY_DEFAULT = 0, /* logs no additional \n>>> messages, default */\n>>> -       COPY_LOG_VERBOSITY_VERBOSE, /* logs additional messages */\n>>> +       COPY_LOG_VERBOSITY_SILENT = -1, /* logs none */\n>>> +       COPY_LOG_VERBOSITY_DEFAULT = 0, /* logs no additional \n>>> messages, default */\n>>> +       COPY_LOG_VERBOSITY_VERBOSE,     /* logs additional 
messages \n>>> */\n>>> \n>>> Why do we need to assign specific numbers like -1 or 0 in this enum \n>>> definition?\n>> \n>> CopyFormatOptions is initialized by palloc0() at the beginning of \n>> ProcessCopyOptions().\n>> The reason to assign specific numbers here is to assign \n>> COPY_LOG_VERBOSITY_DEFAULT to 0 as default value and sort elements of \n>> enum according to the amount of logging.\n> \n> Understood.\n> \n> \n>> BTW CopyFrom() also uses local variable skipped. It isn't reset like \n>> file_fdw, but using local variable seems not necessary since we can \n>> use cstate->num_errors here as well.\n> \n> Sounds reasonable to me.\n> \n> \n>>> +               if (cstate->opts.on_error != COPY_ON_ERROR_STOP &&\n>>> +                       cstate->escontext->error_occurred)\n>>> +               {\n>>> +                       /*\n>>> +                        * Soft error occurred, skip this tuple and \n>>> deal with error\n>>> +                        * information according to ON_ERROR.\n>>> +                        */\n>>> +                       if (cstate->opts.on_error == \n>>> COPY_ON_ERROR_IGNORE)\n>>> \n>>> If COPY_ON_ERROR_IGNORE indicates tuple skipping, shouldn’t we not \n>>> only reset\n>>> error_occurred but also call \"pgstat_progress_update_param\" and \n>>> continue within\n>>> this block?\n>> \n>> I may misunderstand your comment, but I thought it to behave as you \n>> expect in the below codes:\n> \n> The \"on_error == COPY_ON_ERROR_IGNORE\" condition isn't needed since\n> \"on_error != COPY_ON_ERROR_STOP\" is already checked, and on_error \n> accepts\n> only two values \"ignore\" and \"stop.\" I assume you added it with\n> a future option in mind, like \"set_to_null\" (as discussed in another \n> thread).\n> However, I’m not sure how much this helps such future changes.\n> So, how about simplifying the code by replacing \"on_error != \n> COPY_ON_ERROR_STOP\"\n> with \"on_error == COPY_ON_ERROR_IGNORE\" at the top and removing\n> the \"on_error == COPY_ON_ERROR_IGNORE\" check in the middle?\n> It could improve readability.\n\nThanks for the explanation and suggestion.\nSince there is almost the same code in copyfrom.c, attached 0003 patch \nfor refactoring both.\n\n>>> +       for(;;)\n>>> +       {\n>>> Using \"goto\" here might improve readability instead of using a \"for\" \n>>> loop.\n>> \n>> Hmm, AFAIU we need to return a slot here before the end of file is \n>> reached.\n>> \n>> ```\n>> --src/backend/executor/execMain.c [ExecutePlan()]\n>>            /*\n>>             * if the tuple is null, then we assume there is nothing \n>> more to\n>>             * process so we just end the loop...\n>>             */\n>>            if (TupIsNull(slot))\n>>                break;\n>> ```\n>> \n>> When ignoring errors, we have to keep calling NextCopyFrom() until we \n>> find a non error tuple or EOF and to do so calling NextCopyFrom() in \n>> for loop seems straight forward.\n>> \n>> Replacing the \"for\" loop using \"goto\" as follows is possible, but \n>> seems not so readable because of the upward \"goto\":\n> \n> Understood.\n> \n> \n>> Attached v4 patches reflected these comments.\n> \n> Thanks for updating the patches!\n> \n> The tab-completion needs to be updated to support the \"silent\" option?\n\nYes, updated 0002 patch.\n\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation", "msg_date": "Tue, 24 Sep 2024 20:08:00 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add on_error and log_verbosity options 
to file_fdw" }, { "msg_contents": "\n\nOn 2024/09/24 20:08, torikoshia wrote:\n> Thanks for the explanation and suggestion.\n> Since there is almost the same code in copyfrom.c, attached 0003 patch for refactoring both.\n\nThanks for updating the patches!\n\nRegarding 0002.patch, I think it’s better to include the refactored code\nfrom the start rather than adding redundant code intentionally.\nHow about leaving just the refactor in copyfrom.c to 0003.patch?\nIf that works, as a refactoring, you could also replace \"skipped\" with\n\"cstate->num_errors\" in that patch, as you suggested earlier.\n\nWhile reviewing again, I noticed that running ANALYZE on a file_fdw\nforeign table also calls NextCopyFrom(), but it doesn’t seem to\nskip erroneous rows when on_error is set to \"ignore.\" This could lead\nto inaccurate statistics. Shouldn’t ANALYZE on file_fdw foreign tables\nwith on_error=ignore also skip erroneous rows?\n\n>> The tab-completion needs to be updated to support the \"silent\" option?\n> \n> Yes, updated 0002 patch.\n\nThanks! Also, this should be part of 0001.patch since \"silent\" is\nintroduced there, right?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n", "msg_date": "Wed, 25 Sep 2024 00:46:24 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add on_error and log_verbosity options to file_fdw" }, { "msg_contents": "On 2024-09-25 00:46, Fujii Masao wrote:\nThanks for the comments!\n\n> On 2024/09/24 20:08, torikoshia wrote:\n>> Thanks for the explanation and suggestion.\n>> Since there is almost the same code in copyfrom.c, attached 0003 patch \n>> for refactoring both.\n> \n> Thanks for updating the patches!\n> \n> Regarding 0002.patch, I think it’s better to include the refactored \n> code\n> from the start rather than adding redundant code intentionally.\n> How about leaving just the refactor in copyfrom.c to 0003.patch?\n> If that works, as a refactoring, you could also replace \"skipped\" with\n> \"cstate->num_errors\" in that patch, as you suggested earlier.\n\nAgreed.\n\n> While reviewing again, I noticed that running ANALYZE on a file_fdw\n> foreign table also calls NextCopyFrom(), but it doesn’t seem to\n> skip erroneous rows when on_error is set to \"ignore.\" This could lead\n> to inaccurate statistics. Shouldn’t ANALYZE on file_fdw foreign tables\n> with on_error=ignore also skip erroneous rows?\n\nThanks, it seems the right thing to do.\n\n>>> The tab-completion needs to be updated to support the \"silent\" \n>>> option?\n>> \n>> Yes, updated 0002 patch.\n> \n> Thanks! Also, this should be part of 0001.patch since \"silent\" is\n> introduced there, right?\n\nAgreed.\n\nUpdated the patches.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation", "msg_date": "Thu, 26 Sep 2024 21:57:34 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add on_error and log_verbosity options to file_fdw" }, { "msg_contents": "On 2024/09/26 21:57, torikoshia wrote:\n> Updated the patches.\n\nThanks for updating the patches! I’ve made some changes based on your work, which are attached.\nBarring any objections, I'm thinking to push these patches.\n\nFor patches 0001 and 0003, I ran pgindent and updated the commit message.\n\nRegarding patch 0002:\n\n- I updated the regression test to run ANALYZE on the file_fdw foreign table\n since the on_error option also affects the ANALYZE command. 
To ensure test\n stability, the test now runs ANALYZE with log_verbosity = 'silent'.\n\n- I removed the code that updated the count of skipped rows for\n the pg_stat_progress_copy view. As far as I know, file_fdw doesn’t\n currently support tracking pg_stat_progress_copy.tuples_processed.\n Supporting only tuples_skipped seems inconsistent, so I suggest creating\n a separate patch to extend file_fdw to track both tuples_processed and\n tuples_skipped in this view.\n\n- I refactored the for-loop handling on_error = 'ignore' in fileIterateForeignScan()\n by replacing it with a goto statement for improved readability.\n\n- I modified file_fdw to log a NOTICE message about skipped rows at the end of\n ANALYZE if any rows are skipped due to the on_error = 'ignore' setting.\n\n Regarding the \"file contains XXX rows\" message reported by the ANALYZE VERBOSE\n command on the file_fdw foreign table, what number should be reflected in XXX,\n especially when some rows are skipped due to on_error = 'ignore'?\n Currently, the count only includes valid rows, excluding any skipped rows.\n I haven't modified this code yet. Should we instead count all rows\n (valid and erroneous) and report that total?\n\n I noticed the code for reporting the number of skipped rows due to\n on_error = 'ignore' appears in three places. I’m considering creating\n a common function for this reporting to eliminate redundancy but haven’t\n implemented it yet.\n\n- I’ve updated the commit message and run pgindent.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 1 Oct 2024 00:36:11 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add on_error and log_verbosity options to file_fdw" } ]
[ { "msg_contents": "original CopyOneRowTo:\nhttps://git.postgresql.org/cgit/postgresql.git/tree/src/backend/commands/copyto.c#n922\nI change it to:\n-----------------------\nif (!cstate->opts.binary)\n{\nforeach_int(attnum, cstate->attnumlist)\n{\nDatum value = slot->tts_values[attnum - 1];\nbool isnull = slot->tts_isnull[attnum - 1];\n\nif (need_delim)\nCopySendChar(cstate, cstate->opts.delim[0]);\nneed_delim = true;\n\nif (isnull)\nCopySendString(cstate, cstate->opts.null_print_client);\nelse\n{\nstring = OutputFunctionCall(&out_functions[attnum - 1],\nvalue);\nif (cstate->opts.csv_mode)\nCopyAttributeOutCSV(cstate, string,\ncstate->opts.force_quote_flags[attnum - 1]);\nelse\nCopyAttributeOutText(cstate, string);\n}\n}\n}\nelse\n{\nforeach_int(attnum, cstate->attnumlist)\n{\nDatum value = slot->tts_values[attnum - 1];\nbool isnull = slot->tts_isnull[attnum - 1];\nbytea *outputbytes;\n\nif (isnull)\nCopySendInt32(cstate, -1);\nelse\n{\noutputbytes = SendFunctionCall(&out_functions[attnum - 1],\nvalue);\nCopySendInt32(cstate, VARSIZE(outputbytes) - VARHDRSZ);\nCopySendData(cstate, VARDATA(outputbytes),\nVARSIZE(outputbytes) - VARHDRSZ);\n}\n}\n}\n\n\noverall less \"if else\" logic,\nalso copy format don't change during copy, we don't need to check\nbinary or nor for each datum value.", "msg_date": "Fri, 5 Jul 2024 00:26:33 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "refactor the CopyOneRowTo" }, { "msg_contents": "Hi, hackers\n\nI'm sure this refactoring is useful because it's more readable when \ndatum value is binary or not.\n\nHowever, I can see a little improvement. We can declare variable 'bytea \n*outputbytes' in 'else' because variable is used nowhere except this place.\n\n\nRegards,\n\nIlia Evdokimov,\n\nTantor Labs LLC.\n\n\n\n", "msg_date": "Wed, 31 Jul 2024 16:02:37 +0300", "msg_from": "Ilia Evdokimov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: refactor the CopyOneRowTo" }, { "msg_contents": "On Fri, Jul 5, 2024 at 12:26 AM jian he <[email protected]> wrote:\n>\n> original CopyOneRowTo:\n> https://git.postgresql.org/cgit/postgresql.git/tree/src/backend/commands/copyto.c#n922\n> I change it to:\n> -----------------------\n> if (!cstate->opts.binary)\n> {\n> foreach_int(attnum, cstate->attnumlist)\n> {\n> Datum value = slot->tts_values[attnum - 1];\n> bool isnull = slot->tts_isnull[attnum - 1];\n>\n> if (need_delim)\n> CopySendChar(cstate, cstate->opts.delim[0]);\n> need_delim = true;\n>\n> if (isnull)\n> CopySendString(cstate, cstate->opts.null_print_client);\n> else\n> {\n> string = OutputFunctionCall(&out_functions[attnum - 1],\n> value);\n> if (cstate->opts.csv_mode)\n> CopyAttributeOutCSV(cstate, string,\n> cstate->opts.force_quote_flags[attnum - 1]);\n> else\n> CopyAttributeOutText(cstate, string);\n> }\n> }\n> }\n> else\n> {\n> foreach_int(attnum, cstate->attnumlist)\n> {\n> Datum value = slot->tts_values[attnum - 1];\n> bool isnull = slot->tts_isnull[attnum - 1];\n> bytea *outputbytes;\n>\n> if (isnull)\n> CopySendInt32(cstate, -1);\n> else\n> {\n> outputbytes = SendFunctionCall(&out_functions[attnum - 1],\n> value);\n> CopySendInt32(cstate, VARSIZE(outputbytes) - VARHDRSZ);\n> CopySendData(cstate, VARDATA(outputbytes),\n> VARSIZE(outputbytes) - VARHDRSZ);\n> }\n> }\n> }\n>\n>\n> overall less \"if else\" logic,\n> also copy format don't change during copy, we don't need to check\n> binary or nor for each datum value.\n\nI believe what you proposed is included in the patch 0002\nattached in [1], but you use 
foreach_int, which I think is\nan improvement.\n\n[1] https://www.postgresql.org/message-id/20240724.173059.909782980111496972.kou%40clear-code.com\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Wed, 31 Jul 2024 21:30:56 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: refactor the CopyOneRowTo" }, { "msg_contents": "On 31/07/2024 16:30, Junwang Zhao wrote:\n> On Fri, Jul 5, 2024 at 12:26 AM jian he <[email protected]> wrote:\n>> overall less \"if else\" logic,\n>> also copy format don't change during copy, we don't need to check\n>> binary or nor for each datum value.\n\nCommitted, thanks.\n\nFor the archives: this code is in a very hot path during COPY TO, so I \ndid some quick micro-benchmarking on my laptop. I used this:\n\nCOPY (select \n0,1,2,3,4,5,6,7,8,9,10,0,1,2,3,4,5,6,7,8,9,10,0,1,2,3,4,5,6,7,8,9,10,0,1,2,3,4,5,6,7,8,9,10,0,1,2,3,4,5,6,7,8,9,10,0,1,2,3,4,5,6,7,8,9,10,0,1,2,3,4,5,6,7,8,9,10,0,1,2,3,4,5,6,7,8,9,10,0,1,2,3,4,5,6,7,8,9,10,0,1,2,3,4,5,6,7,8,9,10 \nfrom generate_series(1, 1_000_000) g) TO '/dev/null';\n\nand\n\nCOPY (select 0 from generate_series(1, 1_000_000) g) TO '/dev/null';\n\nto check the impact with few attributes and with many attributes. I \nrepeated that a few times with \\timing with and without the patch, and \neyeballed the result. I also used 'perf' to check the fraction of CPU \ntime spent in CopyOneRowTo. Conclusion: the patch has no performance impact.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 16 Aug 2024 14:05:50 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: refactor the CopyOneRowTo" } ]
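The shape of the change, splitting one loop into a text loop and a binary loop because the output format cannot switch mid-COPY, is easy to see in a standalone sketch. This is illustrative code only, not PostgreSQL's CopyOneRowTo, and as the benchmark above showed the gain is readability rather than measurable speed.

```
#include <stdbool.h>
#include <stdio.h>

static void send_text(int v)   { printf("t:%d ", v); }
static void send_binary(int v) { printf("b:%08x ", v); }

/* Before: the format flag is re-tested for every attribute of every row. */
static void emit_row_naive(const int *attrs, int natts, bool binary)
{
    for (int i = 0; i < natts; i++)
    {
        if (binary)
            send_binary(attrs[i]);
        else
            send_text(attrs[i]);
    }
}

/* After: one test per row, then a plain inner loop for each format. */
static void emit_row_split(const int *attrs, int natts, bool binary)
{
    if (!binary)
    {
        for (int i = 0; i < natts; i++)
            send_text(attrs[i]);
    }
    else
    {
        for (int i = 0; i < natts; i++)
            send_binary(attrs[i]);
    }
}

int main(void)
{
    int row[] = {1, 2, 3};

    emit_row_naive(row, 3, false);
    putchar('\n');
    emit_row_split(row, 3, true);
    putchar('\n');
    return 0;
}
```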
[ { "msg_contents": "Hi.\n\nWe encountered a problem with excessive logging when transaction is \nsampled.\nWhen it is getting sampled every statement is logged, event SELECT. This \ncan\nlead to performance degradation and log polluting.\nI have added new setting to filter statements when transaction is \nsampled - log_transaction.\n\nOverview\n===================\nThis parameter works very similar to log_statement, but it filters out \nstatements only\nin sampled transaction. It has same values as log_statement, access \nrights (only superuser),\nsetup in postgresql.conf or by superuser with SET.\nBut by default it has value \"all\" in compliance with existing behaviour \n(to log every statement).\n\nExample usage\n==================\nLog every DDL, but only subset of other statements.\n\npostgresql.conf\n > log_transaction = ddl\n > log_statement = all\n > log_transaction_sample_rate = 1\n > log_statement_sample_rate = 0.3\n\nbackend:\n > BEGIN;\n > CREATE TABLE t1(value text);\n > INSERT INTO t1(value) VALUES ('hello'), ('world');\n > SELECT * FROM t1;\n > DROP TABLE t1;\n > COMMIT;\n\nlogfile:\n > LOG:  duration: 6.465 ms  statement: create table t1(value text);\n > LOG:  statement: insert into t1(value) values ('hello'), ('world');\n > LOG:  duration: 6.457 ms  statement: drop table t1;\n\nAs you can see CREATE and DROP were logged with duration (because it is \nDDL), but\nonly INSERT was logged (without duration) - not SELECT.\n\nTesting\n===================\nAll tests are passed - default configuration is compatible with existing \nbehaviour.\nHonestly, I could not find any TAP/regress tests that would test logging \n- some\ntests only use logs to ensure test results.\nI did not find suitable place for such tests, so no tests provided\n\nImplementation details\n===================\nI have modified signature of check_log_duration function - this accepts \nList of\nstatements that were executed. This is to decide whether we should log\ncurrent statement if transaction is sampled.\nBut this list can be empty, in that case we decide to log. NIL is passed \nonly\nin fast path, PARSE and BIND functions - by default these are logged\nif transaction is sampled, so we are compliant with existing behaviour \nagain.\nIn first implementation version, there was boolean flag (instead of \nList), but\nit was replaced by List to defer determining (function call) whether it \nis worth logging.\n\n-------------\nBest regards, Sergey Solovev", "msg_date": "Thu, 4 Jul 2024 19:45:42 +0300", "msg_from": "Sergey Solovev <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH] Add log_transaction setting" }, { "msg_contents": "On Thu, 4 Jul 2024 at 21:46, Sergey Solovev\n<[email protected]> wrote:\n>\n> Hi.\n>\n> We encountered a problem with excessive logging when transaction is\n> sampled.\n> When it is getting sampled every statement is logged, event SELECT. This\n> can\n> lead to performance degradation and log polluting.\n> I have added new setting to filter statements when transaction is\n> sampled - log_transaction.\n>\n> Overview\n> ===================\n> This parameter works very similar to log_statement, but it filters out\n> statements only\n> in sampled transaction. 
It has same values as log_statement, access\n> rights (only superuser),\n> setup in postgresql.conf or by superuser with SET.\n> But by default it has value \"all\" in compliance with existing behaviour\n> (to log every statement).\n>\n> Example usage\n> ==================\n> Log every DDL, but only subset of other statements.\n>\n> postgresql.conf\n> > log_transaction = ddl\n> > log_statement = all\n> > log_transaction_sample_rate = 1\n> > log_statement_sample_rate = 0.3\n>\n> backend:\n> > BEGIN;\n> > CREATE TABLE t1(value text);\n> > INSERT INTO t1(value) VALUES ('hello'), ('world');\n> > SELECT * FROM t1;\n> > DROP TABLE t1;\n> > COMMIT;\n>\n> logfile:\n> > LOG: duration: 6.465 ms statement: create table t1(value text);\n> > LOG: statement: insert into t1(value) values ('hello'), ('world');\n> > LOG: duration: 6.457 ms statement: drop table t1;\n>\n> As you can see CREATE and DROP were logged with duration (because it is\n> DDL), but\n> only INSERT was logged (without duration) - not SELECT.\n>\n> Testing\n> ===================\n> All tests are passed - default configuration is compatible with existing\n> behaviour.\n> Honestly, I could not find any TAP/regress tests that would test logging\n> - some\n> tests only use logs to ensure test results.\n> I did not find suitable place for such tests, so no tests provided\n>\n> Implementation details\n> ===================\n> I have modified signature of check_log_duration function - this accepts\n> List of\n> statements that were executed. This is to decide whether we should log\n> current statement if transaction is sampled.\n> But this list can be empty, in that case we decide to log. NIL is passed\n> only\n> in fast path, PARSE and BIND functions - by default these are logged\n> if transaction is sampled, so we are compliant with existing behaviour\n> again.\n> In first implementation version, there was boolean flag (instead of\n> List), but\n> it was replaced by List to defer determining (function call) whether it\n> is worth logging.\n>\n> -------------\n> Best regards, Sergey Solovev\n\nHi!\n\nAs I understand, the need of this GUC variable comes from fact, that\nis the transaction is sampled for logging, all statements within this\ntx are logged and this is not configurable now, correct?\nWell, if this is the case, why should we add a new GUC? Maybe we\nshould use `log_statement` in this case too, so there is a bug, that\nlog_statement is honored not during tx sampling?\n\n\nAlso, tests are failing[1]\n\n\n[1] https://cirrus-ci.com/task/5645711230894080\n\n-- \nBest regards,\nKirill Reshke\n\n\n", "msg_date": "Sat, 10 Aug 2024 18:40:00 +0500", "msg_from": "Kirill Reshke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add log_transaction setting" }, { "msg_contents": "10.08.2024, 16:40, \"Kirill Reshke\" <[email protected]>:On Thu, 4 Jul 2024 at 21:46, Sergey Solovev<[email protected]> wrote: Hi. We encountered a problem with excessive logging when transaction is sampled. When it is getting sampled every statement is logged, event SELECT. This can lead to performance degradation and log polluting. I have added new setting to filter statements when transaction is sampled - log_transaction. Overview =================== This parameter works very similar to log_statement, but it filters out statements only in sampled transaction. It has same values as log_statement, access rights (only superuser), setup in postgresql.conf or by superuser with SET. 
But by default it has value \"all\" in compliance with existing behaviour (to log every statement). Example usage ================== Log every DDL, but only subset of other statements. postgresql.conf  > log_transaction = ddl  > log_statement = all  > log_transaction_sample_rate = 1  > log_statement_sample_rate = 0.3 backend:  > BEGIN;  > CREATE TABLE t1(value text);  > INSERT INTO t1(value) VALUES ('hello'), ('world');  > SELECT * FROM t1;  > DROP TABLE t1;  > COMMIT; logfile:  > LOG: duration: 6.465 ms statement: create table t1(value text);  > LOG: statement: insert into t1(value) values ('hello'), ('world');  > LOG: duration: 6.457 ms statement: drop table t1; As you can see CREATE and DROP were logged with duration (because it is DDL), but only INSERT was logged (without duration) - not SELECT. Testing =================== All tests are passed - default configuration is compatible with existing behaviour. Honestly, I could not find any TAP/regress tests that would test logging - some tests only use logs to ensure test results. I did not find suitable place for such tests, so no tests provided Implementation details =================== I have modified signature of check_log_duration function - this accepts List of statements that were executed. This is to decide whether we should log current statement if transaction is sampled. But this list can be empty, in that case we decide to log. NIL is passed only in fast path, PARSE and BIND functions - by default these are logged if transaction is sampled, so we are compliant with existing behaviour again. In first implementation version, there was boolean flag (instead of List), but it was replaced by List to defer determining (function call) whether it is worth logging. ------------- Best regards, Sergey SolovevHi!As I understand, the need of this GUC variable comes from fact, thatis the transaction is sampled for logging, all statements within thistx are logged and this is not configurable now, correct?Well, if this is the case, why should we add a new GUC? Maybe weshould use `log_statement` in this case too, so there is a bug, thatlog_statement is honored not during tx sampling?Also, tests are failing[1][1] https://cirrus-ci.com/task/5645711230894080 --Best regards,Kirill ReshkeHi, Kirill! Thanks for review.Also, tests are failing[1]That is my mistake: I have tested patch on REL_17BETA2, but set 18 version, andit seems that I run tests without proper ./configure flags. I reproduced error and fixedit in new patch.As I understand, the need of this GUC variable comes from fact, thatis the transaction is sampled for logging, all statements within thistx are logged and this is not configurable now, correct?Yes. That's the point - all or nothing.Well, if this is the case, why should we add a new GUC? Maybe weshould use `log_statement` in this case too, so there is a bug, thatlog_statement is honored not during tx sampling?That can be good solution, but I proceed from possibility of flexible logging setup.Shown example (when all DDL statements were logged) is just one of such use cases.Another example is security policy: when we must log every data modification -set `log_transaction = mod`.", "msg_date": "Wed, 14 Aug 2024 21:08:00 +0300", "msg_from": "=?utf-8?B?0KHQtdGA0LPQtdC5INCh0L7Qu9C+0LLRjNC10LI=?=\n <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add log_transaction setting" } ]
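If the proposal were implemented roughly as described, the transaction-sampling path would consult a log_statement-style level before emitting each statement. The sketch below shows only that gating; it uses the same none < ddl < mod < all ordering and the same "statement level does not exceed the setting" comparison that log_statement uses, but the function name, the log_transaction variable, and the whole file are made up for illustration and are not the patch's code. The ordinary log_statement path is independent of this check.

```
#include <stdbool.h>
#include <stdio.h>

/* Same ordering log_statement uses: none < ddl < mod < all. */
typedef enum
{
    LOGSTMT_NONE,
    LOGSTMT_DDL,
    LOGSTMT_MOD,
    LOGSTMT_ALL
} LogStmtLevel;

/* Hypothetical GUC for this sketch only. */
static LogStmtLevel log_transaction = LOGSTMT_DDL;

/*
 * Would this statement be logged because the surrounding transaction was
 * sampled?  A statement is logged when its level does not exceed the
 * configured setting.
 */
static bool
sampled_xact_should_log(bool xact_is_sampled, LogStmtLevel stmt_level)
{
    if (!xact_is_sampled)
        return false;
    if (log_transaction == LOGSTMT_NONE)
        return false;
    return stmt_level <= log_transaction;
}

int main(void)
{
    printf("CREATE TABLE in sampled xact: %d\n",
           sampled_xact_should_log(true, LOGSTMT_DDL));   /* 1: logged */
    printf("SELECT in sampled xact:       %d\n",
           sampled_xact_should_log(true, LOGSTMT_ALL));   /* 0: filtered out */
    return 0;
}
```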
[ { "msg_contents": "Hi,\n\n\nI just built postgresql on macos sonoma (v14) and I had to install the \nfollowing packages:\n\n * * icu - https://ports.macports.org/port/icu/\n * * pkg - https://ports.macports.org/port/pkgconfig/\n\nI don't see anything related to this on \nhttps://www.postgresql.org/docs/devel/installation-platform-notes.html\n\nDid I miss something ? Should we add a note?\n\nbest,\nSaïd\n\n\n\n\n\n\n\nHi,\n\n\nI just built postgresql on macos sonoma (v14) and I had to\n install the following packages:\n\n* icu - https://ports.macports.org/port/icu/\n* pkg - https://ports.macports.org/port/pkgconfig/\n\nI don't see anything related to this on\n https://www.postgresql.org/docs/devel/installation-platform-notes.html\nDid I miss something ? Should we add a note?\nbest,\n Saïd", "msg_date": "Thu, 4 Jul 2024 14:02:43 -0400", "msg_from": "Said Assemlal <[email protected]>", "msg_from_op": true, "msg_subject": "Update platform notes to build Postgres on macos" }, { "msg_contents": "./configure —help \n\nIt will show that you can build —without-icu , \nyou can also specify a path to pkg-config via PKG_CONFIG=/path/to/pkg-config \n\nside note: I’ve had better experience building with brew on macos, rather than macports.\n\n> On 4 Jul 2024, at 9:02 PM, Said Assemlal <[email protected]> wrote:\n> \n> Hi,\n> \n> \n> \n> I just built postgresql on macos sonoma (v14) and I had to install the following packages:\n> \n> * icu - https://ports.macports.org/port/icu/\n> * pkg - https://ports.macports.org/port/pkgconfig/\n> I don't see anything related to this on https://www.postgresql.org/docs/devel/installation-platform-notes.html\n> \n> Did I miss something ? Should we add a note?\n> \n> best,\n> Saïd\n> \n> \n> \n\n\n./configure —help It will show that you can build —without-icu , you can also specify a path to pkg-config via PKG_CONFIG=/path/to/pkg-config side note: I’ve had better experience building with brew on macos, rather than macports.On 4 Jul 2024, at 9:02 PM, Said Assemlal <[email protected]> wrote:\n\nHi,\nI just built postgresql on macos sonoma (v14) and I had to\n install the following packages:\n\n* icu - https://ports.macports.org/port/icu/\n* pkg - https://ports.macports.org/port/pkgconfig/\nI don't see anything related to this on\n https://www.postgresql.org/docs/devel/installation-platform-notes.htmlDid I miss something ? Should we add a note?best,\n Saïd", "msg_date": "Thu, 4 Jul 2024 21:48:26 +0300", "msg_from": "Florents Tselai <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update platform notes to build Postgres on macos" }, { "msg_contents": "Hi\n\nI think the documentation should be updated and all pre-reqs must be added.\n\n\nRegards\nKashif Zeeshan\n\n\n\nOn Thu, Jul 4, 2024 at 11:02 PM Said Assemlal <[email protected]> wrote:\n\n> Hi,\n>\n>\n> I just built postgresql on macos sonoma (v14) and I had to install the\n> following packages:\n>\n> - * icu - https://ports.macports.org/port/icu/\n> - * pkg - https://ports.macports.org/port/pkgconfig/\n>\n> I don't see anything related to this on\n> https://www.postgresql.org/docs/devel/installation-platform-notes.html\n>\n> Did I miss something ? 
Should we add a note?\n>\n> best,\n> Saïd\n>\n>\n>\n\nHiI think the documentation should be updated and all pre-reqs must be added.RegardsKashif ZeeshanOn Thu, Jul 4, 2024 at 11:02 PM Said Assemlal <[email protected]> wrote:\n\nHi,\n\n\nI just built postgresql on macos sonoma (v14) and I had to\n install the following packages:\n\n* icu - https://ports.macports.org/port/icu/\n* pkg - https://ports.macports.org/port/pkgconfig/\n\nI don't see anything related to this on\n https://www.postgresql.org/docs/devel/installation-platform-notes.html\nDid I miss something ? Should we add a note?\nbest,\n Saïd", "msg_date": "Fri, 5 Jul 2024 09:06:48 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update platform notes to build Postgres on macos" } ]
[ { "msg_contents": "Hello postgres hackers:\n\nI am recently working on speeding up pg_upgrade for database with over a\nmillion tables and would like to share some (maybe) optimizeable or\ninteresting findings.\n\n1: Skip Compatibility Check In \"pg_upgrade\"\n=============================================\nConcisely, we've got several databases, each with a million-plus tables.\nRunning the compatibility check before pg_dump can eat up like half an hour.\nIf I have performed an online check before the actual upgrade, repeating it\nseems unnecessary and just adds to the downtime in many situations.\n\nSo, I'm thinking, why not add a \"--skip-check\" option in pg_upgrade to skip it?\nSee \"1-Skip_Compatibility_Check_v1.patch\".\n\n\n2: Accelerate \"FastPathTransferRelationLocks\"\n===============================================\nIn this scenario, pg_restore costs much more time than pg_dump. And through\nmonitoring the \"postgres\" backend via perf, I found that the much time are\ntaken by \"LWLockAcquire\" and \"LWLockRelease\". Diving deeper, I think I found\nthe reason:\n\nWhen we try to create an index (pretty common in pg_restore), the \"ShareLock\"\nto the relation must be held first. Such lock is a \"strong\" lock, so to acquire\nthe lock, before we change the global lock hash table, we must traverse each\nproc to transfer their relation lock in fastpath. And the issue raise here\n(in FastPathTransferRelationLocks ):\nwe acquire \"fpInfoLock\" before accessing \"proc->databaseId\". So we must perform\nthe lock acquiring and releasing \"MaxBackends\" times for each index. The reason\nis recorded in the comment:\n```\n/* \n * proc->databaseId is set at backend startup time and never changes\n * thereafter, so it might be safe to perform this test before\n * acquiring &proc->fpInfoLock. In particular, it's certainly safe to\n * assume that if the target backend holds any fast-path locks, it\n * must have performed a memory-fencing operation (in particular, an\n * LWLock acquisition) since setting proc->databaseId. However, it's\n * less clear that our backend is certain to have performed a memory\n * fencing operation since the other backend set proc->databaseId. So\n * for now, we test it after acquiring the LWLock just to be safe.\n */\n```\n\nI agree with the reason, but it seems OK to replace LWLockAcquire with a \nmemory barrier for \"proc->databaseId\". And this can save some time.\nSee \"2-Accelerate_FastPathTransferRelationLocks_v1.patch\".\n\n\n3: Optimize Toast Index Creating\n====================================\nWhile tracing the reason mentioned in point \"2\", I notice an interesting\nperformance in creating toast index. In function \"create_toast_table\"\n\n```\n/* ShareLock is not really needed here, but take it anyway */\n toast_rel = table_open(toast_relid, ShareLock);\n/* some operation */\nindex_create(xxxx)\n```\n\nYep, ShareLock is not really needed here, since we this is the only transaction\nthat the toast relation is visible to. But by design (in \"relation_open\"), \nNoLock mode is only used when the caller confirms that it already holds the\nlock. So I wonder is it still ok to let the NoLock mode used in such scenario\nwhere the relation is created by current transaction.\nSee \"3-Optimize_Toast_Index_Creating_v1.patch\".\n\n\nThat's what I've got. 
Any response is appreciated.\n\nBest regards,\nYang Boyu", "msg_date": "Fri, 05 Jul 2024 15:12:37 +0800", "msg_from": "\"=?UTF-8?B?5p2o5Lyv5a6HKOmVv+Wggik=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?c3BlZWQgdXAgcGdfdXBncmFkZSB3aXRoIGxhcmdlIG51bWJlciBvZiB0YWJsZXM=?=" }, { "msg_contents": "> On 5 Jul 2024, at 09:12, 杨伯宇(长堂) <[email protected]> wrote:\n\n> 1: Skip Compatibility Check In \"pg_upgrade\"\n> =============================================\n> Concisely, we've got several databases, each with a million-plus tables.\n> Running the compatibility check before pg_dump can eat up like half an hour.\n> If I have performed an online check before the actual upgrade, repeating it\n> seems unnecessary and just adds to the downtime in many situations.\n> \n> So, I'm thinking, why not add a \"--skip-check\" option in pg_upgrade to skip it?\n> See \"1-Skip_Compatibility_Check_v1.patch\".\n\nHow would a user know that nothing has changed in the cluster between running\nthe check and running the upgrade with a skipped check? Considering how\ncomplicated it is to understand exactly what pg_upgrade does it seems like\nquite a large caliber footgun.\n\nI would be much more interested in making the check phase go faster, and indeed\nthere is ongoing work in this area. Since it sounds like you have a dev and\ntest environment with a big workload, testing those patches would be helpful.\nhttps://commitfest.postgresql.org/48/4995/ is one that comes to mind.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 5 Jul 2024 10:26:13 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: speed up pg_upgrade with large number of tables" }, { "msg_contents": "> > So, I'm thinking, why not add a \"--skip-check\" option in pg_upgrade to skip it?\n> > See \"1-Skip_Compatibility_Check_v1.patch\".\n> \n> How would a user know that nothing has changed in the cluster between running\n> the check and running the upgrade with a skipped check? Considering how\n> complicated it is to understand exactly what pg_upgrade does it seems like\n> quite a large caliber footgun.\nIndeed, it's not feasible to execute an concise check ensuring that nothing\nhas changed. However, in many cases, a cluster consistently executes the\nsame SQL commands. Thus, if we've verified that the cluster was compatible\n30 minutes prior, there's a strong likelihood that it remains compatible now.\nTherefore, adding such an 'trust-me' option may still be beneficial.\n\n> I would be much more interested in making the check phase go faster, and indeed\n> there is ongoing work in this area. Since it sounds like you have a dev and\n> test environment with a big workload, testing those patches would be helpful.\n> https://commitfest.postgresql.org/48/4995/ is one that comes to mind.\nVery meaningful work! I will try it.\n\n--\nBest regards,\nYang Boyu\n> > So, I'm thinking, why not add a \"--skip-check\" option in pg_upgrade to skip it?> > See \"1-Skip_Compatibility_Check_v1.patch\".> > How would a user know that nothing has changed in the cluster between running> the check and running the upgrade with a skipped check? Considering how> complicated it is to understand exactly what pg_upgrade does it seems like> quite a large caliber footgun.Indeed, it's not feasible to execute an concise check ensuring that nothinghas changed. However, in many cases, a cluster consistently executes thesame SQL commands. 
Thus, if we've verified that the cluster was compatible30 minutes prior, there's a strong likelihood that it remains compatible now.Therefore, adding such an 'trust-me' option may still be beneficial.> I would be much more interested in making the check phase go faster, and indeed> there is ongoing work in this area. Since it sounds like you have a dev and> test environment with a big workload, testing those patches would be helpful.> https://commitfest.postgresql.org/48/4995/ is one that comes to mind.Very meaningful work! I will try it.--Best regards,Yang Boyu", "msg_date": "Fri, 05 Jul 2024 17:24:42 +0800", "msg_from": "\"=?UTF-8?B?5p2o5Lyv5a6HKOmVv+Wggik=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?5Zue5aSN77yaUmU6IHNwZWVkIHVwIHBnX3VwZ3JhZGUgd2l0aCBsYXJnZSBudW1iZXIgb2Yg?=\n =?UTF-8?B?dGFibGVz?=" }, { "msg_contents": "On Fri, Jul 05, 2024 at 05:24:42PM +0800, 杨伯宇(长堂) wrote:\n>> > So, I'm thinking, why not add a \"--skip-check\" option in pg_upgrade to skip it?\n>> > See \"1-Skip_Compatibility_Check_v1.patch\".\n>> \n>> How would a user know that nothing has changed in the cluster between running\n>> the check and running the upgrade with a skipped check? Considering how\n>> complicated it is to understand exactly what pg_upgrade does it seems like\n>> quite a large caliber footgun.\n\nI am also -1 on this one for the same reasons as Daniel.\n\n>> I would be much more interested in making the check phase go faster, and indeed\n>> there is ongoing work in this area. Since it sounds like you have a dev and\n>> test environment with a big workload, testing those patches would be helpful.\n>> https://commitfest.postgresql.org/48/4995/ is one that comes to mind.\n> Very meaningful work! I will try it.\n\nThanks! Since you mentioned that you have multiple databases with 1M+\ndatabases, you might also be interested in commit 2329cad. That should\nspeed up the pg_dump step quite a bit.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 5 Jul 2024 10:12:34 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaUmU=?= =?utf-8?Q?=3A?= speed up pg_upgrade\n with large number of tables" }, { "msg_contents": "> Thanks! Since you mentioned that you have multiple databases with 1M+\n> databases, you might also be interested in commit 2329cad. That should\n> speed up the pg_dump step quite a bit.\nWow, I noticed this commit(2329cad) when it appeared in commitfest. It has\ndoubled the speed of pg_dump in this scenario. Thank you for your effort!\n\nBesides, https://commitfest.postgresql.org/48/4995/ seems insufficient to \nthis situation. Some time-consuming functions like check_for_data_types_usage\nare not yet able to run in parallel. But these patches could be a great\nstarting point for a more efficient parallelism implementation. Maybe we can\ndo it later.\n> Thanks! Since you mentioned that you have multiple databases with 1M+> databases, you might also be interested in commit 2329cad. That should> speed up the pg_dump step quite a bit.Wow, I noticed this commit(2329cad) when it appeared in commitfest. It hasdoubled the speed of pg_dump in this scenario. Thank you for your effort!Besides, https://commitfest.postgresql.org/48/4995/ seems insufficient to this situation. Some time-consuming functions like check_for_data_types_usageare not yet able to run in parallel. But these patches could be a greatstarting point for a more efficient parallelism implementation. 
Maybe we cando it later.", "msg_date": "Mon, 08 Jul 2024 15:22:36 +0800", "msg_from": "\"=?UTF-8?B?5p2o5Lyv5a6HKOmVv+Wggik=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?5Zue5aSN77yaUmU6IOWbnuWkje+8mlJlOiBzcGVlZCB1cCBwZ191cGdyYWRlIHdpdGggbGFy?=\n =?UTF-8?B?Z2UgbnVtYmVyIG9mIHRhYmxlcw==?=" }, { "msg_contents": "On Mon, Jul 08, 2024 at 03:22:36PM +0800, 杨伯宇(长堂) wrote:\n> Besides, https://commitfest.postgresql.org/48/4995/ seems insufficient to \n> this situation. Some time-consuming functions like check_for_data_types_usage\n> are not yet able to run in parallel. But these patches could be a great\n> starting point for a more efficient parallelism implementation. Maybe we can\n> do it later.\n\nI actually just wrote up the first version of the patch for parallelizing\nthe data type checks over the weekend. I'll post it shortly.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 8 Jul 2024 09:37:04 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaUmU6IOWbnuWkjQ==?= =?utf-8?B?77yaUmU6?=\n speed up pg_upgrade with large number of tables" } ]
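The second proposal in this thread rests on the reasoning in the quoted comment: once the owning backend has performed a memory-fencing operation (its LWLock acquisition) after setting proc->databaseId, another backend that has synchronized with it can read databaseId without taking the per-backend lock. A standalone C11 sketch of that acquire/release argument is below. It deliberately uses generic atomics and pthreads rather than PostgreSQL's LWLock or barrier primitives, and the names (databaseId, proc_started, backend_main, lock_transfer_main) are illustrative stand-ins only.

```
/* build: cc -std=c11 -pthread fence_demo.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* Stand-ins for PGPROC fields; names are illustrative only. */
static unsigned    databaseId;      /* written once at "backend startup" */
static atomic_bool proc_started;    /* release/acquire flag standing in for the
                                     * fence an LWLock acquisition provides */

static void *backend_main(void *arg)
{
    (void) arg;
    databaseId = 42;                /* plain store at startup */
    atomic_store_explicit(&proc_started, true, memory_order_release);
    return NULL;
}

static void *lock_transfer_main(void *arg)
{
    (void) arg;

    /*
     * The acquire load pairs with the release store above: once we observe
     * proc_started, the earlier write to databaseId is guaranteed to be
     * visible, so no per-backend lock is needed merely to read it.
     */
    while (!atomic_load_explicit(&proc_started, memory_order_acquire))
        ;                           /* spin only for this demo */

    printf("databaseId = %u\n", databaseId);
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    atomic_init(&proc_started, false);
    pthread_create(&a, NULL, backend_main, NULL);
    pthread_create(&b, NULL, lock_transfer_main, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```

Whether the fencing on the reading side is in fact sufficient in every case is exactly the open question the comment raises, so the sketch shows the argument, not a verdict.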
[ { "msg_contents": "Hello hackers,\n\ni compile postgresql just for fun.\nMy system is a Arch Linux.\nI get after upgrade the libxml2 package (from 2.12.7-1 to 2.13.1-1) test errors for xml:\n\n---\n...\nok 200 + largeobject 553 ms\nok 201 + with 819 ms\nnot ok 202 + xml 1464 ms\n# parallel group (15 tests): hash_part predicate partition_info explain reloptions memoize compression partition_merge stats partition_split partition_aggregate tuplesort partition_join partition_prune indexing\nok 203 + partition_merge 1282 ms\nok 204 + partition_split 1533 ms\nok 205 + partition_join 2229 ms\n...\nok 221 - fast_default 230 ms\nok 222 - tablespace 498 ms\n1..222\n# 1 of 222 tests failed.\n# The differences that caused some tests to fail can be viewed in the file \"/home/frastr/devel/postgresql/src/test/regress/regression.diffs\".\n# A copy of the test summary that you see above is saved in the file \"/home/frastr/devel/postgresql/src/test/regress/regression.out\".\nmake[1]: *** [GNUmakefile:118: check] Fehler 1\nmake[1]: Verzeichnis „/home/frastr/devel/postgresql/src/test/regress“ wird verlassen\nmake: *** [GNUmakefile:69: check] Fehler 2\n---\n\nWhat can I do?\nWho can help?\n\nBest regards,\nFrank", "msg_date": "Fri, 5 Jul 2024 15:33:52 +0200", "msg_from": "Frank Streitzig <[email protected]>", "msg_from_op": true, "msg_subject": "XML test error on Arch Linux" }, { "msg_contents": "On 2024-07-05 15:33 +0200, Frank Streitzig wrote:\n> My system is a Arch Linux.\n> I get after upgrade the libxml2 package (from 2.12.7-1 to 2.13.1-1)\n> test errors for xml:\n> \n> not ok 202 + xml 1464 ms\n> [...snip...]\n> # 1 of 222 tests failed.\n> # The differences that caused some tests to fail can be viewed in the file \"/home/frastr/devel/postgresql/src/test/regress/regression.diffs\".\n> # A copy of the test summary that you see above is saved in the file \"/home/frastr/devel/postgresql/src/test/regress/regression.out\".\n> make[1]: *** [GNUmakefile:118: check] Fehler 1\n> make[1]: Verzeichnis „/home/frastr/devel/postgresql/src/test/regress“ wird verlassen\n> make: *** [GNUmakefile:69: check] Fehler 2\n\nHmm, I did not get this error after upgrading libxml2 on my Arch machine\na couple of weeks ago. Did you just run make check after upgrading,\nwithout recompiling? Please try make clean && make && make check\n\n-- \nErik\n\n\n", "msg_date": "Fri, 5 Jul 2024 16:41:39 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML test error on Arch Linux" } ]
[ { "msg_contents": "Hi hackers,\r\n&nbsp;&nbsp;\r\n# Background\r\n\r\n\r\nPostgreSQL maintains a list of temporary tables for 'on commit\r\ndrop/delete rows' via an on_commits list in the session. Once a\r\ntransaction accesses a temp table or namespace, the\r\nXACT_FLAGS_ACCESSEDTEMPNAMESPACE flag is set. Before committing, the\r\nPreCommit_on_commit_actions function truncates all 'commit delete\r\nrows' temp tables, even those not accessed in the current transaction.\r\nCommit performance can degrade if there are many such temp tables.\r\n\r\n\r\nIn practice, users created many 'commit delete rows' temp tables in a\r\nsession, but each transaction only accessed a few. With varied access\r\nfrequency, users were reluctant to change to 'on commit drop'.\r\n\r\n\r\nBelow is an example showing the effect of the number of temp tables\r\non commit performance:\r\n```\r\n-- 100\r\nDO $$\r\nDECLARE\r\n&nbsp; &nbsp; begin\r\n&nbsp; &nbsp; &nbsp; &nbsp; FOR i IN 1..100 LOOP\r\n&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; EXECUTE format('CREATE TEMP TABLE temp_table_%s (id int) on commit delete ROWS', i) ;\r\n&nbsp; &nbsp; &nbsp; &nbsp; END LOOP;\r\n&nbsp; &nbsp; END;\r\n$$;\r\npostgres=# insert into temp_table_1 select 1;\r\nINSERT 0 1\r\nTime: 1.325 ms\r\npostgres=# insert into temp_table_1 select 1;\r\nINSERT 0 1\r\nTime: 1.330 ms\r\n```\r\n\r\n\r\n```\r\n-- 1000\r\nDO $$\r\nDECLARE\r\n&nbsp; &nbsp; begin\r\n&nbsp; &nbsp; &nbsp; &nbsp; FOR i IN 1..1000 LOOP\r\n&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; EXECUTE format('CREATE TEMP TABLE temp_table_%s (id int) on commit delete ROWS', i) ;\r\n&nbsp; &nbsp; &nbsp; &nbsp; END LOOP;\r\n&nbsp; &nbsp; END;\r\n$$;\r\npostgres=# insert into temp_table_1 select 1;\r\nINSERT 0 1\r\nTime: 10.939 ms\r\npostgres=# insert into temp_table_1 select 1;\r\nINSERT 0 1\r\nTime: 10.955 ms\r\n```\r\n\r\n\r\n```\r\n-- 10000\r\nDO $$\r\nDECLARE\r\n&nbsp; &nbsp; begin\r\n&nbsp; &nbsp; &nbsp; &nbsp; FOR i IN 1..10000 LOOP\r\n&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; EXECUTE format('CREATE TEMP TABLE temp_table_%s (id int) on commit delete ROWS', i) ;\r\n&nbsp; &nbsp; &nbsp; &nbsp; END LOOP;\r\n&nbsp; &nbsp; END;\r\n$$;\r\npostgres=# insert into temp_table_1 select 1;\r\nINSERT 0 1\r\nTime: 110.253 ms\r\npostgres=# insert into temp_table_1 select 1;\r\nINSERT 0 1\r\nTime: 175.875 ms\r\n```\r\n\r\n\r\n# Solution\r\n\r\n\r\nAn intuitive solution is to truncate only the temp tables that\r\nthe current process has accessed upon transaction commit.\r\n\r\n\r\nIn the attached patch (based on HEAD):\r\n- A Bloom filter (can also be a list or hash table) maintains\r\nthe temp tables accessed by the current transaction.\r\n- Only temp tables filtered through the Bloom filter need\r\ntruncation. 
False positives may occur, but they are\r\nacceptable.\r\n- The Bloom filter is reset at the start of the transaction,\r\nindicating no temp tables have been accessed by the\r\ncurrent transaction yet.\r\n\r\n\r\nAfter optimization, the performance for the same case is as\r\nfollows:\r\n```\r\n-- 100\r\nDO $$\r\nDECLARE\r\n&nbsp; &nbsp; begin\r\n&nbsp; &nbsp; &nbsp; &nbsp; FOR i IN 1..100 LOOP\r\n&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; EXECUTE format('CREATE TEMP TABLE temp_table_%s (id int) on commit delete ROWS', i) ;\r\n&nbsp; &nbsp; &nbsp; &nbsp; END LOOP;\r\n&nbsp; &nbsp; END;\r\n$$;\r\npostgres=# insert into temp_table_1 select 1;\r\nINSERT 0 1\r\nTime: 0.447 ms\r\npostgres=# insert into temp_table_1 select 1;\r\nINSERT 0 1\r\nTime: 0.453 ms\r\n```\r\n\r\n\r\n```\r\n-- 1000\r\nDO $$\r\nDECLARE\r\n&nbsp; &nbsp; begin\r\n&nbsp; &nbsp; &nbsp; &nbsp; FOR i IN 1..1000 LOOP\r\n&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; EXECUTE format('CREATE TEMP TABLE temp_table_%s (id int) on commit delete ROWS', i) ;\r\n&nbsp; &nbsp; &nbsp; &nbsp; END LOOP;\r\n&nbsp; &nbsp; END;\r\n$$;\r\npostgres=# insert into temp_table_1 select 1;\r\nINSERT 0 1\r\nTime: 0.531 ms\r\npostgres=# insert into temp_table_1 select 1;\r\nINSERT 0 1\r\nTime: 0.567 ms\r\n```\r\n\r\n\r\n```\r\n-- 10000\r\nDO $$\r\nDECLARE\r\n&nbsp; &nbsp; begin\r\n&nbsp; &nbsp; &nbsp; &nbsp; FOR i IN 1..10000 LOOP\r\n&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; EXECUTE format('CREATE TEMP TABLE temp_table_%s (id int) on commit delete ROWS', i) ;\r\n&nbsp; &nbsp; &nbsp; &nbsp; END LOOP;\r\n&nbsp; &nbsp; END;\r\n$$;\r\npostgres=# insert into temp_table_1 select 1;\r\nINSERT 0 1\r\nTime: 1.370 ms\r\npostgres=# insert into temp_table_1 select 1;\r\nINSERT 0 1\r\nTime: 1.362 ms\r\n```\r\n\r\n\r\nHoping for some suggestions from hackers.\r\n\r\n\r\nBest Regards,\r\nFei Changhong\r\n\r\n\r\n&nbsp;", "msg_date": "Fri, 5 Jul 2024 23:19:22 +0800", "msg_from": "\"=?ISO-8859-1?B?ZmVpY2hhbmdob25n?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Optimize commit performance with a large number of 'on commit delete\n rows' temp tables" }, { "msg_contents": "\"=?ISO-8859-1?B?ZmVpY2hhbmdob25n?=\" <[email protected]> writes:\n> PostgreSQL maintains a list of temporary tables for 'on commit\n> drop/delete rows' via an on_commits list in the session. Once a\n> transaction accesses a temp table or namespace, the\n> XACT_FLAGS_ACCESSEDTEMPNAMESPACE flag is set. Before committing, the\n> PreCommit_on_commit_actions function truncates all 'commit delete\n> rows' temp tables, even those not accessed in the current transaction.\n> Commit performance can degrade if there are many such temp tables.\n\nHmm. I can sympathize with wanting to improve the performance of\nthis edge case, but it is an edge case: you are the first to\ncomplain about it. You cannot trash the performance of more typical\ncases in order to get there ...\n\n> In the attached patch (based on HEAD):\n> - A Bloom filter (can also be a list or hash table) maintains\n> the temp tables accessed by the current transaction.\n\n... and I'm afraid this proposal may do exactly that. 
Our bloom\nfilters are pretty heavyweight objects, so making one in situations\nwhere it buys nothing is likely to add a decent amount of overhead.\n(I've not tried to quantify that for this particular patch.)\n\nI wonder if we could instead add marker fields to the OnCommitItem\nstructs indicating whether their rels were touched in the current\ntransaction, and use those to decide whether we need to truncate.\n\nAnother possibility is to make the bloom filter only when the\nnumber of OnCommitItems exceeds some threshold (compare d365ae705).\n\nBTW, I wonder if we could improve PreCommit_on_commit_actions by\nhaving it just quit immediately if XACT_FLAGS_ACCESSEDTEMPNAMESPACE\nis not set. I think that must be set if any ON COMMIT DROP tables\nhave been made, so there should be nothing to do if not. In normal\ncases that's not going to buy much because the OnCommitItems list\nis short, but in your scenario maybe it could win.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Jul 2024 12:15:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize commit performance with a large number of 'on commit\n delete rows' temp tables" }, { "msg_contents": "Thank you for your attention and suggestions.\n\n> On Jul 6, 2024, at 00:15, Tom Lane <[email protected]> wrote:\n> \n> <[email protected]> writes:\n>> PostgreSQL maintains a list of temporary tables for 'on commit\n>> drop/delete rows' via an on_commits list in the session. Once a\n>> transaction accesses a temp table or namespace, the\n>> XACT_FLAGS_ACCESSEDTEMPNAMESPACE flag is set. Before committing, the\n>> PreCommit_on_commit_actions function truncates all 'commit delete\n>> rows' temp tables, even those not accessed in the current transaction.\n>> Commit performance can degrade if there are many such temp tables.\n> \n> Hmm. I can sympathize with wanting to improve the performance of\n> this edge case, but it is an edge case: you are the first to\n> complain about it. You cannot trash the performance of more typical\n> cases in order to get there ...\n>> In the attached patch (based on HEAD):\n>> - A Bloom filter (can also be a list or hash table) maintains\n>> the temp tables accessed by the current transaction.\n> \n> ... and I'm afraid this proposal may do exactly that. Our bloom\n> filters are pretty heavyweight objects, so making one in situations\n> where it buys nothing is likely to add a decent amount of overhead.\n> (I've not tried to quantify that for this particular patch.)\nYes, this is an edge case, but we have more than one customer facing the issue,\nand unfortunately, they are not willing to modify their service code.\nWe should indeed avoid negatively impacting typical cases:\n- Each connection requires an extra 1KB for the filter (the original bloom filter\n implementation had a minimum of 1MB, which I've adjusted to this smaller value).\n- The filter is reset at the start of each transaction, which is unnecessary for\n sessions that do not access temporary tables.\n- In the PreCommit_on_commit_actions function, each 'on commit delete rows'\n temporary table has to be filtered through the bloom filter, which incurs some\n CPU overhead. However, this might be negligible compared to the IO cost of\n truncation.\n\nAdding a threshold for using the bloom filter is a good idea. We can create the\nbloom filter only when the current number of OnCommitItems exceeds the threshold\nat the start of a transaction, which should effectively avoid affecting typical\ncases. 
I will provide a new patch later to implement this.\n\n> I wonder if we could instead add marker fields to the OnCommitItem\n> structs indicating whether their rels were touched in the current\n> transaction, and use those to decide whether we need to truncate.\nAdding a flag to OnCommitItem to indicate whether the temp table was accessed\nby the current transaction is feasible. But, locating the OnCommitItem by relid\nefficiently when opening a relation may require an extra hash table to map relids\nto OnCommitItems.\n\n> Another possibility is to make the bloom filter only when the\n> number of OnCommitItems exceeds some threshold (compare d365ae705).\n> \n> BTW, I wonder if we could improve PreCommit_on_commit_actions by\n> having it just quit immediately if XACT_FLAGS_ACCESSEDTEMPNAMESPACE\n> is not set. I think that must be set if any ON COMMIT DROP tables\n> have been made, so there should be nothing to do if not. In normal\n> cases that's not going to buy much because the OnCommitItems list\n> is short, but in your scenario maybe it could win.\nI also think when XACT_FLAGS_ACCESSEDTEMPNAMESPACE is not set, it's unnecessary\nto iterate over on_commits (unless I'm overlooking something), which would be\nbeneficial for the aforementioned scenarios as well.\n\nBest Regards,\nFei Changhong\n\n\nThank you for your attention and suggestions.On Jul 6, 2024, at 00:15, Tom Lane <[email protected]> wrote:<[email protected]> writes:PostgreSQL maintains a list of temporary tables for 'on commitdrop/delete rows' via an on_commits list in the session. Once atransaction accesses a temp table or namespace, theXACT_FLAGS_ACCESSEDTEMPNAMESPACE flag is set. Before committing, thePreCommit_on_commit_actions function truncates all 'commit deleterows' temp tables, even those not accessed in the current transaction.Commit performance can degrade if there are many such temp tables.Hmm.  I can sympathize with wanting to improve the performance ofthis edge case, but it is an edge case: you are the first tocomplain about it.  You cannot trash the performance of more typicalcases in order to get there ...In the attached patch (based on HEAD):- A Bloom filter (can also be a list or hash table) maintainsthe temp tables accessed by the current transaction.... and I'm afraid this proposal may do exactly that.  Our bloomfilters are pretty heavyweight objects, so making one in situationswhere it buys nothing is likely to add a decent amount of overhead.(I've not tried to quantify that for this particular patch.)Yes, this is an edge case, but we have more than one customer facing the issue,and unfortunately, they are not willing to modify their service code.We should indeed avoid negatively impacting typical cases:- Each connection requires an extra 1KB for the filter (the original bloom filter  implementation had a minimum of 1MB, which I've adjusted to this smaller value).- The filter is reset at the start of each transaction, which is unnecessary for  sessions that do not access temporary tables.- In the PreCommit_on_commit_actions function, each 'on commit delete rows'  temporary table has to be filtered through the bloom filter, which incurs some  CPU overhead. However, this might be negligible compared to the IO cost of  truncation.Adding a threshold for using the bloom filter is a good idea. We can create thebloom filter only when the current number of OnCommitItems exceeds the thresholdat the start of a transaction, which should effectively avoid affecting typicalcases. 
I will provide a new patch later to implement this.I wonder if we could instead add marker fields to the OnCommitItemstructs indicating whether their rels were touched in the currenttransaction, and use those to decide whether we need to truncate.Adding a flag to OnCommitItem to indicate whether the temp table was accessedby the current transaction is feasible. But, locating the OnCommitItem by relidefficiently when opening a relation may require an extra hash table to map relidsto OnCommitItems.Another possibility is to make the bloom filter only when thenumber of OnCommitItems exceeds some threshold (compare d365ae705).BTW, I wonder if we could improve PreCommit_on_commit_actions byhaving it just quit immediately if XACT_FLAGS_ACCESSEDTEMPNAMESPACEis not set.  I think that must be set if any ON COMMIT DROP tableshave been made, so there should be nothing to do if not.  In normalcases that's not going to buy much because the OnCommitItems listis short, but in your scenario maybe it could win.I also think when XACT_FLAGS_ACCESSEDTEMPNAMESPACE is not set, it's unnecessaryto iterate over on_commits (unless I'm overlooking something), which would bebeneficial for the aforementioned scenarios as well.\nBest Regards,Fei Changhong", "msg_date": "Sat, 6 Jul 2024 02:08:30 +0800", "msg_from": "feichanghong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize commit performance with a large number of 'on commit\n delete rows' temp tables" }, { "msg_contents": "The patch in the attachment, compared to the previous one, adds a threshold for\r\nusing the bloom filter. The current ON_COMMITS_FILTER_THRESHOLD is set to 64,\r\nwhich may not be the optimal value. Perhaps this threshold could be configured\r\nas a GUC parameter?\r\n\r\n\r\nBest Regards,\r\nFei Changhong", "msg_date": "Sat, 6 Jul 2024 03:39:41 +0800", "msg_from": "\"=?ISO-8859-1?B?ZmVpY2hhbmdob25n?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize commit performance with a large number of 'on commit\n delete rows' temp tables" }, { "msg_contents": "Hi feichanghong\n Thanks for updating the patch ,I think could be configured as a GUC\nparameter,PostgreSQL has too many static variables that are written to\ndeath and explicitly stated in the code comments may later be designed as\nparameters. Now that more and more applications that previously used oracle\nare migrating to postgresql, there will be more and more scenarios where\ntemporary tables are heavily used.Because oracle will global temporary\ntablespace optimised for this business scenario, which works well in\noracle, migrating to pg faces very tricky performance issues,I'm sure the\npatch has vaule\n\nBest Regards\n\nfeichanghong <[email protected]> 于2024年7月6日周六 03:40写道:\n\n> The patch in the attachment, compared to the previous one, adds a\n> threshold for\n> using the bloom filter. The current ON_COMMITS_FILTER_THRESHOLD is set to\n> 64,\n> which may not be the optimal value. Perhaps this threshold could be\n> configured\n> as a GUC parameter?\n> ------------------------------\n> Best Regards,\n> Fei Changhong\n>\n\nHi feichanghong     Thanks for updating the patch ,I think could be configured as a GUC parameter,PostgreSQL has too many static variables that are written to death and explicitly stated in the code comments may later be designed as parameters. 
Now that more and more applications that previously used oracle are migrating to postgresql, there will be more and more scenarios where temporary tables are heavily used.Because oracle will global temporary tablespace optimised for this business scenario, which works well in oracle, migrating to pg faces very tricky performance issues,I'm sure the patch has vauleBest Regardsfeichanghong <[email protected]> 于2024年7月6日周六 03:40写道:The patch in the attachment, compared to the previous one, adds a threshold forusing the bloom filter. The current ON_COMMITS_FILTER_THRESHOLD is set to 64,which may not be the optimal value. Perhaps this threshold could be configuredas a GUC parameter?Best Regards,Fei Changhong", "msg_date": "Sun, 7 Jul 2024 21:32:24 +0800", "msg_from": "wenhui qiu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize commit performance with a large number of 'on commit\n delete rows' temp tables" }, { "msg_contents": "Hi wenhui,\r\n\r\n\r\nThank you for your suggestions. I have supplemented some performance tests.\r\n\r\n\r\nHere is the TPS performance data for different numbers of temporary tables\r\nunder different thresholds, as compared with the head (98347b5a). The testing\r\ntool used is pgbench, with the workload being to insert into one temporary\r\ntable (when the number of temporary tables is 0, the workload is SELECT 1):\r\n\r\n\r\n| table num&nbsp; &nbsp; &nbsp;| 0&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | 1&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | 5&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;| 10&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | 100&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;| 1000&nbsp; &nbsp; &nbsp; &nbsp; |\r\n|---------------|--------------|--------------|-------------|-------------|-------------|-------------|\r\n| head 98347b5a | 39912.722209 | 10064.306268 | 7452.071689 | 5641.487369 | 1073.203851 | 114.530958&nbsp; |\r\n| threshold 1&nbsp; &nbsp;| 40332.367414 | 7078.117192&nbsp; | 7044.951156 | 7020.249434 | 6893.652062 | 5826.597260 |\r\n| threshold 5&nbsp; &nbsp;| 40173.562744 | 10017.532933 | 7023.770203 | 7024.283577 | 6919.769315 | 5806.314494 |\r\n\r\n\r\nHere is the TPS performance data for different numbers of temporary tables\r\nat a threshold of 5, compared with the head (commit 98347b5a). The testing tool\r\nis pgbench, with the workload being to insert into all temporary tables:\r\n\r\n\r\n| table num&nbsp; &nbsp; &nbsp;| 1&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;| 5&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;| 10&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | 100&nbsp; &nbsp; &nbsp; &nbsp; | 1000&nbsp; &nbsp; &nbsp; |\r\n|---------------|-------------|-------------|-------------|------------|-----------|\r\n| head 98347b5a | 7243.945042 | 3627.290594 | 2262.594766 | 297.856756 | 27.745808 |\r\n| threshold 5&nbsp; &nbsp;| 7287.764656 | 3130.814888 | 2038.308763 | 288.226032 | 27.705149 |\r\n\r\n\r\nAccording to test results, the patch does cause some performance loss with\r\nfewer temporary tables, but benefits are substantial when many temporary tables\r\nare used. 
The specific threshold could be set to 10 (HDDs may require a smaller\r\none).\r\n\r\n\r\nI've provided two patches in the attachments, both with a default threshold of 10.\r\nOne has the threshold configured as a GUC parameter, while the other is hardcoded\r\nto 10.\r\n\r\nBest Regards,\r\nFei Changhong", "msg_date": "Mon, 8 Jul 2024 10:35:44 +0800", "msg_from": "\"=?ISO-8859-1?B?ZmVpY2hhbmdob25n?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize commit performance with a large number of 'on commit\n delete rows' temp tables" }, { "msg_contents": "Hi feichanghong\n I don't think it's acceptable to introduce a patch to fix a problem\nthat leads to performance degradation, or can we take tom's suggestion to\noptimise PreCommit_on_commit_actions? I think it to miss the forest for\nthe trees\n\n\n\nBest Regards,\n\nfeichanghong <[email protected]> 于2024年7月8日周一 10:35写道:\n\n> Hi wenhui,\n>\n> Thank you for your suggestions. I have supplemented some performance tests.\n>\n> Here is the TPS performance data for different numbers of temporary tables\n> under different thresholds, as compared with the head (98347b5a). The\n> testing\n> tool used is pgbench, with the workload being to insert into one temporary\n> table (when the number of temporary tables is 0, the workload is SELECT 1):\n>\n> | table num | 0 | 1 | 5 | 10\n> | 100 | 1000 |\n>\n> |---------------|--------------|--------------|-------------|-------------|-------------|-------------|\n> | head 98347b5a | 39912.722209 | 10064.306268 | 7452.071689 | 5641.487369\n> | 1073.203851 | 114.530958 |\n> | threshold 1 | 40332.367414 | 7078.117192 | 7044.951156 | 7020.249434\n> | 6893.652062 | 5826.597260 |\n> | threshold 5 | 40173.562744 | 10017.532933 | 7023.770203 | 7024.283577\n> | 6919.769315 | 5806.314494 |\n>\n> Here is the TPS performance data for different numbers of temporary tables\n> at a threshold of 5, compared with the head (commit 98347b5a). The testing\n> tool\n> is pgbench, with the workload being to insert into all temporary tables:\n>\n> | table num | 1 | 5 | 10 | 100 |\n> 1000 |\n>\n> |---------------|-------------|-------------|-------------|------------|-----------|\n> | head 98347b5a | 7243.945042 | 3627.290594 | 2262.594766 | 297.856756 |\n> 27.745808 |\n> | threshold 5 | 7287.764656 | 3130.814888 | 2038.308763 | 288.226032 |\n> 27.705149 |\n>\n> According to test results, the patch does cause some performance loss with\n> fewer temporary tables, but benefits are substantial when many temporary\n> tables\n> are used. The specific threshold could be set to 10 (HDDs may require a\n> smaller\n> one).\n>\n> I've provided two patches in the attachments, both with a default\n> threshold of 10.\n> One has the threshold configured as a GUC parameter, while the other is\n> hardcoded\n> to 10.\n> ------------------------------\n> Best Regards,\n> Fei Changhong\n>\n\nHi feichanghong    I don't think it's acceptable to introduce a patch to fix a problem that leads to performance degradation, or can we take tom's suggestion to optimise PreCommit_on_commit_actions?  I think it to miss the forest for the treesBest Regards,feichanghong <[email protected]> 于2024年7月8日周一 10:35写道:Hi wenhui,Thank you for your suggestions. I have supplemented some performance tests.Here is the TPS performance data for different numbers of temporary tablesunder different thresholds, as compared with the head (98347b5a). 
The testingtool used is pgbench, with the workload being to insert into one temporarytable (when the number of temporary tables is 0, the workload is SELECT 1):| table num     | 0            | 1            | 5           | 10          | 100         | 1000        ||---------------|--------------|--------------|-------------|-------------|-------------|-------------|| head 98347b5a | 39912.722209 | 10064.306268 | 7452.071689 | 5641.487369 | 1073.203851 | 114.530958  || threshold 1   | 40332.367414 | 7078.117192  | 7044.951156 | 7020.249434 | 6893.652062 | 5826.597260 || threshold 5   | 40173.562744 | 10017.532933 | 7023.770203 | 7024.283577 | 6919.769315 | 5806.314494 |Here is the TPS performance data for different numbers of temporary tablesat a threshold of 5, compared with the head (commit 98347b5a). The testing toolis pgbench, with the workload being to insert into all temporary tables:| table num     | 1           | 5           | 10          | 100        | 1000      ||---------------|-------------|-------------|-------------|------------|-----------|| head 98347b5a | 7243.945042 | 3627.290594 | 2262.594766 | 297.856756 | 27.745808 || threshold 5   | 7287.764656 | 3130.814888 | 2038.308763 | 288.226032 | 27.705149 |According to test results, the patch does cause some performance loss withfewer temporary tables, but benefits are substantial when many temporary tablesare used. The specific threshold could be set to 10 (HDDs may require a smallerone).I've provided two patches in the attachments, both with a default threshold of 10.One has the threshold configured as a GUC parameter, while the other is hardcodedto 10.Best Regards,Fei Changhong", "msg_date": "Mon, 8 Jul 2024 12:18:17 +0800", "msg_from": "wenhui qiu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize commit performance with a large number of 'on commit\n delete rows' temp tables" }, { "msg_contents": "Hi wenhui,\n\n> On Jul 8, 2024, at 12:18, wenhui qiu <[email protected]> wrote:\n> \n> Hi feichanghong\n> I don't think it's acceptable to introduce a patch to fix a problem that leads to performance degradation, or can we take tom's suggestion to optimise PreCommit_on_commit_actions? I think it to miss the forest for the trees\n\nYou're right, any performance regression is certainly unacceptable. That's why\nwe've introduced a threshold. The bloom filter optimization is only applied\nwhen the number of temporary tables exceeds this threshold. Test data also\nreveals that with a threshold of 10, barring cases where all temporary tables\nare implicated in a transaction, there's hardly any performance loss.\n\n\"Improve PreCommit_on_commit_actions by having it just quit immediately if\nXACT_FLAGS_ACCESSEDTEMPNAMESPACE is not set\" can only reduce the overhead of\ntraversing the OnCommitItem List but still doesn't address the issue with\ntemporary table truncation.\n\nLooking forward to more suggestions!\n\nBest Regards,\nFei Changhong\n\n\nHi wenhui,On Jul 8, 2024, at 12:18, wenhui qiu <[email protected]> wrote:Hi feichanghong    I don't think it's acceptable to introduce a patch to fix a problem that leads to performance degradation, or can we take tom's suggestion to optimise PreCommit_on_commit_actions?  I think it to miss the forest for the trees\nYou're right, any performance regression is certainly unacceptable. That's whywe've introduced a threshold. The bloom filter optimization is only appliedwhen the number of temporary tables exceeds this threshold. 
Test data alsoreveals that with a threshold of 10, barring cases where all temporary tablesare implicated in a transaction, there's hardly any performance loss.\"Improve PreCommit_on_commit_actions by having it just quit immediately ifXACT_FLAGS_ACCESSEDTEMPNAMESPACE is not set\" can only reduce the overhead oftraversing the OnCommitItem List but still doesn't address the issue withtemporary table truncation.Looking forward to more suggestions!\nBest Regards,Fei Changhong", "msg_date": "Mon, 8 Jul 2024 12:41:43 +0800", "msg_from": "feichanghong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize commit performance with a large number of 'on commit\n delete rows' temp tables" }, { "msg_contents": "Hi feichanghong\n I think adding an intercept this way is better than implementing a\nglobal temp table,there is a path to implement a global temporary table (\nhttps://www.postgresql.org/message-id/[email protected]),you\ncan consult with them ,they work at Alibaba\n\n\nBest Regards,\n\nfeichanghong <[email protected]> 于2024年7月8日周一 12:42写道:\n\n> Hi wenhui,\n>\n> On Jul 8, 2024, at 12:18, wenhui qiu <[email protected]> wrote:\n>\n> Hi feichanghong\n> I don't think it's acceptable to introduce a patch to fix a problem\n> that leads to performance degradation, or can we take tom's suggestion to\n> optimise PreCommit_on_commit_actions? I think it to miss the forest for\n> the trees\n>\n>\n> You're right, any performance regression is certainly unacceptable. That's\n> why\n>\n> we've introduced a threshold. The bloom filter optimization is only applied\n>\n> when the number of temporary tables exceeds this threshold. Test data also\n>\n> reveals that with a threshold of 10, barring cases where all temporary\n> tables\n>\n> are implicated in a transaction, there's hardly any performance loss.\n>\n>\n> \"Improve PreCommit_on_commit_actions by having it just quit immediately if\n>\n> XACT_FLAGS_ACCESSEDTEMPNAMESPACE is not set\" can only reduce the overhead\n> of\n>\n> traversing the OnCommitItem List but still doesn't address the issue with\n>\n> temporary table truncation.\n>\n>\n> Looking forward to more suggestions!\n>\n> Best Regards,\n> Fei Changhong\n>\n>\n\nHi feichanghong    I think adding an intercept this way is better than implementing a global temp table,there is  a path to  implement a global temporary table (https://www.postgresql.org/message-id/[email protected]),you can consult with them ,they work at AlibabaBest Regards,feichanghong <[email protected]> 于2024年7月8日周一 12:42写道:Hi wenhui,On Jul 8, 2024, at 12:18, wenhui qiu <[email protected]> wrote:Hi feichanghong    I don't think it's acceptable to introduce a patch to fix a problem that leads to performance degradation, or can we take tom's suggestion to optimise PreCommit_on_commit_actions?  I think it to miss the forest for the trees\nYou're right, any performance regression is certainly unacceptable. That's whywe've introduced a threshold. The bloom filter optimization is only appliedwhen the number of temporary tables exceeds this threshold. 
Test data also reveals that with a threshold of 10, barring cases where all temporary tables are implicated in a transaction, there's hardly any performance loss. \"Improve PreCommit_on_commit_actions by having it just quit immediately if XACT_FLAGS_ACCESSEDTEMPNAMESPACE is not set\" can only reduce the overhead of traversing the OnCommitItem List but still doesn't address the issue with temporary table truncation. Looking forward to more suggestions!\nBest Regards,\nFei Changhong", "msg_date": "Mon, 8 Jul 2024 14:05:09 +0800", "msg_from": "wenhui qiu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize commit performance with a large number of 'on commit\n delete rows' temp tables" }, { "msg_contents": "Hi wenhui,\r\n\r\n\r\n\r\n\r\nI carefully analyzed the reason for the performance regression with fewer\r\n\r\ntemporary tables in the previous patch (v1-0002-): the k_hash_funcs determined\r\n\r\nby the bloom_create function were 10 (MAX_HASH_FUNCS), which led to an excessive\r\n\r\ncalculation overhead for the bloom filter.\r\n\r\n\r\n\r\n\r\nBased on the calculation formula for the bloom filter, when the number of items\r\n\r\nis 100 and k_hash_funcs is 2, the false positive rate for a 1KB bloom filter is\r\n\r\n0.0006096; when the number of items is 1000, the false positive rate is\r\n\r\n0.048929094. Therefore, k_hash_funcs of 2 can already achieve a decent false\r\n\r\npositive rate, while effectively reducing the computational overhead of the\r\n\r\nbloom filter.\r\n\r\n\r\n\r\n\r\nI have re-implemented a bloom_create_v2 function to create a bloom filter with\r\n\r\na specified number of hash functions and specified memory size.\r\n\r\n\r\n\r\n\r\nFrom the test data below, it can be seen that the new patch in the attachment\r\n\r\n(v1-0003-) does not lead to performance regression in any scenario.\r\n\r\nFurthermore, the default threshold value can be lowered to 2.\r\n\r\n\r\n\r\n\r\nHere is the TPS performance data for different numbers of temporary tables\r\n\r\nunder different thresholds, as compared with the head (98347b5a). The testing\r\n\r\ntool used is pgbench, with the workload being to insert into one temporary\r\n\r\ntable (when the number of temporary tables is 0, the workload is SELECT 1):\r\n\r\n\r\n\r\n\r\n|table num     |0           |1           |2          |5          |10         |100        |1000       |\r\n\r\n|--------------|------------|------------|-----------|-----------|-----------|-----------|-----------|\r\n\r\n|head(98347b5a)|39912.722209|10064.306268|9183.871298|7452.071689|5641.487369|1073.203851|114.530958 |\r\n\r\n|threshold-2   |40097.047974|10009.598155|9982.172866|9955.651235|9999.338901|9785.626296|8278.828828|\r\n\r\n\r\n\r\n\r\nHere is the TPS performance data for different numbers of temporary tables\r\n\r\nat a threshold of 2, compared with the head (commit 98347b5a). 
The testing tool\r\n\r\nis pgbench, with the workload being to insert into all temporary tables:\r\n\r\n\r\n\r\n\r\n|table num     |1          |2          |5          |10         |100       |1000     |\r\n\r\n|--------------|-----------|-----------|-----------|-----------|----------|---------|\r\n\r\n|head(98347b5a)|7243.945042|5734.545012|3627.290594|2262.594766|297.856756|27.745808|\r\n\r\n|threshold-2   |7289.171381|5740.849676|3626.135510|2207.439931|293.145036|27.020953|\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nI have previously researched the implementation of the Global Temp Table (GTT)\r\n\r\nyou mentioned, and it has been used in Alibaba Cloud's PolarDB (Link [1]).\r\n\r\nGTT can prevent truncation operations on temporary tables that have not been\r\n\r\naccessed by the current session (those not in the OnCommitItem List), but GTT\r\n\r\nthat have been accessed by the current session still need to be truncated at\r\n\r\ncommit time. Therefore, GTT also requires the optimizations mentioned in the\r\n\r\nabove patch.\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n[1] https://www.alibabacloud.com/help/en/polardb/polardb-for-oracle/using-global-temporary-tables?spm=a3c0i.23458820.2359477120.1.66e16e9bUpV7cK\r\n\r\n\r\nBest Regards,\r\nFei Changhong", "msg_date": "Mon, 8 Jul 2024 22:17:09 +0800", "msg_from": "\"=?ISO-8859-1?B?ZmVpY2hhbmdob25n?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize commit performance with a large number of 'on commit\n delete rows' temp tables" } ]
[ { "msg_contents": "Attached patch teaches nbtree backwards scans to avoid needlessly\nrelocking a previously read page/buffer at the point where we need to\nconsider reading the next page (the page to the left).\n\nCurrently, we do this regardless of whether or not we already decided\nto end the scan, back when we read the page within _bt_readpage. We'll\nrelock the page we've just read (and just unlocked), just to follow\nits left link, while making sure to deal with concurrent page splits\ncorrectly. But why bother relocking the page, or with thinking about\nconcurrent splits, if we can already plainly see that we cannot\npossibly need to find matching tuples on the left sibling page?\n\nThe patch just adds a simple precheck, which works in the obvious way.\nSeems like this was a missed opportunity for commit 2ed5b87f96.\n\nOn HEAD, the following query requires 24 buffer hits (I'm getting a\nstandard/forward index scan for this):\n\nselect\n abalance\nfrom\n pgbench_accounts\nwhere\n aid in (1, 500, 1000, 1500, 2000, 3000) order by aid asc;\n\nHowever, we don't see that with the backwards scan variant:\n\nselect\n abalance\nfrom\n pgbench_accounts\nwhere\n aid in (1, 500, 1000, 1500, 2000, 3000) order by aid desc;\n\nWe actually see 30 buffer hits for this on HEAD. Whereas with the\npatch, both variants get only 24 buffer hits -- there's parity, at\nleast in cases like these two.\n\nNote that we only \"achieve parity\" here because we happen to be using\na SAOP, requiring multiple primitive index scans, each of which ends\nwith its own superfluous lock acquisition. Backwards scans remain at a\ndisadvantage with regard to buffer locks acquired in other cases --\ncases that happen to involve scanning several sibling leaf pages in\nsequence (no change there).\n\nIt's probably possible to fix those more complicated cases too. But\nthat would require a significantly more complicated approach. I\nhaven't touched existing comments in _bt_readnextpage that contemplate\nthis possibility.\n\n-- \nPeter Geoghegan", "msg_date": "Fri, 5 Jul 2024 12:47:49 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Avoiding superfluous buffer locking during nbtree backwards scans" }, { "msg_contents": "On Fri, 5 Jul 2024 at 18:48, Peter Geoghegan <[email protected]> wrote:\n>\n> Attached patch teaches nbtree backwards scans to avoid needlessly\n> relocking a previously read page/buffer at the point where we need to\n> consider reading the next page (the page to the left).\n\n+1, LGTM.\n\nThis changes the backward scan code in _bt_readpage to have an\napproximately equivalent handling as the forward scan case for\nend-of-scan cases, which is an improvement IMO.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)\n\n\n", "msg_date": "Wed, 7 Aug 2024 00:31:31 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoiding superfluous buffer locking during nbtree backwards scans" }, { "msg_contents": "On Tue, Aug 6, 2024 at 6:31 PM Matthias van de Meent\n<[email protected]> wrote:\n> +1, LGTM.\n>\n> This changes the backward scan code in _bt_readpage to have an\n> approximately equivalent handling as the forward scan case for\n> end-of-scan cases, which is an improvement IMO.\n\nPushed just now. 
Thanks for the review!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 11 Aug 2024 15:44:14 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoiding superfluous buffer locking during nbtree backwards scans" }, { "msg_contents": "On Sun, 11 Aug 2024 at 21:44, Peter Geoghegan <[email protected]> wrote:\n>\n> On Tue, Aug 6, 2024 at 6:31 PM Matthias van de Meent\n> <[email protected]> wrote:\n> > +1, LGTM.\n> >\n> > This changes the backward scan code in _bt_readpage to have an\n> > approximately equivalent handling as the forward scan case for\n> > end-of-scan cases, which is an improvement IMO.\n\nHere's a new patch that further improves the situation, so that we\ndon't try to re-lock the buffer we just accessed when we're stepping\nbackward in index scans, reducing buffer lock operations in the common\ncase by 1/2.\n\nIt also further decreases the number of code differences between\nforward and backward scans in _bt_steppage and _bt_readnextpage, with\nmostly only small differences remaining in the code paths shared\nbetween the two scan modes.\n\nThe change in lwlock.c is to silence a warning when LWLOCK_STATS is enabled.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\nI've validated my results by compiling with LWLOCK_STATS enabled (e.g.\n#define LWLOCK_STATS), and testing backward index scans, e.g.\n\nCREATE TABLE test AS SELECT generate_series(1, 1000000) as num;\nCREATE INDEX ON test (num);\nVACUUM (FREEZE) test;\n\\c -- reconnect to get fresh lwlock stats\nSELECT COUNT(num ORDER BY num DESC) FROM test;\n\\c -- reconnect to dump stats of previous session\n\nBefore this patch I consistently got `BufferContent 0xYYYYYYYY: shacq\n2` in the logs, but with this patch that has been decreased to\n`BufferContent 0xYYYYYYYY: shacq 1`", "msg_date": "Mon, 19 Aug 2024 13:43:35 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoiding superfluous buffer locking during nbtree backwards scans" }, { "msg_contents": "On Mon, 19 Aug 2024 at 13:43, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Sun, 11 Aug 2024 at 21:44, Peter Geoghegan <[email protected]> wrote:\n> >\n> > On Tue, Aug 6, 2024 at 6:31 PM Matthias van de Meent\n> > <[email protected]> wrote:\n> > > +1, LGTM.\n> > >\n> > > This changes the backward scan code in _bt_readpage to have an\n> > > approximately equivalent handling as the forward scan case for\n> > > end-of-scan cases, which is an improvement IMO.\n>\n> Here's a new patch that further improves the situation, so that we\n> don't try to re-lock the buffer we just accessed when we're stepping\n> backward in index scans, reducing buffer lock operations in the common\n> case by 1/2.\n\nAttached is an updated version of the patch, now v2, which fixes some\nassertion failures for parallel plans by passing the correct\nparameters to _bt_parallel_release for forward scans.\n\nWith the test setup below, it unifies the number of buffer accesses\nbetween forward and backward scans:\n\nCREATE TABLE test AS\n SELECT generate_series(1, 1000000) as num,\n '' j;\nCREATE INDEX ON test (num);\nVACUUM (FREEZE) test;\nSET enable_seqscan = off; SET max_parallel_workers_per_gather = 0;\nEXPLAIN (ANALYZE, BUFFERS)\nSELECT COUNT(j ORDER BY num DESC) -- or ASC\n FROM test;\n\nThe test case will have an Index Scan, which in DESC case is backward.\nWithout this patch, the IS will have 7160 accesses for the ASC\nordering, but 9892 in the DESC case (an increase of 
2732,\napproximately equivalent to the number leaf pages in the index), while\nwith this patch, the IndexScan will have 7160 buffer accesses for both\nASC and DESC ordering.\n\nIn my previous mail I used buffer lock stats from an index-only scan\nas proof of the patch working. It's been pointed out to me that an\nIndexScan is easier to extract this data from, as it drops the pin on\nthe page after getting some results from a page.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)", "msg_date": "Fri, 30 Aug 2024 21:43:22 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoiding superfluous buffer locking during nbtree backwards scans" }, { "msg_contents": "On Fri, 30 Aug 2024 at 21:43, Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Mon, 19 Aug 2024 at 13:43, Matthias van de Meent\n> <[email protected]> wrote:\n> >\n> > On Sun, 11 Aug 2024 at 21:44, Peter Geoghegan <[email protected]> wrote:\n> > >\n> > > On Tue, Aug 6, 2024 at 6:31 PM Matthias van de Meent\n> > > <[email protected]> wrote:\n> > > > +1, LGTM.\n> > > >\n> > > > This changes the backward scan code in _bt_readpage to have an\n> > > > approximately equivalent handling as the forward scan case for\n> > > > end-of-scan cases, which is an improvement IMO.\n> >\n> > Here's a new patch that further improves the situation, so that we\n> > don't try to re-lock the buffer we just accessed when we're stepping\n> > backward in index scans, reducing buffer lock operations in the common\n> > case by 1/2.\n>\n> Attached is an updated version of the patch, now v2, which fixes some\n> assertion failures for parallel plans by passing the correct\n> parameters to _bt_parallel_release for forward scans.\n\nI noticed I attached an older version of the patch which still had 1\nassertion failure case remaining (thanks cfbot), so here's v3 which\nsolves that problem.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)", "msg_date": "Mon, 2 Sep 2024 13:31:55 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoiding superfluous buffer locking during nbtree backwards scans" } ]
[ { "msg_contents": "Hi Hackers,\n\nI noticed that this query wasn't using my GiST index:\n\npostgres=# create extension btree_gist;\nCREATE EXTENSION\npostgres=# create table t (id bigint, valid_at daterange, exclude using gist (id with =, valid_at \nwith &&));\nCREATE TABLE\npostgres=# explain select * from t where id = 5;\n QUERY PLAN\n---------------------------------------------------\n Seq Scan on t (cost=0.00..25.00 rows=6 width=40)\n Filter: (id = 5)\n(2 rows)\n\nBut if I add a cast to bigint, it does:\n\npostgres=# explain select * from t where id = 5::bigint;\n QUERY PLAN\n---------------------------------------------------------------------------------\n Bitmap Heap Scan on t (cost=4.19..13.66 rows=6 width=40)\n Recheck Cond: (id = '5'::bigint)\n -> Bitmap Index Scan on t_id_valid_at_excl (cost=0.00..4.19 rows=6 width=0)\n Index Cond: (id = '5'::bigint)\n(4 rows)\n\nThere is a StackOverflow question about this with 5 upvotes, so it's not just me who was surprised \nby it.[1]\n\nThe reason is that btree_gist only creates pg_amop entries for symmetrical operators, unlike btree \nwhich has =(int2,int8), etc. So this commit adds support for all combinations of int2/int4/int8 for \nall five btree operators (</<=/=/>=/>). After doing that, my query uses the index without a cast.\n\nOne complication is that while btree has just one opfamily for everything (integer_ops), btree_gist \nsplits things up into gist_int2_ops, gist_int4_ops, and gist_int8_ops. So where to put the \noperators? I thought it made the most sense for a larger width to support smaller ones, so I added \n=(int2,int8) and =(int4,int8) to gist_int8_ops, and I added =(int2,int4) to gist_int4_ops.\n\n[1] https://stackoverflow.com/questions/71788182/postgres-not-using-btree-gist-index\n\nYours,\n\n-- \nPaul ~{:-)\[email protected]", "msg_date": "Fri, 5 Jul 2024 11:46:07 -0700", "msg_from": "Paul Jungwirth <[email protected]>", "msg_from_op": true, "msg_subject": "Add GiST support for mixed-width integer operators" }, { "msg_contents": "\n\n> On 5 Jul 2024, at 23:46, Paul Jungwirth <[email protected]> wrote:\n> \n> this commit adds support for all combinations of int2/int4/int8 for all five btree operators (</<=/=/>=/>).\n\nLooks like a nice feature to have.\nWould it make sense to do something similar to float8? Or, perhaps, some other types from btree_gist?\n\nAlso, are we sure such tests will be stable?\n+SET enable_seqscan=on;\n+-- It should use the index with a different integer width:\n+EXPLAIN (COSTS OFF)\n+SELECT count(*) FROM int4tmp WHERE a = smallint '42';\n+ QUERY PLAN \n+------------------------------------------------\n+ Aggregate\n+ -> Bitmap Heap Scan on int4tmp\n+ Recheck Cond: (a = '42'::smallint)\n+ -> Bitmap Index Scan on int4idx\n+ Index Cond: (a = '42'::smallint)\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sat, 6 Jul 2024 17:04:27 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add GiST support for mixed-width integer operators" }, { "msg_contents": "On 7/6/24 05:04, Andrey M. Borodin wrote:>> On 5 Jul 2024, at 23:46, Paul Jungwirth \n<[email protected]> wrote:\n>>\n>> this commit adds support for all combinations of int2/int4/int8 for all five btree operators (</<=/=/>=/>).\n> \n> Looks like a nice feature to have.\n> Would it make sense to do something similar to float8? 
Or, perhaps, some other types from btree_gist?\n\nHere is another patch adding float4/8 and also date/timestamp/timestamptz, in the same combinations \nas btree.\n\nNo other types seem like they deserve this treatment. For example btree doesn't mix oids with ints.\n\n> Also, are we sure such tests will be stable?\n\nYou're right, it was questionable. We hadn't analyzed the table, and after doing that the plan \nchanges from a bitmap scan to an index-only scan. That makes more sense, and I doubt it will change \nnow that it's based on statistics.\n\nYours,\n\n-- \nPaul ~{:-)\[email protected]", "msg_date": "Mon, 8 Jul 2024 09:32:55 -0700", "msg_from": "Paul Jungwirth <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add GiST support for mixed-width integer operators" }, { "msg_contents": "Paul Jungwirth <[email protected]> writes:\n> On 7/6/24 05:04, Andrey M. Borodin wrote:>> On 5 Jul 2024, at 23:46, Paul Jungwirth \n> <[email protected]> wrote:\n>>> this commit adds support for all combinations of int2/int4/int8 for all five btree operators (</<=/=/>=/>).\n\nPerhaps I'm missing something, but how can this possibly work without\nany changes to the C code?\n\nFor example, gbt_int4_consistent assumes that the comparison value\nis always an int4. Due to the way we represent Datums in-memory,\nthis will kind of work if it's really an int2 or int8 --- unless the\ncomparison value doesn't fit in int4, and then you will get a\ncompletely wrong answer based on a value truncated to int4. (But I\nwould argue that even the cases where it seems to work are a type\nviolation, and getting the right answer is accidental if you have not\napplied the correct PG_GETARG conversion macro.) Plus, int4-vs-int8\ncomparisons will fail in very obvious ways, up to and including core\ndumps, on 32-bit machines where int8 is pass-by-reference.\n\n> Here is another patch adding float4/8 and also date/timestamp/timestamptz, in the same combinations \n> as btree.\n\nSimilar problems, plus comparing timestamp to timestamptz requires a\ntimezone conversion that this code isn't doing.\n\nI think to make this work, you'd need to define a batch of new\nopclass-specific strategy numbers that convey both the kind of\ncomparison to perform and the type of the RHS value. And then\nthere would need to be a nontrivial amount of new code in the\nconsistent functions to deal with cross-type cases.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 05 Sep 2024 14:45:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add GiST support for mixed-width integer operators" } ]
[ { "msg_contents": "Hi,\n\nREL_14_STABLE and REL_15_STABLE have commit de8feb1f3, which turned on\n-Wcast-function-type, but don't have commit 101c37cd, which disabled\n-Wcast-function-type-strict as we agreed to do[1]. I noticed this on\nmy local system that has clang 18 as compiler, but you can see it on\nany build farm animal using clang 16+.\n\nSo I propose to back-patch 101c37cd to those two branches.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGKpHPDTv67Y%2Bs6yiC8KH5OXeDg6a-twWo_xznKTcG0kSA%40mail.gmail.com\n\n\n", "msg_date": "Sat, 6 Jul 2024 12:23:26 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Clang function pointer type warnings in v14, v15" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> REL_14_STABLE and REL_15_STABLE have commit de8feb1f3, which turned on\n> -Wcast-function-type, but don't have commit 101c37cd, which disabled\n> -Wcast-function-type-strict as we agreed to do[1]. I noticed this on\n> my local system that has clang 18 as compiler, but you can see it on\n> any build farm animal using clang 16+.\n\n> So I propose to back-patch 101c37cd to those two branches.\n\n+1. I see that there are a boatload of related warnings in older\nbranches too; do we want to try to do anything about that? (I doubt\ncode changes would be in-scope, but maybe adding a -Wno-foo switch\nto the build flags would be appropriate.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Jul 2024 22:35:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clang function pointer type warnings in v14, v15" }, { "msg_contents": "On Sat, Jul 6, 2024 at 2:35 PM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > So I propose to back-patch 101c37cd to those two branches.\n>\n> +1.\n\nDone.\n\n> I see that there are a boatload of related warnings in older\n> branches too; do we want to try to do anything about that? (I doubt\n> code changes would be in-scope, but maybe adding a -Wno-foo switch\n> to the build flags would be appropriate.)\n\nI don't see any on clang 16 in the 12 and 13 branches. Where are you\nseeing them?\n\nHas anyone thought about the -Wclobbered warnings from eg tamandua?\n\n\n", "msg_date": "Fri, 12 Jul 2024 13:17:23 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Clang function pointer type warnings in v14, v15" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> On Sat, Jul 6, 2024 at 2:35 PM Tom Lane <[email protected]> wrote:\n>> I see that there are a boatload of related warnings in older\n>> branches too; do we want to try to do anything about that? (I doubt\n>> code changes would be in-scope, but maybe adding a -Wno-foo switch\n>> to the build flags would be appropriate.)\n\n> I don't see any on clang 16 in the 12 and 13 branches. Where are you\n> seeing them?\n\nIn the buildfarm. \"adder\" and a bunch of other machines are throwing\n-Wcast-function-type in about two dozen places in v13, and \"jay\" is\nemitting several hundred -Wdeprecated-non-prototype warnings.\n\n> Has anyone thought about the -Wclobbered warnings from eg tamandua?\n\nI decided long ago that gcc's algorithm for emitting that warning\nhas no detectable connection to real problems. 
Maybe it's worth\nsilencing them on general principles, but I've seen no sign that\nit would actually fix any bugs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Jul 2024 22:26:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clang function pointer type warnings in v14, v15" }, { "msg_contents": "On Fri, Jul 12, 2024 at 2:26 PM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > On Sat, Jul 6, 2024 at 2:35 PM Tom Lane <[email protected]> wrote:\n> >> I see that there are a boatload of related warnings in older\n> >> branches too; do we want to try to do anything about that? (I doubt\n> >> code changes would be in-scope, but maybe adding a -Wno-foo switch\n> >> to the build flags would be appropriate.)\n>\n> > I don't see any on clang 16 in the 12 and 13 branches. Where are you\n> > seeing them?\n>\n> In the buildfarm. \"adder\" and a bunch of other machines are throwing\n> -Wcast-function-type in about two dozen places in v13, and \"jay\" is\n> emitting several hundred -Wdeprecated-non-prototype warnings.\n\nAh, I see. A few animals running with -Wextra. Whereas in v14+ we\nhave -Wcast-function-type in the actual tree, which affects people's\nworkflows more directly. Like my regular machine, or CI, when a\ncouple of the OSes' house compilers eventually reach clang 16.\n\nI gess we have to decide if it's a matter for the tree, or for the\npeople who add -Wextra, ie to decide if they want to filter that down\na bit with some -Wno-XXX. Adder already has some of those:\n\n 'CFLAGS' => '-O1 -ggdb -g3\n-fno-omit-frame-pointer -Wall -Wextra -Wno-unused-parameter\n-Wno-sign-compare -Wno-missing-field-initializers -O0',\n\n\n", "msg_date": "Fri, 12 Jul 2024 16:29:00 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Clang function pointer type warnings in v14, v15" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> On Fri, Jul 12, 2024 at 2:26 PM Tom Lane <[email protected]> wrote:\n>> Thomas Munro <[email protected]> writes:\n>>> I don't see any on clang 16 in the 12 and 13 branches. Where are you\n>>> seeing them?\n\n>> In the buildfarm. \"adder\" and a bunch of other machines are throwing\n>> -Wcast-function-type in about two dozen places in v13, and \"jay\" is\n>> emitting several hundred -Wdeprecated-non-prototype warnings.\n\n> Ah, I see. A few animals running with -Wextra.\n\nOh, got it.\n\n> I gess we have to decide if it's a matter for the tree, or for the\n> people who add -Wextra, ie to decide if they want to filter that down\n> a bit with some -Wno-XXX.\n\nI think it's reasonable to say that if you add -Wextra then it's\nup to you to deal with the results. If we were contemplating\nenabling -Wextra as standard, then it'd be our problem --- but\nnobody has proposed that AFAIR. In any case we'd surely not\nadd it now to near-dead branches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Jul 2024 00:34:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clang function pointer type warnings in v14, v15" } ]
[ { "msg_contents": "I am seeking advice. For now I hope for a suggestion about changes from\n17beta1 to 17beta2 that might cause the problem -- assuming there is a\nproblem, and not a mistake in my testing.\n\nOne of the sysbench microbenchmarks that I run does a table scan with a\nWHERE clause that filters out all rows. That WHERE clause is there to\nreduce network IO.\n\nWhile running it on a server with 16 real cores, 12 concurrent queries and\na cached database the query takes ~5% more time on 17beta2 than on 17beta1\nor 16.3. Alas, this is a Google Cloud server and perf doesn't work there.\n\nOn small servers I have at home I can reproduce the problem without\nconcurrent queries and 17beta2 is 5% to 10% slower there.\n\nThe SQL statement for the scan microbenchmark is:\nSELECT * from %s WHERE LENGTH(c) < 0\n\nI will call my small home servers SER4 and PN53. They are described here:\nhttps://smalldatum.blogspot.com/2022/10/small-servers-for-performance-testing-v4.html\n\nThe SER4 is a SER 4700u from Beelink and the PN53 is an ASUS ExpertCenter\nPN53. Both use an AMD CPU with 8 cores, AMD SMT disabled and Ubuntu 22.04.\nThe SER4 has an older, slower CPU than the PN53. In all cases I compile\nfrom source using a configure command line like:\n\n./configure --prefix=$pfx --enable-debug CFLAGS=\"-O2\n-fno-omit-frame-pointer\"\n\nI used perf to get flamegraphs during the scan microbenchmark and they are\narchived here:\nhttps://github.com/mdcallag/mytools/tree/master/bench/bugs/pg17beta2/24Jul5.sysbench.scan\n\nFor both SER4 and PN53 the time to finish the scan microbenchmark is ~10%\nlonger in 17beta2 than it was in 17beta1 and 16.3. On the PN53 the query\ntakes ~20 seconds with 16.3 and 17beta1 vs ~22.5 seconds for 17beta2 when\nthe table has 60M rows.\n\n From the SVG files for SER4 and 17beta2 I see ~2X more time in\nslot_getsomeattrs_int vs 17beta1 or 16.3 with all of that time spent in its\nchild -- tts_buffer_heap_getsomeattrs\n<https://draft.blogger.com/blog/post/edit/9149523927864751087/2076930226137683424#>.\nThat function is defined in src/backend/executor/execTuples.c and that file\nhas not changed from 17beta1 to 17beta2. But I don't keep up with\nindividual commits to Postgres so I won't guess as to the root cause.\n\nBut the SVG files for PN53 don't show the same problem:\n\n - for 16.3 I see 85.24% in ExecInterpExpr vs 11.64% in SeqNext\n - for 17beta1 I see 82.82% in ExecInterpExpr vs 14.51% in SeqNext\n - for 17beta2 I see 85.03% in ExecInterpExpr vs 12.31% in SeqNext\n - for 17beta1 and 17beta2 the flamegraphs shows time spent handling page\n faults during SeqNext, and that isn't visible on the 16.3 flamegraph\n\nAnd then for PN53 looking at slot_getsomeattrs_int, a child of\nExecInterpExpr\n\n - for 16.3 I see 6.99% in slot_getsomeattrs_int\n - for 17beta1 I see 4.29% in slot_getsomeattrs_int\n - for 17beta2 I see 3.99% in slot_getsomeattrs_int\n\nSo at this point I am confused and repeating the test with a slightly\nlarger table, but I am trying to keep the table small enough to fit in the\nPostgres buffer pool. I also have results from tables that are much larger\nthan memory, and even in that case the problem can be reproduced.\n\n-- \nMark Callaghan\[email protected]\n\nI am seeking advice. For now I hope for a suggestion about changes from 17beta1 to 17beta2 that might cause the problem -- assuming there is a problem, and not a mistake in my testing.One of the sysbench microbenchmarks that I run does a table scan with a WHERE clause that filters out all rows. 
That WHERE clause is there to reduce network IO.While running it on a server with 16 real cores, 12 concurrent queries and a cached database the query takes ~5% more time on 17beta2 than on 17beta1 or 16.3. Alas, this is a Google Cloud server and perf doesn't work there.On small servers I have at home I can reproduce the problem without concurrent queries and 17beta2 is 5% to 10% slower there.The SQL statement for the scan microbenchmark is:SELECT * from %s WHERE LENGTH(c) < 0 I will call my small home servers SER4 and PN53. They are described here:https://smalldatum.blogspot.com/2022/10/small-servers-for-performance-testing-v4.htmlThe SER4 is a SER 4700u from Beelink and the PN53 is an ASUS ExpertCenter PN53. Both use an AMD CPU with 8 cores, AMD SMT disabled and Ubuntu 22.04. The SER4 has an older, slower CPU than the PN53. In all cases I compile from source using a configure command line like:./configure --prefix=$pfx --enable-debug CFLAGS=\"-O2 -fno-omit-frame-pointer\"I used perf to get flamegraphs during the scan microbenchmark and they are archived here:https://github.com/mdcallag/mytools/tree/master/bench/bugs/pg17beta2/24Jul5.sysbench.scanFor both SER4 and PN53 the time to finish the scan microbenchmark is ~10% longer in 17beta2 than it was in 17beta1 and 16.3. On the PN53 the query takes ~20 seconds with 16.3 and 17beta1 vs ~22.5 seconds for 17beta2 when the table has 60M rows.From the SVG files for SER4 and 17beta2 I see ~2X more time in slot_getsomeattrs_int vs 17beta1 or 16.3 with all of that time spent in its child -- tts_buffer_heap_getsomeattrs. That function is defined in src/backend/executor/execTuples.c and that file has not changed from 17beta1 to 17beta2. But I don't keep up with individual commits to Postgres so I won't guess as to the root cause.But the SVG files for PN53 don't show the same problem:for 16.3 I see 85.24% in ExecInterpExpr vs 11.64% in SeqNextfor 17beta1 I see 82.82% in ExecInterpExpr vs 14.51% in SeqNextfor 17beta2 I see 85.03% in ExecInterpExpr vs 12.31% in SeqNextfor 17beta1 and 17beta2 the flamegraphs shows time spent handling page faults during SeqNext, and that isn't visible on the 16.3 flamegraphAnd then for PN53 looking at slot_getsomeattrs_int, a child of ExecInterpExprfor 16.3 I see 6.99% in slot_getsomeattrs_intfor 17beta1 I see 4.29% in slot_getsomeattrs_intfor 17beta2 I see 3.99% in slot_getsomeattrs_intSo at this point I am confused and repeating the test with a slightly larger table, but I am trying to keep the table small enough to fit in the Postgres buffer pool. I also have results from tables that are much larger than memory, and even in that case the problem can be reproduced.-- Mark [email protected]", "msg_date": "Fri, 5 Jul 2024 20:11:08 -0700", "msg_from": "MARK CALLAGHAN <[email protected]>", "msg_from_op": true, "msg_subject": "debugging what might be a perf regression in 17beta2" }, { "msg_contents": "On Sat, 6 Jul 2024 at 15:11, MARK CALLAGHAN <[email protected]> wrote:\n> On small servers I have at home I can reproduce the problem without concurrent queries and 17beta2 is 5% to 10% slower there.\n>\n> The SQL statement for the scan microbenchmark is:\n> SELECT * from %s WHERE LENGTH(c) < 0\n\nCan you share the CREATE TABLE and script to populate so others can try?\n\nAlso, have you tried with another compiler? It does not seem\nimpossible that the refactoring done in heapam.c or the read stream\ncode might have changed something to make the binary more sensitive to\ncaching effects in this area. 
One thing I often try when I can't\npinpoint the exact offending commit is to write a script to try the\nfirst commit of each day for, say, 30 days to see if there is any\nsaw-toothing in performance numbers over that period.\n\nDavid\n\n\n", "msg_date": "Sat, 6 Jul 2024 15:48:01 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: debugging what might be a perf regression in 17beta2" }, { "msg_contents": "My results have too much variance so this is a false alarm. One day I might\nlearn whether the noise is from HW, Postgres or my test method.\nI ended up trying 10 builds between 17beta1 and 17beta2, but even with that\nI don't have a clear signal.\n\nOn Fri, Jul 5, 2024 at 8:48 PM David Rowley <[email protected]> wrote:\n\n> On Sat, 6 Jul 2024 at 15:11, MARK CALLAGHAN <[email protected]> wrote:\n> > On small servers I have at home I can reproduce the problem without\n> concurrent queries and 17beta2 is 5% to 10% slower there.\n> >\n> > The SQL statement for the scan microbenchmark is:\n> > SELECT * from %s WHERE LENGTH(c) < 0\n>\n> Can you share the CREATE TABLE and script to populate so others can try?\n>\n> Also, have you tried with another compiler? It does not seem\n> impossible that the refactoring done in heapam.c or the read stream\n> code might have changed something to make the binary more sensitive to\n> caching effects in this area. One thing I often try when I can't\n> pinpoint the exact offending commit is to write a script to try the\n> first commit of each day for, say, 30 days to see if there is any\n> saw-toothing in performance numbers over that period.\n>\n> David\n>\n\n\n-- \nMark Callaghan\[email protected]\n\nMy results have too much variance so this is a false alarm. One day I might learn whether the noise is from HW, Postgres or my test method.I ended up trying 10 builds between 17beta1 and 17beta2, but even with that I don't have a clear signal.On Fri, Jul 5, 2024 at 8:48 PM David Rowley <[email protected]> wrote:On Sat, 6 Jul 2024 at 15:11, MARK CALLAGHAN <[email protected]> wrote:\n> On small servers I have at home I can reproduce the problem without concurrent queries and 17beta2 is 5% to 10% slower there.\n>\n> The SQL statement for the scan microbenchmark is:\n> SELECT * from %s WHERE LENGTH(c) < 0\n\nCan you share the CREATE TABLE and script to populate so others can try?\n\nAlso, have you tried with another compiler?  It does not seem\nimpossible that the refactoring done in heapam.c or the read stream\ncode might have changed something to make the binary more sensitive to\ncaching effects in this area.  One thing I often try when I can't\npinpoint the exact offending commit is to write a script to try the\nfirst commit of each day for, say, 30 days to see if there is any\nsaw-toothing in performance numbers over that period.\n\nDavid\n-- Mark [email protected]", "msg_date": "Mon, 8 Jul 2024 10:49:31 -0700", "msg_from": "MARK CALLAGHAN <[email protected]>", "msg_from_op": true, "msg_subject": "Re: debugging what might be a perf regression in 17beta2" }, { "msg_contents": "A writeup for the benchmark results is here -\nhttps://smalldatum.blogspot.com/2024/07/postgres-17beta2-vs-sysbench-looking.html\npg17beta2 and pg17beta1 look good so far\n\nOn Mon, Jul 8, 2024 at 10:49 AM MARK CALLAGHAN <[email protected]> wrote:\n\n> My results have too much variance so this is a false alarm. 
One day I\n> might learn whether the noise is from HW, Postgres or my test method.\n> I ended up trying 10 builds between 17beta1 and 17beta2, but even with\n> that I don't have a clear signal.\n>\n> On Fri, Jul 5, 2024 at 8:48 PM David Rowley <[email protected]> wrote:\n>\n>> On Sat, 6 Jul 2024 at 15:11, MARK CALLAGHAN <[email protected]> wrote:\n>> > On small servers I have at home I can reproduce the problem without\n>> concurrent queries and 17beta2 is 5% to 10% slower there.\n>> >\n>> > The SQL statement for the scan microbenchmark is:\n>> > SELECT * from %s WHERE LENGTH(c) < 0\n>>\n>> Can you share the CREATE TABLE and script to populate so others can try?\n>>\n>> Also, have you tried with another compiler? It does not seem\n>> impossible that the refactoring done in heapam.c or the read stream\n>> code might have changed something to make the binary more sensitive to\n>> caching effects in this area. One thing I often try when I can't\n>> pinpoint the exact offending commit is to write a script to try the\n>> first commit of each day for, say, 30 days to see if there is any\n>> saw-toothing in performance numbers over that period.\n>>\n>> David\n>>\n>\n>\n> --\n> Mark Callaghan\n> [email protected]\n>\n\n\n-- \nMark Callaghan\[email protected]\n\nA writeup for the benchmark results is here - https://smalldatum.blogspot.com/2024/07/postgres-17beta2-vs-sysbench-looking.htmlpg17beta2 and pg17beta1 look good so farOn Mon, Jul 8, 2024 at 10:49 AM MARK CALLAGHAN <[email protected]> wrote:My results have too much variance so this is a false alarm. One day I might learn whether the noise is from HW, Postgres or my test method.I ended up trying 10 builds between 17beta1 and 17beta2, but even with that I don't have a clear signal.On Fri, Jul 5, 2024 at 8:48 PM David Rowley <[email protected]> wrote:On Sat, 6 Jul 2024 at 15:11, MARK CALLAGHAN <[email protected]> wrote:\n> On small servers I have at home I can reproduce the problem without concurrent queries and 17beta2 is 5% to 10% slower there.\n>\n> The SQL statement for the scan microbenchmark is:\n> SELECT * from %s WHERE LENGTH(c) < 0\n\nCan you share the CREATE TABLE and script to populate so others can try?\n\nAlso, have you tried with another compiler?  It does not seem\nimpossible that the refactoring done in heapam.c or the read stream\ncode might have changed something to make the binary more sensitive to\ncaching effects in this area.  One thing I often try when I can't\npinpoint the exact offending commit is to write a script to try the\nfirst commit of each day for, say, 30 days to see if there is any\nsaw-toothing in performance numbers over that period.\n\nDavid\n-- Mark [email protected]\n-- Mark [email protected]", "msg_date": "Mon, 8 Jul 2024 10:57:37 -0700", "msg_from": "MARK CALLAGHAN <[email protected]>", "msg_from_op": true, "msg_subject": "Re: debugging what might be a perf regression in 17beta2" } ]
[ { "msg_contents": "Hi,\n\nI noticed that ALTER TABLE MERGE PARTITIONS and SPLIT PARTITION commands\nalways create new partitions in the default tablespace, regardless of\nthe parent's tablespace. However, the indexes of these new partitions inherit\nthe tablespaces of their parent indexes. This inconsistency seems odd.\nIs this an oversight or intentional?\n\nHere are the steps I used to test this:\n\n-------------------------------------------------------\nCREATE TABLESPACE tblspc LOCATION '/tmp/tblspc';\nCREATE TABLE t (i int PRIMARY KEY USING INDEX TABLESPACE tblspc)\n PARTITION BY RANGE (i) TABLESPACE tblspc;\n\nCREATE TABLE tp_0_1 PARTITION OF t FOR VALUES FROM (0) TO (1);\nCREATE TABLE tp_1_2 PARTITION OF t FOR VALUES FROM (1) TO (2);\n\nALTER TABLE t MERGE PARTITIONS (tp_0_1, tp_1_2) INTO tp_0_2;\n\nSELECT tablename, tablespace FROM pg_tables WHERE tablename IN ('t', 'tp_0_2') ORDER BY tablename;\n tablename | tablespace\n-----------+------------\n t | tblspc\n tp_0_2 | (null)\n(2 rows)\n\nSELECT indexname, tablespace FROM pg_indexes WHERE tablename IN ('t', 'tp_0_2') ORDER BY indexname;\n indexname | tablespace\n-------------+------------\n t_pkey | tblspc\n tp_0_2_pkey | tblspc\n-------------------------------------------------------\n\n\nIf it's an oversight, I've attached a patch to ensure these commands create\nnew partitions in the parent's tablespace.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Sat, 6 Jul 2024 16:05:56 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "MERGE/SPLIT partition commands should create new partitions in the\n parent's tablespace?" }, { "msg_contents": "On Sat, Jul 6, 2024 at 4:06 PM Fujii Masao <[email protected]> wrote:\n>\n> Hi,\n>\n> I noticed that ALTER TABLE MERGE PARTITIONS and SPLIT PARTITION commands\n> always create new partitions in the default tablespace, regardless of\n> the parent's tablespace. However, the indexes of these new partitions inherit\n> the tablespaces of their parent indexes. 
This inconsistency seems odd.\n> Is this an oversight or intentional?\n>\n> Here are the steps I used to test this:\n>\n> -------------------------------------------------------\n> CREATE TABLESPACE tblspc LOCATION '/tmp/tblspc';\n> CREATE TABLE t (i int PRIMARY KEY USING INDEX TABLESPACE tblspc)\n> PARTITION BY RANGE (i) TABLESPACE tblspc;\n>\n> CREATE TABLE tp_0_1 PARTITION OF t FOR VALUES FROM (0) TO (1);\n> CREATE TABLE tp_1_2 PARTITION OF t FOR VALUES FROM (1) TO (2);\n>\n> ALTER TABLE t MERGE PARTITIONS (tp_0_1, tp_1_2) INTO tp_0_2;\n>\n> SELECT tablename, tablespace FROM pg_tables WHERE tablename IN ('t', 'tp_0_2') ORDER BY tablename;\n> tablename | tablespace\n> -----------+------------\n> t | tblspc\n> tp_0_2 | (null)\n> (2 rows)\n>\n> SELECT indexname, tablespace FROM pg_indexes WHERE tablename IN ('t', 'tp_0_2') ORDER BY indexname;\n> indexname | tablespace\n> -------------+------------\n> t_pkey | tblspc\n> tp_0_2_pkey | tblspc\n> -------------------------------------------------------\n>\n>\n> If it's an oversight, I've attached a patch to ensure these commands create\n> new partitions in the parent's tablespace.\n\n+1\n\nSince creating a child table through the CREATE TABLE statement sets\nits parent table's tablespace as the child table's tablespace, it is\nlogical to set the parent table's tablespace as the merged table's\ntablespace.\n\nWhile the patch does not include test cases for SPLIT PARTITIONS,\nwhich is understandable as these commands use the common function that\nwe have fixed, I believe it would be prudent to test SPLIT PARTITIONS\nas well since we could change it in the future development.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 10 Jul 2024 12:13:36 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MERGE/SPLIT partition commands should create new partitions in\n the parent's tablespace?" }, { "msg_contents": "\n\nOn 2024/07/10 12:13, Masahiko Sawada wrote:\n> On Sat, Jul 6, 2024 at 4:06 PM Fujii Masao <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> I noticed that ALTER TABLE MERGE PARTITIONS and SPLIT PARTITION commands\n>> always create new partitions in the default tablespace, regardless of\n>> the parent's tablespace. However, the indexes of these new partitions inherit\n>> the tablespaces of their parent indexes. 
This inconsistency seems odd.\n>> Is this an oversight or intentional?\n>>\n>> Here are the steps I used to test this:\n>>\n>> -------------------------------------------------------\n>> CREATE TABLESPACE tblspc LOCATION '/tmp/tblspc';\n>> CREATE TABLE t (i int PRIMARY KEY USING INDEX TABLESPACE tblspc)\n>> PARTITION BY RANGE (i) TABLESPACE tblspc;\n>>\n>> CREATE TABLE tp_0_1 PARTITION OF t FOR VALUES FROM (0) TO (1);\n>> CREATE TABLE tp_1_2 PARTITION OF t FOR VALUES FROM (1) TO (2);\n>>\n>> ALTER TABLE t MERGE PARTITIONS (tp_0_1, tp_1_2) INTO tp_0_2;\n>>\n>> SELECT tablename, tablespace FROM pg_tables WHERE tablename IN ('t', 'tp_0_2') ORDER BY tablename;\n>> tablename | tablespace\n>> -----------+------------\n>> t | tblspc\n>> tp_0_2 | (null)\n>> (2 rows)\n>>\n>> SELECT indexname, tablespace FROM pg_indexes WHERE tablename IN ('t', 'tp_0_2') ORDER BY indexname;\n>> indexname | tablespace\n>> -------------+------------\n>> t_pkey | tblspc\n>> tp_0_2_pkey | tblspc\n>> -------------------------------------------------------\n>>\n>>\n>> If it's an oversight, I've attached a patch to ensure these commands create\n>> new partitions in the parent's tablespace.\n> \n> +1\n> \n> Since creating a child table through the CREATE TABLE statement sets\n> its parent table's tablespace as the child table's tablespace, it is\n> logical to set the parent table's tablespace as the merged table's\n> tablespace.\n\nThanks for the review!\n\n\n> While the patch does not include test cases for SPLIT PARTITIONS,\n> which is understandable as these commands use the common function that\n> we have fixed, I believe it would be prudent to test SPLIT PARTITIONS\n> as well since we could change it in the future development.\n\nUnless I'm mistaken, the patch already includes tests for the split case.\nCould you please check the tests added to partition_split.sql?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 10 Jul 2024 16:14:21 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MERGE/SPLIT partition commands should create new partitions in\n the parent's tablespace?" }, { "msg_contents": "On Wed, Jul 10, 2024 at 4:14 PM Fujii Masao <[email protected]> wrote:\n>\n>\n>\n> On 2024/07/10 12:13, Masahiko Sawada wrote:\n> > On Sat, Jul 6, 2024 at 4:06 PM Fujii Masao <[email protected]> wrote:\n> >>\n> >> Hi,\n> >>\n> >> I noticed that ALTER TABLE MERGE PARTITIONS and SPLIT PARTITION commands\n> >> always create new partitions in the default tablespace, regardless of\n> >> the parent's tablespace. However, the indexes of these new partitions inherit\n> >> the tablespaces of their parent indexes. 
This inconsistency seems odd.\n> >> Is this an oversight or intentional?\n> >>\n> >> Here are the steps I used to test this:\n> >>\n> >> -------------------------------------------------------\n> >> CREATE TABLESPACE tblspc LOCATION '/tmp/tblspc';\n> >> CREATE TABLE t (i int PRIMARY KEY USING INDEX TABLESPACE tblspc)\n> >> PARTITION BY RANGE (i) TABLESPACE tblspc;\n> >>\n> >> CREATE TABLE tp_0_1 PARTITION OF t FOR VALUES FROM (0) TO (1);\n> >> CREATE TABLE tp_1_2 PARTITION OF t FOR VALUES FROM (1) TO (2);\n> >>\n> >> ALTER TABLE t MERGE PARTITIONS (tp_0_1, tp_1_2) INTO tp_0_2;\n> >>\n> >> SELECT tablename, tablespace FROM pg_tables WHERE tablename IN ('t', 'tp_0_2') ORDER BY tablename;\n> >> tablename | tablespace\n> >> -----------+------------\n> >> t | tblspc\n> >> tp_0_2 | (null)\n> >> (2 rows)\n> >>\n> >> SELECT indexname, tablespace FROM pg_indexes WHERE tablename IN ('t', 'tp_0_2') ORDER BY indexname;\n> >> indexname | tablespace\n> >> -------------+------------\n> >> t_pkey | tblspc\n> >> tp_0_2_pkey | tblspc\n> >> -------------------------------------------------------\n> >>\n> >>\n> >> If it's an oversight, I've attached a patch to ensure these commands create\n> >> new partitions in the parent's tablespace.\n> >\n> > +1\n> >\n> > Since creating a child table through the CREATE TABLE statement sets\n> > its parent table's tablespace as the child table's tablespace, it is\n> > logical to set the parent table's tablespace as the merged table's\n> > tablespace.\n>\n> Thanks for the review!\n>\n>\n> > While the patch does not include test cases for SPLIT PARTITIONS,\n> > which is understandable as these commands use the common function that\n> > we have fixed, I believe it would be prudent to test SPLIT PARTITIONS\n> > as well since we could change it in the future development.\n>\n> Unless I'm mistaken, the patch already includes tests for the split case.\n> Could you please check the tests added to partition_split.sql?\n>\n\nOops, sorry, I missed that part for some reason.So the patch looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 10 Jul 2024 17:14:17 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MERGE/SPLIT partition commands should create new partitions in\n the parent's tablespace?" }, { "msg_contents": "On Sat, 6 Jul 2024 at 19:06, Fujii Masao <[email protected]> wrote:\n> I noticed that ALTER TABLE MERGE PARTITIONS and SPLIT PARTITION commands\n> always create new partitions in the default tablespace, regardless of\n> the parent's tablespace. However, the indexes of these new partitions inherit\n> the tablespaces of their parent indexes. This inconsistency seems odd.\n> Is this an oversight or intentional?\n\nMy expectation of this feature is that the tablespace choice would\nwork the same as what was done in ca4103025 to make it inherit from\nthe partition table's tablespace. I imagine we might get complaints if\nit does not follow the same logic.\n\nI've not looked at your patch, but if the behaviour is as you describe\nand the patch changes that to follow ca4103025, then +1 from me.\n\nDavid\n\n\n", "msg_date": "Wed, 10 Jul 2024 21:58:25 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MERGE/SPLIT partition commands should create new partitions in\n the parent's tablespace?" 
}, { "msg_contents": "On Wed, Jul 10, 2024 at 5:14 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Jul 10, 2024 at 4:14 PM Fujii Masao <[email protected]> wrote:\n> >\n> >\n> >\n> > On 2024/07/10 12:13, Masahiko Sawada wrote:\n> > > On Sat, Jul 6, 2024 at 4:06 PM Fujii Masao <[email protected]> wrote:\n> > >>\n> > >> Hi,\n> > >>\n> > >> I noticed that ALTER TABLE MERGE PARTITIONS and SPLIT PARTITION commands\n> > >> always create new partitions in the default tablespace, regardless of\n> > >> the parent's tablespace. However, the indexes of these new partitions inherit\n> > >> the tablespaces of their parent indexes. This inconsistency seems odd.\n> > >> Is this an oversight or intentional?\n> > >>\n> > >> Here are the steps I used to test this:\n> > >>\n> > >> -------------------------------------------------------\n> > >> CREATE TABLESPACE tblspc LOCATION '/tmp/tblspc';\n> > >> CREATE TABLE t (i int PRIMARY KEY USING INDEX TABLESPACE tblspc)\n> > >> PARTITION BY RANGE (i) TABLESPACE tblspc;\n> > >>\n> > >> CREATE TABLE tp_0_1 PARTITION OF t FOR VALUES FROM (0) TO (1);\n> > >> CREATE TABLE tp_1_2 PARTITION OF t FOR VALUES FROM (1) TO (2);\n> > >>\n> > >> ALTER TABLE t MERGE PARTITIONS (tp_0_1, tp_1_2) INTO tp_0_2;\n> > >>\n> > >> SELECT tablename, tablespace FROM pg_tables WHERE tablename IN ('t', 'tp_0_2') ORDER BY tablename;\n> > >> tablename | tablespace\n> > >> -----------+------------\n> > >> t | tblspc\n> > >> tp_0_2 | (null)\n> > >> (2 rows)\n> > >>\n> > >> SELECT indexname, tablespace FROM pg_indexes WHERE tablename IN ('t', 'tp_0_2') ORDER BY indexname;\n> > >> indexname | tablespace\n> > >> -------------+------------\n> > >> t_pkey | tblspc\n> > >> tp_0_2_pkey | tblspc\n> > >> -------------------------------------------------------\n> > >>\n> > >>\n> > >> If it's an oversight, I've attached a patch to ensure these commands create\n> > >> new partitions in the parent's tablespace.\n> > >\n> > > +1\n> > >\n> > > Since creating a child table through the CREATE TABLE statement sets\n> > > its parent table's tablespace as the child table's tablespace, it is\n> > > logical to set the parent table's tablespace as the merged table's\n> > > tablespace.\n\nOne expectation I had for MERGE PARTITION was that if all partition\ntables to be merged are in the same tablespace, the merged table is\nalso created in the same tablespace. But it would be an exceptional\ncase in a sense, and I agree with the proposed behavior as it's\nconsistent. 
It might be a good idea that we can specify the tablespace\nfor each merged/split table in the future.\n\nBTW the new regression tests don't check the table and index names.\nIsn't it better to show table and index names for better\ndiagnosability?\n\n+-- Check the new partition inherits parent's tablespace\n+CREATE TABLE t (i int PRIMARY KEY USING INDEX TABLESPACE regress_tblspace)\n+ PARTITION BY RANGE (i) TABLESPACE regress_tblspace;\n+CREATE TABLE tp_0_1 PARTITION OF t FOR VALUES FROM (0) TO (1);\n+CREATE TABLE tp_1_2 PARTITION OF t FOR VALUES FROM (1) TO (2);\n+ALTER TABLE t MERGE PARTITIONS (tp_0_1, tp_1_2) INTO tp_0_2;\n+SELECT tablespace FROM pg_tables WHERE tablename IN ('t', 'tp_0_2')\nORDER BY tablespace;\n+ tablespace\n+------------------\n+ regress_tblspace\n+ regress_tblspace\n+(2 rows)\n+\n+SELECT tablespace FROM pg_indexes WHERE tablename IN ('t', 'tp_0_2')\nORDER BY tablespace;\n+ tablespace\n+------------------\n+ regress_tblspace\n+ regress_tblspace\n+(2 rows)\n+\n+DROP TABLE t;\n\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 10 Jul 2024 22:35:37 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MERGE/SPLIT partition commands should create new partitions in\n the parent's tablespace?" }, { "msg_contents": "On 2024/07/10 22:35, Masahiko Sawada wrote:\n> BTW the new regression tests don't check the table and index names.\n> Isn't it better to show table and index names for better\n> diagnosability?\n\nSounds good to me. I've updated the patch as suggested.\nPlease see the attached patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 11 Jul 2024 20:14:39 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MERGE/SPLIT partition commands should create new partitions in\n the parent's tablespace?" }, { "msg_contents": "On Wed, Jul 10, 2024 at 9:36 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Jul 10, 2024 at 5:14 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Wed, Jul 10, 2024 at 4:14 PM Fujii Masao <[email protected]> wrote:\n> > >\n> > >\n> > >\n> > > On 2024/07/10 12:13, Masahiko Sawada wrote:\n> > > > On Sat, Jul 6, 2024 at 4:06 PM Fujii Masao <[email protected]> wrote:\n> > > >>\n> > > >> Hi,\n> > > >>\n> > > >> I noticed that ALTER TABLE MERGE PARTITIONS and SPLIT PARTITION commands\n> > > >> always create new partitions in the default tablespace, regardless of\n> > > >> the parent's tablespace. However, the indexes of these new partitions inherit\n> > > >> the tablespaces of their parent indexes. 
This inconsistency seems odd.\n> > > >> Is this an oversight or intentional?\n> > > >>\n> > > >> Here are the steps I used to test this:\n> > > >>\n> > > >> -------------------------------------------------------\n> > > >> CREATE TABLESPACE tblspc LOCATION '/tmp/tblspc';\n> > > >> CREATE TABLE t (i int PRIMARY KEY USING INDEX TABLESPACE tblspc)\n> > > >> PARTITION BY RANGE (i) TABLESPACE tblspc;\n> > > >>\n> > > >> CREATE TABLE tp_0_1 PARTITION OF t FOR VALUES FROM (0) TO (1);\n> > > >> CREATE TABLE tp_1_2 PARTITION OF t FOR VALUES FROM (1) TO (2);\n> > > >>\n> > > >> ALTER TABLE t MERGE PARTITIONS (tp_0_1, tp_1_2) INTO tp_0_2;\n> > > >>\n> > > >> SELECT tablename, tablespace FROM pg_tables WHERE tablename IN ('t', 'tp_0_2') ORDER BY tablename;\n> > > >> tablename | tablespace\n> > > >> -----------+------------\n> > > >> t | tblspc\n> > > >> tp_0_2 | (null)\n> > > >> (2 rows)\n> > > >>\n> > > >> SELECT indexname, tablespace FROM pg_indexes WHERE tablename IN ('t', 'tp_0_2') ORDER BY indexname;\n> > > >> indexname | tablespace\n> > > >> -------------+------------\n> > > >> t_pkey | tblspc\n> > > >> tp_0_2_pkey | tblspc\n> > > >> -------------------------------------------------------\n> > > >>\n> > > >>\n> > > >> If it's an oversight, I've attached a patch to ensure these commands create\n> > > >> new partitions in the parent's tablespace.\n> > > >\n> > > > +1\n> > > >\n> > > > Since creating a child table through the CREATE TABLE statement sets\n> > > > its parent table's tablespace as the child table's tablespace, it is\n> > > > logical to set the parent table's tablespace as the merged table's\n> > > > tablespace.\n>\n> One expectation I had for MERGE PARTITION was that if all partition\n> tables to be merged are in the same tablespace, the merged table is\n> also created in the same tablespace. But it would be an exceptional\n> case in a sense, and I agree with the proposed behavior as it's\n> consistent. 
It might be a good idea that we can specify the tablespace\n> for each merged/split table in the future.\n\nI agree this is a good idea, so I tried to support this feature.\n\nThe attached patch v3-0001 is exactly the same as v2-0001, v3-0002 is\na patch for specifying tablespace for each merged/split table.\n\nI'm not sure this addressed David's concern about the tablespace choice\nin ca4103025 though.\n\n>\n> BTW the new regression tests don't check the table and index names.\n> Isn't it better to show table and index names for better\n> diagnosability?\n>\n> +-- Check the new partition inherits parent's tablespace\n> +CREATE TABLE t (i int PRIMARY KEY USING INDEX TABLESPACE regress_tblspace)\n> + PARTITION BY RANGE (i) TABLESPACE regress_tblspace;\n> +CREATE TABLE tp_0_1 PARTITION OF t FOR VALUES FROM (0) TO (1);\n> +CREATE TABLE tp_1_2 PARTITION OF t FOR VALUES FROM (1) TO (2);\n> +ALTER TABLE t MERGE PARTITIONS (tp_0_1, tp_1_2) INTO tp_0_2;\n> +SELECT tablespace FROM pg_tables WHERE tablename IN ('t', 'tp_0_2')\n> ORDER BY tablespace;\n> + tablespace\n> +------------------\n> + regress_tblspace\n> + regress_tblspace\n> +(2 rows)\n> +\n> +SELECT tablespace FROM pg_indexes WHERE tablename IN ('t', 'tp_0_2')\n> ORDER BY tablespace;\n> + tablespace\n> +------------------\n> + regress_tblspace\n> + regress_tblspace\n> +(2 rows)\n> +\n> +DROP TABLE t;\n>\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> Amazon Web Services: https://aws.amazon.com\n>\n>\n\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Fri, 12 Jul 2024 18:15:37 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MERGE/SPLIT partition commands should create new partitions in\n the parent's tablespace?" }, { "msg_contents": "On Thu, Jul 11, 2024 at 8:14 PM Fujii Masao <[email protected]> wrote:\n>\n>\n>\n> On 2024/07/10 22:35, Masahiko Sawada wrote:\n> > BTW the new regression tests don't check the table and index names.\n> > Isn't it better to show table and index names for better\n> > diagnosability?\n>\n> Sounds good to me. I've updated the patch as suggested.\n> Please see the attached patch.\n>\n\nThank you for updating the patch! LGTM.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 12 Jul 2024 21:17:55 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MERGE/SPLIT partition commands should create new partitions in\n the parent's tablespace?" }, { "msg_contents": "\n\nOn 2024/07/12 21:17, Masahiko Sawada wrote:\n> On Thu, Jul 11, 2024 at 8:14 PM Fujii Masao <[email protected]> wrote:\n>>\n>>\n>>\n>> On 2024/07/10 22:35, Masahiko Sawada wrote:\n>>> BTW the new regression tests don't check the table and index names.\n>>> Isn't it better to show table and index names for better\n>>> diagnosability?\n>>\n>> Sounds good to me. I've updated the patch as suggested.\n>> Please see the attached patch.\n>>\n> \n> Thank you for updating the patch! LGTM.\n\nThanks for reviewing the patch! I've pushed it.\n\nHowever, some buildfarm members reported errors, so I'll investigate further.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 15 Jul 2024 13:33:33 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MERGE/SPLIT partition commands should create new partitions in\n the parent's tablespace?" 
}, { "msg_contents": "On 2024/07/15 13:33, Fujii Masao wrote:\n> \n> \n> On 2024/07/12 21:17, Masahiko Sawada wrote:\n>> On Thu, Jul 11, 2024 at 8:14 PM Fujii Masao <[email protected]> wrote:\n>>>\n>>>\n>>>\n>>> On 2024/07/10 22:35, Masahiko Sawada wrote:\n>>>> BTW the new regression tests don't check the table and index names.\n>>>> Isn't it better to show table and index names for better\n>>>> diagnosability?\n>>>\n>>> Sounds good to me. I've updated the patch as suggested.\n>>> Please see the attached patch.\n>>>\n>>\n>> Thank you for updating the patch! LGTM.\n> \n> Thanks for reviewing the patch! I've pushed it.\n> \n> However, some buildfarm members reported errors, so I'll investigate further.\n\nAttached patch fixes unstable tests. Currently testing before pushing.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Mon, 15 Jul 2024 14:00:29 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MERGE/SPLIT partition commands should create new partitions in\n the parent's tablespace?" }, { "msg_contents": "\n\nOn 2024/07/15 14:00, Fujii Masao wrote:\n> Attached patch fixes unstable tests. Currently testing before pushing.\n\nI pushed the patch at commit 4e5d6c4091, and some buildfarm animals\nare back to green, but crake still reported an error. At first glance,\nthe error messages don't seem related to the recent patches.\nI'll investigate further.\n\nWaiting for replication conn standby_1's replay_lsn to pass 0/15428F78 on primary\n[01:31:11.920](206.483s) # poll_query_until timed out executing this query:\n# SELECT '0/15428F78' <= replay_lsn AND state = 'streaming'\n# FROM pg_catalog.pg_stat_replication\n# WHERE application_name IN ('standby_1', 'walreceiver')\n# expecting this output:\n# t\n# last actual query output:\n# f\n# with stderr:\ntimed out waiting for catchup at /home/andrew/bf/root/HEAD/pgsql/src/test/recovery/t/027_stream_regress.pl line 103.\n# Postmaster PID for node \"primary\" is 99205\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 15 Jul 2024 15:32:31 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MERGE/SPLIT partition commands should create new partitions in\n the parent's tablespace?" } ]
[ { "msg_contents": ">> My system is a Arch Linux.\n>> I get after upgrade the libxml2 package (from 2.12.7-1 to 2.13.1-1)\n>> test errors for xml:\n>> \n>> not ok 202 + xml 1464 ms\n>> [...snip...]\n>> # 1 of 222 tests failed.\n>> # The differences that caused some tests to fail can be viewed in the file \"/home/frastr/devel/postgresql/src/test/regress/regression.diffs\".\n>> # A copy of the test summary that you see above is saved in the file \"/home/frastr/devel/postgresql/src/test/regress/regression.out\".\n>> make[1]: *** [GNUmakefile:118: check] Fehler 1\n>> make[1]: Verzeichnis „/home/frastr/devel/postgresql/src/test/regress“ wird verlassen\n>> make: *** [GNUmakefile:69: check] Fehler 2\n> \n> Hmm, I did not get this error after upgrading libxml2 on my Arch machine\n> a couple of weeks ago. Did you just run make check after upgrading,\n> without recompiling? Please try make clean && make && make check\n> \n\nHi,\n\ni get same error.\n\nFrank\n\n\n\n", "msg_date": "Sat, 6 Jul 2024 11:57:02 +0200", "msg_from": "Frank Streitzig <[email protected]>", "msg_from_op": true, "msg_subject": "Re: XML test error on Arch Linux" }, { "msg_contents": "On 2024-07-06 11:57 +0200, Frank Streitzig wrote:\n> >> My system is a Arch Linux.\n> >> I get after upgrade the libxml2 package (from 2.12.7-1 to 2.13.1-1)\n> >> test errors for xml:\n> >> \n> >> not ok 202 + xml 1464 ms\n> >> [...snip...]\n> >> # 1 of 222 tests failed.\n> >> # The differences that caused some tests to fail can be viewed in the file \"/home/frastr/devel/postgresql/src/test/regress/regression.diffs\".\n> >> # A copy of the test summary that you see above is saved in the file \"/home/frastr/devel/postgresql/src/test/regress/regression.out\".\n> >> make[1]: *** [GNUmakefile:118: check] Fehler 1\n> >> make[1]: Verzeichnis „/home/frastr/devel/postgresql/src/test/regress“ wird verlassen\n> >> make: *** [GNUmakefile:69: check] Fehler 2\n> > \n> > Hmm, I did not get this error after upgrading libxml2 on my Arch machine\n> > a couple of weeks ago. Did you just run make check after upgrading,\n> > without recompiling? Please try make clean && make && make check\n> \n> i get same error.\n\nAh! I forgot to run ./configure --with-libxml. I thought it was\nenabled by default if libxml2 is available. Now I get the same errors.\nAlso with 2.13.2-1 which is currently still in core-testing.\n\nSo, there must be breaking changes in 2.13.0:\nhttps://gitlab.gnome.org/GNOME/libxml2/-/releases/v2.13.0\n\nMaybe you can downgrade in the meantime, if you still have 2.12.7 in the\ncache:\n\n pacman -U /var/cache/pacman/pkg/libxml2-2.12.7-1-x86_64.pkg.tar.zst\n\n-- \nErik\n\n\n", "msg_date": "Sat, 6 Jul 2024 13:52:22 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML test error on Arch Linux" }, { "msg_contents": "Erik Wienhold <[email protected]> writes:\n> So, there must be breaking changes in 2.13.0:\n> https://gitlab.gnome.org/GNOME/libxml2/-/releases/v2.13.0\n\nYeah, apparently --- I get what look like the same diffs with\nlibxml2 2.13.0 recently supplied by MacPorts. 
Grumble.\nSomebody's going to have to look into that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Jul 2024 10:25:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML test error on Arch Linux" }, { "msg_contents": "On 2024-07-06 16:25 +0200, Tom Lane wrote:\n> Erik Wienhold <[email protected]> writes:\n> > So, there must be breaking changes in 2.13.0:\n> > https://gitlab.gnome.org/GNOME/libxml2/-/releases/v2.13.0\n> \n> Yeah, apparently --- I get what look like the same diffs with\n> libxml2 2.13.0 recently supplied by MacPorts. Grumble.\n> Somebody's going to have to look into that.\n\nHere's a patch that fixes just the xmlserialize and namespace errors.\n\nUse xmlAddChildList instead of xmlAddChild for xmlserialize. That also\nworks with 2.12.7, but I don't know about older libxml2 versions. Maybe\nadd a version check to be safe:\n\n #if LIBXML_VERSION >= 21300\n xmlAddChildList(root, content_nodes);\n #else\n xmlAddChild(root, content_nodes);\n #endif\n\nI don't know if using xmlAddChild in this context was ever correct.\n\nThe namespace errors are tricky because xmlParseBalancedChunkMemory now\nreturns res_code != 0 for invalid or unknown namespaces (probably other\nerrors as well). So I just added an additional check to ignore those\nerrors for >=2.13. But that's rather hackish. I don't know how to\nhandle it in xml_errorHandler where those error codes are already dealt\nwith in order to compensate for differences in error reporting across\ndifferent libxml2 versions. Looks like xmlerrcxt is ignored by\nxmlParseBalancedChunkMemory.\n\nNo idea how to deal with the remaining errors for invalid and undefined\nentities which appear to include less details now. That seems to be\nexpected, judging from the release notes:\n\n> A few error messages were improved and consolidated. Please update\n> downstream test suites accordingly.\n\nHow to deal with that in a manner that still works for pre-2.13, other\nthan filtering out those details that are no longer included in 2.13?\nOr just \\set VERBOSITY terse for those few test cases? But that omits\nthe entire error detail.\n\n-- \nErik", "msg_date": "Sat, 6 Jul 2024 20:24:35 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML test error on Arch Linux" }, { "msg_contents": "Erik Wienhold <[email protected]> writes:\n> On 2024-07-06 16:25 +0200, Tom Lane wrote:\n>> Yeah, apparently --- I get what look like the same diffs with\n>> libxml2 2.13.0 recently supplied by MacPorts. Grumble.\n>> Somebody's going to have to look into that.\n\n> Here's a patch that fixes just the xmlserialize and namespace errors.\n\nOne angle that ought to be considered is that some of this stuff may\nbe flat-out bugs in 2.13.0. I see at\n\nhttps://gitlab.gnome.org/GNOME/libxml2/-/releases\n\nthat both 2.13.1 and 2.13.2 contain fixes for \"regressions\" in 2.13.0.\nI'm disinclined to spend much effort on working around dot-zero bugs.\n\n> Use xmlAddChildList instead of xmlAddChild for xmlserialize. That also\n> works with 2.12.7, but I don't know about older libxml2 versions. Maybe\n> ...\n> I don't know if using xmlAddChild in this context was ever correct.\n\nGood question. 
A look at \n\nhttps://github.com/ConradIrwin/libxml2/blame/master/include/libxml/tree.h\n\nshows that xmlAddChildList has been there for decades, and I also\nsee in the 2.13.0 notes\n\n* tree: Align xmlAddChild with other node insertion functions\n\nso it sort of looks like this is something that we got away with\nbut it wasn't right. I'm inclined to just replace the call and see\nwhat the buildfarm says.\n\n> The namespace errors are tricky because xmlParseBalancedChunkMemory now\n> returns res_code != 0 for invalid or unknown namespaces (probably other\n> errors as well). So I just added an additional check to ignore those\n> errors for >=2.13. But that's rather hackish. I don't know how to\n> handle it in xml_errorHandler where those error codes are already dealt\n> with in order to compensate for differences in error reporting across\n> different libxml2 versions. Looks like xmlerrcxt is ignored by\n> xmlParseBalancedChunkMemory.\n\n2.13.0 mentions having added some new error-handler-setting functions;\nmaybe we need to use one of those to get xmlParseBalancedChunkMemory's\nattention now?\n\n> No idea how to deal with the remaining errors for invalid and undefined\n> entities which appear to include less details now.\n\nThat's fairly annoying. One answer is to create a variant\nexpected-file, but the long-term maintenance costs could be high.\nOn the other hand, our xml support is a bit of a backwater, so\nmaybe it wouldn't be that bad.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Jul 2024 14:43:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML test error on Arch Linux" }, { "msg_contents": "I wrote:\n> One angle that ought to be considered is that some of this stuff may\n> be flat-out bugs in 2.13.0. I see at\n> https://gitlab.gnome.org/GNOME/libxml2/-/releases\n> that both 2.13.1 and 2.13.2 contain fixes for \"regressions\" in 2.13.0.\n> I'm disinclined to spend much effort on working around dot-zero bugs.\n\nMeh ... 2.13.1, at least, changes nothing here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Jul 2024 15:03:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML test error on Arch Linux" }, { "msg_contents": "On 2024-07-06 20:43 +0200, Tom Lane wrote:\n> One angle that ought to be considered is that some of this stuff may\n> be flat-out bugs in 2.13.0. I see at\n> \n> https://gitlab.gnome.org/GNOME/libxml2/-/releases\n> \n> that both 2.13.1 and 2.13.2 contain fixes for \"regressions\" in 2.13.0.\n> I'm disinclined to spend much effort on working around dot-zero bugs.\n\nFound an open issue about ABI compatibility that affects 2.12.7 and\npossibly also 2.13: https://gitlab.gnome.org/GNOME/libxml2/-/issues/751.\nMaybe just wait this one out, in case they'll bump SONAME.\n\n> 2.13.0 mentions having added some new error-handler-setting functions;\n> maybe we need to use one of those to get xmlParseBalancedChunkMemory's\n> attention now?\n\nI tried xmlCtxtSetErrorHandler but xmlParseBalancedChunkMemory doesn't\nreport to that. 
It's a known issue:\nhttps://gitlab.gnome.org/GNOME/libxml2/-/issues/727\n\n-- \nErik\n\n\n", "msg_date": "Sun, 7 Jul 2024 19:24:10 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML test error on Arch Linux" }, { "msg_contents": "Erik Wienhold <[email protected]> writes:\n> On 2024-07-06 20:43 +0200, Tom Lane wrote:\n>> I'm disinclined to spend much effort on working around dot-zero bugs.\n\n> Found an open issue about ABI compatibility that affects 2.12.7 and\n> possibly also 2.13: https://gitlab.gnome.org/GNOME/libxml2/-/issues/751.\n> Maybe just wait this one out, in case they'll bump SONAME.\n\nWhether they bump SONAME or put back the removed functions, there's no\nway that production-oriented distros are going to ship 2.13.x until\nsomething's done about that. That's good news for us, in that it\ngives us an excuse to not work around any dot-zero issues that get\nfixed before that point. But we'd better analyze our concerns and\nmake sure anything we don't want to work around gets fixed by then.\n\n> I tried xmlCtxtSetErrorHandler but xmlParseBalancedChunkMemory doesn't\n> report to that. It's a known issue:\n> https://gitlab.gnome.org/GNOME/libxml2/-/issues/727\n\nI saw that one. It would be good to have a replacement for\nxmlParseBalancedChunkMemory, because after looking at the libxml2\nsources I realize that that's classed as a SAX1 function, which means\nit will likely go away at some point (maybe it's already not there in\nsome builds). That's a long-term consideration though.\n\nIn the short term, I dug in the libxml2 sources and found that\nxmlParseBalancedChunkMemory got pretty heavily refactored between 2.12\nand 2.13. The reason for our immediate problem is that the return\nvalue in the old code is defined by\n\n if (!ctxt->wellFormed) {\n if (ctxt->errNo == 0)\n ret = 1;\n else\n ret = ctxt->errNo;\n } else {\n ret = 0;\n }\n\nwhereas the new code just returns ctxt->errNo unconditionally.\nThis does not agree with the phraseology in libxml2's documentation:\n\n 0 if the chunk is well balanced, -1 in case of args problem and\n the parser error code otherwise\n\nso I filed an issue at\nhttps://gitlab.gnome.org/GNOME/libxml2/-/issues/765\nWe'll see what they say about that.\n\nI think we could work around it as attached. This relies on seeing\nthat the 2.13 code will return a node list if and only if\nctxt->wellFormed is true (and we already eliminated the empty-input\ncase, so an empty node list shouldn't happen). But it's not a lot\nless ugly than your proposal.\n\nAlso, this only fixes the two wrong-output-from-xmlserialize\ntest cases. I'm not so stressed about the cases where the errdetail\nchanges, but I think we need to find an answer for the places where\nit fails and didn't before, like:\n\n SELECT xmlparse(content '<invalidns xmlns=''&lt;''/>');\n- xmlparse \n----------------------------\n- <invalidns xmlns='&lt;'/>\n-(1 row)\n-\n+ERROR: invalid XML content\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 07 Jul 2024 14:28:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML test error on Arch Linux" }, { "msg_contents": "I wrote:\n> I think we could work around it as attached. This relies on seeing\n> that the 2.13 code will return a node list if and only if\n> ctxt->wellFormed is true (and we already eliminated the empty-input\n> case, so an empty node list shouldn't happen). 
But it's not a lot\n> less ugly than your proposal.\n\n> Also, this only fixes the two wrong-output-from-xmlserialize\n> test cases. I'm not so stressed about the cases where the errdetail\n> changes, but I think we need to find an answer for the places where\n> it fails and didn't before, like:\n\n> SELECT xmlparse(content '<invalidns xmlns=''&lt;''/>');\n> - xmlparse \n> ----------------------------\n> - <invalidns xmlns='&lt;'/>\n> -(1 row)\n> -\n> +ERROR: invalid XML content\n\nOh! That's actually the same bug, and my patch was faulty because\nI didn't think about the case where the caller of xml_parse passes\nparsed_nodes = NULL. (And it wasn't doing the right thing in the\nother case either :-(.) The attached works significantly better,\nand cleans up these bogus errors.\n\nWe're still left with missing \"chunk is not well balanced\" errcontext\nentries, which we could live without if we have to, but I wonder why\nthose are not there.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 07 Jul 2024 15:20:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML test error on Arch Linux" }, { "msg_contents": "I wrote:\n> I saw that one. It would be good to have a replacement for\n> xmlParseBalancedChunkMemory, because after looking at the libxml2\n> sources I realize that that's classed as a SAX1 function, which means\n> it will likely go away at some point (maybe it's already not there in\n> some builds). That's a long-term consideration though.\n\nActually ... after nosing around in libxml2 some more, I noticed\nxmlParseInNodeContext, which is the only other function specified\nto parse a Well Balanced Chunk. It requires a context node,\nbut AFAICS we can just gin up a dummy root node and use that.\nIt's existed for plenty long enough for our purposes, and it's\nnot semi-deprecated, and it lacks the bug at hand. So I'm now\nthinking about the attached.\n\nAs far as the errcontext changes go: I think we have to just bite\nthe bullet and accept them. It looks like 2.13 has a completely\ndifferent mechanism than prior versions for deciding when to issue\nXML_ERR_NOT_WELL_BALANCED. And it's not even clear that it's wrong;\nfor example, in our first failing case\n\n DETAIL: line 1: xmlParseEntityRef: no name\n <invalidentity>&</invalidentity>\n ^\n-line 1: chunk is not well balanced\n-<invalidentity>&</invalidentity>\n- ^\n\nit's kind of hard to argue that the chunk isn't well-balanced.\n\nSo we can either suppress errdetails from the expected output,\nor set up an additional expected-file. I'm leaning to the\n\"\\set VERBOSITY terse\" solution.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 07 Jul 2024 16:43:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML test error on Arch Linux" }, { "msg_contents": "On 2024-07-07 22:43 +0200, Tom Lane wrote:\n> As far as the errcontext changes go: I think we have to just bite\n> the bullet and accept them. It looks like 2.13 has a completely\n> different mechanism than prior versions for deciding when to issue\n> XML_ERR_NOT_WELL_BALANCED. 
And it's not even clear that it's wrong;\n> for example, in our first failing case\n> \n> DETAIL: line 1: xmlParseEntityRef: no name\n> <invalidentity>&</invalidentity>\n> ^\n> -line 1: chunk is not well balanced\n> -<invalidentity>&</invalidentity>\n> - ^\n> \n> it's kind of hard to argue that the chunk isn't well-balanced.\n> \n> So we can either suppress errdetails from the expected output,\n> or set up an additional expected-file. I'm leaning to the\n> \"\\set VERBOSITY terse\" solution.\n\n+1 for \\set VERBOSITY terse as a last resort.\n\nBut it looks to me as if \"chunk is not well balanced\" is just noise\nbecause libxml2 reports more specific errors before that. For example:\n\n SELECT xmlparse(content '<twoerrors>&idontexist;</unbalanced>');\n ERROR: invalid XML content\n DETAIL: line 1: Entity 'idontexist' not defined\n <twoerrors>&idontexist;</unbalanced>\n ^\n line 1: Opening and ending tag mismatch: twoerrors line 1 and unbalanced\n <twoerrors>&idontexist;</unbalanced>\n ^\n line 1: chunk is not well balanced\n <twoerrors>&idontexist;</unbalanced>\n ^\n\nHere, \"Opening and ending tag mismatch\" already covers the unbalanced\nclosing tag.\n\nSo how about just ignoring XML_ERR_NOT_WELL_BALANCED like in the\nattached? This also adds test cases for an unclosed tag because I\nwanted to see if I can trigger just \"chunk is not well balanced\", but\nwithout success.\n\n SELECT xmlparse(content '<unclosed>');\n ERROR: invalid XML content\n DETAIL: line 1: Premature end of data in tag unclosed line 1\n <unclosed>\n ^\n line 1: chunk is not well balanced\n <unclosed>\n ^\n\nlibxml2 2.13 doesn't report \"chunk ...\" here either.\n\nThere's also this more explicit test case for unbalanced tags:\n\n <parent><child></parent></child>\n\nBut I'm not sure if that's really necessary if we already have:\n\n <twoerrors>&idontexist;</unbalanced>\n\nThe error messages are the same, except for the additional entity error.\n\n-- \nErik", "msg_date": "Tue, 9 Jul 2024 18:20:46 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML test error on Arch Linux" }, { "msg_contents": "Erik Wienhold <[email protected]> writes:\n> So how about just ignoring XML_ERR_NOT_WELL_BALANCED like in the\n> attached?\n\nOh, that's a good idea. Given that we know they've changed the\nbehavior around this at least once, I'm not sure that it's safe\nto unconditionally ignore this error --- but we could ignore it\nas long as we've already recorded some libxml2 error.\n\nI dug through the 2.13 sources and I think that in that version,\nit is actually impossible to get XML_ERR_NOT_WELL_BALANCED without\nany prior error. 
It's only issued if parsing stops short of the\nend of input, and that only happens if PARSER_STOPPED() becomes\ntrue, and that only happens if xmlHaltParser() is called, and\nall the calls to that seem to follow other errors being issued.\nBut the 2.12 code looks quite different and I'm not sure that\nthere's no such code path there; much less that it can't happen\nin older versions.\n\n> This also adds test cases for an unclosed tag because I\n> wanted to see if I can trigger just \"chunk is not well balanced\", but\n> without success.\n\nNo particular objection to adding these.\n\n> But I'm not sure if that's really necessary if we already have:\n> <twoerrors>&idontexist;</unbalanced>\n> The error messages are the same, except for the additional entity error.\n\nI think the point of that test is different: it's showing that we\nactually can report multiple errors out of a single libxml2 parsing\ncall. Before seeing your message I was thinking we'd have to\n\"\\set VERBOSITY terse\" that test, which was annoying me mightily\nbecause it could no longer prove any such thing.\n\nI've updated indri's host to current MacPorts including libxml2\n2.13.1, and as expected it's now showing failures:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=indri&dt=2024-07-09%2018%3A17%3A23\n\nI'll push this change in a little bit (still gotta write commit\nmessage) and indri should go back to green. Unless one of the\nother animals complains, I'll set about back-patching in a\nday or two.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jul 2024 14:52:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML test error on Arch Linux" }, { "msg_contents": "I wrote:\n> I'll push this change in a little bit (still gotta write commit\n> message) and indri should go back to green. Unless one of the\n> other animals complains, I'll set about back-patching in a\n> day or two.\n\nWell, that didn't take long: several animals are reporting\ndifferent error text for one or both of those new test cases.\nIt looks like I did indeed guess wrong about the output for\nthe libxml2 versions that match xml_2.out; but more\ninterestingly we have\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=widowbird&dt=2024-07-09%2019%3A30%3A07\n\n@@ -287,7 +287,7 @@\n \n SELECT xmlparse(content '<unclosed>');\n ERROR: invalid XML content\n-DETAIL: line 1: Premature end of data in tag unclosed line 1\n+DETAIL: line 1: chunk is not well balanced\n <unclosed>\n ^\n SELECT xmlparse(content '<parent><child></parent></child>');\n@@ -358,7 +358,7 @@\n \n SELECT xmlparse(document '<unclosed>');\n ERROR: invalid XML document\n-DETAIL: line 1: Premature end of data in tag unclosed line 1\n+DETAIL: line 1: EndTag: '</' not found\n <unclosed>\n ^\n SELECT xmlparse(document '<parent><child></parent></child>');\n\nwhich proves that whatever version widowbird is running is\nindeed capable of emitting XML_ERR_NOT_WELL_BALANCED with\nno other error. So my instinct to not suppress that\nunconditionally was right.\n\nAt the moment I'm thinking that we should just remove those\nnew test cases again. 
They are not valuable enough to\njustify a new variant expected-file, while suppressing the\nerrdetail would remove whatever value they do have.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jul 2024 15:53:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML test error on Arch Linux" }, { "msg_contents": "On 2024-07-09 21:53 +0200, Tom Lane wrote:\n> Well, that didn't take long: several animals are reporting\n> different error text for one or both of those new test cases.\n> [...snip...]\n> At the moment I'm thinking that we should just remove those\n> new test cases again.\n\n+1\n\n-- \nErik\n\n\n", "msg_date": "Tue, 9 Jul 2024 22:46:07 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XML test error on Arch Linux" } ]
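For reference, a rough C sketch of the filtering idea the thread above settles on — not necessarily the committed code: suppress XML_ERR_NOT_WELL_BALANCED only when a more specific libxml2 error has already been captured for the same parse, so it is still reported when it is the only signal. The error->code and xmlerrcxt->err_occurred names are assumed to match the existing xml.c error-handling code referenced in the quoted patches:

    /* Sketch: inside xml.c's libxml2 error handler, before recording the message */
    if (error->code == XML_ERR_NOT_WELL_BALANCED &&
        xmlerrcxt->err_occurred)
    {
        /*
         * libxml2 2.13 can emit "chunk is not well balanced" on top of a
         * more specific error; once one is recorded this line is pure
         * noise, so skip it.  If it arrives alone (as some older versions
         * can produce), fall through and record it normally.
         */
        return;
    }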
[ { "msg_contents": "1eff8279d4 added memory/disk usage for materialize nodes in EXPLAIN\nANALYZE.\nIn the commit message:\n> There are a few other executor node types that use tuplestores, so we\n> could also consider adding these details to the EXPLAIN ANALYZE for\n> those nodes too.\n\nSo I wanted to Add memory/disk usage for WindowAgg. Patch attached.\n\nSince WindowAgg node could create multiple tuplestore for each Window\npartition, we need to track each tuplestore storage usage so that the\nmaximum storage usage is determined. For this purpose I added new\nfields to the WindowAggState.\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Sat, 06 Jul 2024 20:22:54 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Sat, 6 Jul 2024 at 23:23, Tatsuo Ishii <[email protected]> wrote:\n> So I wanted to Add memory/disk usage for WindowAgg. Patch attached.\n\nThanks for working on that.\n\n> Since WindowAgg node could create multiple tuplestore for each Window\n> partition, we need to track each tuplestore storage usage so that the\n> maximum storage usage is determined. For this purpose I added new\n> fields to the WindowAggState.\n\nI'd recently been looking at the code that recreates the tuplestore\nfor each partition and thought we could do a bit better. In [1], I\nproposed a patch to make this better.\n\nIf you based your patch on [1], maybe a better way of doing this is\nhaving tuplestore.c track the maximum space used on disk in an extra\nfield which is updated with tuplestore_clear(). It's probably ok to\nupdate a new field called maxDiskSpace in tuplestore_clear() if\nstate->status != TSS_INMEM. If the tuplestore went to disk then an\nextra call to BufFileSize() isn't going to be noticeable, even in\ncases where we only just went over work_mem. You could then adjust\ntuplestore_space_used() to look at maxDiskSpace and return that value\nif it's larger than BufFileSize(state->myfile) and state->maxSpace.\nYou could check if maxDiskSpace == 0 to determine if the tuplestore\nhas ever gone to disk. tuplestore_storage_type_name() would also need\nto check maxDiskSpace and return \"Disk\" if that's non-zero.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAHoyFK9n-QCXKTUWT_xxtXninSMEv+gbJN66-y6prM3f4WkEHw@mail.gmail.com\n\n\n", "msg_date": "Tue, 9 Jul 2024 10:30:04 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "> On Sat, 6 Jul 2024 at 23:23, Tatsuo Ishii <[email protected]> wrote:\n>> So I wanted to Add memory/disk usage for WindowAgg. Patch attached.\n> \n> Thanks for working on that.\n\nThank you for the infrastructure you created in tuplestore.c and explain.c.\n\nBTW, it seems these executor nodes (other than Materialize and Window\nAggregate node) use tuplestore for their own purpose.\n\nCTE Scan\nRecursive Union\nTable Function Scan\n\nI have already implemented that for CTE Scan. Do you think other two\nnodes are worth to add the information? I think for consistency sake,\nit will better to add the info Recursive Union and Table Function\nScan.\n\n>> Since WindowAgg node could create multiple tuplestore for each Window\n>> partition, we need to track each tuplestore storage usage so that the\n>> maximum storage usage is determined. 
For this purpose I added new\n>> fields to the WindowAggState.\n> \n> I'd recently been looking at the code that recreates the tuplestore\n> for each partition and thought we could do a bit better. In [1], I\n> proposed a patch to make this better.\n> \n> If you based your patch on [1], maybe a better way of doing this is\n> having tuplestore.c track the maximum space used on disk in an extra\n> field which is updated with tuplestore_clear(). It's probably ok to\n> update a new field called maxDiskSpace in tuplestore_clear() if\n> state->status != TSS_INMEM. If the tuplestore went to disk then an\n> extra call to BufFileSize() isn't going to be noticeable, even in\n> cases where we only just went over work_mem. You could then adjust\n> tuplestore_space_used() to look at maxDiskSpace and return that value\n> if it's larger than BufFileSize(state->myfile) and state->maxSpace.\n> You could check if maxDiskSpace == 0 to determine if the tuplestore\n> has ever gone to disk. tuplestore_storage_type_name() would also need\n> to check maxDiskSpace and return \"Disk\" if that's non-zero.\n\nThank you for the suggestion. Yes, I noticed [1] and once it is\ncommitted, I will start to study tuplestore.c in this direction.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Tue, 09 Jul 2024 11:44:12 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Tue, 9 Jul 2024 at 14:44, Tatsuo Ishii <[email protected]> wrote:\n> BTW, it seems these executor nodes (other than Materialize and Window\n> Aggregate node) use tuplestore for their own purpose.\n>\n> CTE Scan\n> Recursive Union\n> Table Function Scan\n>\n> I have already implemented that for CTE Scan. Do you think other two\n> nodes are worth to add the information?\n\nYes, I think so. I'd keep each as a separate patch so they can be\nconsidered independently. Doing all of them should hopefully ensure we\nstrike the right balance of what code to put in explain.c and what\ncode to put in tuplestore.c. I think the WindowAgg's tuplestore usage\npattern might show that the API I picked isn't well suited when a\ntuplestore is cleared and refilled over and over.\n\nDavid\n\n\n", "msg_date": "Tue, 9 Jul 2024 14:49:41 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Tue, Jul 9, 2024 at 8:20 AM David Rowley <[email protected]> wrote:\n>\n> On Tue, 9 Jul 2024 at 14:44, Tatsuo Ishii <[email protected]> wrote:\n> > BTW, it seems these executor nodes (other than Materialize and Window\n> > Aggregate node) use tuplestore for their own purpose.\n> >\n> > CTE Scan\n> > Recursive Union\n> > Table Function Scan\n> >\n> > I have already implemented that for CTE Scan. Do you think other two\n> > nodes are worth to add the information?\n>\n> Yes, I think so. I'd keep each as a separate patch so they can be\n> considered independently. 
Doing all of them should hopefully ensure we\n> strike the right balance of what code to put in explain.c and what\n> code to put in tuplestore.c.\n+1\n\n+ if (es->format != EXPLAIN_FORMAT_TEXT)\n+ {\n+ ExplainPropertyText(\"Storage\", storageType, es);\n+ ExplainPropertyInteger(\"Maximum Storage\", \"kB\", spaceUsedKB, es);\n+ }\n+ else\n+ {\n+ ExplainIndentText(es);\n+ appendStringInfo(es->str,\n+ \"Storage: %s Maximum Storage: \" INT64_FORMAT \"kB\\n\",\n+ storageType,\n+ spaceUsedKB);\n+ }\n\nIt will be good to move this code to a function which will be called\nby show_*_info functions(). We might even convert it into a tuplestore\nspecific implementation hook after David's work.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 9 Jul 2024 11:54:15 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": ">> Yes, I think so. I'd keep each as a separate patch so they can be\n>> considered independently. Doing all of them should hopefully ensure we\n>> strike the right balance of what code to put in explain.c and what\n>> code to put in tuplestore.c.\n> +1\n> \n> + if (es->format != EXPLAIN_FORMAT_TEXT)\n> + {\n> + ExplainPropertyText(\"Storage\", storageType, es);\n> + ExplainPropertyInteger(\"Maximum Storage\", \"kB\", spaceUsedKB, es);\n> + }\n> + else\n> + {\n> + ExplainIndentText(es);\n> + appendStringInfo(es->str,\n> + \"Storage: %s Maximum Storage: \" INT64_FORMAT \"kB\\n\",\n> + storageType,\n> + spaceUsedKB);\n> + }\n> \n> It will be good to move this code to a function which will be called\n> by show_*_info functions().\n\nI have already implemented that in this direction in my working in\nprogress patch:\n\n/*\n * Show information regarding storage method and maximum memory/disk space\n * used.\n */\nstatic void\nshow_storage_info(Tuplestorestate *tupstore, ExplainState *es)\n\nWhich can be shared by Material and CTE scan node. I am going to post\nit after I take care Recursive Union and Table Function Scan node.\n\n> We might even convert it into a tuplestore\n> specific implementation hook after David's work.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Tue, 09 Jul 2024 15:44:09 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": ">>> Yes, I think so. I'd keep each as a separate patch so they can be\n>>> considered independently. Doing all of them should hopefully ensure we\n>>> strike the right balance of what code to put in explain.c and what\n>>> code to put in tuplestore.c.\n>> +1\n>> \n>> + if (es->format != EXPLAIN_FORMAT_TEXT)\n>> + {\n>> + ExplainPropertyText(\"Storage\", storageType, es);\n>> + ExplainPropertyInteger(\"Maximum Storage\", \"kB\", spaceUsedKB, es);\n>> + }\n>> + else\n>> + {\n>> + ExplainIndentText(es);\n>> + appendStringInfo(es->str,\n>> + \"Storage: %s Maximum Storage: \" INT64_FORMAT \"kB\\n\",\n>> + storageType,\n>> + spaceUsedKB);\n>> + }\n>> \n>> It will be good to move this code to a function which will be called\n>> by show_*_info functions().\n> \n> I have already implemented that in this direction in my working in\n> progress patch:\n\nAttached are the v2 patches. As suggested by David, I split them\ninto multiple patches so that each patch implements the feature for\neach node. 
You need to apply the patches in the order of patch number\n(if you want to apply all of them, \"git apply v2-*.patch\" should\nwork).\n\nv2-0001-Refactor-show_material_info.patch:\nThis refactors show_material_info(). The guts are moved to new\nshow_storage_info() so that it can be shared by not only Materialized\nnode.\n\nv2-0002-Add-memory-disk-usage-for-CTE-Scan-nodes-in-EXPLA.patch:\nThis adds memory/disk usage for CTE Scan nodes in EXPLAIN (ANALYZE) command.\n\nv2-0003-Add-memory-disk-usage-for-Table-Function-Scan-nod.patch:\nThis adds memory/disk usage for Table Function Scan nodes in EXPLAIN (ANALYZE) command.\n\nv2-0004-Add-memory-disk-usage-for-Recursive-Union-nodes-i.patch:\nThis adds memory/disk usage for Recursive Union nodes in EXPLAIN\n(ANALYZE) command. Also show_storage_info() is changed so that it\naccepts int64 storage_used, char *storage_type arguments. They are\nused if the target node uses multiple tuplestores, in case a simple\ncall to tuplestore_space_used() does not work. Such executor nodes\nneed to collect storage_used while running the node. This type of node\nincludes Recursive Union and Window Aggregate.\n\nv2-0005-Add-memory-disk-usage-for-Window-Aggregate-nodes-.patch: This\nadds memory/disk usage for Window Aggregate nodes in EXPLAIN (ANALYZE)\ncommand. Note that if David's proposal\nhttps://www.postgresql.org/message-id/CAHoyFK9n-QCXKTUWT_xxtXninSMEv%2BgbJN66-y6prM3f4WkEHw%40mail.gmail.com\nis committed, this will need to be adjusted.\n\nFor a demonstration, how storage/memory usage is shown in EXPLAIN\n(notice \"Storage: Memory Maximum Storage: 120kB\" etc.). The script\nused is attached (test.sql.txt). The SQLs are shamelessly copied from\nDavid's example and the regression test (some of them were modified by\nme).\n\nEXPLAIN (ANALYZE, COSTS OFF)\nSELECT count(t1.b) FROM (VALUES(1),(2)) t2(x) LEFT JOIN (SELECT * FROM t1 WHERE a <= 100) t1 ON TRUE;\n QUERY PLAN \n---------------------------------------------------------------------------------\n Aggregate (actual time=0.345..0.346 rows=1 loops=1)\n -> Nested Loop Left Join (actual time=0.015..0.330 rows=200 loops=1)\n -> Values Scan on \"*VALUES*\" (actual time=0.001..0.003 rows=2 loops=1)\n -> Materialize (actual time=0.006..0.152 rows=100 loops=2)\n Storage: Memory Maximum Storage: 120kB\n -> Seq Scan on t1 (actual time=0.007..0.213 rows=100 loops=1)\n Filter: (a <= 100)\n Rows Removed by Filter: 900\n Planning Time: 0.202 ms\n Execution Time: 0.377 ms\n(10 rows)\n\n-- CTE Scan node\nEXPLAIN (ANALYZE, COSTS OFF)\nWITH RECURSIVE t(n) AS (\n VALUES (1)\nUNION ALL\n SELECT n+1 FROM t WHERE n < 100\n)\nSELECT sum(n) OVER() FROM t;\n QUERY PLAN \n-----------------------------------------------------------------------------------\n WindowAgg (actual time=0.151..0.169 rows=100 loops=1)\n Storage: Memory Maximum Storage: 20kB\n CTE t\n -> Recursive Union (actual time=0.001..0.105 rows=100 loops=1)\n Storage: Memory Maximum Storage: 17kB\n -> Result (actual time=0.001..0.001 rows=1 loops=1)\n -> WorkTable Scan on t t_1 (actual time=0.000..0.000 rows=1 loops=100)\n Filter: (n < 100)\n Rows Removed by Filter: 0\n -> CTE Scan on t (actual time=0.002..0.127 rows=100 loops=1)\n Storage: Memory Maximum Storage: 20kB\n Planning Time: 0.053 ms\n Execution Time: 0.192 ms\n(13 rows)\n\n-- Table Function Scan node\nCREATE OR REPLACE VIEW public.jsonb_table_view6 AS\n SELECT js2,\n jsb2w,\n jsb2q,\n ia,\n ta,\n jba\n FROM JSON_TABLE(\n 'null'::jsonb, '$[*]' AS json_table_path_0\n PASSING\n 1 + 2 AS a,\n '\"foo\"'::json AS 
\"b c\"\n COLUMNS (\n js2 json PATH '$' WITHOUT WRAPPER KEEP QUOTES,\n jsb2w jsonb PATH '$' WITH UNCONDITIONAL WRAPPER KEEP QUOTES,\n jsb2q jsonb PATH '$' WITHOUT WRAPPER OMIT QUOTES,\n ia integer[] PATH '$' WITHOUT WRAPPER KEEP QUOTES,\n ta text[] PATH '$' WITHOUT WRAPPER KEEP QUOTES,\n jba jsonb[] PATH '$' WITHOUT WRAPPER KEEP QUOTES\n )\n );\nCREATE VIEW\nEXPLAIN (ANALYZE, COSTS OFF) SELECT * FROM jsonb_table_view6;\n QUERY PLAN \n-------------------------------------------------------------------------------\n Table Function Scan on \"json_table\" (actual time=0.024..0.025 rows=1 loops=1)\n Storage: Memory Maximum Storage: 17kB\n Planning Time: 0.100 ms\n Execution Time: 0.054 ms\n(4 rows)\n\n\n\n\n\n\n\n-- Mateialize node\nDROP TABLE t1;\nCREATE TABLE t1 (a INT, b TEXT);\nINSERT INTO t1 SELECT x,repeat('a',1024) from generate_series(1,1000)x;\nCREATE INDEX ON t1(a);\nEXPLAIN (ANALYZE, COSTS OFF)\nSELECT count(t1.b) FROM (VALUES(1),(2)) t2(x) LEFT JOIN (SELECT * FROM t1 WHERE a <= 100) t1 ON TRUE;\n\n-- CTE Scan node\nEXPLAIN (ANALYZE, COSTS OFF)\nWITH RECURSIVE t(n) AS (\n VALUES (1)\nUNION ALL\n SELECT n+1 FROM t WHERE n < 100\n)\nSELECT sum(n) OVER() FROM t;\n\n-- Table Function Scan node\nCREATE OR REPLACE VIEW public.jsonb_table_view6 AS\n SELECT js2,\n jsb2w,\n jsb2q,\n ia,\n ta,\n jba\n FROM JSON_TABLE(\n 'null'::jsonb, '$[*]' AS json_table_path_0\n PASSING\n 1 + 2 AS a,\n '\"foo\"'::json AS \"b c\"\n COLUMNS (\n js2 json PATH '$' WITHOUT WRAPPER KEEP QUOTES,\n jsb2w jsonb PATH '$' WITH UNCONDITIONAL WRAPPER KEEP QUOTES,\n jsb2q jsonb PATH '$' WITHOUT WRAPPER OMIT QUOTES,\n ia integer[] PATH '$' WITHOUT WRAPPER KEEP QUOTES,\n ta text[] PATH '$' WITHOUT WRAPPER KEEP QUOTES,\n jba jsonb[] PATH '$' WITHOUT WRAPPER KEEP QUOTES\n )\n );\n\nEXPLAIN (ANALYZE, COSTS OFF) SELECT * FROM jsonb_table_view6;", "msg_date": "Wed, 10 Jul 2024 18:36:10 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "Hi!\n\n+1 for the idea of the patch. Consider it useful.\n\nI looked at the patch set and don't see any obvious defects. It applies\nwithout any problems and looks pretty good for me.\nOnly one thing is left to do. Add basic tests for the added functionality\nto make it committable. For example, as in the\nmentioned 1eff8279d494b9.\n\n-- \nBest regards,\nMaxim Orlov.\n\nHi!+1 for the idea of the patch. Consider it useful.I looked at the patch set and don't see any obvious defects. It applies without any problems and looks pretty good for me.Only one thing is left to do. Add basic tests for the added functionality to make it committable.  For example, as in the mentioned 1eff8279d494b9.-- Best regards,Maxim Orlov.", "msg_date": "Tue, 3 Sep 2024 16:53:47 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "> Hi!\n> \n> +1 for the idea of the patch. Consider it useful.\n> \n> I looked at the patch set and don't see any obvious defects. It applies\n> without any problems and looks pretty good for me.\n\nThank you for reviewing my patch.\n\n> Only one thing is left to do. Add basic tests for the added functionality\n> to make it committable. For example, as in the\n> mentioned 1eff8279d494b9.\n\nAgreed. 
Probably add to explain.sql?\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Wed, 04 Sep 2024 09:06:57 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Wed, 4 Sept 2024 at 03:07, Tatsuo Ishii <[email protected]> wrote:\n\n>\n> Agreed. Probably add to explain.sql?\n>\n\nYeah, I think this is an appropriate place.\n\n\n-- \nBest regards,\nMaxim Orlov.\n\nOn Wed, 4 Sept 2024 at 03:07, Tatsuo Ishii <[email protected]> wrote:\nAgreed. Probably add to explain.sql?Yeah, I think this is an appropriate place.-- Best regards,Maxim Orlov.", "msg_date": "Wed, 4 Sep 2024 08:34:49 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Wed, Jul 10, 2024 at 5:36 PM Tatsuo Ishii <[email protected]> wrote:\n>\n>\n> Attached are the v2 patches. As suggested by David, I split them\n> into multiple patches so that each patch implements the feature for\n> each node. You need to apply the patches in the order of patch number\n> (if you want to apply all of them, \"git apply v2-*.patch\" should\n> work).\n>\n> v2-0001-Refactor-show_material_info.patch:\n> This refactors show_material_info(). The guts are moved to new\n> show_storage_info() so that it can be shared by not only Materialized\n> node.\n>\n> v2-0002-Add-memory-disk-usage-for-CTE-Scan-nodes-in-EXPLA.patch:\n> This adds memory/disk usage for CTE Scan nodes in EXPLAIN (ANALYZE) command.\n>\n> v2-0003-Add-memory-disk-usage-for-Table-Function-Scan-nod.patch:\n> This adds memory/disk usage for Table Function Scan nodes in EXPLAIN (ANALYZE) command.\n>\n> v2-0004-Add-memory-disk-usage-for-Recursive-Union-nodes-i.patch:\n> This adds memory/disk usage for Recursive Union nodes in EXPLAIN\n> (ANALYZE) command. Also show_storage_info() is changed so that it\n> accepts int64 storage_used, char *storage_type arguments. They are\n> used if the target node uses multiple tuplestores, in case a simple\n> call to tuplestore_space_used() does not work. Such executor nodes\n> need to collect storage_used while running the node. This type of node\n> includes Recursive Union and Window Aggregate.\n>\n> v2-0005-Add-memory-disk-usage-for-Window-Aggregate-nodes-.patch: This\n> adds memory/disk usage for Window Aggregate nodes in EXPLAIN (ANALYZE)\n> command. Note that if David's proposal\n> https://www.postgresql.org/message-id/CAHoyFK9n-QCXKTUWT_xxtXninSMEv%2BgbJN66-y6prM3f4WkEHw%40mail.gmail.com\n> is committed, this will need to be adjusted.\n>\n> For a demonstration, how storage/memory usage is shown in EXPLAIN\n> (notice \"Storage: Memory Maximum Storage: 120kB\" etc.). The script\n> used is attached (test.sql.txt). The SQLs are shamelessly copied from\n> David's example and the regression test (some of them were modified by\n> me).\n>\n\nhi. 
I can roughly understand it.\n\nI have one minor issue with the comment.\n\ntypedef struct RecursiveUnionState\n{\n PlanState ps; /* its first field is NodeTag */\n bool recursing;\n bool intermediate_empty;\n Tuplestorestate *working_table;\n Tuplestorestate *intermediate_table;\n int64 storageSize; /* max storage size Tuplestore */\n char *storageType; /* the storage type above */\n....\n}\n\n\"/* the storage type above */\"\nis kind of ambiguous, since there is more than one Tuplestorestate.\n\ni think it roughly means: the storage type of working_table\nwhile the max storage of working_table.\n\n\n\ntypedef struct WindowAggState\n{\n ScanState ss; /* its first field is NodeTag */\n\n /* these fields are filled in by ExecInitExpr: */\n List *funcs; /* all WindowFunc nodes in targetlist */\n int numfuncs; /* total number of window functions */\n int numaggs; /* number that are plain aggregates */\n\n WindowStatePerFunc perfunc; /* per-window-function information */\n WindowStatePerAgg peragg; /* per-plain-aggregate information */\n ExprState *partEqfunction; /* equality funcs for partition columns */\n ExprState *ordEqfunction; /* equality funcs for ordering columns */\n Tuplestorestate *buffer; /* stores rows of current partition */\n int64 storageSize; /* max storage size in buffer */\n char *storageType; /* the storage type above */\n}\n\n\" /* the storage type above */\"\nI think it roughly means:\n\" the storage type of WindowAggState->buffer while the max storage of\nWindowAggState->buffer\".\n\n\n", "msg_date": "Wed, 4 Sep 2024 16:57:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Wed, 10 Jul 2024 at 21:36, Tatsuo Ishii <[email protected]> wrote:\n> v2-0005-Add-memory-disk-usage-for-Window-Aggregate-nodes-.patch: This\n> adds memory/disk usage for Window Aggregate nodes in EXPLAIN (ANALYZE)\n> command. Note that if David's proposal\n> https://www.postgresql.org/message-id/CAHoyFK9n-QCXKTUWT_xxtXninSMEv%2BgbJN66-y6prM3f4WkEHw%40mail.gmail.com\n> is committed, this will need to be adjusted.\n\nHi,\n\nI pushed the changes to WindowAgg so as not to call tuplestore_end()\non every partition. Can you rebase this patch over that change?\n\nIt would be good to do this in a way that does not add any new state\nto WindowAggState, you can see that I had to shuffle fields around in\nthat struct because the next_parition field would have caused the\nstruct to become larger. I've not looked closely, but I expect this\ncan be done by adding more code to tuplestore_updatemax() to also\ntrack the disk space used if the current storage has gone to disk. I\nexpect the maxSpace field can be used for both, but we'd need another\nbool field to track if the max used was by disk or memory.\n\nI think the performance of this would also need to be tested as it\nmeans doing an lseek() on every tuplestore_clear() when we've gone to\ndisk. Probably that will be dominated by all the other overheads of a\ntuplestore going to disk (i.e. dumptuples() etc), but it would be good\nto check this. I suggest setting work_mem = 64 and making a test case\nthat only just spills to disk. 
Maybe do a few thousand partitions\nworth of that and see if you can measure any slowdown.\n\nDavid\n\n\n", "msg_date": "Thu, 5 Sep 2024 16:52:37 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "Hi,\n\n> On Wed, 10 Jul 2024 at 21:36, Tatsuo Ishii <[email protected]> wrote:\n>> v2-0005-Add-memory-disk-usage-for-Window-Aggregate-nodes-.patch: This\n>> adds memory/disk usage for Window Aggregate nodes in EXPLAIN (ANALYZE)\n>> command. Note that if David's proposal\n>> https://www.postgresql.org/message-id/CAHoyFK9n-QCXKTUWT_xxtXninSMEv%2BgbJN66-y6prM3f4WkEHw%40mail.gmail.com\n>> is committed, this will need to be adjusted.\n> \n> Hi,\n> \n> I pushed the changes to WindowAgg so as not to call tuplestore_end()\n> on every partition. Can you rebase this patch over that change?\n> \n> It would be good to do this in a way that does not add any new state\n> to WindowAggState, you can see that I had to shuffle fields around in\n> that struct because the next_parition field would have caused the\n> struct to become larger. I've not looked closely, but I expect this\n> can be done by adding more code to tuplestore_updatemax() to also\n> track the disk space used if the current storage has gone to disk. I\n> expect the maxSpace field can be used for both, but we'd need another\n> bool field to track if the max used was by disk or memory.\n> \n> I think the performance of this would also need to be tested as it\n> means doing an lseek() on every tuplestore_clear() when we've gone to\n> disk. Probably that will be dominated by all the other overheads of a\n> tuplestore going to disk (i.e. dumptuples() etc), but it would be good\n> to check this. I suggest setting work_mem = 64 and making a test case\n> that only just spills to disk. Maybe do a few thousand partitions\n> worth of that and see if you can measure any slowdown.\n\nThank you for the suggestion. I will look into this.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Thu, 05 Sep 2024 14:38:09 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "Hi,\n\n> hi. 
I can roughly understand it.\n> \n> I have one minor issue with the comment.\n> \n> typedef struct RecursiveUnionState\n> {\n> PlanState ps; /* its first field is NodeTag */\n> bool recursing;\n> bool intermediate_empty;\n> Tuplestorestate *working_table;\n> Tuplestorestate *intermediate_table;\n> int64 storageSize; /* max storage size Tuplestore */\n> char *storageType; /* the storage type above */\n> ....\n> }\n> \n> \"/* the storage type above */\"\n> is kind of ambiguous, since there is more than one Tuplestorestate.\n> \n> i think it roughly means: the storage type of working_table\n> while the max storage of working_table.\n> \n> \n> \n> typedef struct WindowAggState\n> {\n> ScanState ss; /* its first field is NodeTag */\n> \n> /* these fields are filled in by ExecInitExpr: */\n> List *funcs; /* all WindowFunc nodes in targetlist */\n> int numfuncs; /* total number of window functions */\n> int numaggs; /* number that are plain aggregates */\n> \n> WindowStatePerFunc perfunc; /* per-window-function information */\n> WindowStatePerAgg peragg; /* per-plain-aggregate information */\n> ExprState *partEqfunction; /* equality funcs for partition columns */\n> ExprState *ordEqfunction; /* equality funcs for ordering columns */\n> Tuplestorestate *buffer; /* stores rows of current partition */\n> int64 storageSize; /* max storage size in buffer */\n> char *storageType; /* the storage type above */\n> }\n> \n> \" /* the storage type above */\"\n> I think it roughly means:\n> \" the storage type of WindowAggState->buffer while the max storage of\n> WindowAggState->buffer\".\n\nThank you for looking into my patch. Unfortunately I need to work on\nother issue before adjusting the comments because the fields might go\naway if I change the tuplestore infrastructure per David's suggestion:\nhttps://www.postgresql.org/message-id/CAApHDvoY8cibGcicLV0fNh%3D9JVx9PANcWvhkdjBnDCc9Quqytg%40mail.gmail.com\n\nAfter this I will rebase the patches. This commit requires changes.\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=908a968612f9ed61911d8ca0a185b262b82f1269\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Thu, 05 Sep 2024 15:10:24 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "Hi David,\n\n>> I pushed the changes to WindowAgg so as not to call tuplestore_end()\n>> on every partition. Can you rebase this patch over that change?\n>> \n>> It would be good to do this in a way that does not add any new state\n>> to WindowAggState, you can see that I had to shuffle fields around in\n>> that struct because the next_parition field would have caused the\n>> struct to become larger. I've not looked closely, but I expect this\n>> can be done by adding more code to tuplestore_updatemax() to also\n>> track the disk space used if the current storage has gone to disk. I\n>> expect the maxSpace field can be used for both, but we'd need another\n>> bool field to track if the max used was by disk or memory.\n\nI have created a patch in the direction you suggested. See attached\npatch (v1-0001-Enhance-tuplestore.txt). To not confuse CFbot, the\nextension is \"txt\", not \"patch\".\n\n>> I think the performance of this would also need to be tested as it\n>> means doing an lseek() on every tuplestore_clear() when we've gone to\n>> disk. 
Probably that will be dominated by all the other overheads of a\n>> tuplestore going to disk (i.e. dumptuples() etc), but it would be good\n>> to check this. I suggest setting work_mem = 64 and making a test case\n>> that only just spills to disk. Maybe do a few thousand partitions\n>> worth of that and see if you can measure any slowdown.\n\nI copied your shell script and slightly modified it then ran pgbench\nwith (1 10 100 1000 5000 10000) window partitions (see attached shell\nscript). In the script I set work_mem to 64kB. It seems for 10000,\n1000 and 100 partitions, the performance difference seems\nnoises. However, for 10, 2, 1 partitions. I see large performance\ndegradation with the patched version: patched is slower than stock\nmaster in 1.5% (10 partitions), 41% (2 partitions) and 55.7% (1\npartition). See the attached graph.\n\n From 4749d2018f33e883c292eb904f3253d393a47c99 Mon Sep 17 00:00:00 2001\nFrom: Tatsuo Ishii <[email protected]>\nDate: Thu, 5 Sep 2024 20:59:01 +0900\nSubject: [PATCH v1] Enhance tuplestore.\n\nLet tuplestore_updatemax() handle both memory and disk case.\n---\n src/backend/utils/sort/tuplestore.c | 27 ++++++++++++++++++---------\n 1 file changed, 18 insertions(+), 9 deletions(-)\n\ndiff --git a/src/backend/utils/sort/tuplestore.c b/src/backend/utils/sort/tuplestore.c\nindex 444c8e25c2..854121fc11 100644\n--- a/src/backend/utils/sort/tuplestore.c\n+++ b/src/backend/utils/sort/tuplestore.c\n@@ -107,9 +107,10 @@ struct Tuplestorestate\n \tbool\t\tbackward;\t\t/* store extra length words in file? */\n \tbool\t\tinterXact;\t\t/* keep open through transactions? */\n \tbool\t\ttruncated;\t\t/* tuplestore_trim has removed tuples? */\n+\tbool\t\tinMem;\t\t\t/* true if maxSpace is for memory */\n \tint64\t\tavailMem;\t\t/* remaining memory available, in bytes */\n \tint64\t\tallowedMem;\t\t/* total memory allowed, in bytes */\n-\tint64\t\tmaxSpace;\t\t/* maximum space used in memory */\n+\tint64\t\tmaxSpace;\t\t/* maximum space used in memory or disk */\n \tint64\t\ttuples;\t\t\t/* number of tuples added */\n \tBufFile *myfile;\t\t\t/* underlying file, or NULL if none */\n \tMemoryContext context;\t\t/* memory context for holding tuples */\n@@ -262,6 +263,7 @@ tuplestore_begin_common(int eflags, bool interXact, int maxKBytes)\n \tstate->eflags = eflags;\n \tstate->interXact = interXact;\n \tstate->truncated = false;\n+\tstate->inMem = true;\n \tstate->allowedMem = maxKBytes * 1024L;\n \tstate->availMem = state->allowedMem;\n \tstate->maxSpace = 0;\n@@ -1497,8 +1499,17 @@ static void\n tuplestore_updatemax(Tuplestorestate *state)\n {\n \tif (state->status == TSS_INMEM)\n+\t{\n \t\tstate->maxSpace = Max(state->maxSpace,\n \t\t\t\t\t\t\t state->allowedMem - state->availMem);\n+\t\tstate->inMem = true;\n+\t}\n+\telse\n+\t{\n+\t\tstate->maxSpace = Max(state->maxSpace,\n+\t\t\t\t\t\t\t BufFileSize(state->myfile));\n+\t\tstate->inMem = false;\n+\t}\n }\n \n /*\n@@ -1509,7 +1520,7 @@ tuplestore_updatemax(Tuplestorestate *state)\n const char *\n tuplestore_storage_type_name(Tuplestorestate *state)\n {\n-\tif (state->status == TSS_INMEM)\n+\tif (state->inMem)\n \t\treturn \"Memory\";\n \telse\n \t\treturn \"Disk\";\n@@ -1517,8 +1528,7 @@ tuplestore_storage_type_name(Tuplestorestate *state)\n \n /*\n * tuplestore_space_used\n- *\t\tReturn the maximum space used in memory unless the tuplestore has spilled\n- *\t\tto disk, in which case, return the disk space used.\n+ *\t\tReturn the maximum space used in memory or disk.\n */\n int64\n tuplestore_space_used(Tuplestorestate 
*state)\n@@ -1526,10 +1536,7 @@ tuplestore_space_used(Tuplestorestate *state)\n \t/* First, update the maxSpace field */\n \ttuplestore_updatemax(state);\n \n-\tif (state->status == TSS_INMEM)\n-\t\treturn state->maxSpace;\n-\telse\n-\t\treturn BufFileSize(state->myfile);\n+\treturn state->maxSpace;\n }\n \n /*\n@@ -1601,7 +1608,9 @@ writetup_heap(Tuplestorestate *state, void *tup)\n \tif (state->backward)\t\t/* need trailing length word? */\n \t\tBufFileWrite(state->myfile, &tuplen, sizeof(tuplen));\n \n-\t/* no need to call tuplestore_updatemax() when not in TSS_INMEM */\n+\t/* update maxSpace */\n+\ttuplestore_updatemax(state);\n+\n \tFREEMEM(state, GetMemoryChunkSpace(tuple));\n \theap_free_minimal_tuple(tuple);\n }\n-- \n2.25.1\n\n\ndbname=test\nrows=10000\nsecs=10\nntests=3\n\npsql -c \"drop table if exists part_test;\" $dbname\npsql -c \"create table part_test (a int not null);\" $dbname\npsql -c \"insert into part_test select g.s from generate_series(1,$rows) g(s);\" $dbname\npsql -c \"vacuum freeze analyze part_test;\" $dbname\npsql -c \"alter system set work_mem = '64kB';\" $dbname\npsql -c \"alter system set jit = 0;\" $dbname\npsql -c \"select pg_reload_conf();\" $dbname\n\nfor c in 1 10 100 1000 5000 10000\ndo\n\techo \"SELECT a,count(*) OVER (PARTITION BY a / $c) FROM part_test OFFSET $rows\" > bench.sql\n\techo \"Testing with $(($rows / $c)) partitions\"\n\tfor i in $(seq 1 $ntests)\n\tdo\n\t\tpgbench -n -f bench.sql -M prepared -T $secs $dbname | grep latency\n\tdone\ndone", "msg_date": "Fri, 06 Sep 2024 13:21:31 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Fri, 6 Sept 2024 at 16:21, Tatsuo Ishii <[email protected]> wrote:\n> However, for 10, 2, 1 partitions. I see large performance\n> degradation with the patched version: patched is slower than stock\n> master in 1.5% (10 partitions), 41% (2 partitions) and 55.7% (1\n> partition). See the attached graph.\n\nThanks for making the adjustments to this.\n\nI don't think there is any need to call tuplestore_updatemax() from\nwithin writetup_heap(). That means having to update the maximum space\nused every time a tuple is written to disk. That's a fairly massive\noverhead.\n\nInstead, it should be fine to modify tuplestore_updatemax() to set a\nflag to true if state->status != TSS_INMEM and then record the disk\nspace used. That flag won't ever be set to false again.\ntuplestore_storage_type_name() should just return \"Disk\" if the new\ndisk flag is set, even if state->status == TSS_INMEM. Since the\nwork_mem size won't change between tuplestore_clear() calls, if we've\nonce spilt to disk, then we shouldn't care about the memory used for\nruns that didn't. Those will always have used less memory.\n\nI did this quickly, but playing around with the attached, I didn't see\nany slowdown.\n\nHere's the results I got on my Zen2 AMD machine:\n\nparts master yours mine mine_v_master\n10000 5.01 5.12 5.09 99%\n1000 4.30 4.25 4.24 101%\n100 4.17 4.13 4.12 101%\n10 4.16 4.12 4.10 101%\n2 4.75 7.64 4.73 100%\n1 4.75 8.57 4.73 100%\n\nDavid", "msg_date": "Fri, 6 Sep 2024 17:07:48 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "> Thanks for making the adjustments to this.\n> \n> I don't think there is any need to call tuplestore_updatemax() from\n> within writetup_heap(). 
That means having to update the maximum space\n> used every time a tuple is written to disk. That's a fairly massive\n> overhead.\n> \n> Instead, it should be fine to modify tuplestore_updatemax() to set a\n> flag to true if state->status != TSS_INMEM and then record the disk\n> space used. That flag won't ever be set to false again.\n> tuplestore_storage_type_name() should just return \"Disk\" if the new\n> disk flag is set, even if state->status == TSS_INMEM. Since the\n> work_mem size won't change between tuplestore_clear() calls, if we've\n> once spilt to disk, then we shouldn't care about the memory used for\n> runs that didn't. Those will always have used less memory.\n> \n> I did this quickly, but playing around with the attached, I didn't see\n> any slowdown.\n\nYour patch looks good to me and I confirmed that with your patch I\ndidn't see any slowdown either. Thanks!\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Fri, 06 Sep 2024 15:02:37 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Fri, Sep 6, 2024 at 11:32 AM Tatsuo Ishii <[email protected]> wrote:\n>\n> > Thanks for making the adjustments to this.\n> >\n> > I don't think there is any need to call tuplestore_updatemax() from\n> > within writetup_heap(). That means having to update the maximum space\n> > used every time a tuple is written to disk. That's a fairly massive\n> > overhead.\n> >\n> > Instead, it should be fine to modify tuplestore_updatemax() to set a\n> > flag to true if state->status != TSS_INMEM and then record the disk\n> > space used. That flag won't ever be set to false again.\n> > tuplestore_storage_type_name() should just return \"Disk\" if the new\n> > disk flag is set, even if state->status == TSS_INMEM. Since the\n> > work_mem size won't change between tuplestore_clear() calls, if we've\n> > once spilt to disk, then we shouldn't care about the memory used for\n> > runs that didn't. Those will always have used less memory.\n> >\n> > I did this quickly, but playing around with the attached, I didn't see\n> > any slowdown.\n>\n> Your patch looks good to me and I confirmed that with your patch I\n> didn't see any slowdown either. Thanks!\n\nThe changes look better. A nitpick though. With their definitions\nchanged, I think it's better to change the names of the functions\nsince their purpose has changed. Right now they report the storage\ntype and size used, respectively, at the time of calling the function.\nWith this patch, they report maximum space ever used and the storage\ncorresponding to the maximum space. tuplestore_space_used() may be\nchanged to tuplestore_maxspace_used(). I am having difficulty with\ntuplestore_storage_type_name(); tuplestore_largest_storage_type_name()\nseems mouthful and yet not doing justice to the functionality. 
It\nmight be better to just have one funciton tuplestore_maxspace_used()\nwhich returns both the maximum space used as well as the storage type\nwhen maximum space was used.\n\nThe comments need a bit of grammar fixes, but that can be done when\nfinalizing the patches.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 6 Sep 2024 12:37:55 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "> The changes look better. A nitpick though. With their definitions\n> changed, I think it's better to change the names of the functions\n> since their purpose has changed. Right now they report the storage\n> type and size used, respectively, at the time of calling the function.\n> With this patch, they report maximum space ever used and the storage\n> corresponding to the maximum space. tuplestore_space_used() may be\n> changed to tuplestore_maxspace_used(). I am having difficulty with\n> tuplestore_storage_type_name(); tuplestore_largest_storage_type_name()\n> seems mouthful and yet not doing justice to the functionality. It\n> might be better to just have one funciton tuplestore_maxspace_used()\n> which returns both the maximum space used as well as the storage type\n> when maximum space was used.\n\n+1. Returning the storage type by the same function, not by a separate\nfunction looks more natural.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Fri, 06 Sep 2024 16:50:30 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Fri, 6 Sept 2024 at 19:08, Ashutosh Bapat\n<[email protected]> wrote:\n> The changes look better. A nitpick though. With their definitions\n> changed, I think it's better to change the names of the functions\n> since their purpose has changed. Right now they report the storage\n> type and size used, respectively, at the time of calling the function.\n> With this patch, they report maximum space ever used and the storage\n> corresponding to the maximum space. tuplestore_space_used() may be\n> changed to tuplestore_maxspace_used(). I am having difficulty with\n> tuplestore_storage_type_name(); tuplestore_largest_storage_type_name()\n> seems mouthful and yet not doing justice to the functionality. It\n> might be better to just have one funciton tuplestore_maxspace_used()\n> which returns both the maximum space used as well as the storage type\n> when maximum space was used.\n\nHow about just removing tuplestore_storage_type_name() and\ntuplestore_space_used() and adding tuplestore_get_stats(). I did take\nsome inspiration from tuplesort.c for this, so maybe we can defer back\nthere for further guidance. I'm not so sure it's worth having a stats\nstruct type like tuplesort.c has. All we need is a char ** and an\nint64 * output parameter to pass to the stats function. I don't think\nwe need to copy the tuplesort_method_name(). 
It seems fine just to\npoint the output parameter of the stats function at the statically\nallocated constant.\n\nDavid\n\n\n", "msg_date": "Sat, 7 Sep 2024 01:51:34 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Fri, Sep 6, 2024 at 7:21 PM David Rowley <[email protected]> wrote:\n>\n> On Fri, 6 Sept 2024 at 19:08, Ashutosh Bapat\n> <[email protected]> wrote:\n> > The changes look better. A nitpick though. With their definitions\n> > changed, I think it's better to change the names of the functions\n> > since their purpose has changed. Right now they report the storage\n> > type and size used, respectively, at the time of calling the function.\n> > With this patch, they report maximum space ever used and the storage\n> > corresponding to the maximum space. tuplestore_space_used() may be\n> > changed to tuplestore_maxspace_used(). I am having difficulty with\n> > tuplestore_storage_type_name(); tuplestore_largest_storage_type_name()\n> > seems mouthful and yet not doing justice to the functionality. It\n> > might be better to just have one funciton tuplestore_maxspace_used()\n> > which returns both the maximum space used as well as the storage type\n> > when maximum space was used.\n>\n> How about just removing tuplestore_storage_type_name() and\n> tuplestore_space_used() and adding tuplestore_get_stats(). I did take\n> some inspiration from tuplesort.c for this, so maybe we can defer back\n> there for further guidance. I'm not so sure it's worth having a stats\n> struct type like tuplesort.c has. All we need is a char ** and an\n> int64 * output parameter to pass to the stats function. I don't think\n> we need to copy the tuplesort_method_name(). It seems fine just to\n> point the output parameter of the stats function at the statically\n> allocated constant.\n\ntuplestore_get_stats() similar to tuplesort_get_stats() looks fine. In\nfuture the stats reported by this function might expand e.g. maximum\nnumber of readers may be included in the stats. If it expands beyond\ntwo values, we could think of a separate structure, but for now it\nlooks fine given its limited use. A comment explaining why we aren't\nusing a stats structure and some guidance on when that would be\nappropriate will be better.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 9 Sep 2024 15:42:34 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "> On Fri, 6 Sept 2024 at 19:08, Ashutosh Bapat\n> <[email protected]> wrote:\n>> The changes look better. A nitpick though. With their definitions\n>> changed, I think it's better to change the names of the functions\n>> since their purpose has changed. Right now they report the storage\n>> type and size used, respectively, at the time of calling the function.\n>> With this patch, they report maximum space ever used and the storage\n>> corresponding to the maximum space. tuplestore_space_used() may be\n>> changed to tuplestore_maxspace_used(). I am having difficulty with\n>> tuplestore_storage_type_name(); tuplestore_largest_storage_type_name()\n>> seems mouthful and yet not doing justice to the functionality. 
It\n>> might be better to just have one funciton tuplestore_maxspace_used()\n>> which returns both the maximum space used as well as the storage type\n>> when maximum space was used.\n> \n> How about just removing tuplestore_storage_type_name() and\n> tuplestore_space_used() and adding tuplestore_get_stats(). I did take\n> some inspiration from tuplesort.c for this, so maybe we can defer back\n> there for further guidance. I'm not so sure it's worth having a stats\n> struct type like tuplesort.c has. All we need is a char ** and an\n> int64 * output parameter to pass to the stats function. I don't think\n> we need to copy the tuplesort_method_name(). It seems fine just to\n> point the output parameter of the stats function at the statically\n> allocated constant.\n\nAre you going to push the changes to tuplestore.c anytime soon? I\nwould like to rebase my patch[1] but the patch could be affected by\nthe tuplestore API change.\n\nBest reagards,\n\n[1] https://www.postgresql.org/message-id/CAApHDvoY8cibGcicLV0fNh%3D9JVx9PANcWvhkdjBnDCc9Quqytg%40mail.gmail.com\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Thu, 12 Sep 2024 11:03:54 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Thu, 12 Sept 2024 at 14:04, Tatsuo Ishii <[email protected]> wrote:\n> Are you going to push the changes to tuplestore.c anytime soon? I\n> would like to rebase my patch[1] but the patch could be affected by\n> the tuplestore API change.\n\nOk, I'll look at that. I had thought you were taking care of writing the patch.\n\nDavid\n\n\n", "msg_date": "Thu, 12 Sep 2024 14:28:14 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "> On Thu, 12 Sept 2024 at 14:04, Tatsuo Ishii <[email protected]> wrote:\n>> Are you going to push the changes to tuplestore.c anytime soon? 
I\n>> would like to rebase my patch[1] but the patch could be affected by\n>> the tuplestore API change.\n> \n> Ok, I'll look at that.\n\nThanks.\n\nI had thought you were taking care of writing the patch.\n\nSorry, I should have asked you first if you are going to write the API\nchange patch.\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n\n", "msg_date": "Thu, 12 Sep 2024 11:42:06 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Thu, 12 Sept 2024 at 14:42, Tatsuo Ishii <[email protected]> wrote:\n> Sorry, I should have asked you first if you are going to write the API\n> change patch.\n\nI pushed a patch to change the API.\n\nDavid\n\n\n", "msg_date": "Thu, 12 Sep 2024 17:07:23 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "> I pushed a patch to change the API.\n\nThank you!\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Thu, 12 Sep 2024 14:09:59 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "Here is the v3 patch. This time I only include patch for the Window\nAggregate node. Patches for other node types will come after this\npatch getting committed or come close to commitable state.\n\nDavid,\nIn this patch I refactored show_material_info. I divided it into\nshow_material_info and show_storage_info so that the latter can be\nused by other node types including window aggregate node. What do you\nthink?\n\nI also added a test case in explain.sql per discussion with Maxim\nOrlov.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Thu, 12 Sep 2024 21:12:06 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Fri, 13 Sept 2024 at 00:12, Tatsuo Ishii <[email protected]> wrote:\n> In this patch I refactored show_material_info. I divided it into\n> show_material_info and show_storage_info so that the latter can be\n> used by other node types including window aggregate node. What do you\n> think?\n\nYes, I think it's a good idea to move that into a helper function. If\nyou do the other node types, without that helper the could would have\nto be repeated quite a few times. Maybe show_storage_info() can be\nmoved up with the other helper functions, say below\nshow_sortorder_options() ? It might be a good idea to keep the \"if\n(!es->analyze || tupstore == NULL)\" checks in the calling function\nrather than the helper too.\n\nI thought about the location of the test for a while and read the\n\"This file is concerned with testing EXPLAIN in its own right.\"\ncomment at the top of that explain.out. I was trying to decide if\ntesting output of a specific node type met this or not. I can't pick\nout any other tests there which are specific to a node type, so I'm\nunsure if this is the location for it or not. 
However, to put it\nanywhere else means having to add a plpgsql function to mask out the\nunstable parts of EXPLAIN, so maybe the location is good as it saves\nfrom having to do that. I'm 50/50 on this, so I'm happy to let you\ndecide. You could also shrink that 100 rows into a smaller number for\nthe generate_series without losing any coverage.\n\nAside from that, I think the patch is good. Thanks for working on it.\n\nDavid\n\n\n", "msg_date": "Fri, 13 Sep 2024 09:20:11 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "David,\n\nThank you for your review.\n\n> Yes, I think it's a good idea to move that into a helper function. If\n> you do the other node types, without that helper the could would have\n> to be repeated quite a few times. Maybe show_storage_info() can be\n> moved up with the other helper functions, say below\n> show_sortorder_options() ?\n\nYeah, that makes sense. Looks less random.\n\n> It might be a good idea to keep the \"if\n> (!es->analyze || tupstore == NULL)\" checks in the calling function\n> rather than the helper too.\n\nI agree with this. This kind of check should be done in the calling\nfunction.\n\n> I thought about the location of the test for a while and read the\n> \"This file is concerned with testing EXPLAIN in its own right.\"\n> comment at the top of that explain.out. I was trying to decide if\n> testing output of a specific node type met this or not. I can't pick\n> out any other tests there which are specific to a node type, so I'm\n> unsure if this is the location for it or not. However, to put it\n> anywhere else means having to add a plpgsql function to mask out the\n> unstable parts of EXPLAIN, so maybe the location is good as it saves\n> from having to do that. I'm 50/50 on this, so I'm happy to let you\n> decide.\n\nYeah. Maybe we should move the function to elsewhere so that it can be\nshared by other tests. However in this case it's purpose is testing an\nadditional output in an explain command. I think this is not far from\n\"This file is concerned with testing EXPLAIN in its own right.\". So I\nwould like to keep the test in explain.sql.\n\n> You could also shrink that 100 rows into a smaller number for\n> the generate_series without losing any coverage.\n\nRight. I will make the change.\n\n> Aside from that, I think the patch is good. Thanks for working on it.\n\nThanks. Attached is the v4 patch. I am going push it if there's no\nobjection.\n\nAfter this, I will work on remaining node types.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Fri, 13 Sep 2024 15:11:33 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Fri, 13 Sept 2024 at 09:11, Tatsuo Ishii <[email protected]> wrote:\n\n> Thanks. Attached is the v4 patch. I am going push it if there's no\n> objection.\n>\n\nLooks good to me. Thank you for your work.\n\n-- \nBest regards,\nMaxim Orlov.\n\nOn Fri, 13 Sept 2024 at 09:11, Tatsuo Ishii <[email protected]> wrote:\nThanks. Attached is the v4 patch. I am going push it if there's no\nobjection.Looks good to me. Thank you for your work. 
-- Best regards,Maxim Orlov.", "msg_date": "Fri, 13 Sep 2024 11:57:25 +0300", "msg_from": "Maxim Orlov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "The patch looks fine but it doesn't add a test case where Storage is\nDisk or the case when the last usage fit in memory but an earlier\nusage spilled to disk. Do we want to cover those. This test would be\nthe only one where those code paths could be tested.\n\nOn Fri, Sep 13, 2024 at 11:41 AM Tatsuo Ishii <[email protected]> wrote:\n>\n> David,\n>\n> Thank you for your review.\n>\n> > Yes, I think it's a good idea to move that into a helper function. If\n> > you do the other node types, without that helper the could would have\n> > to be repeated quite a few times. Maybe show_storage_info() can be\n> > moved up with the other helper functions, say below\n> > show_sortorder_options() ?\n>\n> Yeah, that makes sense. Looks less random.\n>\n> > It might be a good idea to keep the \"if\n> > (!es->analyze || tupstore == NULL)\" checks in the calling function\n> > rather than the helper too.\n>\n> I agree with this. This kind of check should be done in the calling\n> function.\n>\n> > I thought about the location of the test for a while and read the\n> > \"This file is concerned with testing EXPLAIN in its own right.\"\n> > comment at the top of that explain.out. I was trying to decide if\n> > testing output of a specific node type met this or not. I can't pick\n> > out any other tests there which are specific to a node type, so I'm\n> > unsure if this is the location for it or not. However, to put it\n> > anywhere else means having to add a plpgsql function to mask out the\n> > unstable parts of EXPLAIN, so maybe the location is good as it saves\n> > from having to do that. I'm 50/50 on this, so I'm happy to let you\n> > decide.\n>\n> Yeah. Maybe we should move the function to elsewhere so that it can be\n> shared by other tests. However in this case it's purpose is testing an\n> additional output in an explain command. I think this is not far from\n> \"This file is concerned with testing EXPLAIN in its own right.\". So I\n> would like to keep the test in explain.sql.\n>\n> > You could also shrink that 100 rows into a smaller number for\n> > the generate_series without losing any coverage.\n>\n> Right. I will make the change.\n>\n> > Aside from that, I think the patch is good. Thanks for working on it.\n>\n> Thanks. Attached is the v4 patch. I am going push it if there's no\n> objection.\n>\n> After this, I will work on remaining node types.\n>\n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS K.K.\n> English: http://www.sraoss.co.jp/index_en/\n> Japanese:http://www.sraoss.co.jp\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 13 Sep 2024 14:36:41 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "> The patch looks fine but it doesn't add a test case where Storage is\n> Disk\n\nWe can test the case by setting work_mem to the minimum size (64kB)\nand giving slightly larger \"stop\" parameter to generate_series.\n\n> or the case when the last usage fit in memory but an earlier\n> usage spilled to disk.\n\nIn my understanding once tuplestore changes the storage type to disk,\nit never returns to the memory storage type in terms of\ntuplestore_get_stats. i.e. 
once state->usedDisk is set to true, it\nnever goes back to false. So the test case is not necessary.\nDavid, am I correct?\n\n> Do we want to cover those. This test would be\n> the only one where those code paths could be tested.\n\nI am fine to add the first test case.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Fri, 13 Sep 2024 18:31:53 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Fri, Sep 13, 2024 at 3:02 PM Tatsuo Ishii <[email protected]> wrote:\n>\n> > The patch looks fine but it doesn't add a test case where Storage is\n> > Disk\n>\n> We can test the case by setting work_mem to the minimum size (64kB)\n> and giving slightly larger \"stop\" parameter to generate_series.\n>\n\nWFM\n\n> > or the case when the last usage fit in memory but an earlier\n> > usage spilled to disk.\n>\n> In my understanding once tuplestore changes the storage type to disk,\n> it never returns to the memory storage type in terms of\n> tuplestore_get_stats. i.e. once state->usedDisk is set to true, it\n> never goes back to false. So the test case is not necessary.\n> David, am I correct?\n\nI understand that. I am requesting a testcase to test that same logic.\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 13 Sep 2024 16:14:04 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": ">> > or the case when the last usage fit in memory but an earlier\n>> > usage spilled to disk.\n>>\n>> In my understanding once tuplestore changes the storage type to disk,\n>> it never returns to the memory storage type in terms of\n>> tuplestore_get_stats. i.e. once state->usedDisk is set to true, it\n>> never goes back to false. So the test case is not necessary.\n>> David, am I correct?\n> \n> I understand that. I am requesting a testcase to test that same logic.\n\nMaybe something like this? In the example below there are 2\npartitions. the first one has 1998 rows and the second one has 2\nrows. Assuming that work_mem is 64kB, the first one does not fit the\nmemory and spills to disk. The second partition fits memory. 
However as\nstate->usedDisk remains true, explain shows \"Storage: Disk\".\n\ntest=# explain (analyze,costs off) select sum(n) over(partition by m) from (SELECT n < 3 as m, n from generate_series(1,2000) a(n));\nn QUERY PLAN \n \n--------------------------------------------------------------------------------\n-------------\n WindowAgg (actual time=1.958..473328.589 rows=2000 loops=1)\n Storage: Disk Maximum Storage: 65kB\n -> Sort (actual time=1.008..1.277 rows=2000 loops=1)\n Sort Key: ((a.n < 3))\n Sort Method: external merge Disk: 48kB\n -> Function Scan on generate_series a (actual time=0.300..0.633 rows=2\n000 loops=1)\n Planning Time: 0.069 ms\n Execution Time: 474515.476 ms\n(8 rows)\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Sat, 14 Sep 2024 17:12:01 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Sat, Sep 14, 2024 at 1:42 PM Tatsuo Ishii <[email protected]> wrote:\n>\n> >> > or the case when the last usage fit in memory but an earlier\n> >> > usage spilled to disk.\n> >>\n> >> In my understanding once tuplestore changes the storage type to disk,\n> >> it never returns to the memory storage type in terms of\n> >> tuplestore_get_stats. i.e. once state->usedDisk is set to true, it\n> >> never goes back to false. So the test case is not necessary.\n> >> David, am I correct?\n> >\n> > I understand that. I am requesting a testcase to test that same logic.\n>\n> Maybe something like this? In the example below there are 2\n> partitions. the first one has 1998 rows and the second one has 2\n> rows. Assuming that work_mem is 64kB, the first one does not fit the\n> memory and spills to disk. The second partition fits memory. However as\n> state->usedDisk remains true, explain shows \"Storage: Disk\".\n>\n> test=# explain (analyze,costs off) select sum(n) over(partition by m) from (SELECT n < 3 as m, n from generate_series(1,2000) a(n));\n> n QUERY PLAN\n>\n> --------------------------------------------------------------------------------\n> -------------\n> WindowAgg (actual time=1.958..473328.589 rows=2000 loops=1)\n> Storage: Disk Maximum Storage: 65kB\n> -> Sort (actual time=1.008..1.277 rows=2000 loops=1)\n> Sort Key: ((a.n < 3))\n> Sort Method: external merge Disk: 48kB\n> -> Function Scan on generate_series a (actual time=0.300..0.633 rows=2\n> 000 loops=1)\n> Planning Time: 0.069 ms\n> Execution Time: 474515.476 ms\n> (8 rows)\n\nThanks. This will do. Is there a way to force the larger partition to\nbe computed first? That way we definitely know that the last\ncomputation was done when all the tuples in the tuplestore were in\nmemory.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 16 Sep 2024 16:41:17 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "Hi Ashutosh,\n\nThank you for the review.\n\n> Thanks. This will do. Is there a way to force the larger partition to\n> be computed first? That way we definitely know that the last\n> computation was done when all the tuples in the tuplestore were in\n> memory.\n\nNot sure if there's any way to force it in the SQL standard. 
However\nin term of implementation, PostgreSQL sorts the function\n(generate_series) scan result using a sort key \"a.n < 3\", which\nresults in rows being >= 2 first (as false == 0), then rows being < 3\n(as true == 1). So unless PostgreSQL changes the way to sort boolean\ndata type, I think the result should be stable.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Mon, 16 Sep 2024 21:13:17 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "> Not sure if there's any way to force it in the SQL standard. However\n> in term of implementation, PostgreSQL sorts the function\n> (generate_series) scan result using a sort key \"a.n < 3\", which\n> results in rows being >= 2 first (as false == 0), then rows being < 3\n> (as true == 1). So unless PostgreSQL changes the way to sort boolean\n> data type, I think the result should be stable.\n\nAttached is the v5 patch. The difference from v4 is addtion of two\nmore tests to explain.sql:\n\n1) spils to disk case\n2) splis to disk then switch back to memory case\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Tue, 17 Sep 2024 11:40:04 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Tue, 17 Sept 2024 at 14:40, Tatsuo Ishii <[email protected]> wrote:\n> Attached is the v5 patch. The difference from v4 is addtion of two\n> more tests to explain.sql:\n>\n> 1) spils to disk case\n> 2) splis to disk then switch back to memory case\n\nLooks ok to me, aside from the missing \"reset work_mem;\" after you're\ndone with testing the disk spilling code.\n\nDavid\n\n\n", "msg_date": "Tue, 17 Sep 2024 15:16:09 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "> On Tue, 17 Sept 2024 at 14:40, Tatsuo Ishii <[email protected]> wrote:\n>> Attached is the v5 patch. The difference from v4 is addtion of two\n>> more tests to explain.sql:\n>>\n>> 1) spils to disk case\n>> 2) splis to disk then switch back to memory case\n> \n> Looks ok to me, aside from the missing \"reset work_mem;\" after you're\n> done with testing the disk spilling code.\n\nThanks. I have added it and pushed the patch.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Tue, 17 Sep 2024 15:06:26 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "> Thanks. I have added it and pushed the patch.\n\nSo I have created patches to do the same for CTE scan and table\nfunction scan node. Patch attached.\n\nActually there's one more executor node type that uses tuplestore:\nrecursive union (used in \"with recursive\"). The particular node type\nuses two tuplestore and we cannot simply apply tuplestore_get_stats()\nto the node type. We need to modify RecursiveUnionState to track the\nmaximum tuplestore usage. I am not sure this would be worth the\neffort. 
Opinion?\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Wed, 18 Sep 2024 21:12:46 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Thu, 19 Sept 2024 at 00:13, Tatsuo Ishii <[email protected]> wrote:\n> Actually there's one more executor node type that uses tuplestore:\n> recursive union (used in \"with recursive\"). The particular node type\n> uses two tuplestore and we cannot simply apply tuplestore_get_stats()\n> to the node type. We need to modify RecursiveUnionState to track the\n> maximum tuplestore usage. I am not sure this would be worth the\n> effort. Opinion?\n\nCould you add the two sizes together and take the storage type from\nthe tuplestore with the highest storage size?\n\nDavid\n\n\n", "msg_date": "Thu, 19 Sep 2024 09:57:51 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "> On Thu, 19 Sept 2024 at 00:13, Tatsuo Ishii <[email protected]> wrote:\n>> Actually there's one more executor node type that uses tuplestore:\n>> recursive union (used in \"with recursive\"). The particular node type\n>> uses two tuplestore and we cannot simply apply tuplestore_get_stats()\n>> to the node type. We need to modify RecursiveUnionState to track the\n>> maximum tuplestore usage. I am not sure this would be worth the\n>> effort. Opinion?\n> \n> Could you add the two sizes together and take the storage type from\n> the tuplestore with the highest storage size?\n\nI don't think this works because tuplestore_begin/tuplestore_end are\ncalled while executing the node (ExecRecursiveUnion).\n\nI think the way you are proposing only shows the stats last time when\nthose tuplestore are created.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Thu, 19 Sep 2024 09:01:34 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Thu, 19 Sept 2024 at 12:01, Tatsuo Ishii <[email protected]> wrote:\n> > Could you add the two sizes together and take the storage type from\n> > the tuplestore with the highest storage size?\n>\n> I don't think this works because tuplestore_begin/tuplestore_end are\n> called while executing the node (ExecRecursiveUnion).\n>\n> I think the way you are proposing only shows the stats last time when\n> those tuplestore are created.\n\nThat code could be modified to swap the tuplestores and do a\ntuplestore_clear() instead of tuplestore_end() followed by\ntuplestore_begin_heap().\n\nIt's likely worthwhile from a performance point of view. 
Here's a\nsmall test as an example:\n\nmaster:\npostgres=# with recursive cte (a) as (select 1 union all select\ncte.a+1 from cte where cte.a+1 <= 1000000) select count(*) from cte;\nTime: 219.023 ms\nTime: 218.828 ms\nTime: 219.093 ms\n\nwith attached patched:\npostgres=# with recursive cte (a) as (select 1 union all select\ncte.a+1 from cte where cte.a+1 <= 1000000) select count(*) from cte;\nTime: 169.734 ms\nTime: 164.841 ms\nTime: 169.168 ms\n\nDavid", "msg_date": "Thu, 19 Sep 2024 12:49:31 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "> That code could be modified to swap the tuplestores and do a\n> tuplestore_clear() instead of tuplestore_end() followed by\n> tuplestore_begin_heap().\n> \n> It's likely worthwhile from a performance point of view. Here's a\n> small test as an example:\n> \n> master:\n> postgres=# with recursive cte (a) as (select 1 union all select\n> cte.a+1 from cte where cte.a+1 <= 1000000) select count(*) from cte;\n> Time: 219.023 ms\n> Time: 218.828 ms\n> Time: 219.093 ms\n> \n> with attached patched:\n> postgres=# with recursive cte (a) as (select 1 union all select\n> cte.a+1 from cte where cte.a+1 <= 1000000) select count(*) from cte;\n> Time: 169.734 ms\n> Time: 164.841 ms\n> Time: 169.168 ms\n\nImpressive result. I also ran your query with count 1000.\n\nwithout the patch:\nTime: 3.655 ms\nTime: 4.123 ms\nTime: 2.163 ms\n\nwit the patch:\nTime: 3.641 ms\nTime: 2.356 ms\nTime: 2.347 ms\n\nIt seems with the patch the performance is slightly better or almost\nsame. I think the patch improves the performance without sacrificing\nthe smaller iteration case.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Thu, 19 Sep 2024 10:49:17 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Thu, 19 Sept 2024 at 13:49, Tatsuo Ishii <[email protected]> wrote:\n> I also ran your query with count 1000.\n>\n> without the patch:\n> Time: 3.655 ms\n> Time: 4.123 ms\n> Time: 2.163 ms\n>\n> wit the patch:\n> Time: 3.641 ms\n> Time: 2.356 ms\n> Time: 2.347 ms\n>\n> It seems with the patch the performance is slightly better or almost\n> same. I think the patch improves the performance without sacrificing\n> the smaller iteration case.\n\nYou might need to use pgbench to get more stable results with such a\nsmall test. 
If your CPU clocks down when idle, it's not going to clock\nup as fast as you might like it to when you give it something to do.\n\nHere's what I got when running 1000 iterations:\n\n$ cat bench.sql\nwith recursive cte (a) as (select 1 union all select cte.a+1 from cte\nwhere cte.a+1 <= 1000) select count(*) from cte;\n\nmaster\n$ for i in {1..5}; do pgbench -n -f bench.sql -T 10 -M prepared\npostgres | grep latency; done\nlatency average = 0.251 ms\nlatency average = 0.254 ms\nlatency average = 0.251 ms\nlatency average = 0.252 ms\nlatency average = 0.253 ms\n\npatched\n$ for i in {1..5}; do pgbench -n -f bench.sql -T 10 -M prepared\npostgres | grep latency; done\nlatency average = 0.202 ms\nlatency average = 0.202 ms\nlatency average = 0.207 ms\nlatency average = 0.202 ms\nlatency average = 0.202 ms (~24.2% faster)\n\nDavid\n\n\n", "msg_date": "Thu, 19 Sep 2024 14:11:05 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Thu, 19 Sept 2024 at 13:49, Tatsuo Ishii <[email protected]> wrote:\n>\n> > That code could be modified to swap the tuplestores and do a\n> > tuplestore_clear() instead of tuplestore_end() followed by\n> > tuplestore_begin_heap().\n> >\n> Impressive result. I also ran your query with count 1000.\n\nI've pushed that patch. That should now unblock you on the\nnodeRecursiveunion.c telemetry.\n\nDavid\n\n\n", "msg_date": "Thu, 19 Sep 2024 15:22:11 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": ">> > That code could be modified to swap the tuplestores and do a\n>> > tuplestore_clear() instead of tuplestore_end() followed by\n>> > tuplestore_begin_heap().\n>> >\n>> Impressive result. I also ran your query with count 1000.\n> \n> I've pushed that patch. That should now unblock you on the\n> nodeRecursiveunion.c telemetry.\n\nThanks. Attached is a patch for CTE scan, table function scan and\nrecursive union scan nodes.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Thu, 19 Sep 2024 13:21:40 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Thu, 19 Sept 2024 at 16:21, Tatsuo Ishii <[email protected]> wrote:\n> Thanks. Attached is a patch for CTE scan, table function scan and\n> recursive union scan nodes.\n\n1. It's probably a minor detail, but in show_recursive_union_info(), I\ndon't think the tuplestores can ever be NULL.\n\n+ if (working_table != NULL)\n+ tuplestore_get_stats(working_table, &tempStorageType, &tempSpaceUsed);\n+\n+ if (intermediate_table != NULL)\n+ tuplestore_get_stats(intermediate_table, &maxStorageType, &maxSpaceUsed);\n\nI added the NULL tests for the Materialize case as the tuplestore is\ncreated in ExecMaterial() rather than ExecInitMaterial(). For the two\ntuplestorestates above, they're both created in\nExecInitRecursiveUnion().\n\n2. 
I imagined you'd always do maxSpaceUsed += tempSpaceUsed; or\nmaxSpaceUsedKB = BYTES_TO_KILOBYTES(maxSpaceUsed + tempSpaceUsed);\n\n+ if (tempSpaceUsed > maxSpaceUsed)\n+ {\n+ maxStorageType = tempStorageType;\n+ maxSpaceUsed = tempSpaceUsed;\n+ }\n\nWhy do you think the the space used by the smaller tuplestore should\nbe ignored in the storage size output?\n\nDavid\n\n\n", "msg_date": "Thu, 19 Sep 2024 17:37:06 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "> 1. It's probably a minor detail, but in show_recursive_union_info(), I\n> don't think the tuplestores can ever be NULL.\n> \n> + if (working_table != NULL)\n> + tuplestore_get_stats(working_table, &tempStorageType, &tempSpaceUsed);\n> +\n> + if (intermediate_table != NULL)\n> + tuplestore_get_stats(intermediate_table, &maxStorageType, &maxSpaceUsed);\n> \n> I added the NULL tests for the Materialize case as the tuplestore is\n> created in ExecMaterial() rather than ExecInitMaterial(). For the two\n> tuplestorestates above, they're both created in\n> ExecInitRecursiveUnion().\n\nYou are right. Also I checked other Exec* in nodeRecursiveunion.c and\ndid not find any place where working_table and intermediate_table are\nset to NULL.\n\n> 2. I imagined you'd always do maxSpaceUsed += tempSpaceUsed; or\n> maxSpaceUsedKB = BYTES_TO_KILOBYTES(maxSpaceUsed + tempSpaceUsed);\n> \n> + if (tempSpaceUsed > maxSpaceUsed)\n> + {\n> + maxStorageType = tempStorageType;\n> + maxSpaceUsed = tempSpaceUsed;\n> + }\n> \n> Why do you think the the space used by the smaller tuplestore should\n> be ignored in the storage size output?\n\nI thought about the case when the two tuplestores have different\nstorage types. But I remember that we already use disk storage type\neven if the storage type was changed from disk to memory[1]. So\nprobably we don't need to much worry about the storage kind difference\nin the two tuplestores.\n\nAttached patch fixes 1 & 2.\n\n[1] https://www.postgresql.org/message-id/CAExHW5vRPRLvsZYLmNGcDLkPDWDHXGSWYjox-to-OsCVFETd3w%40mail.gmail.com\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Thu, 19 Sep 2024 19:17:40 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Thu, 19 Sept 2024 at 22:17, Tatsuo Ishii <[email protected]> wrote:\n> Attached patch fixes 1 & 2.\n\nI looked at this and thought that one thing you might want to consider\nis adjusting show_storage_info() to accept the size and type\nparameters so you don't have to duplicate the formatting code in\nshow_recursive_union_info().\n\nThe first of the new tests also isn't testing what you want it to\ntest. Maybe you could add a \"materialized\" in there to stop the CTE\nbeing inlined:\n\nexplain (analyze,costs off) with w(n) as materialized (select n from\ngenerate_series(1,10) a(n)) select sum(n) from w\n\nAlso, I'm on the fence about if the new tests are worthwhile. I won't\nobject to them, however. I just wanted to note that most of the\ncomplexity is in tuplestore.c of which there's already coverage for.\nThe test's value is reduced by the fact that most of the interesting\ndetails have to be masked out due to possible platform variations in\nthe number of bytes. 
Really the new tests are only testing that we\ndisplay the storage details and maybe that the storage type came out\nas expected. It seems unlikely these would get broken. I'd say it's\ncommitters preference, however. I just wanted to add my thoughts. You\nhave to offset the value against the fact that the expected output is\nlikely to change over the years which adds to the burden of making\nchanges to the EXPLAIN output.\n\nDavid\n\n\n", "msg_date": "Mon, 23 Sep 2024 11:55:07 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "> I looked at this and thought that one thing you might want to consider\n> is adjusting show_storage_info() to accept the size and type\n> parameters so you don't have to duplicate the formatting code in\n> show_recursive_union_info().\n\nI agree and made necessary changes. See attached v4 patches.\n\n> The first of the new tests also isn't testing what you want it to\n> test. Maybe you could add a \"materialized\" in there to stop the CTE\n> being inlined:\n> \n> explain (analyze,costs off) with w(n) as materialized (select n from\n> generate_series(1,10) a(n)) select sum(n) from w\n> \n> Also, I'm on the fence about if the new tests are worthwhile. I won't\n> object to them, however. I just wanted to note that most of the\n> complexity is in tuplestore.c of which there's already coverage for.\n> The test's value is reduced by the fact that most of the interesting\n> details have to be masked out due to possible platform variations in\n> the number of bytes. Really the new tests are only testing that we\n> display the storage details and maybe that the storage type came out\n> as expected. It seems unlikely these would get broken. I'd say it's\n> committers preference, however. I just wanted to add my thoughts. You\n> have to offset the value against the fact that the expected output is\n> likely to change over the years which adds to the burden of making\n> changes to the EXPLAIN output.\n\nAfter thinking more, I lean toward to your opinion. The new tests do\nnot give big value, but on the other hand they could become a burden\nover the years. I do not include the new tests in the v4 patches.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Mon, 23 Sep 2024 15:28:32 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "On Mon, 23 Sept 2024 at 18:28, Tatsuo Ishii <[email protected]> wrote:\n> I agree and made necessary changes. See attached v4 patches.\n\nLooks good to me.\n\nDavid\n\n\n", "msg_date": "Mon, 23 Sep 2024 19:05:25 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" }, { "msg_contents": "> On Mon, 23 Sept 2024 at 18:28, Tatsuo Ishii <[email protected]> wrote:\n>> I agree and made necessary changes. See attached v4 patches.\n> \n> Looks good to me.\n\nThank you for the review! 
I have pushed the patch.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Mon, 23 Sep 2024 16:52:39 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory/disk usage for WindowAgg nodes in EXPLAIN" } ]
[ { "msg_contents": "In the numeric width_bucket() code, we currently do the following:\n\n mul_var(&operand_var, count_var, &operand_var,\n operand_var.dscale + count_var->dscale);\n div_var(&operand_var, &bound2_var, result_var,\n select_div_scale(&operand_var, &bound2_var), true);\n\n if (cmp_var(result_var, count_var) >= 0)\n set_var_from_var(count_var, result_var);\n else\n {\n add_var(result_var, &const_one, result_var);\n floor_var(result_var, result_var);\n }\n\nselect_div_scale() returns a value between 16 and 1000, depending on\nthe dscales and weights of its inputs, so div_var() computes that many\ndigits after the decimal point and rounds the final digit. Assuming\nthe result doesn't exceed count, we then floor the result, throwing\naway all those digits to get the integer part that we want.\n\nInstead, this can be done more simply and efficiently, using division\nwith truncation as follows:\n\n mul_var(&operand_var, count_var, &operand_var,\n operand_var.dscale + count_var->dscale);\n div_var(&operand_var, &bound2_var, result_var, 0, false);\n add_var(result_var, &const_one, result_var);\n\nThis doesn't compute any digits after the decimal point, and instead\njust returns the integer part of the quotient, which is much cheaper\nto compute, and precisely what we want.\n\nIn fact, the current code is slightly inaccurate, because the rounding\nstep can incorrectly round up into the next internal bucket, for\nexample:\n\n width_bucket(6.666666666666666, 0, 10, 3) -> 2\n width_bucket(6.6666666666666666, 0, 10, 3) -> 3\n\nthough in practice that's extremely unlikely and doesn't really matter.\n\nPatch attached. I didn't bother with any new test cases, since there\nappears to be sufficient coverage already.\n\nAs a quick performance/correctness test, I ran the following:\n\nSELECT setseed(0);\nCREATE TEMP TABLE t AS\n SELECT random(-4.000000, 8.000000) op,\n random(-4.100000, -2.000000) b1,\n random(6.000000, 8.100000) b2,\n random(1, 15) c\n FROM generate_series(1, 10000000);\n\nSELECT hash_array(array_agg(width_bucket(op, b1, b2, c))) FROM t;\n-- Result not changed by patch\n\nSELECT sum(width_bucket(op, b1, b2, c)) FROM t;\nTime: 3658.962 ms (00:03.659) -- HEAD\nTime: 3089.946 ms (00:03.090) -- with patch\n\nRegards,\nDean", "msg_date": "Sat, 6 Jul 2024 16:36:41 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Simplifying width_bucket_numeric()" }, { "msg_contents": "On Sat, Jul 6, 2024, at 17:36, Dean Rasheed wrote:\n> In the numeric width_bucket() code, we currently do the following:\n..\n> Instead, this can be done more simply and efficiently, using division\n> with truncation as follows:\n..\n>\n> Patch attached. 
I didn't bother with any new test cases, since there\n> appears to be sufficient coverage already.\n>\n> As a quick performance/correctness test, I ran the following:\n>\n> SELECT setseed(0);\n> CREATE TEMP TABLE t AS\n> SELECT random(-4.000000, 8.000000) op,\n> random(-4.100000, -2.000000) b1,\n> random(6.000000, 8.100000) b2,\n> random(1, 15) c\n> FROM generate_series(1, 10000000);\n>\n> SELECT hash_array(array_agg(width_bucket(op, b1, b2, c))) FROM t;\n> -- Result not changed by patch\n\nSame hash_array on all my three machines:\n\n hash_array\n-------------\n -1179801276\n(1 row)\n\n> SELECT sum(width_bucket(op, b1, b2, c)) FROM t;\n> Time: 3658.962 ms (00:03.659) -- HEAD\n> Time: 3089.946 ms (00:03.090) -- with patch\n\nSignificant improvement on all my three machines:\n\n/*\n * Apple M3 Max\n */\n\nTime: 2255.154 ms (00:02.255) -- HEAD\nTime: 1830.985 ms (00:01.831)\nTime: 1826.190 ms (00:01.826)\nTime: 1831.020 ms (00:01.831)\nTime: 1832.934 ms (00:01.833)\nTime: 1843.061 ms (00:01.843)\n\nTime: 1957.062 ms (00:01.957) -- simplify-width_bucket_numeric.patch\nTime: 1545.121 ms (00:01.545)\nTime: 1541.621 ms (00:01.542)\nTime: 1536.388 ms (00:01.536)\nTime: 1538.721 ms (00:01.539)\nTime: 1592.384 ms (00:01.592)\n\n/*\n * Intel Core i9-14900K\n */\n\nTime: 2541.959 ms (00:02.542) -- HEAD\nTime: 2534.803 ms (00:02.535)\nTime: 2532.343 ms (00:02.532)\nTime: 2529.408 ms (00:02.529)\nTime: 2528.600 ms (00:02.529)\n\nTime: 2107.901 ms (00:02.108) -- simplify-width_bucket_numeric.patch\nTime: 2095.413 ms (00:02.095)\nTime: 2093.985 ms (00:02.094)\nTime: 2093.910 ms (00:02.094)\nTime: 2094.935 ms (00:02.095)\n\n/*\n * AMD Ryzen 9 7950X3D\n */\n\n\nTime: 2226.498 ms (00:02.226) -- HEAD\nTime: 2238.083 ms (00:02.238)\nTime: 2239.075 ms (00:02.239)\nTime: 2238.488 ms (00:02.238)\nTime: 2238.166 ms (00:02.238)\n\nTime: 1853.382 ms (00:01.853) -- simplify-width_bucket_numeric.patch\nTime: 1842.630 ms (00:01.843)\nTime: 1828.309 ms (00:01.828)\nTime: 1844.654 ms (00:01.845)\nTime: 1828.520 ms (00:01.829)\n\nRegards,\nJoel\n\n\n", "msg_date": "Sun, 07 Jul 2024 14:43:11 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simplifying width_bucket_numeric()" }, { "msg_contents": "🔥\n\nOn Sun, Jul 7, 2024, 7:44 AM Joel Jacobson <[email protected]> wrote:\n\n> On Sat, Jul 6, 2024, at 17:36, Dean Rasheed wrote:\n> > In the numeric width_bucket() code, we currently do the following:\n> ..\n> > Instead, this can be done more simply and efficiently, using division\n> > with truncation as follows:\n> ..\n> >\n> > Patch attached. 
I didn't bother with any new test cases, since there\n> > appears to be sufficient coverage already.\n> >\n> > As a quick performance/correctness test, I ran the following:\n> >\n> > SELECT setseed(0);\n> > CREATE TEMP TABLE t AS\n> > SELECT random(-4.000000, 8.000000) op,\n> > random(-4.100000, -2.000000) b1,\n> > random(6.000000, 8.100000) b2,\n> > random(1, 15) c\n> > FROM generate_series(1, 10000000);\n> >\n> > SELECT hash_array(array_agg(width_bucket(op, b1, b2, c))) FROM t;\n> > -- Result not changed by patch\n>\n> Same hash_array on all my three machines:\n>\n> hash_array\n> -------------\n> -1179801276\n> (1 row)\n>\n> > SELECT sum(width_bucket(op, b1, b2, c)) FROM t;\n> > Time: 3658.962 ms (00:03.659) -- HEAD\n> > Time: 3089.946 ms (00:03.090) -- with patch\n>\n> Significant improvement on all my three machines:\n>\n> /*\n> * Apple M3 Max\n> */\n>\n> Time: 2255.154 ms (00:02.255) -- HEAD\n> Time: 1830.985 ms (00:01.831)\n> Time: 1826.190 ms (00:01.826)\n> Time: 1831.020 ms (00:01.831)\n> Time: 1832.934 ms (00:01.833)\n> Time: 1843.061 ms (00:01.843)\n>\n> Time: 1957.062 ms (00:01.957) -- simplify-width_bucket_numeric.patch\n> Time: 1545.121 ms (00:01.545)\n> Time: 1541.621 ms (00:01.542)\n> Time: 1536.388 ms (00:01.536)\n> Time: 1538.721 ms (00:01.539)\n> Time: 1592.384 ms (00:01.592)\n>\n> /*\n> * Intel Core i9-14900K\n> */\n>\n> Time: 2541.959 ms (00:02.542) -- HEAD\n> Time: 2534.803 ms (00:02.535)\n> Time: 2532.343 ms (00:02.532)\n> Time: 2529.408 ms (00:02.529)\n> Time: 2528.600 ms (00:02.529)\n>\n> Time: 2107.901 ms (00:02.108) -- simplify-width_bucket_numeric.patch\n> Time: 2095.413 ms (00:02.095)\n> Time: 2093.985 ms (00:02.094)\n> Time: 2093.910 ms (00:02.094)\n> Time: 2094.935 ms (00:02.095)\n>\n> /*\n> * AMD Ryzen 9 7950X3D\n> */\n>\n>\n> Time: 2226.498 ms (00:02.226) -- HEAD\n> Time: 2238.083 ms (00:02.238)\n> Time: 2239.075 ms (00:02.239)\n> Time: 2238.488 ms (00:02.238)\n> Time: 2238.166 ms (00:02.238)\n>\n> Time: 1853.382 ms (00:01.853) -- simplify-width_bucket_numeric.patch\n> Time: 1842.630 ms (00:01.843)\n> Time: 1828.309 ms (00:01.828)\n> Time: 1844.654 ms (00:01.845)\n> Time: 1828.520 ms (00:01.829)\n>\n> Regards,\n> Joel\n>\n>\n>\n\n🔥\nOn Sun, Jul 7, 2024, 7:44 AM Joel Jacobson <[email protected]> wrote:On Sat, Jul 6, 2024, at 17:36, Dean Rasheed wrote:\n> In the numeric width_bucket() code, we currently do the following:\n..\n> Instead, this can be done more simply and efficiently, using division\n> with truncation as follows:\n..\n>\n> Patch attached. 
I didn't bother with any new test cases, since there\n> appears to be sufficient coverage already.\n>\n> As a quick performance/correctness test, I ran the following:\n>\n> SELECT setseed(0);\n> CREATE TEMP TABLE t AS\n>   SELECT random(-4.000000, 8.000000) op,\n>          random(-4.100000, -2.000000) b1,\n>          random(6.000000, 8.100000) b2,\n>          random(1, 15) c\n>   FROM generate_series(1, 10000000);\n>\n> SELECT hash_array(array_agg(width_bucket(op, b1, b2, c))) FROM t;\n> -- Result not changed by patch\n\nSame hash_array on all my three machines:\n\n hash_array\n-------------\n -1179801276\n(1 row)\n\n> SELECT sum(width_bucket(op, b1, b2, c)) FROM t;\n> Time: 3658.962 ms (00:03.659)  -- HEAD\n> Time: 3089.946 ms (00:03.090)  -- with patch\n\nSignificant improvement on all my three machines:\n\n/*\n * Apple M3 Max\n */\n\nTime: 2255.154 ms (00:02.255) -- HEAD\nTime: 1830.985 ms (00:01.831)\nTime: 1826.190 ms (00:01.826)\nTime: 1831.020 ms (00:01.831)\nTime: 1832.934 ms (00:01.833)\nTime: 1843.061 ms (00:01.843)\n\nTime: 1957.062 ms (00:01.957) -- simplify-width_bucket_numeric.patch\nTime: 1545.121 ms (00:01.545)\nTime: 1541.621 ms (00:01.542)\nTime: 1536.388 ms (00:01.536)\nTime: 1538.721 ms (00:01.539)\nTime: 1592.384 ms (00:01.592)\n\n/*\n * Intel Core i9-14900K\n */\n\nTime: 2541.959 ms (00:02.542) -- HEAD\nTime: 2534.803 ms (00:02.535)\nTime: 2532.343 ms (00:02.532)\nTime: 2529.408 ms (00:02.529)\nTime: 2528.600 ms (00:02.529)\n\nTime: 2107.901 ms (00:02.108) -- simplify-width_bucket_numeric.patch\nTime: 2095.413 ms (00:02.095)\nTime: 2093.985 ms (00:02.094)\nTime: 2093.910 ms (00:02.094)\nTime: 2094.935 ms (00:02.095)\n\n/*\n * AMD Ryzen 9 7950X3D\n */\n\n\nTime: 2226.498 ms (00:02.226) -- HEAD\nTime: 2238.083 ms (00:02.238)\nTime: 2239.075 ms (00:02.239)\nTime: 2238.488 ms (00:02.238)\nTime: 2238.166 ms (00:02.238)\n\nTime: 1853.382 ms (00:01.853) -- simplify-width_bucket_numeric.patch\nTime: 1842.630 ms (00:01.843)\nTime: 1828.309 ms (00:01.828)\nTime: 1844.654 ms (00:01.845)\nTime: 1828.520 ms (00:01.829)\n\nRegards,\nJoel", "msg_date": "Sun, 7 Jul 2024 08:44:56 -0500", "msg_from": "Bryan Green <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simplifying width_bucket_numeric()" }, { "msg_contents": "On Sun, 7 Jul 2024 at 13:43, Joel Jacobson <[email protected]> wrote:\n>\n> > SELECT hash_array(array_agg(width_bucket(op, b1, b2, c))) FROM t;\n> > -- Result not changed by patch\n>\n> Same hash_array on all my three machines:\n>\n> > SELECT sum(width_bucket(op, b1, b2, c)) FROM t;\n> > Time: 3658.962 ms (00:03.659) -- HEAD\n> > Time: 3089.946 ms (00:03.090) -- with patch\n>\n> Significant improvement on all my three machines:\n>\n\nThanks for testing. I have committed this now.\n\n(I also realised that the \"reversed_bounds\" code was unnecessary,\nsince the only effect was to negate both inputs to div_var(), so the\nsigns cancel out.)\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 10 Jul 2024 20:31:02 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simplifying width_bucket_numeric()" } ]
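A quick illustration of the truncation semantics settled on above (this query is not from the thread or from the patch's tests; it is just a sanity check one could run, and the near-boundary operands are the ones from Dean's first message):

    -- With truncating division, width_bucket(op, b1, b2, c) is expected to
    -- match floor((op - b1) * c / (b2 - b1)) + 1 for operands inside the
    -- bounds, so the two expressions below should agree.
    SELECT op,
           width_bucket(op, 0, 10, 3) AS bucket,
           floor(op * 3 / 10)::int + 1 AS truncated_formula
    FROM (VALUES (numeric '6.666666666666666'),
                 (numeric '6.6666666666666666')) AS v(op);

With truncation both operands land in bucket 2, because op * 3 / 10 is just below 2 in both cases; the pre-patch rounding is what pushed the second operand into bucket 3.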
[ { "msg_contents": "Reading through postmaster code, I spotted some refactoring \nopportunities to make it slightly more readable.\n\nCurrently, when a child process exits, the postmaster first scans \nthrough BackgroundWorkerList to see if it was a bgworker process. If not \nfound, it scans through the BackendList to see if it was a backend \nprocess (which it really should be then). That feels a bit silly, \nbecause every running background worker process also has an entry in \nBackendList. There's a lot of duplication between \nCleanupBackgroundWorker and CleanupBackend.\n\nBefore commit 8a02b3d732, we used to created Backend entries only for \nbackground worker processes that connected to a database, not for other \nbackground worker processes. I think that's why we have the code \nstructure we have. But now that we have a Backend entry for all bgworker \nprocesses, it's more natural to have single function to deal with both \nregular backends and bgworkers.\n\nSo I came up with the attached patches. This doesn't make any meaningful \nuser-visible changes, except for some incidental changes in log messages \n(see commit message for details).\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Sat, 6 Jul 2024 22:01:44 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Refactoring postmaster's code to cleanup after child exit" }, { "msg_contents": "On 06/07/2024 22:01, Heikki Linnakangas wrote:\n> Reading through postmaster code, I spotted some refactoring \n> opportunities to make it slightly more readable.\n> \n> Currently, when a child process exits, the postmaster first scans \n> through BackgroundWorkerList to see if it was a bgworker process. If not \n> found, it scans through the BackendList to see if it was a backend \n> process (which it really should be then). That feels a bit silly, \n> because every running background worker process also has an entry in \n> BackendList. There's a lot of duplication between \n> CleanupBackgroundWorker and CleanupBackend.\n> \n> Before commit 8a02b3d732, we used to created Backend entries only for \n> background worker processes that connected to a database, not for other \n> background worker processes. I think that's why we have the code \n> structure we have. But now that we have a Backend entry for all bgworker \n> processes, it's more natural to have single function to deal with both \n> regular backends and bgworkers.\n> \n> So I came up with the attached patches. This doesn't make any meaningful \n> user-visible changes, except for some incidental changes in log messages \n> (see commit message for details).\n\nNew patch version attached. Fixed conflicts with recent commits, no \nother changes.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Mon, 29 Jul 2024 23:16:33 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Refactoring postmaster's code to cleanup after child exit" }, { "msg_contents": "I committed the first two trivial patches, and have continued to work on \npostmaster.c, and how it manages all the child processes.\n\nThis is a lot of patches. They're built on top of each other, because \nthat's the order I developed them in, but they probably could be applied \nin different order too. Please help me by reviewing these, before the \nstack grows even larger :-). Even partial reviews would be very helpful. 
\nI suggest to start reading them in order, and when you get tired, just \nsend any comments you have up to that point.\n\n\n* v3-0001-Make-BackgroundWorkerList-doubly-linked.patch\n\nThis is the same refactoring patch I started this thread with.\n\n* v3-0003-Fix-comment-on-processes-being-kept-over-a-restar.patch\n* v3-0004-Consolidate-postmaster-code-to-launch-background-.patch\n\nLittle refactoring of how postmaster launches the background processes.\n\n* v3-0005-Add-test-for-connection-limits.patch\n* v3-0006-Add-test-for-dead-end-backends.patch\n\nA few new TAP tests for dead-end backends and enforcing connection \nlimits. We didn't have much coverage for these before.\n\n* v3-0007-Use-an-shmem_exit-callback-to-remove-backend-from.patch\n* v3-0008-Introduce-a-separate-BackendType-for-dead-end-chi.patch\n\nSome preliminary refactoring towards patch \nv3-0010-Assign-a-child-slot-to-every-postmaster-child-pro.patch\n\n* v3-0009-Kill-dead-end-children-when-there-s-nothing-else-.patch\n\nI noticed that we never send SIGTERM or SIGQUIT to dead-end backends, \nwhich seems silly. If the server is shutting down, dead-end backends \nmight prevent the shutdown from completing. Dead-end backends will \nexpire after authentication_timoeut (default 60s), so it won't last for \ntoo long, but still seems like we should kill dead-end backends if \nthey're the only children preventing shutdown from completing.\n\n* 3-0010-Assign-a-child-slot-to-every-postmaster-child-pro.patch\n\nThis is what I consider the main patch in this series. Currently, only \nregular backens, bgworkers and autovacuum workers have a PMChildFlags \nslot, which is used to detect when a postmaster child exits in an \nunclean way (in addition to the exit code). This patch assigns a child \nslot for all processes, except for dead-end backends. That includes all \nthe aux processes.\n\nWhile we're at it, I created separate pools of child slots for different \nkinds of backends, which fixes the issue that opening a lot of client \nconnections can exhaust all the slots, so that background workers or \nautovacuum workers cannot start either [1].\n\n[1] \nhttps://www.postgresql.org/message-id/55d2f50c-0b81-4b33-b202-cd2a406d69a3%40iki.fi\n\n* v3-0011-Pass-MyPMChildSlot-as-an-explicit-argument-to-chi.patch\n\nOne more little refactoring, to pass MyPMChildSlot to the child process \ndifferently.\n\n\nWhere is all this leading? I'm not sure exactly, but having a postmaster \nchild slot for every postmaster child seems highly useful. We could move \nthe ProcSignal machinery to use those slot numbers for the indexes to \nthe ProcSignal array, instead of ProcSignal, for example. That would \nallow all processes to participate in the signalling, even before they \nhave a PGPROC entry. (Or with Thomas's interrupts refactoring, the \ninterrupts array). With the multithreading work, PMChild struct could \nstore a thread id, or whatever is needed for threads to communicate with \neach other. 
In any case, seems like it will come handy.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Fri, 2 Aug 2024 02:57:18 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Refactoring postmaster's code to cleanup after child exit" }, { "msg_contents": "On Fri, Aug 2, 2024 at 11:57 AM Heikki Linnakangas <[email protected]> wrote:\n> * v3-0001-Make-BackgroundWorkerList-doubly-linked.patch\n\nLGTM.\n\n> [v3-0002-Refactor-code-to-handle-death-of-a-backend-or-bgw.patch]\n\n Currently, when a child process exits, the postmaster first scans\n through BackgroundWorkerList, to see if it the child process was a\n background worker. If not found, then it scans through BackendList to\n see if it was a regular backend. That leads to some duplication\n between the bgworker and regular backend cleanup code, as both have an\n entry in the BackendList that needs to be cleaned up in the same way.\n Refactor that so that we scan just the BackendList to find the child\n process, and if it was a background worker, do the additional\n bgworker-specific cleanup in addition to the normal Backend cleanup.\n\nMakes sense.\n\n On Windows, if a child process exits with ERROR_WAIT_NO_CHILDREN, it's\n now logged with that exit code, instead of 0. Also, if a bgworker\n exits with ERROR_WAIT_NO_CHILDREN, it's now treated as crashed and is\n restarted. Previously it was treated as a normal exit.\n\nInteresting. So when that error was first specially handled in this thread:\n\nhttps://www.postgresql.org/message-id/flat/AANLkTimCTkNKKrHCd3Ot6kAsrSS7SeDpOTcaLsEP7i%2BM%40mail.gmail.com#41f60947571b75377f04af67ba6baf40\n\n... it went from being considered a crash, to being considered like\nexit(0). It's true that shared memory can't be corrupted by a process\nthat never enters main(), but it's better not to hide the true reason\nfor the failure (if it is still possible -- I don't find many\nreferences to that phenomenon in recent times). Clobbering exitstatus\nwith 0 doesn't seem right at all, now that we have background workers\nwhose restart behaviour is affected by that.\n\n If a child process is not found in the BackendList, the log message\n now calls it \"untracked child process\" rather than \"server process\".\n Arguably that should be a PANIC, because we do track all the child\n processes in the list, so failing to find a child process is highly\n unexpected. But if we want to change that, let's discuss and do that\n as a separate commit.\n\nYeah, it would be highly unexpected if waitpid() told you about some\nrandom other process (or we screwed up the bookkeeping and didn't\nrecognise it). So at least having a different message seems good.\n\n> * v3-0003-Fix-comment-on-processes-being-kept-over-a-restar.patch\n\n+1\n\n> * v3-0004-Consolidate-postmaster-code-to-launch-background-.patch\n\n Much of the code in process_pm_child_exit() to launch replacement\n processes when one exits or when progressing to next postmaster state\n was unnecessary, because the ServerLoop will launch any missing\n background processes anyway. 
Remove the redundant code and let\n ServerLoop handle it.\n\n+1, makes sense.\n\n In ServerLoop, move the code to launch all the processes to a new\n subroutine, to group it all together.\n\n+1, makes sense.\n\nMore soon...\n\n\n", "msg_date": "Thu, 8 Aug 2024 22:47:42 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Refactoring postmaster's code to cleanup after child exit" }, { "msg_contents": "On 08/08/2024 13:47, Thomas Munro wrote:\n> On Windows, if a child process exits with ERROR_WAIT_NO_CHILDREN, it's\n> now logged with that exit code, instead of 0. Also, if a bgworker\n> exits with ERROR_WAIT_NO_CHILDREN, it's now treated as crashed and is\n> restarted. Previously it was treated as a normal exit.\n> \n> Interesting. So when that error was first specially handled in this thread:\n> \n> https://www.postgresql.org/message-id/flat/AANLkTimCTkNKKrHCd3Ot6kAsrSS7SeDpOTcaLsEP7i%2BM%40mail.gmail.com#41f60947571b75377f04af67ba6baf40\n> \n> ... it went from being considered a crash, to being considered like\n> exit(0). It's true that shared memory can't be corrupted by a process\n> that never enters main(), but it's better not to hide the true reason\n> for the failure (if it is still possible -- I don't find many\n> references to that phenomenon in recent times). Clobbering exitstatus\n> with 0 doesn't seem right at all, now that we have background workers\n> whose restart behaviour is affected by that.\n\nI adjusted this ERROR_WAIT_NO_CHILDREN a little more, to avoid logging \nthe death of the child twice in some cases.\n\n>> * v3-0003-Fix-comment-on-processes-being-kept-over-a-restar.patch\n> \n> +1\n\nCommitted the patches up to and including this one, with tiny comment \nchanges.\n\n>> * v3-0004-Consolidate-postmaster-code-to-launch-background-.patch\n> \n> Much of the code in process_pm_child_exit() to launch replacement\n> processes when one exits or when progressing to next postmaster state\n> was unnecessary, because the ServerLoop will launch any missing\n> background processes anyway. Remove the redundant code and let\n> ServerLoop handle it.\n\nI'm going to work a little more on the comments on this one before \ncommitting; I had just moved all the \"If we have lost the XXX, try to \nstart a new one\" comments as is, but they look pretty repetitive now.\n\nThanks for the review!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Sat, 10 Aug 2024 00:13:37 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Refactoring postmaster's code to cleanup after child exit" }, { "msg_contents": "On 10/08/2024 00:13, Heikki Linnakangas wrote:\n> On 08/08/2024 13:47, Thomas Munro wrote:\n>>> * v3-0004-Consolidate-postmaster-code-to-launch-background-.patch\n>>\n>>      Much of the code in process_pm_child_exit() to launch replacement\n>>      processes when one exits or when progressing to next postmaster \n>> state\n>>      was unnecessary, because the ServerLoop will launch any missing\n>>      background processes anyway. Remove the redundant code and let\n>>      ServerLoop handle it.\n> \n> I'm going to work a little more on the comments on this one before \n> committing; I had just moved all the \"If we have lost the XXX, try to \n> start a new one\" comments as is, but they look pretty repetitive now.\n\nPushed this now, after adjusting the comments a bit. 
Thanks again for \nthe review!\n\nHere are the remaining patches, rebased.\n\n> commit a1c43d65907d20a999b203e465db1277ec842a0a\n> Author: Heikki Linnakangas <[email protected]>\n> Date: Thu Aug 1 17:24:12 2024 +0300\n> \n> Introduce a separate BackendType for dead-end children\n> \n> And replace postmaster.c's own \"backend type\" codes with BackendType\n> \n> XXX: While working on this, many times I accidentally did something\n> like \"foo |= B_SOMETHING\" instead of \"foo |= 1 << B_SOMETHING\", when\n> constructing arguments to SignalSomeChildren or CountChildren, and\n> things broke in very subtle ways taking a long time to debug. The old\n> constants that were already bitmasks avoided that. Maybe we need some\n> macro magic or something to make this less error-prone.\n\nWhile rebasing this today, I spotted another instance of that mistake \nmentioned in the XXX comment above. I called \"CountChildren(B_BACKEND)\" \ninstead of \"CountChildren(1 << B_BACKEND)\". Some ideas on how to make \nthat less error-prone:\n\n1. Add a separate typedef for the bitmasks, and macros/functions to work \nwith it. Something like:\n\ntypedef struct {\n\tuint32\t\tmask;\n} BackendTypeMask;\n\nstatic const BackendTypeMask BTMASK_ALL = { 0xffffffff };\nstatic const BackendTypeMask BTMASK_NONE = { 0 };\n\nstatic inline BackendTypeMask\nBTMASK_ADD(BackendTypeMask mask, BackendType t)\n{\n\tmask.mask |= 1 << t;\n\treturn mask;\n}\n\nstatic inline BackendTypeMask\nBTMASK_DEL(BackendTypeMask mask, BackendType t)\n{\n\tmask.mask &= ~(1 << t);\n\treturn mask;\n}\n\nNow the compiler will complain if you try to pass a BackendType for the \nmask. We could do this just for BackendType, or we could put this in \nsrc/include/lib/ with a more generic name, like \"bitmask_u32\".\n\n2. Another idea is to redefine the BackendType values to be separate \nbits, like the current BACKEND_TYPE_* values in postmaster.c:\n\ntypedef enum BackendType\n{\n\tB_INVALID = 0,\n\n\t/* Backends and other backend-like processes */\n\tB_BACKEND = 1 << 1,\n\tB_DEAD_END_BACKEND = 1 << 2,\n\tB_AUTOVAC_LAUNCHER = 1 << 3,\n\tB_AUTOVAC_WORKER = 1 << 4,\n\n\t...\n} BackendType;\n\nThen you can use | and & on BackendTypes directly. It makes it less \nclear which function arguments are a BackendType and which are a \nbitmask, however.\n\n\nThoughts, other ideas?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Mon, 12 Aug 2024 12:55:00 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Refactoring postmaster's code to cleanup after child exit" }, { "msg_contents": "Hello Heikki,\n\n10.08.2024 00:13, Heikki Linnakangas wrote:\n>\n> Committed the patches up to and including this one, with tiny comment changes.\n\nI've noticed that on the current HEAD server.log contains binary data\n(an invalid process name) after a child crash. 
For example, while playing\nwith -ftapv, I've got:\nSELECT to_date('2024 613566758 1', 'IYYY IW ID');\nserver closed the connection unexpectedly\n\ngrep -a 'was terminated' server.log\n2024-08-18 07:07:06.482 UTC|||66c19d96.3482f6|LOG:  `�!x� (PID 3441407) was terminated by signal 6: Aborted\n\nIt looks like this was introduced by commit 28a520c0b (IIUC, namebuf in\nCleanupBackend() may stay uninitialized in some code paths).\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 18 Aug 2024 11:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Refactoring postmaster's code to cleanup after child exit" }, { "msg_contents": "On 18/08/2024 11:00, Alexander Lakhin wrote:\n> 10.08.2024 00:13, Heikki Linnakangas wrote:\n>>\n>> Committed the patches up to and including this one, with tiny comment changes.\n> \n> I've noticed that on the current HEAD server.log contains binary data\n> (an invalid process name) after a child crash. For example, while playing\n> with -ftapv, I've got:\n> SELECT to_date('2024 613566758 1', 'IYYY IW ID');\n> server closed the connection unexpectedly\n> \n> grep -a 'was terminated' server.log\n> 2024-08-18 07:07:06.482 UTC|||66c19d96.3482f6|LOG:  `�!x� (PID 3441407) was terminated by signal 6: Aborted\n> \n> It looks like this was introduced by commit 28a520c0b (IIUC, namebuf in\n> CleanupBackend() may stay uninitialized in some code paths).\n\nFixed, thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 19 Aug 2024 09:49:41 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Refactoring postmaster's code to cleanup after child exit" }, { "msg_contents": "Hi,\n\nOn 2024-08-12 12:55:00 +0300, Heikki Linnakangas wrote:\n> While rebasing this today, I spotted another instance of that mistake\n> mentioned in the XXX comment above. I called \"CountChildren(B_BACKEND)\"\n> instead of \"CountChildren(1 << B_BACKEND)\". Some ideas on how to make that\n> less error-prone:\n> \n> 1. Add a separate typedef for the bitmasks, and macros/functions to work\n> with it. Something like:\n> \n> typedef struct {\n> \tuint32\t\tmask;\n> } BackendTypeMask;\n> \n> static const BackendTypeMask BTMASK_ALL = { 0xffffffff };\n> static const BackendTypeMask BTMASK_NONE = { 0 };\n> \n> static inline BackendTypeMask\n> BTMASK_ADD(BackendTypeMask mask, BackendType t)\n> {\n> \tmask.mask |= 1 << t;\n> \treturn mask;\n> }\n> \n> static inline BackendTypeMask\n> BTMASK_DEL(BackendTypeMask mask, BackendType t)\n> {\n> \tmask.mask &= ~(1 << t);\n> \treturn mask;\n> }\n> \n> Now the compiler will complain if you try to pass a BackendType for the\n> mask. We could do this just for BackendType, or we could put this in\n> src/include/lib/ with a more generic name, like \"bitmask_u32\".\n\nI don't like the second suggestion - that just ends up creating a similar\nproblem in the future because flag values for one thing can be passed to\nsomething else.\n\n\n\n> +Running the tests\n> +=================\n> +\n> +NOTE: You must have given the --enable-tap-tests argument to configure.\n> +\n> +Run\n> + make check\n> +or\n> + make installcheck\n> +You can use \"make installcheck\" if you previously did \"make install\".\n> +In that case, the code in the installation tree is tested. 
With\n> +\"make check\", a temporary installation tree is built from the current\n> +sources and then tested.\n> +\n> +Either way, this test initializes, starts, and stops a test Postgres\n> +cluster.\n> +\n> +See src/test/perl/README for more info about running these tests.\n\nIs it really useful to have such instructions all over the tree?\n\n\n> From 93b9e9b6e072f63af9009e0d66ab6d0d62ea8c15 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Mon, 12 Aug 2024 10:55:11 +0300\n> Subject: [PATCH v4 2/8] Add test for dead-end backends\n> \n> The code path for launching a dead-end backend because we're out of\n> slots was not covered by any tests, so add one. (Some tests did hit\n> the case of launching a dead-end backend because the server is still\n> starting up, though, so the gap in our test coverage wasn't as big as\n> it sounds.)\n> ---\n> src/test/perl/PostgreSQL/Test/Cluster.pm | 39 +++++++++++++++++++\n> .../postmaster/t/001_connection_limits.pl | 17 +++++++-\n> 2 files changed, 55 insertions(+), 1 deletion(-)\n\nWhy does this need to use \"raw\" connections? Can't you just create a bunch of\nconnections with BackgroundPsql?\n\n\n\n\n> From 88287a2db95e584018f1c7fa9e992feb7d179ce8 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Mon, 12 Aug 2024 10:58:35 +0300\n> Subject: [PATCH v4 3/8] Use an shmem_exit callback to remove backend from\n> PMChildFlags on exit\n> \n> This seems nicer than having to duplicate the logic between\n> InitProcess() and ProcKill() for which child processes have a\n> PMChildFlags slot.\n> \n> Move the MarkPostmasterChildActive() call earlier in InitProcess(),\n> out of the section protected by the spinlock.\n\n> ---\n> src/backend/storage/ipc/pmsignal.c | 10 ++++++--\n> src/backend/storage/lmgr/proc.c | 38 ++++++++++--------------------\n> src/include/storage/pmsignal.h | 1 -\n> 3 files changed, 21 insertions(+), 28 deletions(-)\n> \n> diff --git a/src/backend/storage/ipc/pmsignal.c b/src/backend/storage/ipc/pmsignal.c\n> index 27844b46a2..cb99e77476 100644\n> --- a/src/backend/storage/ipc/pmsignal.c\n> +++ b/src/backend/storage/ipc/pmsignal.c\n> @@ -24,6 +24,7 @@\n> #include \"miscadmin.h\"\n> #include \"postmaster/postmaster.h\"\n> #include \"replication/walsender.h\"\n> +#include \"storage/ipc.h\"\n> #include \"storage/pmsignal.h\"\n> #include \"storage/shmem.h\"\n> #include \"utils/memutils.h\"\n> @@ -121,6 +122,8 @@ postmaster_death_handler(SIGNAL_ARGS)\n> \n> #endif\t\t\t\t\t\t\t/* USE_POSTMASTER_DEATH_SIGNAL */\n> \n> +static void MarkPostmasterChildInactive(int code, Datum arg);\n> +\n> /*\n> * PMSignalShmemSize\n> *\t\tCompute space needed for pmsignal.c's shared memory\n> @@ -328,6 +331,9 @@ MarkPostmasterChildActive(void)\n> \tslot--;\n> \tAssert(PMSignalState->PMChildFlags[slot] == PM_CHILD_ASSIGNED);\n> \tPMSignalState->PMChildFlags[slot] = PM_CHILD_ACTIVE;\n> +\n> +\t/* Arrange to clean up at exit. */\n> +\ton_shmem_exit(MarkPostmasterChildInactive, 0);\n> }\n> \n> /*\n> @@ -352,8 +358,8 @@ MarkPostmasterChildWalSender(void)\n> * MarkPostmasterChildInactive - mark a postmaster child as done using\n> * shared memory. 
This is called in the child process.\n> */\n> -void\n> -MarkPostmasterChildInactive(void)\n> +static void\n> +MarkPostmasterChildInactive(int code, Datum arg)\n> {\n> \tint\t\t\tslot = MyPMChildSlot;\n> \n> diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\n> index ac66da8638..9536469e89 100644\n> --- a/src/backend/storage/lmgr/proc.c\n> +++ b/src/backend/storage/lmgr/proc.c\n> @@ -308,6 +308,19 @@ InitProcess(void)\n> \tif (MyProc != NULL)\n> \t\telog(ERROR, \"you already exist\");\n> \n> +\t/*\n> +\t * Before we start accessing the shared memory in a serious way, mark\n> +\t * ourselves as an active postmaster child; this is so that the postmaster\n> +\t * can detect it if we exit without cleaning up. (XXX autovac launcher\n> +\t * currently doesn't participate in this; it probably should.)\n> +\t *\n> +\t * Slot sync worker also does not participate in it, see comments atop\n> +\t * 'struct bkend' in postmaster.c.\n> +\t */\n> +\tif (IsUnderPostmaster && !AmAutoVacuumLauncherProcess() &&\n> +\t\t!AmLogicalSlotSyncWorkerProcess())\n> +\t\tMarkPostmasterChildActive();\n\nI'd not necessarily expect a call to MarkPostmasterChildActive() to register\nan shmem exit hook - but I guess it's unlikely to be moved around in a\nproblematic way. Perhaps something like RegisterPostmasterChild() or such\nwould be a bit clearer?\n\n\n> From dc53f89edbeec99f8633def8aa5f47cd98e7a150 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Mon, 12 Aug 2024 10:59:04 +0300\n> Subject: [PATCH v4 4/8] Introduce a separate BackendType for dead-end children\n> \n> And replace postmaster.c's own \"backend type\" codes with BackendType\n\nHm - it seems a bit odd to open-code this when we actually have a \"table\ndriven configuration\" available? Why isn't the type a field in\nchild_process_kind?\n\nThat'd not solve the bitmask confusion issue, but it does seem like a better\ndirection to me?\n\n\n> From 9c832ce33667abc5aef128a17fa9c27daaad872a Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Mon, 12 Aug 2024 10:59:27 +0300\n> Subject: [PATCH v4 5/8] Kill dead-end children when there's nothing else left\n> \n> Previously, the postmaster would never try to kill dead-end child\n> processes, even if there were no other processes left. A dead-end\n> backend will eventually exit, when authentication_timeout expires, but\n> if a dead-end backend is the only thing that's preventing the server\n> from shutting down, it seems better to kill it immediately. It's\n> particularly important, if there was a bug in the early startup code\n> that prevented a dead-end child from timing out and exiting normally.\n\nI do wonder if we shouldn't instead get rid of dead end children. We now have\nan event based loop in postmaster, it'd perform vastly better to juts handle\nthese connections in postmaster. And we'd get rid of these weird backend\ntypes. But I guess this is a worthwhile improvement on its own...\n\n\n> Includes a test for that case where a dead-end backend previously kept\n> the server from shutting down.\n\nThe test hardcodes timeouts, I think we've largely come to regret that when we\ndid. Should probably just be a multiplier based on\nPostgreSQL::Test::Utils::timeout_default?\n\n\n\n> +/*\n> + * MaxLivePostmasterChildren\n> + *\n> + * This reports the number postmaster child processes that can be active. It\n> + * includes all children except for dead_end children. 
This allows the array\n> + * in shared memory (PMChildFlags) to have a fixed maximum size.\n> + */\n> +int\n> +MaxLivePostmasterChildren(void)\n> +{\n> +\tint\t\t\tn = 0;\n> +\n> +\t/* We know exactly how mamy worker and aux processes can be active */\n> +\tn += autovacuum_max_workers;\n> +\tn += max_worker_processes;\n> +\tn += NUM_AUXILIARY_PROCS;\n> +\n> +\t/*\n> +\t * We allow more connections here than we can have backends because some\n> +\t * might still be authenticating; they might fail auth, or some existing\n> +\t * backend might exit before the auth cycle is completed. The exact\n> +\t * MaxBackends limit is enforced when a new backend tries to join the\n> +\t * shared-inval backend array.\n> +\t */\n> +\tn += 2 * (MaxConnections + max_wal_senders);\n> +\n> +\treturn n;\n> +}\n\nI wonder if we could instead maintain at least some of this in\nchild_process_kinds? Manually listing different types of processes in\ndifferent places doesn't seem particularly sustainable.\n\n\n> +/*\n> + * Initialize at postmaster startup\n> + */\n> +void\n> +InitPostmasterChildSlots(void)\n> +{\n> +\tint\t\t\tnum_pmchild_slots;\n> +\tint\t\t\tslotno;\n> +\tPMChild *slots;\n> +\n> +\tdlist_init(&freeBackendList);\n> +\tdlist_init(&freeAutoVacWorkerList);\n> +\tdlist_init(&freeBgWorkerList);\n> +\tdlist_init(&freeAuxList);\n> +\tdlist_init(&ActiveChildList);\n> +\n> +\tnum_pmchild_slots = MaxLivePostmasterChildren();\n> +\n> +\tslots = palloc(num_pmchild_slots * sizeof(PMChild));\n> +\n> +\tslotno = 0;\n> +\tfor (int i = 0; i < 2 * (MaxConnections + max_wal_senders); i++)\n> +\t{\n> +\t\tinit_slot(&slots[slotno], slotno, &freeBackendList);\n> +\t\tslotno++;\n> +\t}\n> +\tfor (int i = 0; i < autovacuum_max_workers; i++)\n> +\t{\n> +\t\tinit_slot(&slots[slotno], slotno, &freeAutoVacWorkerList);\n> +\t\tslotno++;\n> +\t}\n> +\tfor (int i = 0; i < max_worker_processes; i++)\n> +\t{\n> +\t\tinit_slot(&slots[slotno], slotno, &freeBgWorkerList);\n> +\t\tslotno++;\n> +\t}\n> +\tfor (int i = 0; i < NUM_AUXILIARY_PROCS; i++)\n> +\t{\n> +\t\tinit_slot(&slots[slotno], slotno, &freeAuxList);\n> +\t\tslotno++;\n> +\t}\n> +\tAssert(slotno == num_pmchild_slots);\n> +}\n\nAlong the same vein - could we generalize this into one array of different\nslot types and then loop over that to initialize / acquire the slots?\n\n\n> +/* Return the appropriate free-list for the given backend type */\n> +static dlist_head *\n> +GetFreeList(BackendType btype)\n> +{\n> +\tswitch (btype)\n> +\t{\n> +\t\tcase B_BACKEND:\n> +\t\tcase B_BG_WORKER:\n> +\t\tcase B_WAL_SENDER:\n> +\t\tcase B_SLOTSYNC_WORKER:\n> +\t\t\treturn &freeBackendList;\n\nMaybe a daft question - but why are all of these in the same list? Sure,\nthey're essentially all full backends, but they're covered by different GUCs?\n\n\n> +\t\t\t/*\n> +\t\t\t * Auxiliary processes. 
There can be only one of each of these\n> +\t\t\t * running at a time.\n> +\t\t\t */\n> +\t\tcase B_AUTOVAC_LAUNCHER:\n> +\t\tcase B_ARCHIVER:\n> +\t\tcase B_BG_WRITER:\n> +\t\tcase B_CHECKPOINTER:\n> +\t\tcase B_STARTUP:\n> +\t\tcase B_WAL_RECEIVER:\n> +\t\tcase B_WAL_SUMMARIZER:\n> +\t\tcase B_WAL_WRITER:\n> +\t\t\treturn &freeAuxList;\n> +\n> +\t\t\t/*\n> +\t\t\t * Logger is not connected to shared memory, and does not have a\n> +\t\t\t * PGPROC entry, but we still allocate a child slot for it.\n> +\t\t\t */\n\nTangential: Why do we need a freelist for these and why do we choose a random\npgproc for these instead of assigning one statically?\n\nBackground: I'd like to not provide AIO workers with \"bounce buffers\" (for IO\nof buffers that can't be done in-place, like writes when checksums are\nenabled). The varying proc numbers make that harder than it'd have to be...\n\n\n> +PMChild *\n> +AssignPostmasterChildSlot(BackendType btype)\n> +{\n> +\tdlist_head *freelist;\n> +\tPMChild *pmchild;\n> +\n> +\tfreelist = GetFreeList(btype);\n> +\n> +\tif (dlist_is_empty(freelist))\n> +\t\treturn NULL;\n> +\n> +\tpmchild = dlist_container(PMChild, elem, dlist_pop_head_node(freelist));\n> +\tpmchild->pid = 0;\n> +\tpmchild->bkend_type = btype;\n> +\tpmchild->rw = NULL;\n> +\tpmchild->bgworker_notify = true;\n> +\n> +\t/*\n> +\t * pmchild->child_slot for each entry was initialized when the array of\n> +\t * slots was allocated.\n> +\t */\n> +\n> +\tdlist_push_head(&ActiveChildList, &pmchild->elem);\n> +\n> +\tReservePostmasterChildSlot(pmchild->child_slot);\n> +\n> +\t/* FIXME: find a more elegant way to pass this */\n> +\tMyPMChildSlot = pmchild->child_slot;\n\nWhat if we assigned one offset for each process and assigned its ID here and\nalso used that for its ProcNumber - that way we wouldn't need to manage\nfreelists in two places.\n\n\n> +PMChild *\n> +FindPostmasterChildByPid(int pid)\n> +{\n> +\tdlist_iter\titer;\n> +\n> +\tdlist_foreach(iter, &ActiveChildList)\n> +\t{\n> +\t\tPMChild *bp = dlist_container(PMChild, elem, iter.cur);\n> +\n> +\t\tif (bp->pid == pid)\n> +\t\t\treturn bp;\n> +\t}\n> +\treturn NULL;\n> +}\n\nIt's not new, but it's quite sad that postmaster's process exit handling is\neffectively O(Backends^2)...\n\n\n\n> @@ -1019,7 +980,15 @@ PostmasterMain(int argc, char *argv[])\n> \t/*\n> \t * If enabled, start up syslogger collection subprocess\n> \t */\n> -\tSysLoggerPID = SysLogger_Start();\n> +\tSysLoggerPMChild = AssignPostmasterChildSlot(B_LOGGER);\n> +\tif (!SysLoggerPMChild)\n> +\t\telog(ERROR, \"no postmaster child slot available for syslogger\");\n> +\tSysLoggerPMChild->pid = SysLogger_Start();\n> +\tif (SysLoggerPMChild->pid == 0)\n> +\t{\n> +\t\tFreePostmasterChildSlot(SysLoggerPMChild);\n> +\t\tSysLoggerPMChild = NULL;\n> +\t}\n\nMaybe it's a bit obsessive, but this seems long enough to make it worth not\ndoing inline in the already long PostmasterMain().\n\n\n> \t/*\n> \t * We're ready to rock and roll...\n> \t */\n> -\tStartupPID = StartChildProcess(B_STARTUP);\n> -\tAssert(StartupPID != 0);\n> +\tStartupPMChild = StartChildProcess(B_STARTUP);\n> +\tAssert(StartupPMChild != NULL);\n\nThis (not new) assertion is ... 
odd.\n\n\n> @@ -1779,21 +1748,6 @@ canAcceptConnections(int backend_type)\n> \tif (!connsAllowed && backend_type == B_BACKEND)\n> \t\treturn CAC_SHUTDOWN;\t/* shutdown is pending */\n> \n> -\t/*\n> -\t * Don't start too many children.\n> -\t *\n> -\t * We allow more connections here than we can have backends because some\n> -\t * might still be authenticating; they might fail auth, or some existing\n> -\t * backend might exit before the auth cycle is completed. The exact\n> -\t * MaxBackends limit is enforced when a new backend tries to join the\n> -\t * shared-inval backend array.\n> -\t *\n> -\t * The limit here must match the sizes of the per-child-process arrays;\n> -\t * see comments for MaxLivePostmasterChildren().\n> -\t */\n> -\tif (CountChildren(BACKEND_TYPE_ALL & ~(1 << B_DEAD_END_BACKEND)) >= MaxLivePostmasterChildren())\n> -\t\tresult = CAC_TOOMANY;\n> -\n> \treturn result;\n> }\n\nIt's nice to get rid of this source of O(N^2).\n\n\n> @@ -1961,26 +1915,6 @@ process_pm_reload_request(void)\n> \t\t\t\t(errmsg(\"received SIGHUP, reloading configuration files\")));\n> \t\tProcessConfigFile(PGC_SIGHUP);\n> \t\tSignalSomeChildren(SIGHUP, BACKEND_TYPE_ALL & ~(1 << B_DEAD_END_BACKEND));\n> -\t\tif (StartupPID != 0)\n> -\t\t\tsignal_child(StartupPID, SIGHUP);\n> -\t\tif (BgWriterPID != 0)\n> -\t\t\tsignal_child(BgWriterPID, SIGHUP);\n> -\t\tif (CheckpointerPID != 0)\n> -\t\t\tsignal_child(CheckpointerPID, SIGHUP);\n> -\t\tif (WalWriterPID != 0)\n> -\t\t\tsignal_child(WalWriterPID, SIGHUP);\n> -\t\tif (WalReceiverPID != 0)\n> -\t\t\tsignal_child(WalReceiverPID, SIGHUP);\n> -\t\tif (WalSummarizerPID != 0)\n> -\t\t\tsignal_child(WalSummarizerPID, SIGHUP);\n> -\t\tif (AutoVacPID != 0)\n> -\t\t\tsignal_child(AutoVacPID, SIGHUP);\n> -\t\tif (PgArchPID != 0)\n> -\t\t\tsignal_child(PgArchPID, SIGHUP);\n> -\t\tif (SysLoggerPID != 0)\n> -\t\t\tsignal_child(SysLoggerPID, SIGHUP);\n> -\t\tif (SlotSyncWorkerPID != 0)\n> -\t\t\tsignal_child(SlotSyncWorkerPID, SIGHUP);\n> \n> \t\t/* Reload authentication config files too */\n> \t\tif (!load_hba())\n\nFor a moment I wondered why this change was part of this commit - but I guess\nwe didn't have any of these in an array/list before this change...\n\n\n\n> @@ -2469,11 +2410,15 @@ process_pm_child_exit(void)\n> \t\t}\n> \n> \t\t/* Was it the system logger? If so, try to start a new one */\n> -\t\tif (pid == SysLoggerPID)\n> +\t\tif (SysLoggerPMChild && pid == SysLoggerPMChild->pid)\n> \t\t{\n> -\t\t\tSysLoggerPID = 0;\n> \t\t\t/* for safety's sake, launch new logger *first* */\n> -\t\t\tSysLoggerPID = SysLogger_Start();\n> +\t\t\tSysLoggerPMChild->pid = SysLogger_Start();\n> +\t\t\tif (SysLoggerPMChild->pid == 0)\n> +\t\t\t{\n> +\t\t\t\tFreePostmasterChildSlot(SysLoggerPMChild);\n> +\t\t\t\tSysLoggerPMChild = NULL;\n> +\t\t\t}\n> \t\t\tif (!EXIT_STATUS_0(exitstatus))\n> \t\t\t\tLogChildExit(LOG, _(\"system logger process\"),\n\nSeems a bit weird to have one place with a different memory lifetime handling\nthan other places. Why don't we just do this the same way as in other places\nbut continue to defer the logging until after we tried to start the new\nlogger?\n\nMight be worth having a test ensuring that loggers restart OK.\n\n\n> \t/* Construct a process name for log message */\n> +\n> +\t/*\n> +\t * FIXME: use GetBackendTypeDesc here? 
How does the localization of that\n> +\t * work?\n> +\t */\n> \tif (bp->bkend_type == B_DEAD_END_BACKEND)\n> \t{\n> \t\tprocname = _(\"dead end backend\");\n\nMight be worth having a version of GetBackendTypeDesc() that returns a\ntranslated string?\n\n\n> @@ -2697,9 +2643,16 @@ HandleChildCrash(int pid, int exitstatus, const char *procname)\n> \t{\n> \t\tdlist_iter\titer;\n> \n> -\t\tdlist_foreach(iter, &BackendList)\n> +\t\tdlist_foreach(iter, &ActiveChildList)\n> \t\t{\n> -\t\t\tBackend *bp = dlist_container(Backend, elem, iter.cur);\n> +\t\t\tPMChild *bp = dlist_container(PMChild, elem, iter.cur);\n> +\n> +\t\t\t/* We do NOT restart the syslogger */\n> +\t\t\tif (bp == SysLoggerPMChild)\n> +\t\t\t\tcontinue;\n\nThat comment seems a bit misleading - we do restart syslogger, we just don't\ndo it here, no? I realize it's an old comment, but it still seems like it's\nworth fixing given that you touch all the code here...\n\n\n> @@ -2708,48 +2661,8 @@ HandleChildCrash(int pid, int exitstatus, const char *procname)\n> \t\t\t * We could exclude dead_end children here, but at least when\n> \t\t\t * sending SIGABRT it seems better to include them.\n> \t\t\t */\n> -\t\t\tsigquit_child(bp->pid);\n> +\t\t\tsigquit_child(bp);\n> \t\t}\n> -\n> -\t\tif (StartupPID != 0)\n> -\t\t{\n> -\t\t\tsigquit_child(StartupPID);\n> -\t\t\tStartupStatus = STARTUP_SIGNALED;\n> -\t\t}\n> -\n> -\t\t/* Take care of the bgwriter too */\n> -\t\tif (BgWriterPID != 0)\n> -\t\t\tsigquit_child(BgWriterPID);\n> -\n> -\t\t/* Take care of the checkpointer too */\n> -\t\tif (CheckpointerPID != 0)\n> -\t\t\tsigquit_child(CheckpointerPID);\n> -\n> -\t\t/* Take care of the walwriter too */\n> -\t\tif (WalWriterPID != 0)\n> -\t\t\tsigquit_child(WalWriterPID);\n> -\n> -\t\t/* Take care of the walreceiver too */\n> -\t\tif (WalReceiverPID != 0)\n> -\t\t\tsigquit_child(WalReceiverPID);\n> -\n> -\t\t/* Take care of the walsummarizer too */\n> -\t\tif (WalSummarizerPID != 0)\n> -\t\t\tsigquit_child(WalSummarizerPID);\n> -\n> -\t\t/* Take care of the autovacuum launcher too */\n> -\t\tif (AutoVacPID != 0)\n> -\t\t\tsigquit_child(AutoVacPID);\n> -\n> -\t\t/* Take care of the archiver too */\n> -\t\tif (PgArchPID != 0)\n> -\t\t\tsigquit_child(PgArchPID);\n> -\n> -\t\t/* Take care of the slot sync worker too */\n> -\t\tif (SlotSyncWorkerPID != 0)\n> -\t\t\tsigquit_child(SlotSyncWorkerPID);\n> -\n> -\t\t/* We do NOT restart the syslogger */\n> \t}\n\nYay.\n\n\n\n\n> @@ -2871,29 +2786,27 @@ PostmasterStateMachine(void)\n> <snip>\n> -\t\tif (WalSummarizerPID != 0)\n> -\t\t\tsignal_child(WalSummarizerPID, SIGTERM);\n> -\t\tif (SlotSyncWorkerPID != 0)\n> -\t\t\tsignal_child(SlotSyncWorkerPID, SIGTERM);\n> +\t\ttargetMask |= (1 << B_STARTUP);\n> +\t\ttargetMask |= (1 << B_WAL_RECEIVER);\n> +\n> +\t\ttargetMask |= (1 << B_WAL_SUMMARIZER);\n> +\t\ttargetMask |= (1 << B_SLOTSYNC_WORKER);\n> \t\t/* checkpointer, archiver, stats, and syslogger may continue for now */\n> \n> +\t\tSignalSomeChildren(SIGTERM, targetMask);\n> +\n> \t\t/* Now transition to PM_WAIT_BACKENDS state to wait for them to die */\n> \t\tpmState = PM_WAIT_BACKENDS;\n> <snip>\n\nIt's likely the right thing to not do as one patch, but IMO this really wants\nto be a state table. 
Perhaps as part of child_process_kinds, perhaps separate\nfrom that.\n\n\n> @@ -3130,8 +3047,21 @@ static void\n> LaunchMissingBackgroundProcesses(void)\n> {\n> \t/* Syslogger is active in all states */\n> -\tif (SysLoggerPID == 0 && Logging_collector)\n> -\t\tSysLoggerPID = SysLogger_Start();\n> +\tif (SysLoggerPMChild == NULL && Logging_collector)\n> +\t{\n> +\t\tSysLoggerPMChild = AssignPostmasterChildSlot(B_LOGGER);\n> +\t\tif (!SysLoggerPMChild)\n> +\t\t\telog(LOG, \"no postmaster child slot available for syslogger\");\n\nHow could this elog() be reached? Seems something seriously would have gone\nwrong to get here - in which case a LOG that might not even be visible (due to\nlogger not working) doesn't seem like the right response.\n\n> @@ -3334,29 +3270,12 @@ SignalSomeChildren(int signal, uint32 targetMask)\n> static void\n> TerminateChildren(int signal)\n> {\n\nThe comment for TerminateChildren() says \"except syslogger and dead_end\nbackends.\" - aren't you including the latter here?\n\n\n\n\n> @@ -311,14 +311,9 @@ InitProcess(void)\n> \t/*\n> \t * Before we start accessing the shared memory in a serious way, mark\n> \t * ourselves as an active postmaster child; this is so that the postmaster\n> -\t * can detect it if we exit without cleaning up. (XXX autovac launcher\n> -\t * currently doesn't participate in this; it probably should.)\n> -\t *\n> -\t * Slot sync worker also does not participate in it, see comments atop\n> -\t * 'struct bkend' in postmaster.c.\n> +\t * can detect it if we exit without cleaning up.\n> \t */\n> -\tif (IsUnderPostmaster && !AmAutoVacuumLauncherProcess() &&\n> -\t\t!AmLogicalSlotSyncWorkerProcess())\n> +\tif (IsUnderPostmaster)\n> \t\tMarkPostmasterChildActive();\n> \n> \t/* Decide which list should supply our PGPROC. */\n> @@ -536,6 +531,9 @@ InitAuxiliaryProcess(void)\n> \tif (MyProc != NULL)\n> \t\telog(ERROR, \"you already exist\");\n> \n> +\tif (IsUnderPostmaster)\n> +\t\tMarkPostmasterChildActive();\n> +\n> \t/*\n> \t * We use the ProcStructLock to protect assignment and releasing of\n> \t * AuxiliaryProcs entries.\n\nProbably worth, at some point soon, to have an InitProcessCommon() or such.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 4 Sep 2024 10:35:55 -0400", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Refactoring postmaster's code to cleanup after child exit" }, { "msg_contents": "On 04/09/2024 17:35, Andres Freund wrote:\n> On 2024-08-12 12:55:00 +0300, Heikki Linnakangas wrote:\n>> +Running the tests\n>> +=================\n>> +\n>> +NOTE: You must have given the --enable-tap-tests argument to configure.\n>> +\n>> +Run\n>> + make check\n>> +or\n>> + make installcheck\n>> +You can use \"make installcheck\" if you previously did \"make install\".\n>> +In that case, the code in the installation tree is tested. With\n>> +\"make check\", a temporary installation tree is built from the current\n>> +sources and then tested.\n>> +\n>> +Either way, this test initializes, starts, and stops a test Postgres\n>> +cluster.\n>> +\n>> +See src/test/perl/README for more info about running these tests.\n> \n> Is it really useful to have such instructions all over the tree?\n\nThat's debatable but I didn't want to go down that rabbit hole with this \npatch.\n\nIt's repetitive for sure. 
But there are small variations in which \nPG_TEST_EXTRA options you need, whether \"make installcheck\" runs against \na running server or still creates a temporary cluster, etc.\n\nI tried to deduplicate those instructions by moving the above \nboilerplate to src/test/README, and only noting the variations in the \nsubdirectory READMEs. I didn't like the result. It's very helpful to \nhave full copy-pasteable commands with all the right \"PG_TEST_EXTRA\" \noptions for each test.\n\nThese instructions also don't mention how to run the tests with Meson. \nThe first time I wanted to run individual tests with Meson, it took me a \nwhile to figure it out.\n\nI'll think a little more about how to improve these READMEs, but let's \ntake that to a separate thread.\n\n>> From 93b9e9b6e072f63af9009e0d66ab6d0d62ea8c15 Mon Sep 17 00:00:00 2001\n>> From: Heikki Linnakangas <[email protected]>\n>> Date: Mon, 12 Aug 2024 10:55:11 +0300\n>> Subject: [PATCH v4 2/8] Add test for dead-end backends\n>>\n>> The code path for launching a dead-end backend because we're out of\n>> slots was not covered by any tests, so add one. (Some tests did hit\n>> the case of launching a dead-end backend because the server is still\n>> starting up, though, so the gap in our test coverage wasn't as big as\n>> it sounds.)\n>> ---\n>> src/test/perl/PostgreSQL/Test/Cluster.pm | 39 +++++++++++++++++++\n>> .../postmaster/t/001_connection_limits.pl | 17 +++++++-\n>> 2 files changed, 55 insertions(+), 1 deletion(-)\n> \n> Why does this need to use \"raw\" connections? Can't you just create a bunch of\n> connections with BackgroundPsql?\n\nNo, these need to be connections that haven't sent the startup packet \nthe yet.\n\nWith Andrew's PqFFI work [1], we could do better. The latest version on \nthat thread doesn't expose the async functions like PQconnectStart() \nPQconnectPoll() though, but they can be added.\n\n[1] \nhttps://www.postgresql.org/message-id/97d1d1b9-d147-f69d-1991-d8794efed41c%40dunslane.net\n\n\nUnless you have comments on these first two patches which just add \ntests, I'll commit them shortly. Still processing the rest of your \ncomments...\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 6 Sep 2024 12:52:37 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Refactoring postmaster's code to cleanup after child exit" }, { "msg_contents": "On 04/09/2024 17:35, Andres Freund wrote:\n> On 2024-08-12 12:55:00 +0300, Heikki Linnakangas wrote:\n>> From dc53f89edbeec99f8633def8aa5f47cd98e7a150 Mon Sep 17 00:00:00 2001\n>> From: Heikki Linnakangas <[email protected]>\n>> Date: Mon, 12 Aug 2024 10:59:04 +0300\n>> Subject: [PATCH v4 4/8] Introduce a separate BackendType for dead-end children\n>>\n>> And replace postmaster.c's own \"backend type\" codes with BackendType\n> \n> Hm - it seems a bit odd to open-code this when we actually have a \"table\n> driven configuration\" available? Why isn't the type a field in\n> child_process_kind?\n\nSorry, I didn't understand this. What exactly would you add to \nchild_process_kind? Where would you use it?\n\n>> +/*\n>> + * MaxLivePostmasterChildren\n>> + *\n>> + * This reports the number postmaster child processes that can be active. It\n>> + * includes all children except for dead_end children. 
This allows the array\n>> + * in shared memory (PMChildFlags) to have a fixed maximum size.\n>> + */\n>> +int\n>> +MaxLivePostmasterChildren(void)\n>> +{\n>> +\tint\t\t\tn = 0;\n>> +\n>> +\t/* We know exactly how mamy worker and aux processes can be active */\n>> +\tn += autovacuum_max_workers;\n>> +\tn += max_worker_processes;\n>> +\tn += NUM_AUXILIARY_PROCS;\n>> +\n>> +\t/*\n>> +\t * We allow more connections here than we can have backends because some\n>> +\t * might still be authenticating; they might fail auth, or some existing\n>> +\t * backend might exit before the auth cycle is completed. The exact\n>> +\t * MaxBackends limit is enforced when a new backend tries to join the\n>> +\t * shared-inval backend array.\n>> +\t */\n>> +\tn += 2 * (MaxConnections + max_wal_senders);\n>> +\n>> +\treturn n;\n>> +}\n> \n> I wonder if we could instead maintain at least some of this in\n> child_process_kinds? Manually listing different types of processes in\n> different places doesn't seem particularly sustainable.\n\nHmm, you mean adding \"max this kind of children\" field to \nchild_process_kinds? Perhaps.\n\n> \n>> +/*\n>> + * Initialize at postmaster startup\n>> + */\n>> +void\n>> +InitPostmasterChildSlots(void)\n>> +{\n>> +\tint\t\t\tnum_pmchild_slots;\n>> +\tint\t\t\tslotno;\n>> +\tPMChild *slots;\n>> +\n>> +\tdlist_init(&freeBackendList);\n>> +\tdlist_init(&freeAutoVacWorkerList);\n>> +\tdlist_init(&freeBgWorkerList);\n>> +\tdlist_init(&freeAuxList);\n>> +\tdlist_init(&ActiveChildList);\n>> +\n>> +\tnum_pmchild_slots = MaxLivePostmasterChildren();\n>> +\n>> +\tslots = palloc(num_pmchild_slots * sizeof(PMChild));\n>> +\n>> +\tslotno = 0;\n>> +\tfor (int i = 0; i < 2 * (MaxConnections + max_wal_senders); i++)\n>> +\t{\n>> +\t\tinit_slot(&slots[slotno], slotno, &freeBackendList);\n>> +\t\tslotno++;\n>> +\t}\n>> +\tfor (int i = 0; i < autovacuum_max_workers; i++)\n>> +\t{\n>> +\t\tinit_slot(&slots[slotno], slotno, &freeAutoVacWorkerList);\n>> +\t\tslotno++;\n>> +\t}\n>> +\tfor (int i = 0; i < max_worker_processes; i++)\n>> +\t{\n>> +\t\tinit_slot(&slots[slotno], slotno, &freeBgWorkerList);\n>> +\t\tslotno++;\n>> +\t}\n>> +\tfor (int i = 0; i < NUM_AUXILIARY_PROCS; i++)\n>> +\t{\n>> +\t\tinit_slot(&slots[slotno], slotno, &freeAuxList);\n>> +\t\tslotno++;\n>> +\t}\n>> +\tAssert(slotno == num_pmchild_slots);\n>> +}\n> \n> Along the same vein - could we generalize this into one array of different\n> slot types and then loop over that to initialize / acquire the slots?\n\nMakes sense.\n\n>> +/* Return the appropriate free-list for the given backend type */\n>> +static dlist_head *\n>> +GetFreeList(BackendType btype)\n>> +{\n>> +\tswitch (btype)\n>> +\t{\n>> +\t\tcase B_BACKEND:\n>> +\t\tcase B_BG_WORKER:\n>> +\t\tcase B_WAL_SENDER:\n>> +\t\tcase B_SLOTSYNC_WORKER:\n>> +\t\t\treturn &freeBackendList;\n> \n> Maybe a daft question - but why are all of these in the same list? Sure,\n> they're essentially all full backends, but they're covered by different GUCs?\n\nNo reason. No particular reason they should *not* share the same list \neither though.\n\n> \n>> +\t\t\t/*\n>> +\t\t\t * Auxiliary processes. 
There can be only one of each of these\n>> +\t\t\t * running at a time.\n>> +\t\t\t */\n>> +\t\tcase B_AUTOVAC_LAUNCHER:\n>> +\t\tcase B_ARCHIVER:\n>> +\t\tcase B_BG_WRITER:\n>> +\t\tcase B_CHECKPOINTER:\n>> +\t\tcase B_STARTUP:\n>> +\t\tcase B_WAL_RECEIVER:\n>> +\t\tcase B_WAL_SUMMARIZER:\n>> +\t\tcase B_WAL_WRITER:\n>> +\t\t\treturn &freeAuxList;\n>> +\n>> +\t\t\t/*\n>> +\t\t\t * Logger is not connected to shared memory, and does not have a\n>> +\t\t\t * PGPROC entry, but we still allocate a child slot for it.\n>> +\t\t\t */\n> \n> Tangential: Why do we need a freelist for these and why do we choose a random\n> pgproc for these instead of assigning one statically?\n> \n> Background: I'd like to not provide AIO workers with \"bounce buffers\" (for IO\n> of buffers that can't be done in-place, like writes when checksums are\n> enabled). The varying proc numbers make that harder than it'd have to be...\n\nYeah, we can make these fixed.Currently, the # of slots reserved for aux \nprocesses is sized by NUM_AUXILIARY_PROCS, which is one smaller than the \nnumber of different aux proces kinds:\n\n> /*\n> * We set aside some extra PGPROC structures for auxiliary processes,\n> * ie things that aren't full-fledged backends but need shmem access.\n> *\n> * Background writer, checkpointer, WAL writer, WAL summarizer, and archiver\n> * run during normal operation. Startup process and WAL receiver also consume\n> * 2 slots, but WAL writer is launched only after startup has exited, so we\n> * only need 6 slots.\n> */\n> #define NUM_AUXILIARY_PROCS\t\t6\n\nFor PMChildSlot numbers, we could certainly just allocate one more slot.\n\nIt would probably make sense for PGPROCs too, even though PGPROC is a \nmuch larger struct.\n\n>> +PMChild *\n>> +AssignPostmasterChildSlot(BackendType btype)\n>> +{\n>> +\tdlist_head *freelist;\n>> +\tPMChild *pmchild;\n>> +\n>> +\tfreelist = GetFreeList(btype);\n>> +\n>> +\tif (dlist_is_empty(freelist))\n>> +\t\treturn NULL;\n>> +\n>> +\tpmchild = dlist_container(PMChild, elem, dlist_pop_head_node(freelist));\n>> +\tpmchild->pid = 0;\n>> +\tpmchild->bkend_type = btype;\n>> +\tpmchild->rw = NULL;\n>> +\tpmchild->bgworker_notify = true;\n>> +\n>> +\t/*\n>> +\t * pmchild->child_slot for each entry was initialized when the array of\n>> +\t * slots was allocated.\n>> +\t */\n>> +\n>> +\tdlist_push_head(&ActiveChildList, &pmchild->elem);\n>> +\n>> +\tReservePostmasterChildSlot(pmchild->child_slot);\n>> +\n>> +\t/* FIXME: find a more elegant way to pass this */\n>> +\tMyPMChildSlot = pmchild->child_slot;\n> \n> What if we assigned one offset for each process and assigned its ID here and\n> also used that for its ProcNumber - that way we wouldn't need to manage\n> freelists in two places.\n\nIt's currently possible to have up to 2 * max_connections backends in \nthe authentication phase. We would have to change that behaviour, or \nmake the PGPROC array 2x larger.\n\nIt might well be worth it, I don't know how sensible the current \nbehaviour is. But I'd like to punt that to later patch, to keep the \nscope of this patch set reasonable. 
It's pretty straightforward to do \nlater on top of this if we want to.\n\n>> +PMChild *\n>> +FindPostmasterChildByPid(int pid)\n>> +{\n>> +\tdlist_iter\titer;\n>> +\n>> +\tdlist_foreach(iter, &ActiveChildList)\n>> +\t{\n>> +\t\tPMChild *bp = dlist_container(PMChild, elem, iter.cur);\n>> +\n>> +\t\tif (bp->pid == pid)\n>> +\t\t\treturn bp;\n>> +\t}\n>> +\treturn NULL;\n>> +}\n> \n> It's not new, but it's quite sad that postmaster's process exit handling is\n> effectively O(Backends^2)...\n\nIt would be straightforward to turn ActiveChildList into a hash table. \nBut I'd like to leave that to a followup patch too.\n\n>> \t/*\n>> \t * We're ready to rock and roll...\n>> \t */\n>> -\tStartupPID = StartChildProcess(B_STARTUP);\n>> -\tAssert(StartupPID != 0);\n>> +\tStartupPMChild = StartChildProcess(B_STARTUP);\n>> +\tAssert(StartupPMChild != NULL);\n> \n> This (not new) assertion is ... odd.\n\nYeah, it's an assertion because StartChildProcess has this:\n\n> \t\t/*\n> \t\t * fork failure is fatal during startup, but there's no need to choke\n> \t\t * immediately if starting other child types fails.\n> \t\t */\n> \t\tif (type == B_STARTUP)\n> \t\t\tExitPostmaster(1);\n\n\n>> @@ -1961,26 +1915,6 @@ process_pm_reload_request(void)\n>> \t\t\t\t(errmsg(\"received SIGHUP, reloading configuration files\")));\n>> \t\tProcessConfigFile(PGC_SIGHUP);\n>> \t\tSignalSomeChildren(SIGHUP, BACKEND_TYPE_ALL & ~(1 << B_DEAD_END_BACKEND));\n>> -\t\tif (StartupPID != 0)\n>> -\t\t\tsignal_child(StartupPID, SIGHUP);\n>> -\t\tif (BgWriterPID != 0)\n>> -\t\t\tsignal_child(BgWriterPID, SIGHUP);\n>> -\t\tif (CheckpointerPID != 0)\n>> -\t\t\tsignal_child(CheckpointerPID, SIGHUP);\n>> -\t\tif (WalWriterPID != 0)\n>> -\t\t\tsignal_child(WalWriterPID, SIGHUP);\n>> -\t\tif (WalReceiverPID != 0)\n>> -\t\t\tsignal_child(WalReceiverPID, SIGHUP);\n>> -\t\tif (WalSummarizerPID != 0)\n>> -\t\t\tsignal_child(WalSummarizerPID, SIGHUP);\n>> -\t\tif (AutoVacPID != 0)\n>> -\t\t\tsignal_child(AutoVacPID, SIGHUP);\n>> -\t\tif (PgArchPID != 0)\n>> -\t\t\tsignal_child(PgArchPID, SIGHUP);\n>> -\t\tif (SysLoggerPID != 0)\n>> -\t\t\tsignal_child(SysLoggerPID, SIGHUP);\n>> -\t\tif (SlotSyncWorkerPID != 0)\n>> -\t\t\tsignal_child(SlotSyncWorkerPID, SIGHUP);\n>> \n>> \t\t/* Reload authentication config files too */\n>> \t\tif (!load_hba())\n> \n> For a moment I wondered why this change was part of this commit - but I guess\n> we didn't have any of these in an array/list before this change...\n\nCorrect.\n\n>> @@ -2469,11 +2410,15 @@ process_pm_child_exit(void)\n>> \t\t}\n>> \n>> \t\t/* Was it the system logger? If so, try to start a new one */\n>> -\t\tif (pid == SysLoggerPID)\n>> +\t\tif (SysLoggerPMChild && pid == SysLoggerPMChild->pid)\n>> \t\t{\n>> -\t\t\tSysLoggerPID = 0;\n>> \t\t\t/* for safety's sake, launch new logger *first* */\n>> -\t\t\tSysLoggerPID = SysLogger_Start();\n>> +\t\t\tSysLoggerPMChild->pid = SysLogger_Start();\n>> +\t\t\tif (SysLoggerPMChild->pid == 0)\n>> +\t\t\t{\n>> +\t\t\t\tFreePostmasterChildSlot(SysLoggerPMChild);\n>> +\t\t\t\tSysLoggerPMChild = NULL;\n>> +\t\t\t}\n>> \t\t\tif (!EXIT_STATUS_0(exitstatus))\n>> \t\t\t\tLogChildExit(LOG, _(\"system logger process\"),\n> \n> Seems a bit weird to have one place with a different memory lifetime handling\n> than other places. 
Why don't we just do this the same way as in other places\n> but continue to defer the logging until after we tried to start the new\n> logger?\n\nHmm, you mean let LaunchMissingBackgroundProcesses() handle the restart?\n\nI'm a little scared of changing the existing logic. We don't have a \nmechanism for deferring logging, so we would have to invent that, or the \nlogs would just accumulate in the pipe until syslogger starts up. \nThere's some code between here and LaunchMissingBackgroundProcesses(), \nso might postmaster get blocked between writing to the syslogger pipe, \nbefore having restarted it?\n\nIf forking the syslogger process fails, that can happen anyway, though.\n\n> Might be worth having a test ensuring that loggers restart OK.\n\nYeah..\n\n>> \t/* Construct a process name for log message */\n>> +\n>> +\t/*\n>> +\t * FIXME: use GetBackendTypeDesc here? How does the localization of that\n>> +\t * work?\n>> +\t */\n>> \tif (bp->bkend_type == B_DEAD_END_BACKEND)\n>> \t{\n>> \t\tprocname = _(\"dead end backend\");\n> \n> Might be worth having a version of GetBackendTypeDesc() that returns a\n> translated string?\n\nConstructing the string for background workers is a little more complicated:\n\n snprintf(namebuf, MAXPGPATH, _(\"background worker \\\"%s\\\"\"),\n bp->rw->rw_worker.bgw_type);\n\nWe could still do that for background workers and use the transalated \nvariant of GetBackendTypeDesc() for everything else though.\n\n>> @@ -2697,9 +2643,16 @@ HandleChildCrash(int pid, int exitstatus, const char *procname)\n>> \t{\n>> \t\tdlist_iter\titer;\n>> \n>> -\t\tdlist_foreach(iter, &BackendList)\n>> +\t\tdlist_foreach(iter, &ActiveChildList)\n>> \t\t{\n>> -\t\t\tBackend *bp = dlist_container(Backend, elem, iter.cur);\n>> +\t\t\tPMChild *bp = dlist_container(PMChild, elem, iter.cur);\n>> +\n>> +\t\t\t/* We do NOT restart the syslogger */\n>> +\t\t\tif (bp == SysLoggerPMChild)\n>> +\t\t\t\tcontinue;\n> \n> That comment seems a bit misleading - we do restart syslogger, we just don't\n> do it here, no? I realize it's an old comment, but it still seems like it's\n> worth fixing given that you touch all the code here...\n\nNo, we really do not restart the syslogger. This code runs when \n*another* process has crashed unexpectedly. We kill all other processes, \nreinitialize shared memory and restart, but the old syslogger keeps \nrunning through all that.\n\nI'll add a note on that to InitPostmasterChildSlots(), as it's a bit \nsurprising.\n\n>> @@ -2871,29 +2786,27 @@ PostmasterStateMachine(void)\n>> <snip>\n>> -\t\tif (WalSummarizerPID != 0)\n>> -\t\t\tsignal_child(WalSummarizerPID, SIGTERM);\n>> -\t\tif (SlotSyncWorkerPID != 0)\n>> -\t\t\tsignal_child(SlotSyncWorkerPID, SIGTERM);\n>> +\t\ttargetMask |= (1 << B_STARTUP);\n>> +\t\ttargetMask |= (1 << B_WAL_RECEIVER);\n>> +\n>> +\t\ttargetMask |= (1 << B_WAL_SUMMARIZER);\n>> +\t\ttargetMask |= (1 << B_SLOTSYNC_WORKER);\n>> \t\t/* checkpointer, archiver, stats, and syslogger may continue for now */\n>> \n>> +\t\tSignalSomeChildren(SIGTERM, targetMask);\n>> +\n>> \t\t/* Now transition to PM_WAIT_BACKENDS state to wait for them to die */\n>> \t\tpmState = PM_WAIT_BACKENDS;\n>> <snip>\n> \n> It's likely the right thing to not do as one patch, but IMO this really wants\n> to be a state table. Perhaps as part of child_process_kinds, perhaps separate\n> from that.\n\nYeah. I've tried to refactor this into a table before, but didn't come \nup with anything that I was happy with. 
I also feel there must be a \nbetter way to organize this, but not sure what exactly. I hope that will \nbecome more apparent after these other changes.\n\n>> @@ -3130,8 +3047,21 @@ static void\n>> LaunchMissingBackgroundProcesses(void)\n>> {\n>> \t/* Syslogger is active in all states */\n>> -\tif (SysLoggerPID == 0 && Logging_collector)\n>> -\t\tSysLoggerPID = SysLogger_Start();\n>> +\tif (SysLoggerPMChild == NULL && Logging_collector)\n>> +\t{\n>> +\t\tSysLoggerPMChild = AssignPostmasterChildSlot(B_LOGGER);\n>> +\t\tif (!SysLoggerPMChild)\n>> +\t\t\telog(LOG, \"no postmaster child slot available for syslogger\");\n> \n> How could this elog() be reached? Seems something seriously would have gone\n> wrong to get here - in which case a LOG that might not even be visible (due to\n> logger not working) doesn't seem like the right response.\n\nI'll turn it into an assertion or PANIC.\n\n>> @@ -3334,29 +3270,12 @@ SignalSomeChildren(int signal, uint32 targetMask)\n>> static void\n>> TerminateChildren(int signal)\n>> {\n> \n> The comment for TerminateChildren() says \"except syslogger and dead_end\n> backends.\" - aren't you including the latter here?\n\nThe comment is adjusted in \nv4-0004-Introduce-a-separate-BackendType-for-dead-end-chi.patch. Before \nthat, SignalChildren() does ignore dead-end children.\n\nThanks for the review!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 6 Sep 2024 16:13:43 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Refactoring postmaster's code to cleanup after child exit" }, { "msg_contents": "On Fri, Sep 6, 2024 at 9:13 AM Heikki Linnakangas <[email protected]> wrote:\n> It's currently possible to have up to 2 * max_connections backends in\n> the authentication phase. We would have to change that behaviour, or\n> make the PGPROC array 2x larger.\n\nI know I already said this elsewhere, but in case it got lost in the\nshuffle, +1 for changing this, unless somebody can make a compelling\nargument why 2 * max_connections isn't WAY too many.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 6 Sep 2024 09:32:04 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Refactoring postmaster's code to cleanup after child exit" }, { "msg_contents": "Hi,\n\nOn 2024-09-06 16:13:43 +0300, Heikki Linnakangas wrote:\n> On 04/09/2024 17:35, Andres Freund wrote:\n> > On 2024-08-12 12:55:00 +0300, Heikki Linnakangas wrote:\n> > > From dc53f89edbeec99f8633def8aa5f47cd98e7a150 Mon Sep 17 00:00:00 2001\n> > > From: Heikki Linnakangas <[email protected]>\n> > > Date: Mon, 12 Aug 2024 10:59:04 +0300\n> > > Subject: [PATCH v4 4/8] Introduce a separate BackendType for dead-end children\n> > >\n> > > And replace postmaster.c's own \"backend type\" codes with BackendType\n> >\n> > Hm - it seems a bit odd to open-code this when we actually have a \"table\n> > driven configuration\" available? Why isn't the type a field in\n> > child_process_kind?\n>\n> Sorry, I didn't understand this. What exactly would you add to\n> child_process_kind? Where would you use it?\n\nI'm not entirely sure what I was thinking of. 
It might be partially triggering\na prior complaint I had about manually assigning things to MyBackendType,\ndespite actually having all the information already.\n\nOne thing that I just noticed is that this patch orphans comment references to\nBACKEND_TYPE_AUTOVAC and BACKEND_TYPE_BGWORKER.\n\nSeems a tad odd to have BACKEND_TYPE_ALL after removing everything else from\nthe BACKEND_TYPE_* \"namespace\".\n\n\nTo deal with the issue around bitmasks you had mentioned, I think we should at\nleast have a static inline function to convert B_* values to the bitmask\nindex.\n\n\n> > > +/*\n> > > + * MaxLivePostmasterChildren\n> > > + *\n> > > + * This reports the number postmaster child processes that can be active. It\n> > > + * includes all children except for dead_end children. This allows the array\n> > > + * in shared memory (PMChildFlags) to have a fixed maximum size.\n> > > + */\n> > > +int\n> > > +MaxLivePostmasterChildren(void)\n> > > +{\n> > > +\tint\t\t\tn = 0;\n> > > +\n> > > +\t/* We know exactly how mamy worker and aux processes can be active */\n> > > +\tn += autovacuum_max_workers;\n> > > +\tn += max_worker_processes;\n> > > +\tn += NUM_AUXILIARY_PROCS;\n> > > +\n> > > +\t/*\n> > > +\t * We allow more connections here than we can have backends because some\n> > > +\t * might still be authenticating; they might fail auth, or some existing\n> > > +\t * backend might exit before the auth cycle is completed. The exact\n> > > +\t * MaxBackends limit is enforced when a new backend tries to join the\n> > > +\t * shared-inval backend array.\n> > > +\t */\n> > > +\tn += 2 * (MaxConnections + max_wal_senders);\n> > > +\n> > > +\treturn n;\n> > > +}\n> >\n> > I wonder if we could instead maintain at least some of this in\n> > child_process_kinds? Manually listing different types of processes in\n> > different places doesn't seem particularly sustainable.\n>\n> Hmm, you mean adding \"max this kind of children\" field to\n> child_process_kinds? Perhaps.\n\nYep, that's what I meant.\n\n\n\n> > > +/* Return the appropriate free-list for the given backend type */\n> > > +static dlist_head *\n> > > +GetFreeList(BackendType btype)\n> > > +{\n> > > +\tswitch (btype)\n> > > +\t{\n> > > +\t\tcase B_BACKEND:\n> > > +\t\tcase B_BG_WORKER:\n> > > +\t\tcase B_WAL_SENDER:\n> > > +\t\tcase B_SLOTSYNC_WORKER:\n> > > +\t\t\treturn &freeBackendList;\n> >\n> > Maybe a daft question - but why are all of these in the same list? Sure,\n> > they're essentially all full backends, but they're covered by different GUCs?\n>\n> No reason. No particular reason they should *not* share the same list either\n> though.\n\nAren't they controlled by distinct connection limits? Isn't there a danger\nthat we could use up entries and fail connections due to that, despite not\nactually being above the limit?\n\n\n> > Tangential: Why do we need a freelist for these and why do we choose a random\n> > pgproc for these instead of assigning one statically?\n> >\n> > Background: I'd like to not provide AIO workers with \"bounce buffers\" (for IO\n> > of buffers that can't be done in-place, like writes when checksums are\n> > enabled). 
The varying proc numbers make that harder than it'd have to be...\n>\n> Yeah, we can make these fixed.\n\nCool.\n\n\n> Currently, the # of slots reserved for aux processes is sized by\n> NUM_AUXILIARY_PROCS, which is one smaller than the number of different aux\n> proces kinds:\n\n> > /*\n> > * We set aside some extra PGPROC structures for auxiliary processes,\n> > * ie things that aren't full-fledged backends but need shmem access.\n> > *\n> > * Background writer, checkpointer, WAL writer, WAL summarizer, and archiver\n> > * run during normal operation. Startup process and WAL receiver also consume\n> > * 2 slots, but WAL writer is launched only after startup has exited, so we\n> > * only need 6 slots.\n> > */\n> > #define NUM_AUXILIARY_PROCS\t\t6\n>\n> For PMChildSlot numbers, we could certainly just allocate one more slot.\n>\n> It would probably make sense for PGPROCs too, even though PGPROC is a much\n> larger struct.\n\nI don't think it's worth worrying about that much. PGPROC is large, but not\n*that* large. And the robustness win of properly detecting when there's a\nproblem around starting/stopping aux workers seems to outweigh that to me.\n\n\n\n> > > +PMChild *\n> > > +AssignPostmasterChildSlot(BackendType btype)\n> > > +{\n> > > +\tdlist_head *freelist;\n> > > +\tPMChild *pmchild;\n> > > +\n> > > +\tfreelist = GetFreeList(btype);\n> > > +\n> > > +\tif (dlist_is_empty(freelist))\n> > > +\t\treturn NULL;\n> > > +\n> > > +\tpmchild = dlist_container(PMChild, elem, dlist_pop_head_node(freelist));\n> > > +\tpmchild->pid = 0;\n> > > +\tpmchild->bkend_type = btype;\n> > > +\tpmchild->rw = NULL;\n> > > +\tpmchild->bgworker_notify = true;\n> > > +\n> > > +\t/*\n> > > +\t * pmchild->child_slot for each entry was initialized when the array of\n> > > +\t * slots was allocated.\n> > > +\t */\n> > > +\n> > > +\tdlist_push_head(&ActiveChildList, &pmchild->elem);\n> > > +\n> > > +\tReservePostmasterChildSlot(pmchild->child_slot);\n> > > +\n> > > +\t/* FIXME: find a more elegant way to pass this */\n> > > +\tMyPMChildSlot = pmchild->child_slot;\n> >\n> > What if we assigned one offset for each process and assigned its ID here and\n> > also used that for its ProcNumber - that way we wouldn't need to manage\n> > freelists in two places.\n>\n> It's currently possible to have up to 2 * max_connections backends in the\n> authentication phase. We would have to change that behaviour, or make the\n> PGPROC array 2x larger.\n\nThat however, might be too much...\n\n\n> It might well be worth it, I don't know how sensible the current behaviour\n> is. But I'd like to punt that to later patch, to keep the scope of this\n> patch set reasonable. It's pretty straightforward to do later on top of this\n> if we want to.\n\nMakes sense.\n\n\nI still think that we'd be better off to just return an error to the client in\npostmaster, rather than deal with this dead-end children mess. That was\nperhaps justified at some point, but now it seems to add way more complexity\nthan it's worth. And it's absurdly expensive to fork to return an error. Way\nmore expensive than just having postmaster send an error and close the socket.\n\n\n> > > @@ -2469,11 +2410,15 @@ process_pm_child_exit(void)\n> > > \t\t}\n> > > \t\t/* Was it the system logger? 
If so, try to start a new one */\n> > > -\t\tif (pid == SysLoggerPID)\n> > > +\t\tif (SysLoggerPMChild && pid == SysLoggerPMChild->pid)\n> > > \t\t{\n> > > -\t\t\tSysLoggerPID = 0;\n> > > \t\t\t/* for safety's sake, launch new logger *first* */\n> > > -\t\t\tSysLoggerPID = SysLogger_Start();\n> > > +\t\t\tSysLoggerPMChild->pid = SysLogger_Start();\n> > > +\t\t\tif (SysLoggerPMChild->pid == 0)\n> > > +\t\t\t{\n> > > +\t\t\t\tFreePostmasterChildSlot(SysLoggerPMChild);\n> > > +\t\t\t\tSysLoggerPMChild = NULL;\n> > > +\t\t\t}\n> > > \t\t\tif (!EXIT_STATUS_0(exitstatus))\n> > > \t\t\t\tLogChildExit(LOG, _(\"system logger process\"),\n> >\n> > Seems a bit weird to have one place with a different memory lifetime handling\n> > than other places. Why don't we just do this the same way as in other places\n> > but continue to defer the logging until after we tried to start the new\n> > logger?\n>\n> Hmm, you mean let LaunchMissingBackgroundProcesses() handle the restart?\n\nYea - which it already can do, presumably to handle the case of\nlogging_collector. It just seems odd to have code to have three places calling\nSysLogger_Start() - with some mild variations of the code.\n\nPerhaps we can at least centralize some of that?\n\n\nBut you have a point with:\n\n> I'm a little scared of changing the existing logic. We don't have a\n> mechanism for deferring logging, so we would have to invent that, or the\n> logs would just accumulate in the pipe until syslogger starts up. There's\n> some code between here and LaunchMissingBackgroundProcesses(), so might\n> postmaster get blocked between writing to the syslogger pipe, before having\n> restarted it?\n>\n> If forking the syslogger process fails, that can happen anyway, though.\n\n\n\n\n> > > \t/* Construct a process name for log message */\n> > > +\n> > > +\t/*\n> > > +\t * FIXME: use GetBackendTypeDesc here? How does the localization of that\n> > > +\t * work?\n> > > +\t */\n> > > \tif (bp->bkend_type == B_DEAD_END_BACKEND)\n> > > \t{\n> > > \t\tprocname = _(\"dead end backend\");\n> >\n> > Might be worth having a version of GetBackendTypeDesc() that returns a\n> > translated string?\n>\n> Constructing the string for background workers is a little more complicated:\n\nRandom aside: I *hate* that there's no trivial way recognie background workers\nin pg_stat_activity, because somebody made pg_stat_activity.backend_type\nreport something completely under control of extensions...\n\n\n\n\n> > > @@ -2697,9 +2643,16 @@ HandleChildCrash(int pid, int exitstatus, const char *procname)\n> > > \t{\n> > > \t\tdlist_iter\titer;\n> > > -\t\tdlist_foreach(iter, &BackendList)\n> > > +\t\tdlist_foreach(iter, &ActiveChildList)\n> > > \t\t{\n> > > -\t\t\tBackend *bp = dlist_container(Backend, elem, iter.cur);\n> > > +\t\t\tPMChild *bp = dlist_container(PMChild, elem, iter.cur);\n> > > +\n> > > +\t\t\t/* We do NOT restart the syslogger */\n> > > +\t\t\tif (bp == SysLoggerPMChild)\n> > > +\t\t\t\tcontinue;\n> >\n> > That comment seems a bit misleading - we do restart syslogger, we just don't\n> > do it here, no? I realize it's an old comment, but it still seems like it's\n> > worth fixing given that you touch all the code here...\n>\n> No, we really do not restart the syslogger.\n\nHm?\n\n\t\t/* Was it the system logger? 
If so, try to start a new one */\n\t\tif (SysLoggerPMChild && pid == SysLoggerPMChild->pid)\n\t\t{\n\t\t\t/* for safety's sake, launch new logger *first* */\n\t\t\tSysLoggerPMChild->pid = SysLogger_Start(SysLoggerPMChild->child_slot);\n\t\t\tif (SysLoggerPMChild->pid == 0)\n\t\t\t{\n\t\t\t\tFreePostmasterChildSlot(SysLoggerPMChild);\n\t\t\t\tSysLoggerPMChild = NULL;\n\t\t\t}\n\t\t\tif (!EXIT_STATUS_0(exitstatus))\n\t\t\t\tLogChildExit(LOG, _(\"system logger process\"),\n\t\t\t\t\t\t\t pid, exitstatus);\n\t\t\tcontinue;\n\t\t}\n\nWe don't do it reaction to other processes crashing, but we still restart it\nif it dies. Perhaps it's clear from context - but I had to think aobut it for\na moment.\n\n\n>\n> > > @@ -2871,29 +2786,27 @@ PostmasterStateMachine(void)\n> > > <snip>\n> > > -\t\tif (WalSummarizerPID != 0)\n> > > -\t\t\tsignal_child(WalSummarizerPID, SIGTERM);\n> > > -\t\tif (SlotSyncWorkerPID != 0)\n> > > -\t\t\tsignal_child(SlotSyncWorkerPID, SIGTERM);\n> > > +\t\ttargetMask |= (1 << B_STARTUP);\n> > > +\t\ttargetMask |= (1 << B_WAL_RECEIVER);\n> > > +\n> > > +\t\ttargetMask |= (1 << B_WAL_SUMMARIZER);\n> > > +\t\ttargetMask |= (1 << B_SLOTSYNC_WORKER);\n> > > \t\t/* checkpointer, archiver, stats, and syslogger may continue for now */\n> > > +\t\tSignalSomeChildren(SIGTERM, targetMask);\n> > > +\n> > > \t\t/* Now transition to PM_WAIT_BACKENDS state to wait for them to die */\n> > > \t\tpmState = PM_WAIT_BACKENDS;\n> > > <snip>\n> >\n> > It's likely the right thing to not do as one patch, but IMO this really wants\n> > to be a state table. Perhaps as part of child_process_kinds, perhaps separate\n> > from that.\n>\n> Yeah. I've tried to refactor this into a table before, but didn't come up\n> with anything that I was happy with. I also feel there must be a better way\n> to organize this, but not sure what exactly. I hope that will become more\n> apparent after these other changes.\n\nWhat I'm imagining is something like:\n1) Make PMState values each have a distinct bit\n2) Move PMState to some (new?) header\n3) Add a \"uint32 should_run\" member to child_process_kind that's a bitmask of\n PMStates\n4) Add a new function in launch_backend.c that gets passed the \"target\"\n PMState and returns a bitmask of the tasks that should be running (or the\n inverse, doesn't really matter).\n5) Instead of open-coding the targetMask \"computation\", use the new function\n from 4).\n\nI think that might not look too bad?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 10 Sep 2024 12:59:03 -0400", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Refactoring postmaster's code to cleanup after child exit" }, { "msg_contents": "On Tue, Sep 10, 2024 at 12:59 PM Andres Freund <[email protected]> wrote:\n> I still think that we'd be better off to just return an error to the client in\n> postmaster, rather than deal with this dead-end children mess. That was\n> perhaps justified at some point, but now it seems to add way more complexity\n> than it's worth. And it's absurdly expensive to fork to return an error. 
Way\n> more expensive than just having postmaster send an error and close the socket.\n\nThe tricky case is the one where the client write() -- or SSL_write() -- blocks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 10 Sep 2024 13:33:36 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Refactoring postmaster's code to cleanup after child exit" }, { "msg_contents": "Hi,\n\nOn 2024-08-12 12:55:00 +0300, Heikki Linnakangas wrote:\n> @@ -2864,6 +2777,8 @@ PostmasterStateMachine(void)\n> \t */\n> \tif (pmState == PM_STOP_BACKENDS)\n> \t{\n> +\t\tuint32\t\ttargetMask;\n> +\n> \t\t/*\n> \t\t * Forget any pending requests for background workers, since we're no\n> \t\t * longer willing to launch any new workers. (If additional requests\n> @@ -2871,29 +2786,27 @@ PostmasterStateMachine(void)\n> \t\t */\n> \t\tForgetUnstartedBackgroundWorkers();\n> \n> -\t\t/* Signal all backend children except walsenders and dead-end backends */\n> -\t\tSignalSomeChildren(SIGTERM,\n> -\t\t\t\t\t\t BACKEND_TYPE_ALL & ~(1 << B_WAL_SENDER | 1 << B_DEAD_END_BACKEND));\n> +\t\t/* Signal all backend children except walsenders */\n> +\t\t/* dead-end children are not signalled yet */\n> +\t\ttargetMask = (1 << B_BACKEND);\n> +\t\ttargetMask |= (1 << B_BG_WORKER);\n> +\n> \t\t/* and the autovac launcher too */\n> -\t\tif (AutoVacPID != 0)\n> -\t\t\tsignal_child(AutoVacPID, SIGTERM);\n> +\t\ttargetMask |= (1 << B_AUTOVAC_LAUNCHER);\n> \t\t/* and the bgwriter too */\n> -\t\tif (BgWriterPID != 0)\n> -\t\t\tsignal_child(BgWriterPID, SIGTERM);\n> +\t\ttargetMask |= (1 << B_BG_WRITER);\n> \t\t/* and the walwriter too */\n> -\t\tif (WalWriterPID != 0)\n> -\t\t\tsignal_child(WalWriterPID, SIGTERM);\n> +\t\ttargetMask |= (1 << B_WAL_WRITER);\n> \t\t/* If we're in recovery, also stop startup and walreceiver procs */\n> -\t\tif (StartupPID != 0)\n> -\t\t\tsignal_child(StartupPID, SIGTERM);\n> -\t\tif (WalReceiverPID != 0)\n> -\t\t\tsignal_child(WalReceiverPID, SIGTERM);\n> -\t\tif (WalSummarizerPID != 0)\n> -\t\t\tsignal_child(WalSummarizerPID, SIGTERM);\n> -\t\tif (SlotSyncWorkerPID != 0)\n> -\t\t\tsignal_child(SlotSyncWorkerPID, SIGTERM);\n> +\t\ttargetMask |= (1 << B_STARTUP);\n> +\t\ttargetMask |= (1 << B_WAL_RECEIVER);\n> +\n> +\t\ttargetMask |= (1 << B_WAL_SUMMARIZER);\n> +\t\ttargetMask |= (1 << B_SLOTSYNC_WORKER);\n> \t\t/* checkpointer, archiver, stats, and syslogger may continue for now */\n> \n> +\t\tSignalSomeChildren(SIGTERM, targetMask);\n> +\n> \t\t/* Now transition to PM_WAIT_BACKENDS state to wait for them to die */\n> \t\tpmState = PM_WAIT_BACKENDS;\n> \t}\n\nI think this might now omit shutting down at least autovac workers, which\nafaict previously were included.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 10 Sep 2024 13:53:08 -0400", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Refactoring postmaster's code to cleanup after child exit" }, { "msg_contents": "Hi,\n\nOn 2024-09-10 13:33:36 -0400, Robert Haas wrote:\n> On Tue, Sep 10, 2024 at 12:59 PM Andres Freund <[email protected]> wrote:\n> > I still think that we'd be better off to just return an error to the client in\n> > postmaster, rather than deal with this dead-end children mess. That was\n> > perhaps justified at some point, but now it seems to add way more complexity\n> > than it's worth. And it's absurdly expensive to fork to return an error. 
Way\n> > more expensive than just having postmaster send an error and close the socket.\n> \n> The tricky case is the one where the client write() -- or SSL_write() -- blocks.\n\nYea, SSL definitely does make it harder. But it's not exactly rocket science\nto do non-blocking SSL connection establishment. After all, we do manage to\ndo so in libpq...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 10 Sep 2024 18:35:35 -0400", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Refactoring postmaster's code to cleanup after child exit" } ]
[ { "msg_contents": "Hi,\n\nGit on windows defaults to core.autocrlf being enabled. Which means\nthat a normal git clone will convert all lineendings in text files.\n\nUnfortunately that causes a few tests to fail, at least:\n test_json_parser/001_test_json_parser_incremental\n test_json_parser/003_test_semantic\n pg_bsd_indent/001_pg_bsd_indent\n\nIn the case of test_json_parser the problem is that\ntest_json_parser_incremental.c assumes one can read statbuf.st_size bytes via\nfread() - but that doesn't work if the input has crlf inside. Due to the crlf\nconversion we reach EOF before we've read statbuf.st_size bytes, triggering an\nerror.\n\nI suspect the issue with pg_bsd_indent is similar.\n\n\nDo we want to support checking out with core.autocrlf? I suspect it might\njust take using binary mode in a few more places.\n\n\nIf we do not want to support that, ISTM we ought to raise an error somewhere?\nThis kind of thing is pretty time consuming to track down, at least for the\nwindows-noob writing this email.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 6 Jul 2024 22:20:30 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "tests fail on windows with default git settings" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> Do we want to support checking out with core.autocrlf?\n\n-1. It would be a constant source of breakage, and you could never\nexpect that (for example) making a tarball from such a checkout\nwould match anyone else's results.\n\n> If we do not want to support that, ISTM we ought to raise an error somewhere?\n\n+1, if we can figure out how.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 Jul 2024 01:26:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "Hi,\n\nOn 2024-07-07 01:26:13 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > Do we want to support checking out with core.autocrlf?\n>\n> -1. It would be a constant source of breakage, and you could never\n> expect that (for example) making a tarball from such a checkout\n> would match anyone else's results.\n\nWFM.\n\n\n> > If we do not want to support that, ISTM we ought to raise an error somewhere?\n>\n> +1, if we can figure out how.\n\nI can see two paths:\n\n1) we prevent eol conversion, by using the right magic incantation in\n .gitattributes\n\n2) we check that some canary file is correctly encoded, e.g. during meson\n configure (should suffice, this is realistically only a windows issue)\n\n\nIt seems that the only realistic way to achieve 1) is to remove the \"text\"\nattribute from all files. That had me worried for a bit, thinking that might\nhave a larger blast radius. However, it looks like this is solely used for\nline-ending conversion. 
The man page says:\n \"This attribute marks the path as a text file, which enables end-of-line conversion:\"\n\n\nWhich sounds like it'd work well - except that it appears to behave oddly when\nupdating to such a change in an existing repo -\n\ncd /tmp/;\nrm -rf pg-eol;\ngit -c core.eol=crlf -c core.autocrlf=true clone ~/src/postgresql pg-eol;\ncd pg-eol;\ngit config core.eol crlf; git config core.autocrlf true;\nstat src/test/modules/test_json_parser/tiny.json -> 6748 bytes\n\ncd ~/src/postgresql\nstat src/test/modules/test_json_parser/tiny.json -> 6604 bytes\necho '* -text' >> .gitattributes\ngit commit -a -m tmp\n\ncd /tmp/pg-eol\ngit pull\ngit st\n ...\n nothing to commit, working tree clean\nstat src/test/modules/test_json_parser/tiny.json -> 6748 bytes\n\nI.e. the repo still is in CRLF state.\n\nBut if I reclone at that point, the line endings are in a sane state.\n\n\nIIUC this is because line-ending conversion is done only during\ncheckout/checkin.\n\n\nThere are ways to get git to redo the normalization, but it's somewhat\nawkward [1].\n\nOTOH, given that the tests already fail, I assume our windows contributors\nalready have disabled autocrlf?\n\nGreetings,\n\nAndres Freund\n\n[1] https://docs.github.com/en/get-started/getting-started-with-git/configuring-git-to-handle-line-endings#refreshing-a-repository-after-changing-line-endings\n\n\n", "msg_date": "Sat, 6 Jul 2024 23:07:27 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "\nOn 2024-07-07 Su 1:26 AM, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n>> Do we want to support checking out with core.autocrlf?\n> -1. It would be a constant source of breakage, and you could never\n> expect that (for example) making a tarball from such a checkout\n> would match anyone else's results.\n\n\nYeah, totally agree.\n\n\n>> If we do not want to support that, ISTM we ought to raise an error somewhere?\n> +1, if we can figure out how.\n>\n> \t\t\t\n\n\n\nISTM the right fix is probably to use PG_BINARY_R mode instead of \"r\" \nwhen opening the files, at least in the case if the test_json_parser tests.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 7 Jul 2024 06:30:57 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "Hi\n\nOn Sun, 7 Jul 2024 at 07:07, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2024-07-07 01:26:13 -0400, Tom Lane wrote:\n> > Andres Freund <[email protected]> writes:\n> > > Do we want to support checking out with core.autocrlf?\n> >\n> > -1. It would be a constant source of breakage, and you could never\n> > expect that (for example) making a tarball from such a checkout\n> > would match anyone else's results.\n>\n> WFM.\n>\n>\n> > > If we do not want to support that, ISTM we ought to raise an error\n> somewhere?\n> >\n> > +1, if we can figure out how.\n>\n> I can see two paths:\n>\n> 1) we prevent eol conversion, by using the right magic incantation in\n> .gitattributes\n>\n> 2) we check that some canary file is correctly encoded, e.g. during meson\n> configure (should suffice, this is realistically only a windows issue)\n>\n>\n> It seems that the only realistic way to achieve 1) is to remove the \"text\"\n> attribute from all files. 
That had me worried for a bit, thinking that\n> might\n> have a larger blast radius. However, it looks like this is solely used for\n> line-ending conversion. The man page says:\n> \"This attribute marks the path as a text file, which enables end-of-line\n> conversion:\"\n>\n>\n> Which sounds like it'd work well - except that it appears to behave oddly\n> when\n> updating to such a change in an existing repo -\n>\n> cd /tmp/;\n> rm -rf pg-eol;\n> git -c core.eol=crlf -c core.autocrlf=true clone ~/src/postgresql pg-eol;\n> cd pg-eol;\n> git config core.eol crlf; git config core.autocrlf true;\n> stat src/test/modules/test_json_parser/tiny.json -> 6748 bytes\n>\n> cd ~/src/postgresql\n> stat src/test/modules/test_json_parser/tiny.json -> 6604 bytes\n> echo '* -text' >> .gitattributes\n> git commit -a -m tmp\n>\n> cd /tmp/pg-eol\n> git pull\n> git st\n> ...\n> nothing to commit, working tree clean\n> stat src/test/modules/test_json_parser/tiny.json -> 6748 bytes\n>\n> I.e. the repo still is in CRLF state.\n>\n> But if I reclone at that point, the line endings are in a sane state.\n>\n>\n> IIUC this is because line-ending conversion is done only during\n> checkout/checkin.\n>\n>\n> There are ways to get git to redo the normalization, but it's somewhat\n> awkward [1].\n>\n\nYeah, I vaguely remember playing with core.autocrlf many years ago and\nrunning into similar issues.\n\n\n>\n> OTOH, given that the tests already fail, I assume our windows contributors\n> already have disabled autocrlf?\n>\n\nI can't speak for others of course, but at least as far as building of\ninstallers is concerned, we use tarballs not git checkouts.\n\nFor my own work; well, I've started playing with PG17 on Windows just in\nthe last month or so and have noticed a number of test failures as well as\na weird meson issue that only shows up on a Github actions runner. I was\nhoping to look into those issues this week as I've been somewhat\nsidetracked with other work of late.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nHiOn Sun, 7 Jul 2024 at 07:07, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2024-07-07 01:26:13 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > Do we want to support checking out with core.autocrlf?\n>\n> -1.  It would be a constant source of breakage, and you could never\n> expect that (for example) making a tarball from such a checkout\n> would match anyone else's results.\n\nWFM.\n\n\n> > If we do not want to support that, ISTM we ought to raise an error somewhere?\n>\n> +1, if we can figure out how.\n\nI can see two paths:\n\n1) we prevent eol conversion, by using the right magic incantation in\n   .gitattributes\n\n2) we check that some canary file is correctly encoded, e.g. during meson\n   configure (should suffice, this is realistically only a windows issue)\n\n\nIt seems that the only realistic way to achieve 1) is to remove the \"text\"\nattribute from all files. That had me worried for a bit, thinking that might\nhave a larger blast radius. However, it looks like this is solely used for\nline-ending conversion. 
The man page says:\n  \"This attribute marks the path as a text file, which enables end-of-line conversion:\"\n\n\nWhich sounds like it'd work well - except that it appears to behave oddly when\nupdating to such a change in an existing repo -\n\ncd /tmp/;\nrm -rf pg-eol;\ngit -c core.eol=crlf -c core.autocrlf=true clone ~/src/postgresql pg-eol;\ncd pg-eol;\ngit config core.eol crlf; git config core.autocrlf true;\nstat src/test/modules/test_json_parser/tiny.json -> 6748 bytes\n\ncd ~/src/postgresql\nstat src/test/modules/test_json_parser/tiny.json -> 6604 bytes\necho '*         -text' >> .gitattributes\ngit commit -a -m tmp\n\ncd /tmp/pg-eol\ngit pull\ngit st\n  ...\n  nothing to commit, working tree clean\nstat src/test/modules/test_json_parser/tiny.json -> 6748 bytes\n\nI.e. the repo still is in CRLF state.\n\nBut if I reclone at that point, the line endings are in a sane state.\n\n\nIIUC this is because line-ending conversion is done only during\ncheckout/checkin.\n\n\nThere are ways to get git to redo the normalization, but it's somewhat\nawkward [1].Yeah, I vaguely remember playing with core.autocrlf many years ago and running into similar issues. \n\nOTOH, given that the tests already fail, I assume our windows contributors\nalready have disabled autocrlf?I can't speak for others of course, but at least as far as building of installers is concerned, we use tarballs not git checkouts.For my own work; well, I've started playing with PG17 on Windows just in the last month or so and have noticed a number of test failures as well as a weird meson issue that only shows up on a Github actions runner. I was hoping to look into those issues this week as I've been somewhat sidetracked with other work of late.-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Mon, 8 Jul 2024 11:32:23 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "On 2024-07-07 06:30:57 -0400, Andrew Dunstan wrote:\n> \n> On 2024-07-07 Su 1:26 AM, Tom Lane wrote:\n> > Andres Freund <[email protected]> writes:\n> > > Do we want to support checking out with core.autocrlf?\n> > -1. It would be a constant source of breakage, and you could never\n> > expect that (for example) making a tarball from such a checkout\n> > would match anyone else's results.\n\n> Yeah, totally agree.\n> \n> \n> > > If we do not want to support that, ISTM we ought to raise an error somewhere?\n> > +1, if we can figure out how.\n> > \n> > \t\t\t\n> \n> \n> \n> ISTM the right fix is probably to use PG_BINARY_R mode instead of \"r\" when\n> opening the files, at least in the case if the test_json_parser tests.\n\nThat does seem like it'd fix this issue, assuming the parser can cope with\n\\r\\n.\n\nI'm actually mildly surprised that the tests don't fail when *not* using\nautocrlf, because afaict test_json_parser_incremental.c doesn't set stdout to\nbinary and thus we presumably end up with \\r\\n in the output? Except that that\ncan't be true, because the test does pass on repos without autocrlf...\n\n\nThat approach does seem to mildly conflict with Tom and your preference for\nfixing this by disallowing core.autocrlf? 
If we do so, the test never ought to\nsee a crlf?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 8 Jul 2024 13:16:49 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-07-07 06:30:57 -0400, Andrew Dunstan wrote:\n>> ISTM the right fix is probably to use PG_BINARY_R mode instead of \"r\" when\n>> opening the files, at least in the case if the test_json_parser tests.\n\n> That approach does seem to mildly conflict with Tom and your preference for\n> fixing this by disallowing core.autocrlf? If we do so, the test never ought to\n> see a crlf?\n\nIs this code that will *never* be applied to user-supplied files?\nWe certainly should tolerate \\r\\n in the general case (we even\nhave written-down project policy about that!). While I wouldn't\ncomplain too hard about assuming that our own test files don't\ncontain \\r\\n, if the code might get copied into a non-test\nscenario then it could create problems later.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Jul 2024 16:35:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "On 2024-07-08 Mo 4:16 PM, Andres Freund wrote:\n> On 2024-07-07 06:30:57 -0400, Andrew Dunstan wrote:\n>> On 2024-07-07 Su 1:26 AM, Tom Lane wrote:\n>>> Andres Freund <[email protected]> writes:\n>>>> Do we want to support checking out with core.autocrlf?\n>>> -1. It would be a constant source of breakage, and you could never\n>>> expect that (for example) making a tarball from such a checkout\n>>> would match anyone else's results.\n>> Yeah, totally agree.\n>>\n>>\n>>>> If we do not want to support that, ISTM we ought to raise an error somewhere?\n>>> +1, if we can figure out how.\n>>>\n>>> \t\t\t\n>>\n>>\n>> ISTM the right fix is probably to use PG_BINARY_R mode instead of \"r\" when\n>> opening the files, at least in the case if the test_json_parser tests.\n> That does seem like it'd fix this issue, assuming the parser can cope with\n> \\r\\n.\n\n\nYes, the parser can handle \\r\\n. Note that they can only be white space \nin JSON - they can only be present in string values via escapes.\n\n\n>\n> I'm actually mildly surprised that the tests don't fail when *not* using\n> autocrlf, because afaict test_json_parser_incremental.c doesn't set stdout to\n> binary and thus we presumably end up with \\r\\n in the output? Except that that\n> can't be true, because the test does pass on repos without autocrlf...\n>\n>\n> That approach does seem to mildly conflict with Tom and your preference for\n> fixing this by disallowing core.autocrlf? If we do so, the test never ought to\n> see a crlf?\n>\n\nIDK. I normally use core.autocrlf=false core.eol=lf on Windows. The \neditors I use are reasonably well behaved ;-)\n\nWhat I suggest (see attached) is we run the diff command with \n--strip-trailing-cr on Windows. 
Then we just won't care if the expected \nfile and/or the output file has CRs.\n\nNot sure what the issue is with pg_bsd_indent, though.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 8 Jul 2024 16:56:10 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "Hi,\n\nOn 2024-07-08 16:56:10 -0400, Andrew Dunstan wrote:\n> On 2024-07-08 Mo 4:16 PM, Andres Freund wrote:\n> > I'm actually mildly surprised that the tests don't fail when *not* using\n> > autocrlf, because afaict test_json_parser_incremental.c doesn't set stdout to\n> > binary and thus we presumably end up with \\r\\n in the output? Except that that\n> > can't be true, because the test does pass on repos without autocrlf...\n> > \n> > \n> > That approach does seem to mildly conflict with Tom and your preference for\n> > fixing this by disallowing core.autocrlf? If we do so, the test never ought to\n> > see a crlf?\n> > \n> \n> IDK. I normally use core.autocrlf=false core.eol=lf on Windows. The editors\n> I use are reasonably well behaved ;-)\n\n:)\n\n\n> What I suggest (see attached) is we run the diff command with\n> --strip-trailing-cr on Windows. Then we just won't care if the expected file\n> and/or the output file has CRs.\n\nI was wondering about that too, but I wasn't sure we can rely on that flag\nbeing supported...\n\n\n> Not sure what the issue is with pg_bsd_indent, though.\n\nI think it's purely that we *read* with fopen(\"r\") and write with\nfopen(\"wb\"). Which means that any \\r\\n in the input will be converted to \\n in\nthe output. That's not a problem if the repo has been cloned without autocrlf,\nas there are no crlf in the expected files, but if autocrlf has been used, the\nexpected files don't match.\n\nIt doesn't look like it'd be trivial to make indent remember what was used in\nthe input. So I think for now the best path is to just use .gitattributes to\nexclude the expected files from crlf conversion. If we don't want to do so\nrepo wide, we can do so just for these files.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 8 Jul 2024 14:44:21 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "On 2024-07-08 Mo 5:44 PM, Andres Freund wrote:\n>> What I suggest (see attached) is we run the diff command with\n>> --strip-trailing-cr on Windows. Then we just won't care if the expected file\n>> and/or the output file has CRs.\n> I was wondering about that too, but I wasn't sure we can rely on that flag\n> being supported...\n>\n\nWell, my suggestion was to use it only on Windows. I'm using the \ndiffutils from chocolatey, which has it, as does Msys2 diff. Not sure \nwhat you have in the CI setup.\n\n\n>> Not sure what the issue is with pg_bsd_indent, though.\n> I think it's purely that we *read* with fopen(\"r\") and write with\n> fopen(\"wb\"). Which means that any \\r\\n in the input will be converted to \\n in\n> the output. That's not a problem if the repo has been cloned without autocrlf,\n> as there are no crlf in the expected files, but if autocrlf has been used, the\n> expected files don't match.\n>\n> It doesn't look like it'd be trivial to make indent remember what was used in\n> the input. So I think for now the best path is to just use .gitattributes to\n> exclude the expected files from crlf conversion. 
If we don't want to do so\n> repo wide, we can do so just for these files.\n>\n\neither that or we could use the --strip-trailing-cr gadget here too.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-07-08 Mo 5:44 PM, Andres Freund\n wrote:\n\n\n\n\nWhat I suggest (see attached) is we run the diff command with\n--strip-trailing-cr on Windows. Then we just won't care if the expected file\nand/or the output file has CRs.\n\n\n\nI was wondering about that too, but I wasn't sure we can rely on that flag\nbeing supported...\n\n\n\n\n\nWell, my suggestion was to use it only on Windows. I'm using the\n diffutils from chocolatey, which has it, as does Msys2 diff. Not\n sure what you have in the CI setup.\n\n\n\n\n\nNot sure what the issue is with pg_bsd_indent, though.\n\n\n\nI think it's purely that we *read* with fopen(\"r\") and write with\nfopen(\"wb\"). Which means that any \\r\\n in the input will be converted to \\n in\nthe output. That's not a problem if the repo has been cloned without autocrlf,\nas there are no crlf in the expected files, but if autocrlf has been used, the\nexpected files don't match.\n\nIt doesn't look like it'd be trivial to make indent remember what was used in\nthe input. So I think for now the best path is to just use .gitattributes to\nexclude the expected files from crlf conversion. If we don't want to do so\nrepo wide, we can do so just for these files.\n\n\n\n\n\neither that or we could use the --strip-trailing-cr gadget here\n too.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 9 Jul 2024 06:26:12 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "Hi\n\nOn Mon, 8 Jul 2024 at 22:44, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2024-07-08 16:56:10 -0400, Andrew Dunstan wrote:\n> > On 2024-07-08 Mo 4:16 PM, Andres Freund wrote:\n> > > I'm actually mildly surprised that the tests don't fail when *not*\n> using\n> > > autocrlf, because afaict test_json_parser_incremental.c doesn't set\n> stdout to\n> > > binary and thus we presumably end up with \\r\\n in the output? Except\n> that that\n> > > can't be true, because the test does pass on repos without autocrlf...\n> > >\n> > >\n> > > That approach does seem to mildly conflict with Tom and your\n> preference for\n> > > fixing this by disallowing core.autocrlf? If we do so, the test never\n> ought to\n> > > see a crlf?\n> > >\n> >\n> > IDK. I normally use core.autocrlf=false core.eol=lf on Windows. The\n> editors\n> > I use are reasonably well behaved ;-)\n>\n> :)\n>\n>\n> > What I suggest (see attached) is we run the diff command with\n> > --strip-trailing-cr on Windows. Then we just won't care if the expected\n> file\n> > and/or the output file has CRs.\n>\n> I was wondering about that too, but I wasn't sure we can rely on that flag\n> being supported...\n>\n\nI have 4 different diff.exe's on my ~6 week old build VM (not counting\nshims), all of which seem to support --strip-trailing-cr. 
Those builds came\nwith:\n\n- git\n- VC++\n- diffutils (installed by chocolatey)\n- vcpkg\n\nI think it's reasonable to assume it'll be supported.\n\n\n>\n>\n> > Not sure what the issue is with pg_bsd_indent, though.\n>\n\nYeah - that's odd, as that test always passes for me, with or without\nautocrlf.\n\nThe other failures I see are the following, which I'm just starting to dig\ninto:\n\n 26/298 postgresql:recovery / recovery/019_replslot_limit\n ERROR 43.05s exit status 2\n 44/298 postgresql:recovery / recovery/027_stream_regress\n ERROR 383.08s exit status 1\n 50/298 postgresql:recovery / recovery/035_standby_logical_decoding\n ERROR 138.06s exit status 25\n 68/298 postgresql:recovery / recovery/040_standby_failover_slots_sync\n ERROR 132.87s exit status 25\n170/298 postgresql:pg_dump / pg_dump/002_pg_dump\n ERROR 93.45s exit status 2\n233/298 postgresql:bloom / bloom/001_wal\n ERROR 54.47s exit status 2\n236/298 postgresql:subscription / subscription/001_rep_changes\n ERROR 46.46s exit status 2\n246/298 postgresql:subscription / subscription/010_truncate\n ERROR 47.69s exit status 2\n253/298 postgresql:subscription / subscription/013_partition\n ERROR 125.63s exit status 25\n255/298 postgresql:subscription / subscription/022_twophase_cascade\n ERROR 58.13s exit status 2\n257/298 postgresql:subscription / subscription/015_stream\n ERROR 128.32s exit status 2\n262/298 postgresql:subscription / subscription/028_row_filter\n ERROR 43.14s exit status 2\n263/298 postgresql:subscription / subscription/027_nosuperuser\n ERROR 102.02s exit status 2\n269/298 postgresql:subscription / subscription/031_column_list\n ERROR 123.16s exit status 2\n271/298 postgresql:subscription / subscription/032_subscribe_use_index\n ERROR 139.33s exit status 2\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nHiOn Mon, 8 Jul 2024 at 22:44, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2024-07-08 16:56:10 -0400, Andrew Dunstan wrote:\n> On 2024-07-08 Mo 4:16 PM, Andres Freund wrote:\n> > I'm actually mildly surprised that the tests don't fail when *not* using\n> > autocrlf, because afaict test_json_parser_incremental.c doesn't set stdout to\n> > binary and thus we presumably end up with \\r\\n in the output? Except that that\n> > can't be true, because the test does pass on repos without autocrlf...\n> > \n> > \n> > That approach does seem to mildly conflict with Tom and your preference for\n> > fixing this by disallowing core.autocrlf? If we do so, the test never ought to\n> > see a crlf?\n> > \n> \n> IDK. I normally use core.autocrlf=false core.eol=lf on Windows. The editors\n> I use are reasonably well behaved ;-)\n\n:)\n\n\n> What I suggest (see attached) is we run the diff command with\n> --strip-trailing-cr on Windows. Then we just won't care if the expected file\n> and/or the output file has CRs.\n\nI was wondering about that too, but I wasn't sure we can rely on that flag\nbeing supported...I have 4 different diff.exe's on my ~6 week old build VM (not counting shims), all of which seem to support --strip-trailing-cr. Those builds came with:- git- VC++- diffutils (installed by chocolatey)- vcpkgI think it's reasonable to assume it'll be supported. \n\n\n> Not sure what the issue is with pg_bsd_indent, though.Yeah - that's odd, as that test always passes for me, with or without autocrlf. 
The other failures I see are the following, which I'm just starting to dig into: 26/298 postgresql:recovery / recovery/019_replslot_limit                               ERROR            43.05s   exit status 2 44/298 postgresql:recovery / recovery/027_stream_regress                               ERROR           383.08s   exit status 1 50/298 postgresql:recovery / recovery/035_standby_logical_decoding                     ERROR           138.06s   exit status 25 68/298 postgresql:recovery / recovery/040_standby_failover_slots_sync                  ERROR           132.87s   exit status 25170/298 postgresql:pg_dump / pg_dump/002_pg_dump                                        ERROR            93.45s   exit status 2233/298 postgresql:bloom / bloom/001_wal                                                ERROR            54.47s   exit status 2236/298 postgresql:subscription / subscription/001_rep_changes                          ERROR            46.46s   exit status 2246/298 postgresql:subscription / subscription/010_truncate                             ERROR            47.69s   exit status 2253/298 postgresql:subscription / subscription/013_partition                            ERROR           125.63s   exit status 25255/298 postgresql:subscription / subscription/022_twophase_cascade                     ERROR            58.13s   exit status 2257/298 postgresql:subscription / subscription/015_stream                               ERROR           128.32s   exit status 2262/298 postgresql:subscription / subscription/028_row_filter                           ERROR            43.14s   exit status 2263/298 postgresql:subscription / subscription/027_nosuperuser                          ERROR           102.02s   exit status 2269/298 postgresql:subscription / subscription/031_column_list                          ERROR           123.16s   exit status 2271/298 postgresql:subscription / subscription/032_subscribe_use_index                  ERROR           139.33s   exit status 2-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Tue, 9 Jul 2024 14:52:39 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "On 2024-07-09 Tu 9:52 AM, Dave Page wrote:\n>\n>\n> > What I suggest (see attached) is we run the diff command with\n> > --strip-trailing-cr on Windows. Then we just won't care if the\n> expected file\n> > and/or the output file has CRs.\n>\n> I was wondering about that too, but I wasn't sure we can rely on\n> that flag\n> being supported...\n>\n>\n> I have 4 different diff.exe's on my ~6 week old build VM (not counting \n> shims), all of which seem to support --strip-trailing-cr. Those builds \n> came with:\n>\n> - git\n> - VC++\n> - diffutils (installed by chocolatey)\n> - vcpkg\n>\n> I think it's reasonable to assume it'll be supported.\n>\n\nOk, cool. So I propose to patch the test_json_parser and pg_bsd_indent \ntests to use it on Windows, later today unless there's some objection.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-07-09 Tu 9:52 AM, Dave Page\n wrote:\n\n\n\n\n\n\n\n > What I suggest (see attached) is we run the diff\n command with\n > --strip-trailing-cr on Windows. 
Then we just won't care\n if the expected file\n > and/or the output file has CRs.\n\n I was wondering about that too, but I wasn't sure we can\n rely on that flag\n being supported...\n\n\n\nI have 4 different diff.exe's on my ~6 week old build VM\n (not counting shims), all of which seem to\n support --strip-trailing-cr. Those builds came with:\n\n\n- git\n- VC++\n- diffutils (installed by chocolatey)\n- vcpkg\n\n\nI think it's reasonable to assume it'll be supported.\n \n\n\n\n\n\n\nOk, cool. So I propose to patch the test_json_parser and\n pg_bsd_indent tests to use it on Windows, later today unless\n there's some objection.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 9 Jul 2024 11:34:24 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "Hi,\n\nOn 2024-07-09 14:52:39 +0100, Dave Page wrote:\n> I have 4 different diff.exe's on my ~6 week old build VM (not counting\n> shims), all of which seem to support --strip-trailing-cr. Those builds came\n> with:\n>\n> - git\n> - VC++\n> - diffutils (installed by chocolatey)\n> - vcpkg\n>\n> I think it's reasonable to assume it'll be supported.\n\nI think the more likely issue would be an older setup with an older diff,\npeople on windows seem to not want to touch a working setup ever :). But we\ncan deal with that if reports about it come in.\n\n\n> > > Not sure what the issue is with pg_bsd_indent, though.\n> >\n>\n> Yeah - that's odd, as that test always passes for me, with or without\n> autocrlf.\n\nHuh.\n\n\n> The other failures I see are the following, which I'm just starting to dig\n> into:\n>\n> 26/298 postgresql:recovery / recovery/019_replslot_limit\n> ERROR 43.05s exit status 2\n> 44/298 postgresql:recovery / recovery/027_stream_regress\n> ERROR 383.08s exit status 1\n> 50/298 postgresql:recovery / recovery/035_standby_logical_decoding\n> ERROR 138.06s exit status 25\n> 68/298 postgresql:recovery / recovery/040_standby_failover_slots_sync\n> ERROR 132.87s exit status 25\n> 170/298 postgresql:pg_dump / pg_dump/002_pg_dump\n> ERROR 93.45s exit status 2\n> 233/298 postgresql:bloom / bloom/001_wal\n> ERROR 54.47s exit status 2\n> 236/298 postgresql:subscription / subscription/001_rep_changes\n> ERROR 46.46s exit status 2\n> 246/298 postgresql:subscription / subscription/010_truncate\n> ERROR 47.69s exit status 2\n> 253/298 postgresql:subscription / subscription/013_partition\n> ERROR 125.63s exit status 25\n> 255/298 postgresql:subscription / subscription/022_twophase_cascade\n> ERROR 58.13s exit status 2\n> 257/298 postgresql:subscription / subscription/015_stream\n> ERROR 128.32s exit status 2\n> 262/298 postgresql:subscription / subscription/028_row_filter\n> ERROR 43.14s exit status 2\n> 263/298 postgresql:subscription / subscription/027_nosuperuser\n> ERROR 102.02s exit status 2\n> 269/298 postgresql:subscription / subscription/031_column_list\n> ERROR 123.16s exit status 2\n> 271/298 postgresql:subscription / subscription/032_subscribe_use_index\n> ERROR 139.33s exit status 2\n\nHm, it'd be good to see some of errors behind that ([1]).\n\nI suspect it might be related to conflicting ports. 
I had to use\nPG_TEST_USE_UNIX_SOCKETS to avoid random tests from failing:\n\n # use unix socket to prevent port conflicts\n $env:PG_TEST_USE_UNIX_SOCKETS = 1;\n # otherwise pg_regress insists on creating the directory and does it\n # in a non-existing place, this needs to be fixed :(\n mkdir d:/sockets\n $env:PG_REGRESS_SOCK_DIR = \"d:/sockets/\"\n\n\nFWIW, building a tree with the patches I sent to the list last night and\nchanges to make postgresql-dev.yml use a git checkout, I get:\n\nhttps://github.com/anarazel/winpgbuild/actions/runs/9852370209/job/27200784987#step:12:469\n\nOk: 281\nExpected Fail: 0\nFail: 0\nUnexpected Pass: 0\nSkipped: 17\nTimeout: 0\n\nThis is without readline and pltcl, as neither is currently built as part of\nwinpgbuild. Otherwise it has all applicable dependencies enabled (no bonjour,\nbsd_auth, dtrace, llvm, pam, selinux, systemd, but that's afaict expected).\n\nGreetings,\n\nAndres Freund\n\n\n[1] I plan to submit a PR that'll collect the necessary information\n\n\n", "msg_date": "Tue, 9 Jul 2024 09:32:55 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "Hi,\n\nOn 2024-07-09 06:26:12 -0400, Andrew Dunstan wrote:\n> On 2024-07-08 Mo 5:44 PM, Andres Freund wrote:\n> > > What I suggest (see attached) is we run the diff command with\n> > > --strip-trailing-cr on Windows. Then we just won't care if the expected file\n> > > and/or the output file has CRs.\n> > I was wondering about that too, but I wasn't sure we can rely on that flag\n> > being supported...\n> > \n> \n> Well, my suggestion was to use it only on Windows. I'm using the diffutils\n> from chocolatey, which has it, as does Msys2 diff. Not sure what you have in\n> the CI setup.\n\nIIRC it's git's, which in turn is based on msys/mingw.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 9 Jul 2024 09:35:07 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "On Tue, 9 Jul 2024 at 17:32, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2024-07-09 14:52:39 +0100, Dave Page wrote:\n> > I have 4 different diff.exe's on my ~6 week old build VM (not counting\n> > shims), all of which seem to support --strip-trailing-cr. Those builds\n> came\n> > with:\n> >\n> > - git\n> > - VC++\n> > - diffutils (installed by chocolatey)\n> > - vcpkg\n> >\n> > I think it's reasonable to assume it'll be supported.\n>\n> I think the more likely issue would be an older setup with an older diff,\n> people on windows seem to not want to touch a working setup ever :). But we\n> can deal with that if reports about it come in.\n>\n\nThey've got to move to meson/ninja anyway, so... 
<shrug>.\n\n\n>\n>\n> > > > Not sure what the issue is with pg_bsd_indent, though.\n> > >\n> >\n> > Yeah - that's odd, as that test always passes for me, with or without\n> > autocrlf.\n>\n> Huh.\n>\n>\n> > The other failures I see are the following, which I'm just starting to\n> dig\n> > into:\n> >\n> > 26/298 postgresql:recovery / recovery/019_replslot_limit\n> > ERROR 43.05s exit status 2\n> > 44/298 postgresql:recovery / recovery/027_stream_regress\n> > ERROR 383.08s exit status 1\n> > 50/298 postgresql:recovery / recovery/035_standby_logical_decoding\n> > ERROR 138.06s exit status 25\n> > 68/298 postgresql:recovery / recovery/040_standby_failover_slots_sync\n> > ERROR 132.87s exit status 25\n> > 170/298 postgresql:pg_dump / pg_dump/002_pg_dump\n> > ERROR 93.45s exit status 2\n> > 233/298 postgresql:bloom / bloom/001_wal\n> > ERROR 54.47s exit status 2\n> > 236/298 postgresql:subscription / subscription/001_rep_changes\n> > ERROR 46.46s exit status 2\n> > 246/298 postgresql:subscription / subscription/010_truncate\n> > ERROR 47.69s exit status 2\n> > 253/298 postgresql:subscription / subscription/013_partition\n> > ERROR 125.63s exit status 25\n> > 255/298 postgresql:subscription / subscription/022_twophase_cascade\n> > ERROR 58.13s exit status 2\n> > 257/298 postgresql:subscription / subscription/015_stream\n> > ERROR 128.32s exit status 2\n> > 262/298 postgresql:subscription / subscription/028_row_filter\n> > ERROR 43.14s exit status 2\n> > 263/298 postgresql:subscription / subscription/027_nosuperuser\n> > ERROR 102.02s exit status 2\n> > 269/298 postgresql:subscription / subscription/031_column_list\n> > ERROR 123.16s exit status 2\n> > 271/298 postgresql:subscription / subscription/032_subscribe_use_index\n> > ERROR 139.33s exit status 2\n>\n> Hm, it'd be good to see some of errors behind that ([1]).\n>\n> I suspect it might be related to conflicting ports. I had to use\n> PG_TEST_USE_UNIX_SOCKETS to avoid random tests from failing:\n>\n> # use unix socket to prevent port conflicts\n> $env:PG_TEST_USE_UNIX_SOCKETS = 1;\n> # otherwise pg_regress insists on creating the directory and\n> does it\n> # in a non-existing place, this needs to be fixed :(\n> mkdir d:/sockets\n> $env:PG_REGRESS_SOCK_DIR = \"d:/sockets/\"\n>\n\nNo, it all seems to be fallout from GSSAPI being included in the build. If\nI build without that, everything passes. Most of the tests are failing with\na \"too many clients already\" error, but a handful do seem to include auth\nrelated errors as well. For example, this is from\n\n\n\n\n>\n>\n> FWIW, building a tree with the patches I sent to the list last night and\n> changes to make postgresql-dev.yml use a git checkout, I get:\n>\n>\n> https://github.com/anarazel/winpgbuild/actions/runs/9852370209/job/27200784987#step:12:469\n>\n> Ok: 281\n> Expected Fail: 0\n> Fail: 0\n> Unexpected Pass: 0\n> Skipped: 17\n> Timeout: 0\n>\n> This is without readline and pltcl, as neither is currently built as part\n> of\n> winpgbuild. 
Otherwise it has all applicable dependencies enabled (no\n> bonjour,\n> bsd_auth, dtrace, llvm, pam, selinux, systemd, but that's afaict expected).\n>\n> Greetings,\n>\n> Andres Freund\n>\n>\n> [1] I plan to submit a PR that'll collect the necessary information\n>\n\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nOn Tue, 9 Jul 2024 at 17:32, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2024-07-09 14:52:39 +0100, Dave Page wrote:\n> I have 4 different diff.exe's on my ~6 week old build VM (not counting\n> shims), all of which seem to support --strip-trailing-cr. Those builds came\n> with:\n>\n> - git\n> - VC++\n> - diffutils (installed by chocolatey)\n> - vcpkg\n>\n> I think it's reasonable to assume it'll be supported.\n\nI think the more likely issue would be an older setup with an older diff,\npeople on windows seem to not want to touch a working setup ever :). But we\ncan deal with that if reports about it come in.They've got to move to meson/ninja anyway, so... <shrug>. \n\n\n> > > Not sure what the issue is with pg_bsd_indent, though.\n> >\n>\n> Yeah - that's odd, as that test always passes for me, with or without\n> autocrlf.\n\nHuh.\n\n\n> The other failures I see are the following, which I'm just starting to dig\n> into:\n>\n>  26/298 postgresql:recovery / recovery/019_replslot_limit\n>             ERROR            43.05s   exit status 2\n>  44/298 postgresql:recovery / recovery/027_stream_regress\n>             ERROR           383.08s   exit status 1\n>  50/298 postgresql:recovery / recovery/035_standby_logical_decoding\n>             ERROR           138.06s   exit status 25\n>  68/298 postgresql:recovery / recovery/040_standby_failover_slots_sync\n>              ERROR           132.87s   exit status 25\n> 170/298 postgresql:pg_dump / pg_dump/002_pg_dump\n>              ERROR            93.45s   exit status 2\n> 233/298 postgresql:bloom / bloom/001_wal\n>              ERROR            54.47s   exit status 2\n> 236/298 postgresql:subscription / subscription/001_rep_changes\n>              ERROR            46.46s   exit status 2\n> 246/298 postgresql:subscription / subscription/010_truncate\n>             ERROR            47.69s   exit status 2\n> 253/298 postgresql:subscription / subscription/013_partition\n>              ERROR           125.63s   exit status 25\n> 255/298 postgresql:subscription / subscription/022_twophase_cascade\n>             ERROR            58.13s   exit status 2\n> 257/298 postgresql:subscription / subscription/015_stream\n>             ERROR           128.32s   exit status 2\n> 262/298 postgresql:subscription / subscription/028_row_filter\n>             ERROR            43.14s   exit status 2\n> 263/298 postgresql:subscription / subscription/027_nosuperuser\n>              ERROR           102.02s   exit status 2\n> 269/298 postgresql:subscription / subscription/031_column_list\n>              ERROR           123.16s   exit status 2\n> 271/298 postgresql:subscription / subscription/032_subscribe_use_index\n>              ERROR           139.33s   exit status 2\n\nHm, it'd be good to see some of errors behind that ([1]).\n\nI suspect it might be related to conflicting ports. 
I had to use\nPG_TEST_USE_UNIX_SOCKETS to avoid random tests from failing:\n\n          # use unix socket to prevent port conflicts\n          $env:PG_TEST_USE_UNIX_SOCKETS = 1;\n          # otherwise pg_regress insists on creating the directory and does it\n          # in a non-existing place, this needs to be fixed :(\n          mkdir d:/sockets\n          $env:PG_REGRESS_SOCK_DIR = \"d:/sockets/\"No, it all seems to be fallout from GSSAPI being included in the build. If I build without that, everything passes. Most of the tests are failing with a \"too many clients already\" error, but a handful do seem to include auth related errors as well. For example, this is from  \n\n\nFWIW, building a tree with the patches I sent to the list last night and\nchanges to make postgresql-dev.yml use a git checkout, I get:\n\nhttps://github.com/anarazel/winpgbuild/actions/runs/9852370209/job/27200784987#step:12:469\n\nOk:                 281\nExpected Fail:      0\nFail:               0\nUnexpected Pass:    0\nSkipped:            17\nTimeout:            0\n\nThis is without readline and pltcl, as neither is currently built as part of\nwinpgbuild. Otherwise it has all applicable dependencies enabled (no bonjour,\nbsd_auth, dtrace, llvm, pam, selinux, systemd, but that's afaict expected).\n\nGreetings,\n\nAndres Freund\n\n\n[1] I plan to submit a PR that'll collect the necessary information\n-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Wed, 10 Jul 2024 10:30:37 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "Sorry - somehow managed to send whilst pasting in logs...\n\nOn Wed, 10 Jul 2024 at 10:30, Dave Page <[email protected]> wrote:\n\n>\n>\n> On Tue, 9 Jul 2024 at 17:32, Andres Freund <[email protected]> wrote:\n>\n>> Hi,\n>>\n>> On 2024-07-09 14:52:39 +0100, Dave Page wrote:\n>> > I have 4 different diff.exe's on my ~6 week old build VM (not counting\n>> > shims), all of which seem to support --strip-trailing-cr. Those builds\n>> came\n>> > with:\n>> >\n>> > - git\n>> > - VC++\n>> > - diffutils (installed by chocolatey)\n>> > - vcpkg\n>> >\n>> > I think it's reasonable to assume it'll be supported.\n>>\n>> I think the more likely issue would be an older setup with an older diff,\n>> people on windows seem to not want to touch a working setup ever :). But\n>> we\n>> can deal with that if reports about it come in.\n>>\n>\n> They've got to move to meson/ninja anyway, so... 
<shrug>.\n>\n>\n>>\n>>\n>> > > > Not sure what the issue is with pg_bsd_indent, though.\n>> > >\n>> >\n>> > Yeah - that's odd, as that test always passes for me, with or without\n>> > autocrlf.\n>>\n>> Huh.\n>>\n>>\n>> > The other failures I see are the following, which I'm just starting to\n>> dig\n>> > into:\n>> >\n>> > 26/298 postgresql:recovery / recovery/019_replslot_limit\n>> > ERROR 43.05s exit status 2\n>> > 44/298 postgresql:recovery / recovery/027_stream_regress\n>> > ERROR 383.08s exit status 1\n>> > 50/298 postgresql:recovery / recovery/035_standby_logical_decoding\n>> > ERROR 138.06s exit status 25\n>> > 68/298 postgresql:recovery / recovery/040_standby_failover_slots_sync\n>> > ERROR 132.87s exit status 25\n>> > 170/298 postgresql:pg_dump / pg_dump/002_pg_dump\n>> > ERROR 93.45s exit status 2\n>> > 233/298 postgresql:bloom / bloom/001_wal\n>> > ERROR 54.47s exit status 2\n>> > 236/298 postgresql:subscription / subscription/001_rep_changes\n>> > ERROR 46.46s exit status 2\n>> > 246/298 postgresql:subscription / subscription/010_truncate\n>> > ERROR 47.69s exit status 2\n>> > 253/298 postgresql:subscription / subscription/013_partition\n>> > ERROR 125.63s exit status 25\n>> > 255/298 postgresql:subscription / subscription/022_twophase_cascade\n>> > ERROR 58.13s exit status 2\n>> > 257/298 postgresql:subscription / subscription/015_stream\n>> > ERROR 128.32s exit status 2\n>> > 262/298 postgresql:subscription / subscription/028_row_filter\n>> > ERROR 43.14s exit status 2\n>> > 263/298 postgresql:subscription / subscription/027_nosuperuser\n>> > ERROR 102.02s exit status 2\n>> > 269/298 postgresql:subscription / subscription/031_column_list\n>> > ERROR 123.16s exit status 2\n>> > 271/298 postgresql:subscription / subscription/032_subscribe_use_index\n>> > ERROR 139.33s exit status 2\n>>\n>> Hm, it'd be good to see some of errors behind that ([1]).\n>>\n>> I suspect it might be related to conflicting ports. I had to use\n>> PG_TEST_USE_UNIX_SOCKETS to avoid random tests from failing:\n>>\n>> # use unix socket to prevent port conflicts\n>> $env:PG_TEST_USE_UNIX_SOCKETS = 1;\n>> # otherwise pg_regress insists on creating the directory and\n>> does it\n>> # in a non-existing place, this needs to be fixed :(\n>> mkdir d:/sockets\n>> $env:PG_REGRESS_SOCK_DIR = \"d:/sockets/\"\n>>\n>\n> No, it all seems to be fallout from GSSAPI being included in the build. If\n> I build without that, everything passes. Most of the tests are failing with\n> a \"too many clients already\" error, but a handful do seem to include GSSAPI\n> auth related errors as well. For example, this is from\n>\n\n\n... 
this is from subscription/001_rep_changes:\n\n[14:46:57.723](2.318s) ok 11 - check rows on subscriber after table drop\nfrom publication\nconnection error: 'psql: error: connection to server at \"127.0.0.1\", port\n58059 failed: could not initiate GSSAPI security context: No credentials\nwere supplied, or the credentials were unavailable or inaccessible:\nCredential cache is empty\nconnection to server at \"127.0.0.1\", port 58059 failed: FATAL: sorry, too\nmany clients already'\nwhile running 'psql -XAtq -d port=58059 host=127.0.0.1 dbname='postgres' -f\n- -v ON_ERROR_STOP=1' at\nC:/Users/dpage/git/postgresql/src/test/perl/PostgreSQL/Test/Cluster.pm line\n2129.\n# Postmaster PID for node \"publisher\" is 14488\n### Stopping node \"publisher\" using mode immediate\n# Running: pg_ctl -D\nC:\\Users\\dpage\\git\\postgresql\\build/testrun/subscription/001_rep_changes\\data/t_001_rep_changes_publisher_data/pgdata\n-m immediate stop\nwaiting for server to shut down.... done\nserver stopped\n# No postmaster PID for node \"publisher\"\n# Postmaster PID for node \"subscriber\" is 15012\n### Stopping node \"subscriber\" using mode immediate\n# Running: pg_ctl -D\nC:\\Users\\dpage\\git\\postgresql\\build/testrun/subscription/001_rep_changes\\data/t_001_rep_changes_subscriber_data/pgdata\n-m immediate stop\nwaiting for server to shut down.... done\nserver stopped\n# No postmaster PID for node \"subscriber\"\n[14:46:59.068](1.346s) # Tests were run but no plan was declared and\ndone_testing() was not seen.\n[14:46:59.069](0.000s) # Looks like your test exited with 2 just after 11.\n\n\n>\n>\n>\n>\n>>\n>>\n>> FWIW, building a tree with the patches I sent to the list last night and\n>> changes to make postgresql-dev.yml use a git checkout, I get:\n>>\n>>\n>> https://github.com/anarazel/winpgbuild/actions/runs/9852370209/job/27200784987#step:12:469\n>>\n>> Ok: 281\n>> Expected Fail: 0\n>> Fail: 0\n>> Unexpected Pass: 0\n>> Skipped: 17\n>> Timeout: 0\n>>\n>> This is without readline and pltcl, as neither is currently built as part\n>> of\n>> winpgbuild. Otherwise it has all applicable dependencies enabled (no\n>> bonjour,\n>> bsd_auth, dtrace, llvm, pam, selinux, systemd, but that's afaict\n>> expected).\n>>\n>> Greetings,\n>>\n>> Andres Freund\n>>\n>>\n>> [1] I plan to submit a PR that'll collect the necessary information\n>>\n>\n>\n> --\n> Dave Page\n> pgAdmin: https://www.pgadmin.org\n> PostgreSQL: https://www.postgresql.org\n> EDB: https://www.enterprisedb.com\n>\n>\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nSorry - somehow managed to send whilst pasting in logs...On Wed, 10 Jul 2024 at 10:30, Dave Page <[email protected]> wrote:On Tue, 9 Jul 2024 at 17:32, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2024-07-09 14:52:39 +0100, Dave Page wrote:\n> I have 4 different diff.exe's on my ~6 week old build VM (not counting\n> shims), all of which seem to support --strip-trailing-cr. Those builds came\n> with:\n>\n> - git\n> - VC++\n> - diffutils (installed by chocolatey)\n> - vcpkg\n>\n> I think it's reasonable to assume it'll be supported.\n\nI think the more likely issue would be an older setup with an older diff,\npeople on windows seem to not want to touch a working setup ever :). But we\ncan deal with that if reports about it come in.They've got to move to meson/ninja anyway, so... <shrug>. 
\n\n\n> > > Not sure what the issue is with pg_bsd_indent, though.\n> >\n>\n> Yeah - that's odd, as that test always passes for me, with or without\n> autocrlf.\n\nHuh.\n\n\n> The other failures I see are the following, which I'm just starting to dig\n> into:\n>\n>  26/298 postgresql:recovery / recovery/019_replslot_limit\n>             ERROR            43.05s   exit status 2\n>  44/298 postgresql:recovery / recovery/027_stream_regress\n>             ERROR           383.08s   exit status 1\n>  50/298 postgresql:recovery / recovery/035_standby_logical_decoding\n>             ERROR           138.06s   exit status 25\n>  68/298 postgresql:recovery / recovery/040_standby_failover_slots_sync\n>              ERROR           132.87s   exit status 25\n> 170/298 postgresql:pg_dump / pg_dump/002_pg_dump\n>              ERROR            93.45s   exit status 2\n> 233/298 postgresql:bloom / bloom/001_wal\n>              ERROR            54.47s   exit status 2\n> 236/298 postgresql:subscription / subscription/001_rep_changes\n>              ERROR            46.46s   exit status 2\n> 246/298 postgresql:subscription / subscription/010_truncate\n>             ERROR            47.69s   exit status 2\n> 253/298 postgresql:subscription / subscription/013_partition\n>              ERROR           125.63s   exit status 25\n> 255/298 postgresql:subscription / subscription/022_twophase_cascade\n>             ERROR            58.13s   exit status 2\n> 257/298 postgresql:subscription / subscription/015_stream\n>             ERROR           128.32s   exit status 2\n> 262/298 postgresql:subscription / subscription/028_row_filter\n>             ERROR            43.14s   exit status 2\n> 263/298 postgresql:subscription / subscription/027_nosuperuser\n>              ERROR           102.02s   exit status 2\n> 269/298 postgresql:subscription / subscription/031_column_list\n>              ERROR           123.16s   exit status 2\n> 271/298 postgresql:subscription / subscription/032_subscribe_use_index\n>              ERROR           139.33s   exit status 2\n\nHm, it'd be good to see some of errors behind that ([1]).\n\nI suspect it might be related to conflicting ports. I had to use\nPG_TEST_USE_UNIX_SOCKETS to avoid random tests from failing:\n\n          # use unix socket to prevent port conflicts\n          $env:PG_TEST_USE_UNIX_SOCKETS = 1;\n          # otherwise pg_regress insists on creating the directory and does it\n          # in a non-existing place, this needs to be fixed :(\n          mkdir d:/sockets\n          $env:PG_REGRESS_SOCK_DIR = \"d:/sockets/\"No, it all seems to be fallout from GSSAPI being included in the build. If I build without that, everything passes. Most of the tests are failing with a \"too many clients already\" error, but a handful do seem to include GSSAPI auth related errors as well. For example, this is from ... 
this is from subscription/001_rep_changes: [14:46:57.723](2.318s) ok 11 - check rows on subscriber after table drop from publicationconnection error: 'psql: error: connection to server at \"127.0.0.1\", port 58059 failed: could not initiate GSSAPI security context: No credentials were supplied, or the credentials were unavailable or inaccessible: Credential cache is emptyconnection to server at \"127.0.0.1\", port 58059 failed: FATAL:  sorry, too many clients already'while running 'psql -XAtq -d port=58059 host=127.0.0.1 dbname='postgres' -f - -v ON_ERROR_STOP=1' at C:/Users/dpage/git/postgresql/src/test/perl/PostgreSQL/Test/Cluster.pm line 2129.# Postmaster PID for node \"publisher\" is 14488### Stopping node \"publisher\" using mode immediate# Running: pg_ctl -D C:\\Users\\dpage\\git\\postgresql\\build/testrun/subscription/001_rep_changes\\data/t_001_rep_changes_publisher_data/pgdata -m immediate stopwaiting for server to shut down.... doneserver stopped# No postmaster PID for node \"publisher\"# Postmaster PID for node \"subscriber\" is 15012### Stopping node \"subscriber\" using mode immediate# Running: pg_ctl -D C:\\Users\\dpage\\git\\postgresql\\build/testrun/subscription/001_rep_changes\\data/t_001_rep_changes_subscriber_data/pgdata -m immediate stopwaiting for server to shut down.... doneserver stopped# No postmaster PID for node \"subscriber\"[14:46:59.068](1.346s) # Tests were run but no plan was declared and done_testing() was not seen.[14:46:59.069](0.000s) # Looks like your test exited with 2 just after 11.  \n\n\nFWIW, building a tree with the patches I sent to the list last night and\nchanges to make postgresql-dev.yml use a git checkout, I get:\n\nhttps://github.com/anarazel/winpgbuild/actions/runs/9852370209/job/27200784987#step:12:469\n\nOk:                 281\nExpected Fail:      0\nFail:               0\nUnexpected Pass:    0\nSkipped:            17\nTimeout:            0\n\nThis is without readline and pltcl, as neither is currently built as part of\nwinpgbuild. Otherwise it has all applicable dependencies enabled (no bonjour,\nbsd_auth, dtrace, llvm, pam, selinux, systemd, but that's afaict expected).\n\nGreetings,\n\nAndres Freund\n\n\n[1] I plan to submit a PR that'll collect the necessary information\n-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com\n-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Wed, 10 Jul 2024 10:35:12 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "On 2024-07-09 Tu 11:34 AM, Andrew Dunstan wrote:\n>\n>\n> On 2024-07-09 Tu 9:52 AM, Dave Page wrote:\n>>\n>>\n>> > What I suggest (see attached) is we run the diff command with\n>> > --strip-trailing-cr on Windows. Then we just won't care if the\n>> expected file\n>> > and/or the output file has CRs.\n>>\n>> I was wondering about that too, but I wasn't sure we can rely on\n>> that flag\n>> being supported...\n>>\n>>\n>> I have 4 different diff.exe's on my ~6 week old build VM (not \n>> counting shims), all of which seem to support --strip-trailing-cr. \n>> Those builds came with:\n>>\n>> - git\n>> - VC++\n>> - diffutils (installed by chocolatey)\n>> - vcpkg\n>>\n>> I think it's reasonable to assume it'll be supported.\n>>\n>\n> Ok, cool. 
So I propose to patch the test_json_parser and pg_bsd_indent \n> tests to use it on Windows, later today unless there's some objection.\n>\n>\n>\n\nAs I was looking at this I wondered if there might be anywhere else that \nneeded adjustment. One thing that occurred to me was that that maybe we \nshould replace the use of \"-w\" in pg_regress.c with this rather less \ndangerous flag, so instead of ignoring any white space difference we \nwould only ignore line end differences. The use of \"-w\" apparently dates \nback to 2009.\n\n\nThoughts?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-07-09 Tu 11:34 AM, Andrew\n Dunstan wrote:\n\n\n\n\n\nOn 2024-07-09 Tu 9:52 AM, Dave Page\n wrote:\n\n\n\n\n\n\n\n > What I suggest (see attached) is we run the diff\n command with\n > --strip-trailing-cr on Windows. Then we just won't\n care if the expected file\n > and/or the output file has CRs.\n\n I was wondering about that too, but I wasn't sure we can\n rely on that flag\n being supported...\n\n\n\nI have 4 different diff.exe's on my ~6 week old build\n VM (not counting shims), all of which seem to\n support --strip-trailing-cr. Those builds came with:\n\n\n- git\n- VC++\n- diffutils (installed by chocolatey)\n- vcpkg\n\n\nI think it's reasonable to assume it'll be supported.\n \n\n\n\n\n\n\nOk, cool. So I propose to patch the test_json_parser and\n pg_bsd_indent tests to use it on Windows, later today unless\n there's some objection.\n\n\n\n\n\n\nAs I was looking at this I wondered if there might be anywhere\n else that needed adjustment. One thing that occurred to me was\n that that maybe we should replace the use of \"-w\" in pg_regress.c\n with this rather less dangerous flag, so instead of ignoring any\n white space difference we would only ignore line end differences.\n The use of \"-w\" apparently dates back to 2009.\n\n\nThoughts?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Wed, 10 Jul 2024 07:12:25 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "On Wed, 10 Jul 2024 at 12:12, Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 2024-07-09 Tu 11:34 AM, Andrew Dunstan wrote:\n>\n>\n> On 2024-07-09 Tu 9:52 AM, Dave Page wrote:\n>\n>\n>\n>> > What I suggest (see attached) is we run the diff command with\n>> > --strip-trailing-cr on Windows. Then we just won't care if the expected\n>> file\n>> > and/or the output file has CRs.\n>>\n>> I was wondering about that too, but I wasn't sure we can rely on that flag\n>> being supported...\n>>\n>\n> I have 4 different diff.exe's on my ~6 week old build VM (not counting\n> shims), all of which seem to support --strip-trailing-cr. Those builds came\n> with:\n>\n> - git\n> - VC++\n> - diffutils (installed by chocolatey)\n> - vcpkg\n>\n> I think it's reasonable to assume it'll be supported.\n>\n>\n>\n> Ok, cool. So I propose to patch the test_json_parser and pg_bsd_indent\n> tests to use it on Windows, later today unless there's some objection.\n>\n>\n>\n>\n> As I was looking at this I wondered if there might be anywhere else that\n> needed adjustment. One thing that occurred to me was that that maybe we\n> should replace the use of \"-w\" in pg_regress.c with this rather less\n> dangerous flag, so instead of ignoring any white space difference we would\n> only ignore line end differences. 
The use of \"-w\" apparently dates back to\n> 2009.\n>\nThat seems like a good improvement to me.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nOn Wed, 10 Jul 2024 at 12:12, Andrew Dunstan <[email protected]> wrote:\n\n\n\nOn 2024-07-09 Tu 11:34 AM, Andrew\n Dunstan wrote:\n\n\n\n\nOn 2024-07-09 Tu 9:52 AM, Dave Page\n wrote:\n\n\n\n\n\n\n > What I suggest (see attached) is we run the diff\n command with\n > --strip-trailing-cr on Windows. Then we just won't\n care if the expected file\n > and/or the output file has CRs.\n\n I was wondering about that too, but I wasn't sure we can\n rely on that flag\n being supported...\n\n\n\nI have 4 different diff.exe's on my ~6 week old build\n VM (not counting shims), all of which seem to\n support --strip-trailing-cr. Those builds came with:\n\n\n- git\n- VC++\n- diffutils (installed by chocolatey)\n- vcpkg\n\n\nI think it's reasonable to assume it'll be supported.\n \n\n\n\n\n\n\nOk, cool. So I propose to patch the test_json_parser and\n pg_bsd_indent tests to use it on Windows, later today unless\n there's some objection.\n\n\n\n\n\n\nAs I was looking at this I wondered if there might be anywhere\n else that needed adjustment. One thing that occurred to me was\n that that maybe we should replace the use of \"-w\" in pg_regress.c\n with this rather less dangerous flag, so instead of ignoring any\n white space difference we would only ignore line end differences.\n The use of \"-w\" apparently dates back to 2009.That seems like a good improvement to me. -- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Wed, 10 Jul 2024 12:17:50 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "Dave Page <[email protected]> writes:\n> On Wed, 10 Jul 2024 at 12:12, Andrew Dunstan <[email protected]> wrote:\n>> As I was looking at this I wondered if there might be anywhere else that\n>> needed adjustment. One thing that occurred to me was that that maybe we\n>> should replace the use of \"-w\" in pg_regress.c with this rather less\n>> dangerous flag, so instead of ignoring any white space difference we would\n>> only ignore line end differences. The use of \"-w\" apparently dates back to\n>> 2009.\n\n> That seems like a good improvement to me.\n\n+1\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Jul 2024 09:25:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "\nOn 2024-07-10 We 9:25 AM, Tom Lane wrote:\n> Dave Page <[email protected]> writes:\n>> On Wed, 10 Jul 2024 at 12:12, Andrew Dunstan <[email protected]> wrote:\n>>> As I was looking at this I wondered if there might be anywhere else that\n>>> needed adjustment. One thing that occurred to me was that that maybe we\n>>> should replace the use of \"-w\" in pg_regress.c with this rather less\n>>> dangerous flag, so instead of ignoring any white space difference we would\n>>> only ignore line end differences. 
The use of \"-w\" apparently dates back to\n>>> 2009.\n>> That seems like a good improvement to me.\n> +1\n>\n> \t\t\t\n\n\nOK, done.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 10 Jul 2024 10:03:50 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "Hi,\n\nOn Wed, 10 Jul 2024 at 17:04, Andrew Dunstan <[email protected]> wrote:\n>\n>\n> On 2024-07-10 We 9:25 AM, Tom Lane wrote:\n> > Dave Page <[email protected]> writes:\n> >> On Wed, 10 Jul 2024 at 12:12, Andrew Dunstan <[email protected]> wrote:\n> >>> As I was looking at this I wondered if there might be anywhere else that\n> >>> needed adjustment. One thing that occurred to me was that that maybe we\n> >>> should replace the use of \"-w\" in pg_regress.c with this rather less\n> >>> dangerous flag, so instead of ignoring any white space difference we would\n> >>> only ignore line end differences. The use of \"-w\" apparently dates back to\n> >>> 2009.\n> >> That seems like a good improvement to me.\n> > +1\n> >\n> >\n>\n>\n> OK, done.\n\nIt looks like Postgres CI did not like this change. 'Windows - Server\n2019, VS 2019 - Meson & ninja' [1] task started to fail after this\ncommit, there is one extra space at the end of line in regress test's\noutput.\n\n[1] https://cirrus-ci.com/task/6753781205958656\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Thu, 11 Jul 2024 11:59:57 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "\nOn 2024-07-11 Th 4:59 AM, Nazir Bilal Yavuz wrote:\n> Hi,\n>\n> On Wed, 10 Jul 2024 at 17:04, Andrew Dunstan <[email protected]> wrote:\n>>\n>> On 2024-07-10 We 9:25 AM, Tom Lane wrote:\n>>> Dave Page <[email protected]> writes:\n>>>> On Wed, 10 Jul 2024 at 12:12, Andrew Dunstan <[email protected]> wrote:\n>>>>> As I was looking at this I wondered if there might be anywhere else that\n>>>>> needed adjustment. One thing that occurred to me was that that maybe we\n>>>>> should replace the use of \"-w\" in pg_regress.c with this rather less\n>>>>> dangerous flag, so instead of ignoring any white space difference we would\n>>>>> only ignore line end differences. The use of \"-w\" apparently dates back to\n>>>>> 2009.\n>>>> That seems like a good improvement to me.\n>>> +1\n>>>\n>>>\n>>\n>> OK, done.\n> It looks like Postgres CI did not like this change. 'Windows - Server\n> 2019, VS 2019 - Meson & ninja' [1] task started to fail after this\n> commit, there is one extra space at the end of line in regress test's\n> output.\n>\n> [1] https://cirrus-ci.com/task/6753781205958656\n>\n\nOh, that's annoying. Will investigate. 
Thanks for the heads up.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 11 Jul 2024 07:29:30 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "\nOn 2024-07-11 Th 7:29 AM, Andrew Dunstan wrote:\n>\n> On 2024-07-11 Th 4:59 AM, Nazir Bilal Yavuz wrote:\n>> Hi,\n>>\n>> On Wed, 10 Jul 2024 at 17:04, Andrew Dunstan <[email protected]> \n>> wrote:\n>>>\n>>> On 2024-07-10 We 9:25 AM, Tom Lane wrote:\n>>>> Dave Page <[email protected]> writes:\n>>>>> On Wed, 10 Jul 2024 at 12:12, Andrew Dunstan <[email protected]> \n>>>>> wrote:\n>>>>>> As I was looking at this I wondered if there might be anywhere \n>>>>>> else that\n>>>>>> needed adjustment. One thing that occurred to me was that that \n>>>>>> maybe we\n>>>>>> should replace the use of \"-w\" in pg_regress.c with this rather less\n>>>>>> dangerous flag, so instead of ignoring any white space difference \n>>>>>> we would\n>>>>>> only ignore line end differences. The use of \"-w\" apparently \n>>>>>> dates back to\n>>>>>> 2009.\n>>>>> That seems like a good improvement to me.\n>>>> +1\n>>>>\n>>>>\n>>>\n>>> OK, done.\n>> It looks like Postgres CI did not like this change. 'Windows - Server\n>> 2019, VS 2019 - Meson & ninja' [1] task started to fail after this\n>> commit, there is one extra space at the end of line in regress test's\n>> output.\n>>\n>> [1] https://cirrus-ci.com/task/6753781205958656\n>>\n>\n> Oh, that's annoying. Will investigate. Thanks for the heads up.\n>\n>\n>\n\nI have reverted the pg_regress.c portion of the patch. I will \ninvestigate non line-end differences on Windows further.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 11 Jul 2024 09:56:41 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "On Wed, 10 Jul 2024 at 10:35, Dave Page <[email protected]> wrote:\n\n> > The other failures I see are the following, which I'm just starting to\n>>> dig\n>>> > into:\n>>> >\n>>> > 26/298 postgresql:recovery / recovery/019_replslot_limit\n>>> > ERROR 43.05s exit status 2\n>>> > 44/298 postgresql:recovery / recovery/027_stream_regress\n>>> > ERROR 383.08s exit status 1\n>>> > 50/298 postgresql:recovery / recovery/035_standby_logical_decoding\n>>> > ERROR 138.06s exit status 25\n>>> > 68/298 postgresql:recovery / recovery/040_standby_failover_slots_sync\n>>> > ERROR 132.87s exit status 25\n>>> > 170/298 postgresql:pg_dump / pg_dump/002_pg_dump\n>>> > ERROR 93.45s exit status 2\n>>> > 233/298 postgresql:bloom / bloom/001_wal\n>>> > ERROR 54.47s exit status 2\n>>> > 236/298 postgresql:subscription / subscription/001_rep_changes\n>>> > ERROR 46.46s exit status 2\n>>> > 246/298 postgresql:subscription / subscription/010_truncate\n>>> > ERROR 47.69s exit status 2\n>>> > 253/298 postgresql:subscription / subscription/013_partition\n>>> > ERROR 125.63s exit status 25\n>>> > 255/298 postgresql:subscription / subscription/022_twophase_cascade\n>>> > ERROR 58.13s exit status 2\n>>> > 257/298 postgresql:subscription / subscription/015_stream\n>>> > ERROR 128.32s exit status 2\n>>> > 262/298 postgresql:subscription / subscription/028_row_filter\n>>> > ERROR 43.14s exit status 2\n>>> > 263/298 postgresql:subscription / subscription/027_nosuperuser\n>>> > ERROR 102.02s exit status 2\n>>> > 
269/298 postgresql:subscription / subscription/031_column_list\n>>> > ERROR 123.16s exit status 2\n>>> > 271/298 postgresql:subscription / subscription/032_subscribe_use_index\n>>> > ERROR 139.33s exit status 2\n>>>\n>>> Hm, it'd be good to see some of errors behind that ([1]).\n>>>\n>>> I suspect it might be related to conflicting ports. I had to use\n>>> PG_TEST_USE_UNIX_SOCKETS to avoid random tests from failing:\n>>>\n>>> # use unix socket to prevent port conflicts\n>>> $env:PG_TEST_USE_UNIX_SOCKETS = 1;\n>>> # otherwise pg_regress insists on creating the directory and\n>>> does it\n>>> # in a non-existing place, this needs to be fixed :(\n>>> mkdir d:/sockets\n>>> $env:PG_REGRESS_SOCK_DIR = \"d:/sockets/\"\n>>>\n>>\n>> No, it all seems to be fallout from GSSAPI being included in the build.\n>> If I build without that, everything passes. Most of the tests are failing\n>> with a \"too many clients already\" error, but a handful do seem to include\n>> GSSAPI auth related errors as well. For example, this is from\n>>\n>\n>\n> ... this is from subscription/001_rep_changes:\n>\n> [14:46:57.723](2.318s) ok 11 - check rows on subscriber after table drop\n> from publication\n> connection error: 'psql: error: connection to server at \"127.0.0.1\", port\n> 58059 failed: could not initiate GSSAPI security context: No credentials\n> were supplied, or the credentials were unavailable or inaccessible:\n> Credential cache is empty\n> connection to server at \"127.0.0.1\", port 58059 failed: FATAL: sorry, too\n> many clients already'\n> while running 'psql -XAtq -d port=58059 host=127.0.0.1 dbname='postgres'\n> -f - -v ON_ERROR_STOP=1' at\n> C:/Users/dpage/git/postgresql/src/test/perl/PostgreSQL/Test/Cluster.pm line\n> 2129.\n> # Postmaster PID for node \"publisher\" is 14488\n> ### Stopping node \"publisher\" using mode immediate\n> # Running: pg_ctl -D\n> C:\\Users\\dpage\\git\\postgresql\\build/testrun/subscription/001_rep_changes\\data/t_001_rep_changes_publisher_data/pgdata\n> -m immediate stop\n> waiting for server to shut down.... done\n> server stopped\n> # No postmaster PID for node \"publisher\"\n> # Postmaster PID for node \"subscriber\" is 15012\n> ### Stopping node \"subscriber\" using mode immediate\n> # Running: pg_ctl -D\n> C:\\Users\\dpage\\git\\postgresql\\build/testrun/subscription/001_rep_changes\\data/t_001_rep_changes_subscriber_data/pgdata\n> -m immediate stop\n> waiting for server to shut down.... done\n> server stopped\n> # No postmaster PID for node \"subscriber\"\n> [14:46:59.068](1.346s) # Tests were run but no plan was declared and\n> done_testing() was not seen.\n> [14:46:59.069](0.000s) # Looks like your test exited with 2 just after 11.\n>\n\nSo I received an off-list tip to checkout [1], a discussion around GSSAPI\ncausing test failures on windows that Alexander Lakhin was looking at.\nThomas Munro's v2 patch to try to address the issue brought me down to just\na single test failure with GSSAPI enabled on 17b2 (with a second, simple\nfix for the OpenSSL/Kerberos/x509 issue): pg_dump/002_pg_dump. 
The\nrelevant section from the log looks like this:\n\n[15:28:42.692](0.006s) not ok 2 - connecting to a non-existent database:\nmatches\n[15:28:42.693](0.001s) # Failed test 'connecting to a non-existent\ndatabase: matches'\n# at C:/Users/dpage/git/postgresql/src/bin/pg_dump/t/002_pg_dump.pl line\n4689.\n[15:28:42.694](0.001s) # 'pg_dump: error: connection to\nserver at \"127.0.0.1\", port 53834 failed: could not initiate GSSAPI\nsecurity context: No credentials were supplied, or the credentials were\nunavailable or inaccessible: Credential cache is empty\n# connection to server at \"127.0.0.1\", port 53834 failed: FATAL: database\n\"qqq\" does not exist\n# '\n# doesn't match '(?^:pg_dump: error: connection to server .* failed:\nFATAL: database \"qqq\" does not exist)'\n# Running: pg_dump -d regression_invalid\n\nWe could tweak the regex I suppose, but that just seems like it's skirting\naround the actual problem. I could also get on board with Tom's idea of\ndeprecating GSSAPI for Windows, assuming that SSPI can handle everything\nusers might reasonably need (I really have no idea how likely that is).\n\n[1] ttps://www.postgresql.org/message-id/[email protected]\n<https://www.postgresql.org/message-id/[email protected]>\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nOn Wed, 10 Jul 2024 at 10:35, Dave Page <[email protected]> wrote:\n> The other failures I see are the following, which I'm just starting to dig\n> into:\n>\n>  26/298 postgresql:recovery / recovery/019_replslot_limit\n>             ERROR            43.05s   exit status 2\n>  44/298 postgresql:recovery / recovery/027_stream_regress\n>             ERROR           383.08s   exit status 1\n>  50/298 postgresql:recovery / recovery/035_standby_logical_decoding\n>             ERROR           138.06s   exit status 25\n>  68/298 postgresql:recovery / recovery/040_standby_failover_slots_sync\n>              ERROR           132.87s   exit status 25\n> 170/298 postgresql:pg_dump / pg_dump/002_pg_dump\n>              ERROR            93.45s   exit status 2\n> 233/298 postgresql:bloom / bloom/001_wal\n>              ERROR            54.47s   exit status 2\n> 236/298 postgresql:subscription / subscription/001_rep_changes\n>              ERROR            46.46s   exit status 2\n> 246/298 postgresql:subscription / subscription/010_truncate\n>             ERROR            47.69s   exit status 2\n> 253/298 postgresql:subscription / subscription/013_partition\n>              ERROR           125.63s   exit status 25\n> 255/298 postgresql:subscription / subscription/022_twophase_cascade\n>             ERROR            58.13s   exit status 2\n> 257/298 postgresql:subscription / subscription/015_stream\n>             ERROR           128.32s   exit status 2\n> 262/298 postgresql:subscription / subscription/028_row_filter\n>             ERROR            43.14s   exit status 2\n> 263/298 postgresql:subscription / subscription/027_nosuperuser\n>              ERROR           102.02s   exit status 2\n> 269/298 postgresql:subscription / subscription/031_column_list\n>              ERROR           123.16s   exit status 2\n> 271/298 postgresql:subscription / subscription/032_subscribe_use_index\n>              ERROR           139.33s   exit status 2\n\nHm, it'd be good to see some of errors behind that ([1]).\n\nI suspect it might be related to conflicting ports. 
I had to use\nPG_TEST_USE_UNIX_SOCKETS to avoid random tests from failing:\n\n          # use unix socket to prevent port conflicts\n          $env:PG_TEST_USE_UNIX_SOCKETS = 1;\n          # otherwise pg_regress insists on creating the directory and does it\n          # in a non-existing place, this needs to be fixed :(\n          mkdir d:/sockets\n          $env:PG_REGRESS_SOCK_DIR = \"d:/sockets/\"No, it all seems to be fallout from GSSAPI being included in the build. If I build without that, everything passes. Most of the tests are failing with a \"too many clients already\" error, but a handful do seem to include GSSAPI auth related errors as well. For example, this is from ... this is from subscription/001_rep_changes: [14:46:57.723](2.318s) ok 11 - check rows on subscriber after table drop from publicationconnection error: 'psql: error: connection to server at \"127.0.0.1\", port 58059 failed: could not initiate GSSAPI security context: No credentials were supplied, or the credentials were unavailable or inaccessible: Credential cache is emptyconnection to server at \"127.0.0.1\", port 58059 failed: FATAL:  sorry, too many clients already'while running 'psql -XAtq -d port=58059 host=127.0.0.1 dbname='postgres' -f - -v ON_ERROR_STOP=1' at C:/Users/dpage/git/postgresql/src/test/perl/PostgreSQL/Test/Cluster.pm line 2129.# Postmaster PID for node \"publisher\" is 14488### Stopping node \"publisher\" using mode immediate# Running: pg_ctl -D C:\\Users\\dpage\\git\\postgresql\\build/testrun/subscription/001_rep_changes\\data/t_001_rep_changes_publisher_data/pgdata -m immediate stopwaiting for server to shut down.... doneserver stopped# No postmaster PID for node \"publisher\"# Postmaster PID for node \"subscriber\" is 15012### Stopping node \"subscriber\" using mode immediate# Running: pg_ctl -D C:\\Users\\dpage\\git\\postgresql\\build/testrun/subscription/001_rep_changes\\data/t_001_rep_changes_subscriber_data/pgdata -m immediate stopwaiting for server to shut down.... doneserver stopped# No postmaster PID for node \"subscriber\"[14:46:59.068](1.346s) # Tests were run but no plan was declared and done_testing() was not seen.[14:46:59.069](0.000s) # Looks like your test exited with 2 just after 11.So I received an off-list tip to checkout [1], a discussion around GSSAPI causing test failures on windows that Alexander Lakhin was looking at. Thomas Munro's v2 patch to try to address the issue brought me down to just a single test failure with GSSAPI enabled on 17b2 (with a second, simple fix for the OpenSSL/Kerberos/x509 issue): pg_dump/002_pg_dump. The relevant section from the log looks like this:[15:28:42.692](0.006s) not ok 2 - connecting to a non-existent database: matches[15:28:42.693](0.001s) #   Failed test 'connecting to a non-existent database: matches'#   at C:/Users/dpage/git/postgresql/src/bin/pg_dump/t/002_pg_dump.pl line 4689.[15:28:42.694](0.001s) #                   'pg_dump: error: connection to server at \"127.0.0.1\", port 53834 failed: could not initiate GSSAPI security context: No credentials were supplied, or the credentials were unavailable or inaccessible: Credential cache is empty# connection to server at \"127.0.0.1\", port 53834 failed: FATAL:  database \"qqq\" does not exist# '#     doesn't match '(?^:pg_dump: error: connection to server .* failed: FATAL:  database \"qqq\" does not exist)'# Running: pg_dump -d regression_invalidWe could tweak the regex I suppose, but that just seems like it's skirting around the actual problem. 
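For what it's worth, a rough, untested sketch of that kind of tweak (the helper name and test layout below are illustrative only, not the actual 002_pg_dump.pl code) would just let the pattern span the extra GSSAPI line:

    # Hypothetical loosening of the expected-error pattern: with /s the
    # "." also matches newlines, so a leading "could not initiate GSSAPI
    # security context ..." line no longer prevents the later FATAL line
    # from matching.
    my $expected_err =
      qr/pg_dump: error: connection to server .* failed:.*database "qqq" does not exist/s;

    $node->command_fails_like(
        [ 'pg_dump', '-d', 'qqq' ],
        $expected_err,
        'connecting to a non-existent database');
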
I could also get on board with Tom's idea of deprecating GSSAPI for Windows, assuming that SSPI can handle everything users might reasonably need (I really have no idea how likely that is).[1] ttps://www.postgresql.org/message-id/[email protected] Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Thu, 11 Jul 2024 16:49:00 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "On Fri, Jul 12, 2024 at 3:49 AM Dave Page <[email protected]> wrote:\n> So I received an off-list tip to checkout [1], a discussion around GSSAPI causing test failures on windows that Alexander Lakhin was looking at. Thomas Munro's v2 patch to try to address the issue brought me down to just a single test failure with GSSAPI enabled on 17b2 (with a second, simple fix for the OpenSSL/Kerberos/x509 issue): pg_dump/002_pg_dump. The relevant section from the log looks like this:\n\nI pushed that (ba9fcac7).\n\n> [15:28:42.692](0.006s) not ok 2 - connecting to a non-existent database: matches\n> [15:28:42.693](0.001s) # Failed test 'connecting to a non-existent database: matches'\n> # at C:/Users/dpage/git/postgresql/src/bin/pg_dump/t/002_pg_dump.pl line 4689.\n> [15:28:42.694](0.001s) # 'pg_dump: error: connection to server at \"127.0.0.1\", port 53834 failed: could not initiate GSSAPI security context: No credentials were supplied, or the credentials were unavailable or inaccessible: Credential cache is empty\n> # connection to server at \"127.0.0.1\", port 53834 failed: FATAL: database \"qqq\" does not exist\n> # '\n> # doesn't match '(?^:pg_dump: error: connection to server .* failed: FATAL: database \"qqq\" does not exist)'\n\nDoes it help if you revert 29992a6?\n\n\n", "msg_date": "Sun, 14 Jul 2024 10:00:32 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "On Sun, Jul 14, 2024 at 10:00 AM Thomas Munro <[email protected]> wrote:\n> On Fri, Jul 12, 2024 at 3:49 AM Dave Page <[email protected]> wrote:\n> > # doesn't match '(?^:pg_dump: error: connection to server .* failed: FATAL: database \"qqq\" does not exist)'\n>\n> Does it help if you revert 29992a6?\n\nFWIW I just happened to notice the same failure on Cirrus, in the\ngithub.com/postgres/postgres master branch:\n\nhttps://cirrus-ci.com/task/5382396705505280\n\nYour failure mentions GSSAPI and the above doesn't, but that'd be\nbecause for Cirrus CI we have PG_TEST_USE_UNIX_SOCKETS so it's using\nAF_UNIX. At one point I proposed deleting that weird GSAPPI stuff and\nusing AF_UNIX always on Windows[1], but the feedback was that I should\ninstead teach the whole test suite to be able to use AF_UNIX or\nAF_INET* on all OSes and I never got back to it.\n\nThe error does seem be the never-ending saga from this and other threads:\n\nhttps://www.postgresql.org/message-id/flat/90b34057-4176-7bb0-0dbb-9822a5f6425b%40greiz-reinsdorf.de\n\nMy uninformed impression is that graceful socket shutdowns would very\nlikely fix the class of lost-final-message problem where the client\ndoes recv() next, including this case IIUC. It's only a partial\nimprovement though: if the client calls send() next, I think it can\nstill drop buffered received data, so this graceful shutdown stuff\ndoesn't quite get us to the same situation as Unix all points in the\nprotocol. 
The real world case where that second case comes up is\nwhere the client sends a new query and on Unix gets a buffered error\nmessage saying the backend has exited due to idle timeout, but on\nWindows gets a connection reset message. I've wondered before if you\ncould fix (or narrow to almost zero?) that by giving libpq a mode\nwhere it calls poll() to check for buffered readable data every single\ntime it's about to send.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGK30uLx9dpgkYwomgH0WVLUHytkChDgf3iUM2zp0pf_nA%40mail.gmail.com\n\n\n", "msg_date": "Tue, 16 Jul 2024 12:22:11 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" }, { "msg_contents": "Hi\n\nOn Sat, 13 Jul 2024 at 23:01, Thomas Munro <[email protected]> wrote:\n\n> On Fri, Jul 12, 2024 at 3:49 AM Dave Page <[email protected]> wrote:\n> > So I received an off-list tip to checkout [1], a discussion around\n> GSSAPI causing test failures on windows that Alexander Lakhin was looking\n> at. Thomas Munro's v2 patch to try to address the issue brought me down to\n> just a single test failure with GSSAPI enabled on 17b2 (with a second,\n> simple fix for the OpenSSL/Kerberos/x509 issue): pg_dump/002_pg_dump. The\n> relevant section from the log looks like this:\n>\n> I pushed that (ba9fcac7).\n>\n> > [15:28:42.692](0.006s) not ok 2 - connecting to a non-existent database:\n> matches\n> > [15:28:42.693](0.001s) # Failed test 'connecting to a non-existent\n> database: matches'\n> > # at C:/Users/dpage/git/postgresql/src/bin/pg_dump/t/002_pg_dump.pl\n> line 4689.\n> > [15:28:42.694](0.001s) # 'pg_dump: error: connection\n> to server at \"127.0.0.1\", port 53834 failed: could not initiate GSSAPI\n> security context: No credentials were supplied, or the credentials were\n> unavailable or inaccessible: Credential cache is empty\n> > # connection to server at \"127.0.0.1\", port 53834 failed: FATAL:\n> database \"qqq\" does not exist\n> > # '\n> > # doesn't match '(?^:pg_dump: error: connection to server .* failed:\n> FATAL: database \"qqq\" does not exist)'\n>\n> Does it help if you revert 29992a6?\n>\n\nSorry for the delay - things got crazy busy for a while.\n\nNo, reverting that commit does not help.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nPGDay UK 2024, 11th September, London: https://2024.pgday.uk/\n\nHiOn Sat, 13 Jul 2024 at 23:01, Thomas Munro <[email protected]> wrote:On Fri, Jul 12, 2024 at 3:49 AM Dave Page <[email protected]> wrote:\n> So I received an off-list tip to checkout [1], a discussion around GSSAPI causing test failures on windows that Alexander Lakhin was looking at. Thomas Munro's v2 patch to try to address the issue brought me down to just a single test failure with GSSAPI enabled on 17b2 (with a second, simple fix for the OpenSSL/Kerberos/x509 issue): pg_dump/002_pg_dump. 
The relevant section from the log looks like this:\n\nI pushed that (ba9fcac7).\n\n> [15:28:42.692](0.006s) not ok 2 - connecting to a non-existent database: matches\n> [15:28:42.693](0.001s) #   Failed test 'connecting to a non-existent database: matches'\n> #   at C:/Users/dpage/git/postgresql/src/bin/pg_dump/t/002_pg_dump.pl line 4689.\n> [15:28:42.694](0.001s) #                   'pg_dump: error: connection to server at \"127.0.0.1\", port 53834 failed: could not initiate GSSAPI security context: No credentials were supplied, or the credentials were unavailable or inaccessible: Credential cache is empty\n> # connection to server at \"127.0.0.1\", port 53834 failed: FATAL:  database \"qqq\" does not exist\n> # '\n> #     doesn't match '(?^:pg_dump: error: connection to server .* failed: FATAL:  database \"qqq\" does not exist)'\n\nDoes it help if you revert 29992a6?Sorry for the delay - things got crazy busy for a while. No, reverting that commit does not help.-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.comPGDay UK 2024, 11th September, London: https://2024.pgday.uk/", "msg_date": "Wed, 24 Jul 2024 16:49:28 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tests fail on windows with default git settings" } ]
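A short illustration of the --strip-trailing-cr approach discussed in this thread (untested sketch; the surrounding variables and file names are assumptions, not taken from the committed patch):

    # In a TAP test that compares a produced file against an expected file
    # with diff, ignore CR/LF-only differences on Windows.
    use Test::More;
    use PostgreSQL::Test::Utils;    # exports $windows_os

    my @diffopts = ('-u');
    push @diffopts, '--strip-trailing-cr' if $windows_os;

    my $rc = system('diff', @diffopts, $expected_file, $output_file);
    is($rc, 0, 'output matches expected file');
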
[ { "msg_contents": "Hi,\n\nWhile working to address some of Dave's concerns at [1], I encountered the odd\nissue of tests failing because postmaster not being allowed to open\npg_control. This did not happen for all tests, but for a lot of tests.\n\nFor example, the only output in\npg_upgrade/003_logical_slots/log/003_logical_slots_oldpub.log would be\n\n2024-07-06 00:31:58.293 UTC [4432] PANIC: could not open file \"global/pg_control\": Permission denied\n\nFor some other tests it was pg_regress --config-auth that couldn't open\npg_hba.conf for writing.\n\n\nI spent an embarassingly large amount of time debugging this. Unfortuntely I\ncouldn't reproduce this locally, just in Dave's github-actions environment.\n\n\n\nAfter going down *many* rabbitholes, I figured out that this is due to a\npoorly documented peculiarity of windows file-ownership logic - which\napparently is only active when \"UAC\" is disabled - which is the case for the\nimages github actions [2].\n\n\n From what I gather from old documentation [3], when a highly privileged user\ncreates a directory/file with UAC disabled, the newly created object is *not*\nowned by the user, but magically owned by the \"Administrators\" group.\n\nNormally that's kinda fine, the user presumably continues to be a member of\nthe admin group and can access the file that way. However, this blows up when\ncombined with pg_ctl dropping privileges - after the privileges are dropped,\nthe user is *not* considered a member of the administrators group anymore.\nAnd boom, postmaster can't write to pg_control in some circumstances.\n\n\nWhat made this nastier is that this only applied to *some* tests, not\nall. Sometimes it works, because the database is created via initdb, which\ndoes also drop privileges (albeit slightly differently). However, that\ndoesn't get us very far:\n1) initdb template gets copied (we could fix this by adding /sec to robocopy)\n2) pg_basebackup doesn't do the get_restricted_token() dance (we could add that)\n3) there are quite a few other sources of data directories being copied,\n e.g. Cluster.pm's init_from_backup()\n\n\n\nOnce one knows what the issue is, it's not too hard to work around - adding an\ninheritable ACL explicitly granting the user permissions works. E.g. with\n icacls.exe . /inheritance:e /grant 'runneradmin:(OI)(CI)F';\n(with the current user being runneradmin)\n\nThat way the current user has access, even if not considered a domain admin\nanymore.\n\n\n\nThe only other postgres person that referenced this issue is Alexander Lakhin,\nat [4].\n\n\nThis email is partially just so I have something to find if I ever\nre-encounter this issue. I intend to forget this ASAP.\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/CA%2BOCxowQhMHFNRLTsXNuJpC96KRtSPHYKJuOS%3Db-Zrwmy-P4-g%40mail.gmail.com\n[2] https://github.com/actions/runner-images/blob/ca499b86975e62edd4a0ac336de94af096635838/images/windows/scripts/build/Configure-BaseImage.ps1#L28-L31\n[3] https://community.netapp.com/t5/AFF/Owner-on-newly-created-files-and-folders-default-to-built-in-Administrators/td-p/146645\n[4] https://www.postgresql.org/message-id/3f72f608-88ab-bd43-b7de-685c26e69421%40gmail.com\n\n\n", "msg_date": "Sat, 6 Jul 2024 23:40:46 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "dropping privileges on windows vs privileged accounts" } ]
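An illustration of wiring that ACL workaround into a test or CI bootstrap step (hypothetical code, not something in the tree; the directory and user below are placeholders):

    # Grant the current user an explicit, inheritable ACL on the build
    # directory before any data directories are created there, so files
    # that end up owned by the Administrators group stay accessible after
    # pg_ctl/initdb drop privileges.
    my $dir  = $ENV{BUILD_DIR} // '.';
    my $user = $ENV{USERNAME}  // 'runneradmin';
    system('icacls.exe', $dir, '/inheritance:e',
           '/grant', "$user:(OI)(CI)F") == 0
      or die "icacls.exe failed on $dir";
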
[ { "msg_contents": "Hi,\n\nWhile working on [1] I encountered the issue that, on github-actions,\n010_pg_basebackup.pl fails on windows.\n\nThe reason for that is that github actions uses two drives, with TMP/TEMP\nlocated on c:, the tested code on d:. This causes the following code to fail:\n\n # Create a temporary directory in the system location.\n my $sys_tempdir = PostgreSQL::Test::Utils::tempdir_short;\n\n # On Windows use the short location to avoid path length issues.\n # Elsewhere use $tempdir to avoid file system boundary issues with moving.\n my $tmploc = $windows_os ? $sys_tempdir : $tempdir;\n\n rename(\"$pgdata/pg_replslot\", \"$tmploc/pg_replslot\")\n or BAIL_OUT \"could not move $pgdata/pg_replslot\";\n dir_symlink(\"$tmploc/pg_replslot\", \"$pgdata/pg_replslot\")\n or BAIL_OUT \"could not symlink to $pgdata/pg_replslot\";\n\nIt's not possible to move (vs copy) a file between two filesystems, on most\noperating systems. Which thus leads to:\n\n[23:02:03.114](0.288s) Bail out! could not move D:\\a\\winpgbuild\\winpgbuild\\postgresql-17beta2\\build/testrun/pg_basebackup/010_pg_basebackup\\data/t_010_pg_basebackup_main_data/pgdata/pg_replslot\n\n\nThis logic was added in\n\ncommit e213de8e785aac4e2ebc44282b8dc0fcc74834e8\nAuthor: Andrew Dunstan <[email protected]>\nDate: 2023-07-08 11:21:58 -0400\n\n Use shorter location for pg_replslot in pg_basebackup test\n\nand revised in\n\ncommit e9f15bc9db7564a29460d089c0917590bc13fffc\nAuthor: Andrew Dunstan <[email protected]>\nDate: 2023-07-08 12:34:25 -0400\n\n Fix tmpdir issues with commit e213de8e78\n\nThe latter deals with precisely this issue, for !windows.\n\n\nI've temporarily worked around this by setting TMP/TEMP to something else, but\nISTM we need a better solution.\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/CA%2BOCxowQhMHFNRLTsXNuJpC96KRtSPHYKJuOS%3Db-Zrwmy-P4-g%40mail.gmail.com\n[2] https://postgr.es/m/666ac55b-3400-fb2c-2cea-0281bf36a53c%40dunslane.net\n\n\n", "msg_date": "Sun, 7 Jul 2024 00:02:43 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "010_pg_basebackup.pl vs multiple filesystems" }, { "msg_contents": "\nOn 2024-07-07 Su 3:02 AM, Andres Freund wrote:\n> Hi,\n>\n> While working on [1] I encountered the issue that, on github-actions,\n> 010_pg_basebackup.pl fails on windows.\n>\n> The reason for that is that github actions uses two drives, with TMP/TEMP\n> located on c:, the tested code on d:. This causes the following code to fail:\n>\n> # Create a temporary directory in the system location.\n> my $sys_tempdir = PostgreSQL::Test::Utils::tempdir_short;\n>\n> # On Windows use the short location to avoid path length issues.\n> # Elsewhere use $tempdir to avoid file system boundary issues with moving.\n> my $tmploc = $windows_os ? $sys_tempdir : $tempdir;\n>\n> rename(\"$pgdata/pg_replslot\", \"$tmploc/pg_replslot\")\n> or BAIL_OUT \"could not move $pgdata/pg_replslot\";\n> dir_symlink(\"$tmploc/pg_replslot\", \"$pgdata/pg_replslot\")\n> or BAIL_OUT \"could not symlink to $pgdata/pg_replslot\";\n>\n> It's not possible to move (vs copy) a file between two filesystems, on most\n> operating systems. Which thus leads to:\n>\n> [23:02:03.114](0.288s) Bail out! 
could not move D:\\a\\winpgbuild\\winpgbuild\\postgresql-17beta2\\build/testrun/pg_basebackup/010_pg_basebackup\\data/t_010_pg_basebackup_main_data/pgdata/pg_replslot\n>\n>\n> This logic was added in\n>\n> commit e213de8e785aac4e2ebc44282b8dc0fcc74834e8\n> Author: Andrew Dunstan <[email protected]>\n> Date: 2023-07-08 11:21:58 -0400\n>\n> Use shorter location for pg_replslot in pg_basebackup test\n>\n> and revised in\n>\n> commit e9f15bc9db7564a29460d089c0917590bc13fffc\n> Author: Andrew Dunstan <[email protected]>\n> Date: 2023-07-08 12:34:25 -0400\n>\n> Fix tmpdir issues with commit e213de8e78\n>\n> The latter deals with precisely this issue, for !windows.\n>\n>\n> I've temporarily worked around this by setting TMP/TEMP to something else, but\n> ISTM we need a better solution.\n\n\nI'll be happy to hear of one. I agree it's a mess.  Maybe we could test \nthat the temp directory is on the same device on Windows and skip the \ntest if not? You could still get the test to run by setting TMPDIR \nand/or friends.\n\n\ncheers\n\n\nandrew\n\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 7 Jul 2024 07:28:28 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 010_pg_basebackup.pl vs multiple filesystems" }, { "msg_contents": "\nOn 2024-07-07 Su 7:28 AM, Andrew Dunstan wrote:\n>\n> On 2024-07-07 Su 3:02 AM, Andres Freund wrote:\n>> Hi,\n>>\n>> While working on [1] I encountered the issue that, on github-actions,\n>> 010_pg_basebackup.pl fails on windows.\n>>\n>> The reason for that is that github actions uses two drives, with \n>> TMP/TEMP\n>> located on c:, the tested code on d:.  This causes the following code \n>> to fail:\n>>\n>>    # Create a temporary directory in the system location.\n>>    my $sys_tempdir = PostgreSQL::Test::Utils::tempdir_short;\n>>\n>>    # On Windows use the short location to avoid path length issues.\n>>    # Elsewhere use $tempdir to avoid file system boundary issues with \n>> moving.\n>>    my $tmploc = $windows_os ? $sys_tempdir : $tempdir;\n>>\n>>    rename(\"$pgdata/pg_replslot\", \"$tmploc/pg_replslot\")\n>>      or BAIL_OUT \"could not move $pgdata/pg_replslot\";\n>>    dir_symlink(\"$tmploc/pg_replslot\", \"$pgdata/pg_replslot\")\n>>      or BAIL_OUT \"could not symlink to $pgdata/pg_replslot\";\n>>\n>> It's not possible to move (vs copy) a file between two filesystems, \n>> on most\n>> operating systems. Which thus leads to:\n>>\n>> [23:02:03.114](0.288s) Bail out!  could not move \n>> D:\\a\\winpgbuild\\winpgbuild\\postgresql-17beta2\\build/testrun/pg_basebackup/010_pg_basebackup\\data/t_010_pg_basebackup_main_data/pgdata/pg_replslot\n>>\n>>\n>> This logic was added in\n>>\n>> commit e213de8e785aac4e2ebc44282b8dc0fcc74834e8\n>> Author: Andrew Dunstan <[email protected]>\n>> Date:   2023-07-08 11:21:58 -0400\n>>\n>>      Use shorter location for pg_replslot in pg_basebackup test\n>>\n>> and revised in\n>>\n>> commit e9f15bc9db7564a29460d089c0917590bc13fffc\n>> Author: Andrew Dunstan <[email protected]>\n>> Date:   2023-07-08 12:34:25 -0400\n>>\n>>      Fix tmpdir issues with commit e213de8e78\n>>\n>> The latter deals with precisely this issue, for !windows.\n>>\n>>\n>> I've temporarily worked around this by setting TMP/TEMP to something \n>> else, but\n>> ISTM we need a better solution.\n>\n>\n> I'll be happy to hear of one. I agree it's a mess.  Maybe we could \n> test that the temp directory is on the same device on Windows and skip \n> the test if not? 
You could still get the test to run by setting TMPDIR \n> and/or friends.\n>\n>\n>\n\nMaybe we should just not try to rename the directory. Looking at the \ntest I'm pretty sure the directory should be empty. Instead of trying to \nmove it, let's just remove it, and recreate it in the tmp location.\n\nSomething like\n\n\ndiff --git a/src/bin/pg_basebackup/t/010_pg_basebackup.pl \nb/src/bin/pg_basebackup/t/010_pg_basebackup.pl\nindex 489dde4adf..c0c334c6fc 100644\n--- a/src/bin/pg_basebackup/t/010_pg_basebackup.pl\n+++ b/src/bin/pg_basebackup/t/010_pg_basebackup.pl\n@@ -363,8 +363,8 @@ my $sys_tempdir = \nPostgreSQL::Test::Utils::tempdir_short;\n  # Elsewhere use $tempdir to avoid file system boundary issues with moving.\n  my $tmploc = $windows_os ? $sys_tempdir : $tempdir;\n\n-rename(\"$pgdata/pg_replslot\", \"$tmploc/pg_replslot\")\n-  or BAIL_OUT \"could not move $pgdata/pg_replslot\";\n+rmtree(\"$pgdata/pg_replslot\");\n+mkdir (\"$tmploc/pg_replslot\", 0700);\n  dir_symlink(\"$tmploc/pg_replslot\", \"$pgdata/pg_replslot\")\n    or BAIL_OUT \"could not symlink to $pgdata/pg_replslot\";\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 7 Jul 2024 09:10:48 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 010_pg_basebackup.pl vs multiple filesystems" }, { "msg_contents": "Hi,\n\nOn 2024-07-07 09:10:48 -0400, Andrew Dunstan wrote:\n> On 2024-07-07 Su 7:28 AM, Andrew Dunstan wrote:\n> > I'll be happy to hear of one. I agree it's a mess.� Maybe we could test\n> > that the temp directory is on the same device on Windows and skip the\n> > test if not? You could still get the test to run by setting TMPDIR\n> > and/or friends.\n\n> Maybe we should just not try to rename the directory. Looking at the test\n> I'm pretty sure the directory should be empty. Instead of trying to move it,\n> let's just remove it, and recreate it in the tmp location.\n\nGood catch, yes, that'd be much better!\n\n\n> diff --git a/src/bin/pg_basebackup/t/010_pg_basebackup.pl\n> b/src/bin/pg_basebackup/t/010_pg_basebackup.pl\n> index 489dde4adf..c0c334c6fc 100644\n> --- a/src/bin/pg_basebackup/t/010_pg_basebackup.pl\n> +++ b/src/bin/pg_basebackup/t/010_pg_basebackup.pl\n> @@ -363,8 +363,8 @@ my $sys_tempdir =\n> PostgreSQL::Test::Utils::tempdir_short;\n> �# Elsewhere use $tempdir to avoid file system boundary issues with moving.\n> �my $tmploc = $windows_os ? $sys_tempdir : $tempdir;\n\nThe comment would need a bit of editing, I guess. I think we should consider\njust getting rid of the os-dependant switch now, it shouldn't be needed\nanymore?\n\n\n> -rename(\"$pgdata/pg_replslot\", \"$tmploc/pg_replslot\")\n> -� or BAIL_OUT \"could not move $pgdata/pg_replslot\";\n> +rmtree(\"$pgdata/pg_replslot\");\n\nShould this perhaps be an rmdir, to ensure that we're not removing something\nwe don't want (e.g. 
somebody adding an earlier test for slots that then gets\nbroken by the rmtree)?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 8 Jul 2024 13:31:57 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 010_pg_basebackup.pl vs multiple filesystems" }, { "msg_contents": "On Sun, Jul 7, 2024 at 3:02 AM Andres Freund <[email protected]> wrote:\n> While working on [1] I encountered the issue that, on github-actions,\n> 010_pg_basebackup.pl fails on windows.\n>\n> The reason for that is that github actions uses two drives, with TMP/TEMP\n> located on c:, the tested code on d:. This causes the following code to fail:\n>\n> # Create a temporary directory in the system location.\n> my $sys_tempdir = PostgreSQL::Test::Utils::tempdir_short;\n\nWhatever we end up doing about this, it would be a good idea to check\nthe other places that use tempdir_short and see if they also need\nadjustment.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Jul 2024 16:45:42 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 010_pg_basebackup.pl vs multiple filesystems" }, { "msg_contents": "\nOn 2024-07-08 Mo 4:45 PM, Robert Haas wrote:\n> On Sun, Jul 7, 2024 at 3:02 AM Andres Freund <[email protected]> wrote:\n>> While working on [1] I encountered the issue that, on github-actions,\n>> 010_pg_basebackup.pl fails on windows.\n>>\n>> The reason for that is that github actions uses two drives, with TMP/TEMP\n>> located on c:, the tested code on d:. This causes the following code to fail:\n>>\n>> # Create a temporary directory in the system location.\n>> my $sys_tempdir = PostgreSQL::Test::Utils::tempdir_short;\n> Whatever we end up doing about this, it would be a good idea to check\n> the other places that use tempdir_short and see if they also need\n> adjustment.\n\n\nI don't think it's a problem. There are lots of systems that have \ntempdir on a different device. That's why we previously saw lots of \nerrors from this code, resulting in the present incomplete workaround. \nThe fact that we haven't seen such errors from other tests means we \nshould be ok.\n\n\ncheers\n\n\nandrew\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 8 Jul 2024 17:18:14 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 010_pg_basebackup.pl vs multiple filesystems" }, { "msg_contents": "Hi,\n\nOn 2024-07-08 16:45:42 -0400, Robert Haas wrote:\n> On Sun, Jul 7, 2024 at 3:02 AM Andres Freund <[email protected]> wrote:\n> > While working on [1] I encountered the issue that, on github-actions,\n> > 010_pg_basebackup.pl fails on windows.\n> >\n> > The reason for that is that github actions uses two drives, with TMP/TEMP\n> > located on c:, the tested code on d:. 
This causes the following code to fail:\n> >\n> > # Create a temporary directory in the system location.\n> > my $sys_tempdir = PostgreSQL::Test::Utils::tempdir_short;\n> \n> Whatever we end up doing about this, it would be a good idea to check\n> the other places that use tempdir_short and see if they also need\n> adjustment.\n\nFWIW, this was the only test that failed due to TMP being on a separate\npartition.\n\nI couldn't quite enable all tests (PG_TEST_EXTRA including libpq_encryption\nfails due to some gssapi issue, kerberos fails due to paths being wrong / not\nset up for windows, load_balance because I haven't set up hostnames), but I\ndon't think any of the issues around those tests are related.\n\nI did also grep for other uses of tmpdir_short, they all seem to be\ndifferently motivated.\n\nI also looked for uses of rename() in tests. While\nsrc/bin/pg_verifybackup/t/007_wal.pl does move stuff around, it afaict is\nalways going to be on the same filesystem.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 8 Jul 2024 14:26:40 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 010_pg_basebackup.pl vs multiple filesystems" }, { "msg_contents": "\nOn 2024-07-08 Mo 4:31 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2024-07-07 09:10:48 -0400, Andrew Dunstan wrote:\n>> On 2024-07-07 Su 7:28 AM, Andrew Dunstan wrote:\n>>> I'll be happy to hear of one. I agree it's a mess.  Maybe we could test\n>>> that the temp directory is on the same device on Windows and skip the\n>>> test if not? You could still get the test to run by setting TMPDIR\n>>> and/or friends.\n>> Maybe we should just not try to rename the directory. Looking at the test\n>> I'm pretty sure the directory should be empty. Instead of trying to move it,\n>> let's just remove it, and recreate it in the tmp location.\n> Good catch, yes, that'd be much better!\n>\n>\n>> diff --git a/src/bin/pg_basebackup/t/010_pg_basebackup.pl\n>> b/src/bin/pg_basebackup/t/010_pg_basebackup.pl\n>> index 489dde4adf..c0c334c6fc 100644\n>> --- a/src/bin/pg_basebackup/t/010_pg_basebackup.pl\n>> +++ b/src/bin/pg_basebackup/t/010_pg_basebackup.pl\n>> @@ -363,8 +363,8 @@ my $sys_tempdir =\n>> PostgreSQL::Test::Utils::tempdir_short;\n>>  # Elsewhere use $tempdir to avoid file system boundary issues with moving.\n>>  my $tmploc = $windows_os ? $sys_tempdir : $tempdir;\n> The comment would need a bit of editing, I guess. I think we should consider\n> just getting rid of the os-dependant switch now, it shouldn't be needed\n> anymore?\n>\n>\n>> -rename(\"$pgdata/pg_replslot\", \"$tmploc/pg_replslot\")\n>> -  or BAIL_OUT \"could not move $pgdata/pg_replslot\";\n>> +rmtree(\"$pgdata/pg_replslot\");\n> Should this perhaps be an rmdir, to ensure that we're not removing something\n> we don't want (e.g. somebody adding an earlier test for slots that then gets\n> broken by the rmtree)?\n>\n\n\nOK, done like this.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 8 Jul 2024 17:45:16 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 010_pg_basebackup.pl vs multiple filesystems" }, { "msg_contents": "On 2024-07-08 17:45:16 -0400, Andrew Dunstan wrote:\n> OK, done like this.\n\nThanks!\n\n\n", "msg_date": "Mon, 8 Jul 2024 15:57:12 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 010_pg_basebackup.pl vs multiple filesystems" } ]
[ { "msg_contents": "I think the period here should be a typo.\n\nindex 16b5803d388..af3b15e93df 100644\n--- a/src/backend/storage/ipc/procarray.c\n+++ b/src/backend/storage/ipc/procarray.c\n@@ -185,7 +185,7 @@ typedef struct ComputeXidHorizonsResult\n FullTransactionId latest_completed;\n\n /*\n- * The same for procArray->replication_slot_xmin and.\n+ * The same for procArray->replication_slot_xmin and\n * procArray->replication_slot_catalog_xmin.\n */\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Sun, 7 Jul 2024 17:42:47 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": true, "msg_subject": "report a typo in comments of ComputeXidHorizonsResult" }, { "msg_contents": "On Sun, Jul 7, 2024 at 5:43 PM Junwang Zhao <[email protected]> wrote:\n> I think the period here should be a typo.\n>\n> index 16b5803d388..af3b15e93df 100644\n> --- a/src/backend/storage/ipc/procarray.c\n> +++ b/src/backend/storage/ipc/procarray.c\n> @@ -185,7 +185,7 @@ typedef struct ComputeXidHorizonsResult\n> FullTransactionId latest_completed;\n>\n> /*\n> - * The same for procArray->replication_slot_xmin and.\n> + * The same for procArray->replication_slot_xmin and\n> * procArray->replication_slot_catalog_xmin.\n> */\n\n+1.\n\nThanks\nRichard\n\n\n", "msg_date": "Mon, 8 Jul 2024 07:49:43 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: report a typo in comments of ComputeXidHorizonsResult" }, { "msg_contents": "On Mon, Jul 8, 2024 at 7:49 AM Richard Guo <[email protected]> wrote:\n> On Sun, Jul 7, 2024 at 5:43 PM Junwang Zhao <[email protected]> wrote:\n> > I think the period here should be a typo.\n> >\n> > index 16b5803d388..af3b15e93df 100644\n> > --- a/src/backend/storage/ipc/procarray.c\n> > +++ b/src/backend/storage/ipc/procarray.c\n> > @@ -185,7 +185,7 @@ typedef struct ComputeXidHorizonsResult\n> > FullTransactionId latest_completed;\n> >\n> > /*\n> > - * The same for procArray->replication_slot_xmin and.\n> > + * The same for procArray->replication_slot_xmin and\n> > * procArray->replication_slot_catalog_xmin.\n> > */\n>\n> +1.\n\nPushed.\n\nThanks\nRichard\n\n\n", "msg_date": "Mon, 8 Jul 2024 10:36:44 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: report a typo in comments of ComputeXidHorizonsResult" } ]
[ { "msg_contents": "As part of making tuplestores faster [1], I noticed that in WindowAgg, when\nwe end one partition we call tuplestore_end() and then we do\ntuplestore_begin_heap() again for the next partition in begin_partition()\nand then go on to set up the tuplestore read pointers according to what's\nrequired for the given frameOptions of the WindowAgg. This might make\nsense if the frameOptions could change between partitions, but they can't,\nso I don't see any reason why we can't just do tuplestore_clear() at the\nend of a partition. That resets the read pointer positions back to the\nstart again ready for the next partition.\n\nI wrote the attached patch and checked how it affects performance. It helps\nquite a bit when there are lots of partitions.\n\nCREATE TABLE a (a INT NOT NULL);\nINSERT INTO a SELECT x FROM generate_series(1,1000000)x;\nVACUUM FREEZE ANALYZE a;\n\nbench.sql:\nSELECT a,count(*) OVER (PARTITION BY a) FROM a OFFSET 1000000;\n\nmaster:\n$ pgbench -n -f bench.sql -T 60 -M prepared postgres | grep latency\nlatency average = 293.488 ms\nlatency average = 295.509 ms\nlatency average = 297.772 ms\n\npatched:\n$ pgbench -n -f bench.sql -T 60 -M prepared postgres | grep latency\nlatency average = 203.234 ms\nlatency average = 204.538 ms\nlatency average = 203.877 ms\n\nAbout 45% faster.\n\nHere's the top of perf top of each:\n\nmaster:\n 8.61% libc.so.6 [.] _int_malloc\n 6.71% libc.so.6 [.] _int_free\n 6.42% postgres [.] heap_form_minimal_tuple\n 6.40% postgres [.] tuplestore_alloc_read_pointer\n 5.87% libc.so.6 [.] unlink_chunk.constprop.0\n 3.86% libc.so.6 [.] __memmove_avx512_unaligned_erms\n 3.80% postgres [.] AllocSetFree\n 3.51% postgres [.] spool_tuples\n 3.45% postgres [.] ExecWindowAgg\n 2.09% postgres [.] tuplesort_puttuple_common\n 1.92% postgres [.] comparetup_datum\n 1.88% postgres [.] AllocSetAlloc\n 1.85% postgres [.] tuplesort_heap_replace_top\n 1.84% postgres [.] ExecStoreBufferHeapTuple\n 1.69% postgres [.] window_gettupleslot\n\npatched:\n 8.95% postgres [.] ExecWindowAgg\n 8.10% postgres [.] heap_form_minimal_tuple\n 5.16% postgres [.] spool_tuples\n 4.03% libc.so.6 [.] __memmove_avx512_unaligned_erms\n 4.02% postgres [.] begin_partition\n 3.11% postgres [.] tuplesort_puttuple_common\n 2.97% postgres [.] AllocSetAlloc\n 2.96% postgres [.] comparetup_datum\n 2.83% postgres [.] tuplesort_heap_replace_top\n 2.79% postgres [.] window_gettupleslot\n 2.22% postgres [.] AllocSetFree\n 2.11% postgres [.] MemoryContextReset\n 1.99% postgres [.] LogicalTapeWrite\n 1.98% postgres [.] advance_windowaggregate\n\nLots less malloc/free work going on.\n\nI'm also tempted to do a cleanup of the state machine inside\nnodeWindowAgg.c as I had to add another bool flag to WindowAggState to\nreplace the previous test that checked if the tuplestore was empty (i.e\njust finished a partition) with if (winstate->buffer == NULL). I couldn't\ndo that anymore due to no longer nullifying that field. I think such a\ncleanup would be a separate patch. 
Annoyingly the extra bool is the 9th\nbool flag and widens that struct by 8 bytes and leaves a 7 byte hole.\n\nDavid\n\n[1]\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=590b045c37aad44915f7f472343f24c2bafbe5d8", "msg_date": "Sun, 7 Jul 2024 22:57:17 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Optimize WindowAgg's use of tuplestores" }, { "msg_contents": "On Sun, Jul 7, 2024 at 4:27 PM David Rowley <[email protected]> wrote:\n>\n> As part of making tuplestores faster [1], I noticed that in WindowAgg, when we end one partition we call tuplestore_end() and then we do tuplestore_begin_heap() again for the next partition in begin_partition() and then go on to set up the tuplestore read pointers according to what's required for the given frameOptions of the WindowAgg. This might make sense if the frameOptions could change between partitions, but they can't, so I don't see any reason why we can't just do tuplestore_clear() at the end of a partition. That resets the read pointer positions back to the start again ready for the next partition.\n>\n> I wrote the attached patch and checked how it affects performance. It helps quite a bit when there are lots of partitions.\n>\n> CREATE TABLE a (a INT NOT NULL);\n> INSERT INTO a SELECT x FROM generate_series(1,1000000)x;\n> VACUUM FREEZE ANALYZE a;\n>\n> bench.sql:\n> SELECT a,count(*) OVER (PARTITION BY a) FROM a OFFSET 1000000;\n>\n> master:\n> $ pgbench -n -f bench.sql -T 60 -M prepared postgres | grep latency\n> latency average = 293.488 ms\n> latency average = 295.509 ms\n> latency average = 297.772 ms\n>\n> patched:\n> $ pgbench -n -f bench.sql -T 60 -M prepared postgres | grep latency\n> latency average = 203.234 ms\n> latency average = 204.538 ms\n> latency average = 203.877 ms\n>\n> About 45% faster.\n>\n\nI repeated your measurements but by varying the number of partitions\nand repeating pgbench 5 times instead of 3. The idea is to see the\nimpact of the change on a lower number of partitions.\n\n10 partitions query: SELECT a,count(*) OVER (PARTITION BY a % 10) FROM\na OFFSET 1000000;\n100 partitions query: SELECT a,count(*) OVER (PARTITION BY a % 100)\nFROM a OFFSET 1000000;\n1000 partitions query: SELECT a,count(*) OVER (PARTITION BY a % 1000)\nFROM a OFFSET 1000000;\noriginal query with 1M partitions: SELECT a,count(*) OVER (PARTITION\nBY a) FROM a OFFSET 1000000;\nNotice that the offset is still the same to avoid any impact it may\nhave on the query execution.\n\nHere are the results\nmaster:\nno. of partitions, average latencies\n10, 362.166 ms, 369.313 ms, 375.203 ms, 368.798 ms, 372.483 ms\n100, 372.885 ms, 381.463 ms, 385.372 ms, 382.915 ms, 383.630 ms\n1000, 390.834 ms, 395.653 ms, 400.339 ms, 407.777 ms, 389.906 ms\n1000000, 552.848 ms, 553.943 ms, 547.806 ms, 541.871 ms, 546.741 ms\n\npatched\n10, 356.980 ms, 371.223 ms, 375.550 ms, 378.011 ms, 381.119 ms\n100, 392.307 ms, 385.087 ms, 380.383 ms, 390.999 ms, 388.422 ms\n1000, 405.136 ms, 397.576 ms, 399.021 ms, 399.572 ms, 406.604 ms\n1000000, 394.711 ms, 403.741 ms, 399.008 ms, 392.932 ms, 393.335 ms\n\nObservations\n1. The numbers corresponding to 10 and 100 partitions are higher when\npatched. That might be just noise. I don't see any reason why it would\nimpact negatively when there are a small number of partitions. The\nlower partition cases also have a higher number of rows per partition,\nso is the difference between MemoryContextDelete() vs\nMemoryContextReset() making any difference here. 
May be worth\nverifying those cases carefully. Otherwise upto 1000 partitions, it\ndoesn't show any differences.\n2. For 1M partitions it does make a difference. About 35% in my case.\nMoreover the change seems to be making the execution times independent\nof the number of partitions (more or less).\n\nCombining this observation with the first one, It might be worth\nlooking at the execution times when there are many rows per partition\nin case of a higher number of partitions.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 9 Jul 2024 20:11:41 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize WindowAgg's use of tuplestores" }, { "msg_contents": "On Wed, 10 Jul 2024 at 02:42, Ashutosh Bapat\n<[email protected]> wrote:\n> Observations\n> 1. The numbers corresponding to 10 and 100 partitions are higher when\n> patched. That might be just noise. I don't see any reason why it would\n> impact negatively when there are a small number of partitions. The\n> lower partition cases also have a higher number of rows per partition,\n> so is the difference between MemoryContextDelete() vs\n> MemoryContextReset() making any difference here. May be worth\n> verifying those cases carefully. Otherwise upto 1000 partitions, it\n> doesn't show any differences.\n\nI think this might just be noise as a result of rearranging code. In\nterms of C code, I don't see any reason for it to be slower. If you\nlook at GenerationDelete() (as what is getting called from\nMemoryContextDelete()), it just calls GenerationReset(). So resetting\nis going to always be less work than deleting the context, especially\ngiven we don't need to create the context again when we reset it.\n\nI wrote the attached script to see if I can also see the slowdown and\nI do see the patched code come out slightly slower (within noise\nlevels) in lower partition counts.\n\nTo get my compiler to produce code in a more optimal order for the\ncommon case, I added unlikely() to the \"if (winstate->all_first)\"\ncondition. This is only evaluated on the first time after a rescan,\nso putting that code at the end of the function makes more sense. The\nattached v2 patch has it this way. You can see the numbers look\nslightly better in the attached graph.\n\nThanks for having a look at this.\n\nDavid", "msg_date": "Fri, 12 Jul 2024 00:09:03 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize WindowAgg's use of tuplestores" }, { "msg_contents": "Em qui., 11 de jul. de 2024 às 09:09, David Rowley <[email protected]>\nescreveu:\n\n> On Wed, 10 Jul 2024 at 02:42, Ashutosh Bapat\n> <[email protected]> wrote:\n> > Observations\n> > 1. The numbers corresponding to 10 and 100 partitions are higher when\n> > patched. That might be just noise. I don't see any reason why it would\n> > impact negatively when there are a small number of partitions. The\n> > lower partition cases also have a higher number of rows per partition,\n> > so is the difference between MemoryContextDelete() vs\n> > MemoryContextReset() making any difference here. May be worth\n> > verifying those cases carefully. Otherwise upto 1000 partitions, it\n> > doesn't show any differences.\n>\n> I think this might just be noise as a result of rearranging code. In\n> terms of C code, I don't see any reason for it to be slower. If you\n> look at GenerationDelete() (as what is getting called from\n> MemoryContextDelete()), it just calls GenerationReset(). 
So resetting\n> is going to always be less work than deleting the context, especially\n> given we don't need to create the context again when we reset it.\n>\n> I wrote the attached script to see if I can also see the slowdown and\n> I do see the patched code come out slightly slower (within noise\n> levels) in lower partition counts.\n>\n> To get my compiler to produce code in a more optimal order for the\n> common case, I added unlikely() to the \"if (winstate->all_first)\"\n> condition. This is only evaluated on the first time after a rescan,\n> so putting that code at the end of the function makes more sense. The\n> attached v2 patch has it this way.\n\nNot that it's going to make any difference,\nbut since you're going to touch this code, why not?\n\nFunction *begin_partition*:\n1. You can reduce the scope for variable *outerPlan*,\nit is not needed anywhere else.\n+ /*\n+ * If this is the very first partition, we need to fetch the first input\n+ * row to store in first_part_slot.\n+ */\n+ if (TupIsNull(winstate->first_part_slot))\n+ {\n+ PlanState *outerPlan = outerPlanState(winstate);\n+ TupleTableSlot *outerslot = ExecProcNode(outerPlan);\n\n2. There is once reference to variable numFuncs\nYou can reduce the scope.\n\n+ /* reset mark and seek positions for each real window function */\n+ for (int i = 0; i < winstate->numfuncs; i++)\n\nFunction *prepare_tuplestore. *\n1. There is once reference to variable numFuncs\nYou can reduce the scope.\n /* create mark and read pointers for each real window function */\n- for (i = 0; i < numfuncs; i++)\n+ for (int i = 0; i < winstate->numfuncs; i++)\n\n2. You can securely initialize the default value for variable\n*winstate->grouptail_ptr*\nin *else* part.\n\n if ((frameOptions & (FRAMEOPTION_EXCLUDE_GROUP |\n FRAMEOPTION_EXCLUDE_TIES)) &&\n node->ordNumCols != 0)\n winstate->grouptail_ptr =\n tuplestore_alloc_read_pointer(winstate->buffer, 0);\n }\n+ else\n+ winstate->grouptail_ptr = -1;\n\nbest regards,\nRanier Vilela\n\nEm qui., 11 de jul. de 2024 às 09:09, David Rowley <[email protected]> escreveu:On Wed, 10 Jul 2024 at 02:42, Ashutosh Bapat\n<[email protected]> wrote:\n> Observations\n> 1. The numbers corresponding to 10 and 100 partitions are higher when\n> patched. That might be just noise. I don't see any reason why it would\n> impact negatively when there are a small number of partitions. The\n> lower partition cases also have a higher number of rows per partition,\n> so is the difference between MemoryContextDelete() vs\n> MemoryContextReset() making any difference here. May be worth\n> verifying those cases carefully. Otherwise upto 1000 partitions, it\n> doesn't show any differences.\n\nI think this might just be noise as a result of rearranging code. In\nterms of C code, I don't see any reason for it to be slower.  If you\nlook at GenerationDelete() (as what is getting called from\nMemoryContextDelete()), it just calls GenerationReset(). So resetting\nis going to always be less work than deleting the context, especially\ngiven we don't need to create the context again when we reset it.\n\nI wrote the attached script to see if I can also see the slowdown and\nI do see the patched code come out slightly slower (within noise\nlevels) in lower partition counts.\n\nTo get my compiler to produce code in a more optimal order for the\ncommon case, I added unlikely() to the \"if (winstate->all_first)\"\ncondition.  This is only evaluated on the first time after a rescan,\nso putting that code at the end of the function makes more sense.  
The\nattached v2 patch has it this way.Not that it's going to make any difference, but since you're going to touch this code, why not?Function *begin_partition*:1. You can reduce the scope for variable *outerPlan*, it is not needed anywhere else.\n\n+\t/*+\t * If this is the very first partition, we need to fetch the first input+\t * row to store in first_part_slot.+\t */+\tif (TupIsNull(winstate->first_part_slot))+\t{+\t\tPlanState\t\t*outerPlan = outerPlanState(winstate);+\t\tTupleTableSlot *outerslot = ExecProcNode(outerPlan);2. There is once reference to variable numFuncsYou can reduce the scope.+\t/* reset mark and seek positions for each real window function */+\tfor (int i = 0; i < winstate->numfuncs; i++)Function *prepare_tuplestore. *1. There is once reference to variable numFuncsYou can reduce the scope. \t/* create mark and read pointers for each real window function */-\tfor (i = 0; i < numfuncs; i++)+\tfor (int i = 0; i < winstate->numfuncs; i++)2. You can securely initialize the default value for variable *winstate->grouptail_ptr*in *else* part. \tif ((frameOptions & (FRAMEOPTION_EXCLUDE_GROUP | \t\t\t\t\t\t FRAMEOPTION_EXCLUDE_TIES)) && \t\tnode->ordNumCols != 0) \t\twinstate->grouptail_ptr = \t\t\ttuplestore_alloc_read_pointer(winstate->buffer, 0); \t}+\telse+\t\twinstate->grouptail_ptr = -1;best regards,Ranier Vilela", "msg_date": "Thu, 11 Jul 2024 18:48:53 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize WindowAgg's use of tuplestores" }, { "msg_contents": "Hi David,\n\nThank you for the patch.\n\n> I think this might just be noise as a result of rearranging code. In\n> terms of C code, I don't see any reason for it to be slower. If you\n> look at GenerationDelete() (as what is getting called from\n> MemoryContextDelete()), it just calls GenerationReset(). So resetting\n> is going to always be less work than deleting the context, especially\n> given we don't need to create the context again when we reset it.\n> \n> I wrote the attached script to see if I can also see the slowdown and\n> I do see the patched code come out slightly slower (within noise\n> levels) in lower partition counts.\n> \n> To get my compiler to produce code in a more optimal order for the\n> common case, I added unlikely() to the \"if (winstate->all_first)\"\n> condition. This is only evaluated on the first time after a rescan,\n> so putting that code at the end of the function makes more sense. The\n> attached v2 patch has it this way. You can see the numbers look\n> slightly better in the attached graph.\n> \n> Thanks for having a look at this.\n> \n> David\n\n@@ -2684,6 +2726,14 @@ ExecEndWindowAgg(WindowAggState *node)\n \tPlanState *outerPlan;\n \tint\t\t\ti;\n \n+\tif (node->buffer != NULL)\n+\t{\n+\t\ttuplestore_end(node->buffer);\n+\n+\t\t/* nullify so that release_partition skips the tuplestore_clear() */\n+\t\tnode->buffer = NULL;\n+\t}\n+\n\nIs it possible that node->buffer == NULL in ExecEndWindowAgg()? 
If\nnot, probably making it an Assert() or just removing the if() should\nbe fine.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Fri, 12 Jul 2024 12:20:20 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize WindowAgg's use of tuplestores" }, { "msg_contents": "> @@ -2684,6 +2726,14 @@ ExecEndWindowAgg(WindowAggState *node)\n> \tPlanState *outerPlan;\n> \tint\t\t\ti;\n> \n> +\tif (node->buffer != NULL)\n> +\t{\n> +\t\ttuplestore_end(node->buffer);\n> +\n> +\t\t/* nullify so that release_partition skips the tuplestore_clear() */\n> +\t\tnode->buffer = NULL;\n> +\t}\n> +\n> \n> Is it possible that node->buffer == NULL in ExecEndWindowAgg()? If\n> not, probably making it an Assert() or just removing the if() should\n> be fine.\n\nOf course it it possible, for example there's no row in a partition.\nSorry for noise.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Fri, 12 Jul 2024 14:41:08 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize WindowAgg's use of tuplestores" }, { "msg_contents": "On Thu, Jul 11, 2024 at 5:39 PM David Rowley <[email protected]> wrote:\n>\n> On Wed, 10 Jul 2024 at 02:42, Ashutosh Bapat\n> <[email protected]> wrote:\n> > Observations\n> > 1. The numbers corresponding to 10 and 100 partitions are higher when\n> > patched. That might be just noise. I don't see any reason why it would\n> > impact negatively when there are a small number of partitions. The\n> > lower partition cases also have a higher number of rows per partition,\n> > so is the difference between MemoryContextDelete() vs\n> > MemoryContextReset() making any difference here. May be worth\n> > verifying those cases carefully. Otherwise upto 1000 partitions, it\n> > doesn't show any differences.\n>\n> I think this might just be noise as a result of rearranging code. In\n> terms of C code, I don't see any reason for it to be slower. If you\n> look at GenerationDelete() (as what is getting called from\n> MemoryContextDelete()), it just calls GenerationReset(). So resetting\n> is going to always be less work than deleting the context, especially\n> given we don't need to create the context again when we reset it.\n>\n> I wrote the attached script to see if I can also see the slowdown and\n> I do see the patched code come out slightly slower (within noise\n> levels) in lower partition counts.\n>\n> To get my compiler to produce code in a more optimal order for the\n> common case, I added unlikely() to the \"if (winstate->all_first)\"\n> condition. This is only evaluated on the first time after a rescan,\n> so putting that code at the end of the function makes more sense. The\n> attached v2 patch has it this way. You can see the numbers look\n> slightly better in the attached graph.\n\nThe change to all_first seems unrelated to the tuplestore\noptimization. But it's bringing the results inline with the master for\nlower number of partitions.\n\nThanks for the script. 
I have similar results on my laptop.\n From master\nTesting with 1000000 partitions\nlatency average = 505.738 ms\nlatency average = 509.407 ms\nlatency average = 522.461 ms\nTesting with 100000 partitions\nlatency average = 329.026 ms\nlatency average = 327.504 ms\nlatency average = 342.556 ms\nTesting with 10000 partitions\nlatency average = 299.496 ms\nlatency average = 298.266 ms\nlatency average = 306.773 ms\nTesting with 1000 partitions\nlatency average = 299.006 ms\nlatency average = 302.188 ms\nlatency average = 301.701 ms\nTesting with 100 partitions\nlatency average = 305.411 ms\nlatency average = 286.935 ms\nlatency average = 302.432 ms\nTesting with 10 partitions\nlatency average = 288.091 ms\nlatency average = 294.506 ms\nlatency average = 305.082 ms\nTesting with 1 partitions\nlatency average = 301.121 ms\nlatency average = 319.615 ms\nlatency average = 301.141 ms\n\nPatched\nTesting with 1000000 partitions\nlatency average = 351.683 ms\nlatency average = 352.516 ms\nlatency average = 352.086 ms\nTesting with 100000 partitions\nlatency average = 300.626 ms\nlatency average = 303.584 ms\nlatency average = 306.959 ms\nTesting with 10000 partitions\nlatency average = 289.560 ms\nlatency average = 302.248 ms\nlatency average = 297.423 ms\nTesting with 1000 partitions\nlatency average = 308.600 ms\nlatency average = 299.215 ms\nlatency average = 289.681 ms\nTesting with 100 partitions\nlatency average = 301.216 ms\nlatency average = 286.240 ms\nlatency average = 291.232 ms\nTesting with 10 partitions\nlatency average = 305.260 ms\nlatency average = 296.707 ms\nlatency average = 300.266 ms\nTesting with 1 partitions\nlatency average = 316.199 ms\nlatency average = 314.043 ms\nlatency average = 309.425 ms\n\nNow that you are also seeing the slowdown with your earlier patch, I\nam wondering whether adding unlikely() by itself is a good\noptimization. There might be some other reason behind the perceived\nslowdown. How do the numbers look when you just add unlikely() without\nany other changes?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 12 Jul 2024 11:59:12 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize WindowAgg's use of tuplestores" }, { "msg_contents": "On Fri, Jul 12, 2024 at 11:59 AM Ashutosh Bapat\n<[email protected]> wrote:\n>\n>\n> Now that you are also seeing the slowdown with your earlier patch, I\n> am wondering whether adding unlikely() by itself is a good\n> optimization. There might be some other reason behind the perceived\n> slowdown. How do the numbers look when you just add unlikely() without\n> any other changes?\n\nOut of curiosity, I measured the performance with just the \"unlikely\"\nchange and with the full patch. 
Below are the results\nTesting with 1000000 partitions\nlatency average = 503.321 ms\nlatency average = 510.365 ms\nlatency average = 512.117 ms\nTesting with 100000 partitions\nlatency average = 371.764 ms\nlatency average = 361.202 ms\nlatency average = 360.529 ms\nTesting with 10000 partitions\nlatency average = 327.495 ms\nlatency average = 327.334 ms\nlatency average = 325.925 ms\nTesting with 1000 partitions\nlatency average = 319.290 ms\nlatency average = 318.709 ms\nlatency average = 318.013 ms\nTesting with 100 partitions\nlatency average = 317.756 ms\nlatency average = 318.933 ms\nlatency average = 316.529 ms\nTesting with 10 partitions\nlatency average = 316.392 ms\nlatency average = 315.297 ms\nlatency average = 316.007 ms\nTesting with 1 partitions\nlatency average = 330.978 ms\nlatency average = 330.529 ms\nlatency average = 333.538 ms\n\nwith just unlikely change\nTesting with 1000000 partitions\nlatency average = 504.786 ms\nlatency average = 507.557 ms\nlatency average = 508.522 ms\nTesting with 100000 partitions\nlatency average = 316.345 ms\nlatency average = 315.496 ms\nlatency average = 326.503 ms\nTesting with 10000 partitions\nlatency average = 296.878 ms\nlatency average = 293.927 ms\nlatency average = 294.654 ms\nTesting with 1000 partitions\nlatency average = 292.680 ms\nlatency average = 283.245 ms\nlatency average = 280.857 ms\nTesting with 100 partitions\nlatency average = 292.569 ms\nlatency average = 296.330 ms\nlatency average = 295.389 ms\nTesting with 10 partitions\nlatency average = 285.909 ms\nlatency average = 287.499 ms\nlatency average = 293.322 ms\nTesting with 1 partitions\nlatency average = 305.080 ms\nlatency average = 309.100 ms\nlatency average = 307.794 ms\n\nThere's noticeable change across all the number of partitions with\njust \"unlikely\" change. The improvement is lesser with larger number\nof partitions but quite visible with lesser number of partitions.\n\nfull patch\nTesting with 1000000 partitions\nlatency average = 356.026 ms\nlatency average = 375.280 ms\nlatency average = 374.575 ms\nTesting with 100000 partitions\nlatency average = 318.173 ms\nlatency average = 307.598 ms\nlatency average = 315.868 ms\nTesting with 10000 partitions\nlatency average = 295.541 ms\nlatency average = 313.317 ms\nlatency average = 299.936 ms\nTesting with 1000 partitions\nlatency average = 295.082 ms\nlatency average = 305.204 ms\nlatency average = 294.702 ms\nTesting with 100 partitions\nlatency average = 302.552 ms\nlatency average = 307.596 ms\nlatency average = 304.202 ms\nTesting with 10 partitions\nlatency average = 295.050 ms\nlatency average = 291.127 ms\nlatency average = 299.704 ms\nTesting with 1 partitions\nlatency average = 308.781 ms\nlatency average = 304.071 ms\nlatency average = 319.560 ms\n\nThere is significant improvement with a large number of partitions as\nseen previously. But for a smaller number of partitions the\nperformance worsens, which needs some investigation.\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 15 Jul 2024 11:32:42 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize WindowAgg's use of tuplestores" }, { "msg_contents": "Ashutosh Bapat <[email protected]> writes:\n\n> On Fri, Jul 12, 2024 at 11:59 AM Ashutosh Bapat\n> <[email protected]> wrote:\n>>\n>>\n> Out of curiosity, I measured the performance with just the \"unlikely\"\n> change and with the full patch. 
Below are the results\n> Testing with 1000000 partitions\n...\n> latency average = 333.538 ms\n>\n> with just unlikely change\n> Testing with 1000000 partitions\n..\n> Testing with 1 partitions\n>\n> There's noticeable change across all the number of partitions with\n> just \"unlikely\" change.\n\nI'm curious about why a 'unlikey' change can cause noticeable change,\nespecially there is just one function call in the 'if-statement' (I am\nthinking more instrucments in the if-statement body, more changes it can\ncause). \n\n+\tif (unlikely(winstate->buffer == NULL))\n+\t\tprepare_tuplestore(winstate);\n\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Thu, 18 Jul 2024 15:55:42 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize WindowAgg's use of tuplestores" }, { "msg_contents": "On Thu, 18 Jul 2024 at 19:56, Andy Fan <[email protected]> wrote:\n> I'm curious about why a 'unlikey' change can cause noticeable change,\n> especially there is just one function call in the 'if-statement' (I am\n> thinking more instrucments in the if-statement body, more changes it can\n> cause).\n>\n> + if (unlikely(winstate->buffer == NULL))\n> + prepare_tuplestore(winstate);\n\nThis isn't the branch being discussed. We've not talked about whether\nthe above one is useful or not. The branch we were discussing is the\n\"if (winstate->all_first)\", of which has a large number of\ninstructions inside it.\n\nHowever, for this one, my understanding of this particular case is,\nit's a very predictable branch as, even if the first prediction gets\nit wrong the first time, it's not going to be wrong for long as the\ncondition is false for all subsequent calls. So, from there, if you\nassume that the instruction decoder is always going to fetch the\ncorrect instructions according to the branch predictor's correct\nprediction (the ones for branch not taken), then the only benefit\npossible is that the next instructions to execute are the next\ninstructions in the code rather than instructions located far away,\npossibly on another cacheline that needs to be loaded from RAM or\nhigher cache levels. Adding the unlikely() should coax the compiler\ninto laying out the code so the branch's instructions at the end of\nthe function and that, in theory, should reduce the likelihood of\nfrontend stalls waiting for cachelines for further instructions to\narrive from RAM or higher cache levels. That's my theory, at least. I\nexpected to see perf stat show me a lower \"stalled-cycles-frontend\"\nwith the v2 patch than v1, but didn't see that when I looked, so my\ntheory might be wrong.\n\nFor the case you're mentioning above, calling the function isn't very\nmany instructions, so the benefits are likely very small with a branch\nthis predictable.\n\nDavid\n\n\n", "msg_date": "Thu, 18 Jul 2024 23:49:54 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize WindowAgg's use of tuplestores" }, { "msg_contents": "On Mon, 15 Jul 2024 at 18:02, Ashutosh Bapat\n<[email protected]> wrote:\n> There is significant improvement with a large number of partitions as\n> seen previously. But for a smaller number of partitions the\n> performance worsens, which needs some investigation.\n\nWe get performance variations all the time from unrelated changes due\nto alignment changes in the code layout. There's a write-up in [1]\nthat you might find interesting. 
In particular the following\nparagraph:\n\n\"Unfortunately, modern architectural features make this approach\nunsound. Statistically sound evaluation requires multiple samples to\ntest whether one can or cannot (with high confidence) reject the null\nhypothesis that results are the same before and after. However, caches\nand branch predictors make performance dependent on machine-specific\nparameters and the exact layout of code, stack frames, and heap\nobjects. A single binary constitutes just one sample from the space of\nprogram layouts, regardless of the number of runs. Since compiler\noptimizations and code changes also alter layout, it is currently\nimpossible to distinguish the impact of an optimization from that of\nits layout effects.\"\n\nSince the code changes here add no additional work, they only remove\nwork. I think any regressions you see must be related to code\nalignment.\n\nTo try and move this forward again, I adjusted the patch to use a\nstatic function with pg_noinline rather than unlikely. I don't think\nthis will make much difference code generation wise, but I did think\nit was an improvement in code cleanliness. Patches attached.\n\nI did a round of benchmarking on an AMD Zen4 7945hx and on an Apple\nM2. I also graphed the results you sent so they're easier to compare\nwith mine.\n\n0001 is effectively the unlikely() patch for calculating the frame offsets.\n0002 is the tuplestore_reset() patch\n\nThe AMD laptop results were a bit noisy. M2 results were much more stable.\n\nDavid\n\n[1] https://emeryberger.com/research/stabilizer/", "msg_date": "Mon, 19 Aug 2024 22:01:22 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize WindowAgg's use of tuplestores" }, { "msg_contents": "On Mon, 19 Aug 2024 at 22:01, David Rowley <[email protected]> wrote:\n> To try and move this forward again, I adjusted the patch to use a\n> static function with pg_noinline rather than unlikely. I don't think\n> this will make much difference code generation wise, but I did think\n> it was an improvement in code cleanliness. Patches attached.\n>\n> I did a round of benchmarking on an AMD Zen4 7945hx and on an Apple\n> M2. I also graphed the results you sent so they're easier to compare\n> with mine.\n>\n> 0001 is effectively the unlikely() patch for calculating the frame offsets.\n> 0002 is the tuplestore_reset() patch\n\nI was experimenting with this again. The 0002 patch added a\nnext_partition field to the WindowAggState struct and caused the\nstruct to become slightly bigger. I've now included a 0003 patch\nwhich shifts some fields around in that struct so as to keep it the\nsame size as it is on master. Benchmarking with that removes that very\ntiny performance regression. Please see the attached CSV file for the\nresults. The percentage row compares master to all patches. I also\ntested this on an AMD 3990x machine along with fresh results from the\nAMD 7945hx laptop. Both of those machines come out faster on all tests\nwhen comparing master to all 3 patches. 
With the Apple M2, there does\nnot seem to be much change in performance with the tests containing\nfewer rows per partition, some are faster, some are slower, all within\ntypical noise fluctuations.\n\nGiven the performance now seems improved in all cases, I plan on\npushing this change as a single commit.\n\nDavid", "msg_date": "Thu, 5 Sep 2024 02:49:51 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize WindowAgg's use of tuplestores" }, { "msg_contents": "On Wed, Sep 4, 2024 at 8:20 PM David Rowley <[email protected]> wrote:\n>\n> On Mon, 19 Aug 2024 at 22:01, David Rowley <[email protected]> wrote:\n> > To try and move this forward again, I adjusted the patch to use a\n> > static function with pg_noinline rather than unlikely. I don't think\n> > this will make much difference code generation wise, but I did think\n> > it was an improvement in code cleanliness. Patches attached.\n> >\n> > I did a round of benchmarking on an AMD Zen4 7945hx and on an Apple\n> > M2. I also graphed the results you sent so they're easier to compare\n> > with mine.\n> >\n> > 0001 is effectively the unlikely() patch for calculating the frame offsets.\n> > 0002 is the tuplestore_reset() patch\n>\n> I was experimenting with this again. The 0002 patch added a\n> next_partition field to the WindowAggState struct and caused the\n> struct to become slightly bigger. I've now included a 0003 patch\n> which shifts some fields around in that struct so as to keep it the\n> same size as it is on master. Benchmarking with that removes that very\n> tiny performance regression.\n\nIf patches are applied in the same sequence as yours, the size of\nWindowAggState struct goes from 632 to 640 and then back to 632 on my\nlaptop. That looks a tiny but nice improvement by itself.\n\nIf the patches are applied in the order 0001, 0003 and 0002, the size\nof the structure remains 632 throughout. Patch 0003 does not affect\nthe size of the structure by itself.\n\n> I also\n> tested this on an AMD 3990x machine along with fresh results from the\n> AMD 7945hx laptop. Both of those machines come out faster on all tests\n> when comparing master to all 3 patches. With the Apple M2, there does\n> not seem to be much change in performance with the tests containing\n> fewer rows per partition, some are faster, some are slower, all within\n> typical noise fluctuations.\n\nI have similar observations as yours on my amd64 laptop. I also\nverified that 0003 by itself is not effective. This indicates that the\n(atleast some of the) regression caused by 0002 comes from larger\nstructure. Why would that happen?\n\n>\n> Given the performance now seems improved in all cases, I plan on\n> pushing this change as a single commit.\n>\n\nAgreed. I will review the code in detail by next week.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 6 Sep 2024 12:00:33 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize WindowAgg's use of tuplestores" }, { "msg_contents": "On Fri, 6 Sept 2024 at 18:30, Ashutosh Bapat\n<[email protected]> wrote:\n> If the patches are applied in the order 0001, 0003 and 0002, the size\n> of the structure remains 632 throughout. Patch 0003 does not affect\n> the size of the structure by itself.\n\nYeah. I kept 0003 separate so it could be easily tested independently.\n\n> I have similar observations as yours on my amd64 laptop. I also\n> verified that 0003 by itself is not effective. 
This indicates that the\n> (atleast some of the) regression caused by 0002 comes from larger\n> structure. Why would that happen?\n\nI don't know the exact reason, but it could be something as simple as\nhaving to load an additional cacheline that we previously didn't need\nto load. Or, perhaps more cachelines are being modified and that slows\ndown some cache eviction code. The PostgreSQL executor isn't very\nfriendly to CPU caches as we do tuple-at-a-time execution and\ncontinually switch to other nodes. That requires accessing executor\nstates only briefly before switching to another node to bubble tuples\nto the top of the plan.\n\n> > Given the performance now seems improved in all cases, I plan on\n> > pushing this change as a single commit.\n> >\n>\n> Agreed. I will review the code in detail by next week.\n\nThanks, but I've already pushed these patches. I ended up pushing\nv4-0001 as a separate commit. v4-0002 and v4-0003 went in as one. Feel\nfree to still have a look though.\n\nDavid\n\n\n", "msg_date": "Sat, 7 Sep 2024 01:37:20 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize WindowAgg's use of tuplestores" } ]
[ { "msg_contents": "The numeric round() and trunc() functions clamp the scale argument to\nthe range between +/- NUMERIC_MAX_RESULT_SCALE, which is +/- 2000.\nThat's a long way short of the actual allowed range of type numeric,\nso they produce incorrect results when rounding/truncating more than\n2000 digits before or after the decimal point. For example,\nround(1e-5000, 5000) returns 0 instead of 1e-5000.\n\nAttached is a patch fixing that, using the actual documented range of\ntype numeric.\n\nI've also tidied up a bit by replacing all instances of SHRT_MAX with\na new constant NUMERIC_WEIGHT_MAX, whose name more accurately\ndescribes the limit, as used in various other overflow checks.\n\nIn doing so, I also noticed a comment in power_var() which claimed\nthat ln_dweight could be as low as -SHRT_MAX (-32767), which is wrong.\nIt can only be as small as -NUMERIC_DSCALE_MAX (-16383), though that\ndoesn't affect the point being made in that comment.\n\nI'd like to treat this as a bug-fix and back-patch it, since the\ncurrent behaviour is clearly broken.\n\nRegards,\nDean", "msg_date": "Sun, 7 Jul 2024 12:28:32 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Incorrect results from numeric round() and trunc()" }, { "msg_contents": "On Sun, Jul 7, 2024, at 13:28, Dean Rasheed wrote:\n> The numeric round() and trunc() functions clamp the scale argument to\n> the range between +/- NUMERIC_MAX_RESULT_SCALE, which is +/- 2000.\n> That's a long way short of the actual allowed range of type numeric,\n> so they produce incorrect results when rounding/truncating more than\n> 2000 digits before or after the decimal point. For example,\n> round(1e-5000, 5000) returns 0 instead of 1e-5000.\n>\n> Attached is a patch fixing that, using the actual documented range of\n> type numeric.\n>\n> I've also tidied up a bit by replacing all instances of SHRT_MAX with\n> a new constant NUMERIC_WEIGHT_MAX, whose name more accurately\n> describes the limit, as used in various other overflow checks.\n>\n> In doing so, I also noticed a comment in power_var() which claimed\n> that ln_dweight could be as low as -SHRT_MAX (-32767), which is wrong.\n> It can only be as small as -NUMERIC_DSCALE_MAX (-16383), though that\n> doesn't affect the point being made in that comment.\n>\n> I'd like to treat this as a bug-fix and back-patch it, since the\n> current behaviour is clearly broken.\n\nFix seems straightforward to me.\nI agree it should be back-patched.\n\nRegards,\nJoel\n\n\n", "msg_date": "Sun, 07 Jul 2024 16:22:48 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect results from numeric round() and trunc()" }, { "msg_contents": "On Sun, Jul 7, 2024, at 13:28, Dean Rasheed wrote:\n> I've also tidied up a bit by replacing all instances of SHRT_MAX with\n> a new constant NUMERIC_WEIGHT_MAX, whose name more accurately\n> describes the limit, as used in various other overflow checks.\n\nHaving thought a bit more on this, I think we probably need a\nDEC_DIGITS sensitive definition of NUMERIC_WEIGHT_MAX,\nsince per spec the max range for numeric is 0x20000 (131072)\ndecimal digits.\n\nTherefore, I think perhaps what we want is:\n\n+#define NUMERIC_DSCALE_MIN 0\n+#define NUMERIC_WEIGHT_MAX ((0x20000/DEC_DIGITS)-1)\n+#define NUMERIC_WEIGHT_MIN (-(NUMERIC_DSCALE_MAX+1)/DEC_DIGITS)\n\nMaybe also 0x20000 (131072) should be a defined constant.\n\nRegards,\nJoel\n\n\n", "msg_date": "Mon, 08 Jul 2024 01:40:31 +0200", "msg_from": 
"\"Joel Jacobson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect results from numeric round() and trunc()" }, { "msg_contents": "On Mon, 8 Jul 2024 at 00:40, Joel Jacobson <[email protected]> wrote:\n>\n> On Sun, Jul 7, 2024, at 13:28, Dean Rasheed wrote:\n> > I've also tidied up a bit by replacing all instances of SHRT_MAX with\n> > a new constant NUMERIC_WEIGHT_MAX, whose name more accurately\n> > describes the limit, as used in various other overflow checks.\n>\n> Having thought a bit more on this, I think we probably need a\n> DEC_DIGITS sensitive definition of NUMERIC_WEIGHT_MAX,\n> since per spec the max range for numeric is 0x20000 (131072)\n> decimal digits.\n>\n\nNo, the maximum weight is determined by the use of int16 to store the\nweight. Therefore if you did reduce DEC_DIGITS to 1 or 2, the number\nof decimal digits allowed before the decimal point would be reduced\ntoo.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 8 Jul 2024 10:45:22 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incorrect results from numeric round() and trunc()" }, { "msg_contents": "On Mon, Jul 8, 2024, at 11:45, Dean Rasheed wrote:\n> On Mon, 8 Jul 2024 at 00:40, Joel Jacobson <[email protected]> wrote:\n>>\n>> On Sun, Jul 7, 2024, at 13:28, Dean Rasheed wrote:\n>> > I've also tidied up a bit by replacing all instances of SHRT_MAX with\n>> > a new constant NUMERIC_WEIGHT_MAX, whose name more accurately\n>> > describes the limit, as used in various other overflow checks.\n>>\n>> Having thought a bit more on this, I think we probably need a\n>> DEC_DIGITS sensitive definition of NUMERIC_WEIGHT_MAX,\n>> since per spec the max range for numeric is 0x20000 (131072)\n>> decimal digits.\n>>\n>\n> No, the maximum weight is determined by the use of int16 to store the\n> weight. Therefore if you did reduce DEC_DIGITS to 1 or 2, the number\n> of decimal digits allowed before the decimal point would be reduced\n> too.\n\nOK, that can actually be seen as a feature, especially since it's\nof course more likely DEC_DIGITS could increase in the future\nthan decrease.\n\nFor example, let's say we would double it to 8,\nthen if NUMERIC_WEIGHT_MAX would still be 0x7FFF (32767),\nthen the maximum range for numeric would increase from 131072 to 262144\ndecimal digits allowed before the decimal point.\n\nLGTM.\n\nRegards,\nJoel\n\n\n", "msg_date": "Mon, 08 Jul 2024 12:08:10 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect results from numeric round() and trunc()" }, { "msg_contents": "On Mon, 8 Jul 2024 at 11:08, Joel Jacobson <[email protected]> wrote:\n>\n> LGTM.\n>\n\nThanks for the review. I have pushed and back-patched this.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 9 Jul 2024 08:28:41 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incorrect results from numeric round() and trunc()" } ]
[ { "msg_contents": "Hi,\n\nI've noticed an issue with non-superusers who have the pg_maintain role.\nWhen they run VACUUM on a specific table within a specific schema,\nlike \"VACUUM mynsp.mytbl\", it fails if they don't have the USAGE privilege\non the schema. For example, the error message logged is\n\"ERROR: permission denied for schema mynsp\". However, running VACUUM\nwithout specifying the table name, such as \"VACUUM\",\ncompletes successfully and vacuums all tables, including those in schemas\nwhere the user lacks the USAGE privilege.\n\nIs this behavior intentional?\n\nThis issue also affects other maintenance commands covered by pg_maintain.\n\nI assumed that a pg_maintain user could run VACUUM on specific tables\nin any schema without needing additional privileges. So, shouldn't\npg_maintain users be able to perform maintenance commands as if they have\nUSAGE rights on all schemas?\n\nIf this has already been discussed and the current behavior is deemed proper,\nI'm sorry for bringing it up again. Even in that case, it would be helpful\nto document that USAGE privilege on the schema may be necessary in addition\nto pg_maintain to perform the maintenance command.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 8 Jul 2024 01:03:42 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "pg_maintain and USAGE privilege on schema" }, { "msg_contents": "On Mon, Jul 08, 2024 at 01:03:42AM +0900, Fujii Masao wrote:\n> I've noticed an issue with non-superusers who have the pg_maintain role.\n> When they run VACUUM on a specific table within a specific schema,\n> like \"VACUUM mynsp.mytbl\", it fails if they don't have the USAGE privilege\n> on the schema. For example, the error message logged is\n> \"ERROR: permission denied for schema mynsp\". However, running VACUUM\n> without specifying the table name, such as \"VACUUM\",\n> completes successfully and vacuums all tables, including those in schemas\n> where the user lacks the USAGE privilege.\n> \n> Is this behavior intentional?\n\nI'd consider it intentional because it matches the database owner behavior.\nIf the database owner does not have USAGE on a schema, they'll similarly be\nunable to VACUUM a specific table in that schema while being able to VACUUM\nit via a database-wide command. That's admittedly a little weird, but IMHO\nany changes in this area should apply to both pg_maintain and the database\nowner.\n\n> I assumed that a pg_maintain user could run VACUUM on specific tables\n> in any schema without needing additional privileges. So, shouldn't\n> pg_maintain users be able to perform maintenance commands as if they have\n> USAGE rights on all schemas?\n\nIt might be reasonable to give implicit USAGE privileges on all schemas\nduring maintenance commands to pg_maintain roles. I would be a little\nhesitant to consider this v17 material, though.\n\nThere are some other inconsistencies that predate MAINTAIN that I think we\nought to clear up at some point. For example, the privilege checks for\nREINDEX work a bit differently than VACUUM, ANALYZE, and CLUSTER. 
I doubt\nthat's causing anyone too much trouble in the field, but since we're\ngrouping these commands together as \"maintenance commands\" now, it'd be\nnice to make them as consistent as possible.\n\n-- \nnathan\n\n\n", "msg_date": "Sun, 7 Jul 2024 21:13:44 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_maintain and USAGE privilege on schema" }, { "msg_contents": "On Mon, Jul 08, 2024 at 01:03:42AM +0900, Fujii Masao wrote:\n> If this has already been discussed and the current behavior is deemed proper,\n> I'm sorry for bringing it up again. Even in that case, it would be helpful\n> to document that USAGE privilege on the schema may be necessary in addition\n> to pg_maintain to perform the maintenance command.\n\nIf we don't proceed with the implicit-USAGE-privilege idea, +1 for\ndocumenting.\n\n-- \nnathan\n\n\n", "msg_date": "Sun, 7 Jul 2024 21:17:44 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_maintain and USAGE privilege on schema" }, { "msg_contents": "\n\nOn 2024/07/08 11:13, Nathan Bossart wrote:\n> On Mon, Jul 08, 2024 at 01:03:42AM +0900, Fujii Masao wrote:\n>> I've noticed an issue with non-superusers who have the pg_maintain role.\n>> When they run VACUUM on a specific table within a specific schema,\n>> like \"VACUUM mynsp.mytbl\", it fails if they don't have the USAGE privilege\n>> on the schema. For example, the error message logged is\n>> \"ERROR: permission denied for schema mynsp\". However, running VACUUM\n>> without specifying the table name, such as \"VACUUM\",\n>> completes successfully and vacuums all tables, including those in schemas\n>> where the user lacks the USAGE privilege.\n>>\n>> Is this behavior intentional?\n> \n> I'd consider it intentional because it matches the database owner behavior.\n> If the database owner does not have USAGE on a schema, they'll similarly be\n> unable to VACUUM a specific table in that schema while being able to VACUUM\n> it via a database-wide command. \n\nYes, you're right.\n\n\n> That's admittedly a little weird, but IMHO\n> any changes in this area should apply to both pg_maintain and the database\n> owner.\n\nHowever, unlike the database owner, pg_maintain by definition should\nhave *all* the rights needed for maintenance tasks, including MAINTAIN\nrights on tables and USAGE rights on schemas? ISTM that both\npg_read_all_data and pg_write_all_data roles are defined similarly,\nwith USAGE rights on all schemas. So, granting USAGE rights to\npg_maintain, but not the database owner, doesn't seem so odd to me.\n\n\n>> I assumed that a pg_maintain user could run VACUUM on specific tables\n>> in any schema without needing additional privileges. So, shouldn't\n>> pg_maintain users be able to perform maintenance commands as if they have\n>> USAGE rights on all schemas?\n> \n> It might be reasonable to give implicit USAGE privileges on all schemas\n> during maintenance commands to pg_maintain roles. I would be a little\n> hesitant to consider this v17 material, though.\n\nThat's a valid concern. I'd like hear more opinions about this.\n\n\n> There are some other inconsistencies that predate MAINTAIN that I think we\n> ought to clear up at some point. For example, the privilege checks for\n> REINDEX work a bit differently than VACUUM, ANALYZE, and CLUSTER. 
I doubt\n> that's causing anyone too much trouble in the field, but since we're\n> grouping these commands together as \"maintenance commands\" now, it'd be\n> nice to make them as consistent as possible.\n\n+1\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 10 Jul 2024 17:13:58 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_maintain and USAGE privilege on schema" }, { "msg_contents": "On Wed, Jul 10, 2024 at 05:13:58PM +0900, Fujii Masao wrote:\n> However, unlike the database owner, pg_maintain by definition should\n> have *all* the rights needed for maintenance tasks, including MAINTAIN\n> rights on tables and USAGE rights on schemas? ISTM that both\n> pg_read_all_data and pg_write_all_data roles are defined similarly,\n> with USAGE rights on all schemas. So, granting USAGE rights to\n> pg_maintain, but not the database owner, doesn't seem so odd to me.\n\nIt doesn't seem so odd to me, either. But there are other things that\ncould prevent a role with privileges of pg_maintain from being able to\nVACUUM a table. For example, the role might not have LOGIN, or it might\nnot have CONNECT on the database. I think the argument for giving\npg_maintain roles implicit USAGE on all schemas for only maintenance\ncommands is that we already do that in some cases (e.g., a database-wide\nVACUUM).\n\n> I'd like hear more opinions about this.\n\n+1\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 10 Jul 2024 09:04:24 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_maintain and USAGE privilege on schema" }, { "msg_contents": "On Wed, 2024-07-10 at 17:13 +0900, Fujii Masao wrote:\n> ISTM that both\n> pg_read_all_data and pg_write_all_data roles are defined similarly,\n> with USAGE rights on all schemas.\n\nI'm not so sure that was a great idea to begin with. If you create a\nprivate schema with a SECURITY DEFINER function in it, it's a bit odd\nto allow someone with pg_read_all_data to execute it. Granted, that's\ndocumented behavior, but I'm not sure the privileges should be bundled\nin that fashion.\n\n> > It might be reasonable to give implicit USAGE privileges on all\n> > schemas\n> > during maintenance commands to pg_maintain roles.\n\nThat's an even more specific exception: having USAGE only in the\ncontext of a maintenance command. I think that's a new concept, right?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 10 Jul 2024 12:29:00 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_maintain and USAGE privilege on schema" }, { "msg_contents": "On Wed, Jul 10, 2024 at 12:29:00PM -0700, Jeff Davis wrote:\n>> > It might be reasonable to give implicit USAGE privileges on all\n>> > schemas\n>> > during maintenance commands to pg_maintain roles.\n> \n> That's an even more specific exception: having USAGE only in the\n> context of a maintenance command. I think that's a new concept, right?\n\nAFAIK yes.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 10 Jul 2024 14:32:06 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_maintain and USAGE privilege on schema" } ]
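A short session sketching the behaviour described in the thread above; the role name is hypothetical, while the schema, table and error text are the ones quoted in the report:

    CREATE SCHEMA mynsp;
    CREATE TABLE mynsp.mytbl (id int);
    CREATE ROLE maint_user LOGIN;
    GRANT pg_maintain TO maint_user;

    SET ROLE maint_user;
    VACUUM mynsp.mytbl;   -- ERROR:  permission denied for schema mynsp
    VACUUM;               -- succeeds, and vacuums mynsp.mytbl along with the rest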
[ { "msg_contents": "Hello hackers,\n\nThis patch adds a mul_var_large() that is dispatched to from mul_var()\nfor var1ndigits >= 8, regardless of rscale.\n\nThe main idea with mul_var_large() is to reduce the \"n\" in O(n^2) by a factor\nof two.\n\nThis is achieved by first converting the (ndigits) number of int16 NBASE digits,\nto (ndigits/2) number of int32 NBASE^2 digits, as well as upgrading the\nint32 variables to int64-variables so that the products and carry values fit.\n\nThe existing mul_var() algorithm is then executed without structural change.\n\nFinally, the int32 NBASE^2 result digits are converted back to twice the number\nof int16 NBASE digits.\n\nThis adds overhead of approximately 4 * O(n), due to the conversion.\n\nBenchmarks indicates mul_var_large() starts to be beneficial when\nvar1 is at least 8 ndigits, or perhaps a little more.\nDefinitively an interesting speed-up for 100 ndigits and above.\n\nBenchmarked on Apple M3 Max so far:\n\n-- var1ndigits == var2ndigits == 10\nSELECT COUNT(*) FROM n_10 WHERE product = var1 * var2;\nTime: 3957.740 ms (00:03.958) -- HEAD\nTime: 3943.795 ms (00:03.944) -- mul_var_large\n\n-- var1ndigits == var2ndigits == 100\nSELECT COUNT(*) FROM n_100 WHERE product = var1 * var2;\nTime: 1532.594 ms (00:01.533) -- HEAD\nTime: 1065.974 ms (00:01.066) -- mul_var_large\n\n-- var1ndigits == var2ndigits == 1000\nSELECT COUNT(*) FROM n_1000 WHERE product = var1 * var2;\nTime: 3055.372 ms (00:03.055) -- HEAD\nTime: 2295.888 ms (00:02.296) -- mul_var_large\n\n-- var1ndigits and var2ndigits completely random,\n-- with random number of decimal digits\nSELECT COUNT(*) FROM n_mixed WHERE product = var1 * var2;\nTime: 46796.240 ms (00:46.796) -- HEAD\nTime: 27970.006 ms (00:27.970) -- mul_var_large\n\n-- var1ndigits == var2ndigits == 16384\nSELECT COUNT(*) FROM n_max WHERE product = var1 * var2;\nTime: 3191.145 ms (00:03.191) -- HEAD\nTime: 1836.404 ms (00:01.836) -- mul_var_large\n\nRegards,\nJoel", "msg_date": "Sun, 07 Jul 2024 21:46:20 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Sun, 7 Jul 2024 at 20:46, Joel Jacobson <[email protected]> wrote:\n>\n> This patch adds a mul_var_large() that is dispatched to from mul_var()\n> for var1ndigits >= 8, regardless of rscale.\n>\n> -- var1ndigits == var2ndigits == 16384\n> SELECT COUNT(*) FROM n_max WHERE product = var1 * var2;\n> Time: 3191.145 ms (00:03.191) -- HEAD\n> Time: 1836.404 ms (00:01.836) -- mul_var_large\n>\n\nThat's pretty nice. For some reason though, this patch seems to\nconsistently make the numeric_big regression test a bit slower:\n\nok 224 - numeric_big 280 ms [HEAD]\nok 224 - numeric_big 311 ms [patch]\n\nthough I do get a lot of variability when I run that test.\n\nI think this is related to this code:\n\n+ res_ndigits = var1ndigits + var2ndigits + 1;\n+ res_weight = var1->weight + var2->weight + 2 +\n+ ((res_ndigits * 2) - (var1->ndigits + var2->ndigits + 1));\n+ maxdigits = res_weight + 1 + (rscale + DEC_DIGITS - 1) / DEC_DIGITS +\n+ MUL_GUARD_DIGITS;\n+ res_ndigits = Min(res_ndigits, maxdigits);\n\nIn mul_var_large(), var1ndigits, var2ndigits, and res_ndigits are\ncounts of NBASE^2 digits, whereas maxdigits is a count of NBASE\ndigits, so it's not legit to compare them like that. 
I think it's\npretty confusing to use the same variable names as are used elsewhere\nfor different things.\n\nI also don't like basically duplicating the whole of mul_var() in a\nsecond function.\n\nThe fact that this is close to the current speed for numbers with\naround 8 digits is encouraging though. My first thought was that if it\ncould be made just a little faster, maybe it could replace mul_var()\nrather than duplicating it.\n\nI had a go at that in the attached v2 patch, which now gives a very\nnice speedup when running numeric_big:\n\nok 224 - numeric_big 159 ms [v2 patch]\n\nThe v2 patch includes the following additional optimisations:\n\n1). Use unsigned integers, rather than signed integers, as discussed\nover in [1].\n\n2). Attempt to fix the formulae incorporating maxdigits mentioned\nabove. This part really made my brain hurt, and I'm still not quite\nsure that I've got it right. In particular, it needs double-checking\nto ensure that it's not losing accuracy in the reduced-rscale case.\n\n3). Instead of converting var1 to base NBASE^2 and storing it, just\ncompute each base-NBASE^2 digit at the point where it's needed, at the\nstart of the outer loop.\n\n4). Instead of converting all of var2 to base NBASE^2, just convert\nthe part that actually contributes to the final result. That can make\na big difference when asked for a reduced-rscale result.\n\n5). Use 32-bit integers instead of 64-bit integers to hold the\nconverted digits of var2.\n\n6). Merge the final carry-propagation pass with the code to convert\nthe result back to base NBASE.\n\n7). Mark mul_var_short() as pg_noinline. That seemed to make a\ndifference in some tests, where this patch seemed to cause the\ncompiler to generate slightly less efficient code for mul_var_short()\nwhen it was inlined. In any case, it seems preferable to separate the\ntwo, especially if mul_var_short() grows in the future.\n\nOverall, those optimisations made quite a big difference for large inputs:\n\n-- var1ndigits1=16383, var2ndigits2=16383\ncall rate=23.991785 -- HEAD\ncall rate=35.19989 -- v1 patch\ncall rate=75.50344 -- v2 patch\n\nwhich is now a 3.1x speedup relative to HEAD.\n\nFor small inputs (above mul_var_short()'s 4-digit threshold), it's\npretty-much performance-neutral:\n\n-- var1ndigits1=5, var2ndigits2=5\ncall rate=6.045675e+06 -- HEAD\ncall rate=6.1231815e+06 -- v2 patch\n\nwhich is pretty-much in the region of random noise. 
It starts to\nbecome a more definite win for anything larger (in either input):\n\n-- var1ndigits1=5, var2ndigits2=10\ncall rate=5.437945e+06 -- HEAD\ncall rate=5.6906255e+06 -- v2 patch\n\n-- var1ndigits1=6, var2ndigits2=6\ncall rate=5.748427e+06 -- HEAD\ncall rate=5.953112e+06 -- v2 patch\n\n-- var1ndigits1=7, var2ndigits2=7\ncall rate=5.3638645e+06 -- HEAD\ncall rate=5.700681e+06 -- v2 patch\n\nWhat's less clear is whether there are any platforms for which this\nmakes it significantly slower.\n\nI tried testing with SIMD disabled, which ought to not affect the\nsmall-input cases, but actually seemed to favour the patch slightly:\n\n-- var1ndigits1=5, var2ndigits2=5 [SIMD disabled]\ncall rate=6.0269715e+06 -- HEAD\ncall rate=6.2982245e+06 -- v2 patch\n\nFor large inputs, disabling SIMD makes everything much slower, but it\nmade the relative difference larger:\n\n-- var1ndigits1=16383, var2ndigits2=16383 [SIMD disabled]\ncall rate=8.24175 -- HEAD\ncall rate=36.3987 -- v2 patch\n\nwhich is now 4.3x faster with the patch.\n\nThen I tried compiling with -m32, and unfortunately this made the\npatch slower than HEAD for small inputs:\n\n-- var1ndigits1=5, var2ndigits2=5 [-m32, SIMD disabled]\ncall rate=5.052332e+06 -- HEAD\ncall rate=3.883459e+06 -- v2 patch\n\n-- var1ndigits1=6, var2ndigits2=6 [-m32, SIMD disabled]\ncall rate=4.7221405e+06 -- HEAD\ncall rate=3.7965685e+06 -- v2 patch\n\n-- var1ndigits1=7, var2ndigits2=7 [-m32, SIMD disabled]\ncall rate=4.4638375e+06 -- HEAD\ncall rate=3.39948e+06 -- v2 patch\n\nand it doesn't reach parity until around ndigits=26, which is\ndisappointing. It does still get much faster for large inputs:\n\n-- var1ndigits1=16383, var2ndigits2=16383 [-m32, SIMD disabled]\ncall rate=5.6747904\ncall rate=20.104694\n\nand it still makes numeric_big much faster:\n\n[-m32, SIMD disabled]\nok 224 - numeric_big 596 ms [HEAD]\nok 224 - numeric_big 294 ms [v2 patch]\n\nI'm not sure whether compiling with -m32 is a realistic simulation of\nsystems people are really using. It's tempting to just regard such\nsystems as legacy, and ignore them, given the other large gains, but\nmaybe this is not the only case that gets slower.\n\nAnother option would be to only use this optimisation on 64-bit\nmachines, though I think that would make the code pretty messy, and it\nwould be throwing away the gains for large inputs, which might well\noutweigh the losses.\n\nThoughts anyone?\n\nRegards,\nDean\n\n[1] https://www.postgresql.org/message-id/[email protected]", "msg_date": "Fri, 12 Jul 2024 13:34:01 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Fri, 12 Jul 2024 at 13:34, Dean Rasheed <[email protected]> wrote:\n>\n> Then I tried compiling with -m32, and unfortunately this made the\n> patch slower than HEAD for small inputs:\n>\n> -- var1ndigits1=5, var2ndigits2=5 [-m32, SIMD disabled]\n> call rate=5.052332e+06 -- HEAD\n> call rate=3.883459e+06 -- v2 patch\n>\n> -- var1ndigits1=6, var2ndigits2=6 [-m32, SIMD disabled]\n> call rate=4.7221405e+06 -- HEAD\n> call rate=3.7965685e+06 -- v2 patch\n>\n> -- var1ndigits1=7, var2ndigits2=7 [-m32, SIMD disabled]\n> call rate=4.4638375e+06 -- HEAD\n> call rate=3.39948e+06 -- v2 patch\n>\n> and it doesn't reach parity until around ndigits=26, which is\n> disappointing. It does still get much faster for large inputs\n>\n\nI spent some more time hacking on this, to try to improve the\nsituation for 32-bit builds. 
One of the biggest factors was the 64-bit\ndivision that is necessary during carry propagation, which is very\nslow -- every compiler/platform I looked at on godbolt.org turns this\ninto a call to a slow function like __udivdi3(). However, since we are\ndividing by a constant (NBASE^2), it can be done using the same\nmultiply-and-shift trick that compilers use in 64-bit builds, and that\nsignificantly improves performance.\n\nUnfortunately, in 32-bit builds, using 64-bit integers is still slower\nfor small inputs (below 20-30 NBASE digits), so in the end I figured\nthat it's best to retain the old 32-bit base-NBASE multiplication code\nfor small inputs, and only use 64-bit base-NBASE^2 multiplication when\nthe inputs are above a certain size threshold. Furthermore, since this\nthreshold is quite low, it's possible to make an additional\nsimplification: as long as the threshold is less than or equal to 42,\nit can be shown that there is no chance of 32-bit integer overflow,\nand so the \"maxdig\" tracking and renormalisation code is not needed.\nGetting rid of that makes the outer multiplication loop very simple,\nand makes quite a noticeable difference to the performance for inputs\nbelow the threshold.\n\nIn 64-bit builds, doing 64-bit base-NBASE^2 multiplication is faster\nfor all inputs over 4 or 5 NBASE digits, so the threshold can be much\nlower. However, for numbers near that threshold, it's a close thing,\nso it makes sense to also extend mul_var_small() to cover 1-6 digit\ninputs, since that gives a much larger gain for numbers of that size.\nThat's quite nice because it equates to inputs with up to 21-24\ndecimal digits, which I suspect are quite commonly used in practice.\n\nOne risk in having different thresholds in 32-bit and 64-bit builds is\nthat it opens up the possibility that the results from the\nreduced-rscale computations used by inexact functions like ln() and\nexp() might be be different (actually, this may already be a\npossibility, due to div_var_fast()'s use of floating point arithmetic,\nbut that seems considerably less likely). 
In practice though, this\nshould be extremely unlikely, due to the fact that any difference\nwould have to propagate through MUL_GUARD_DIGITS NBASE digits (8\ndecimal digits), and it doesn't show up in any of the existing tests.\nIMO a very small chance of different results on different platforms is\nprobably acceptable, but it wouldn't be acceptable to make the\nthreshold a runtime configurable parameter that could lead to\ndifferent results on the same platform.\n\nOverall, this has turned out to be more code than I would have liked,\nbut I think it's worth it, because the performance gains are pretty\nsubstantial across the board.\n\n(Here, I'm comparing with REL_17_STABLE, so that the effect of\nmul_var_short() for ndigits <= 6 can be seen.)\n\n32-bit build\n============\n\nBalanced inputs:\n\n ndigits1 | ndigits2 | PG17 rate | patch rate | % change\n----------+----------+---------------+---------------+----------\n 1 | 1 | 5.973068e+06 | 6.873059e+06 | +15.07%\n 2 | 2 | 5.646598e+06 | 6.6456665e+06 | +17.69%\n 3 | 3 | 5.8176995e+06 | 7.0903175e+06 | +21.87%\n 4 | 4 | 5.545298e+06 | 6.9701605e+06 | +25.69%\n 5 | 5 | 5.2109155e+06 | 6.6406185e+06 | +27.44%\n 6 | 6 | 4.9276335e+06 | 6.478698e+06 | +31.48%\n 7 | 7 | 4.6595095e+06 | 4.8514485e+06 | +4.12%\n 8 | 8 | 4.898536e+06 | 5.1599975e+06 | +5.34%\n 9 | 9 | 4.537171e+06 | 4.830987e+06 | +6.48%\n 10 | 10 | 4.308139e+06 | 4.6029265e+06 | +6.84%\n 11 | 11 | 4.092612e+06 | 4.37966e+06 | +7.01%\n 12 | 12 | 3.9345035e+06 | 4.213998e+06 | +7.10%\n 13 | 13 | 3.7600162e+06 | 4.0115955e+06 | +6.69%\n 14 | 14 | 3.5959855e+06 | 3.8216508e+06 | +6.28%\n 15 | 15 | 3.3576732e+06 | 3.6070588e+06 | +7.43%\n 16 | 16 | 3.657139e+06 | 3.9067975e+06 | +6.83%\n 17 | 17 | 3.3601955e+06 | 3.5651722e+06 | +6.10%\n 18 | 18 | 3.1844472e+06 | 3.4542238e+06 | +8.47%\n 19 | 19 | 3.0286392e+06 | 3.2380662e+06 | +6.91%\n 20 | 20 | 2.9012185e+06 | 3.0496352e+06 | +5.12%\n 21 | 21 | 2.755444e+06 | 2.9508798e+06 | +7.09%\n 22 | 22 | 2.6263908e+06 | 2.8168945e+06 | +7.25%\n 23 | 23 | 2.5470438e+06 | 2.7056318e+06 | +6.23%\n 24 | 24 | 2.764418e+06 | 2.9597732e+06 | +7.07%\n 25 | 25 | 2.509954e+06 | 2.7368215e+06 | +9.04%\n 26 | 26 | 2.3699395e+06 | 2.565404e+06 | +8.25%\n 27 | 27 | 2.286344e+06 | 2.4400948e+06 | +6.72%\n 28 | 28 | 2.199087e+06 | 2.361374e+06 | +7.38%\n 29 | 29 | 2.1208148e+06 | 2.26925e+06 | +7.00%\n 30 | 30 | 2.0469475e+06 | 2.2127455e+06 | +8.10%\n 31 | 31 | 1.9973804e+06 | 2.3771615e+06 | +19.01%\n 32 | 32 | 2.1288205e+06 | 2.3166555e+06 | +8.82%\n 33 | 33 | 1.9876898e+06 | 2.1759028e+06 | +9.47%\n 34 | 34 | 1.8906434e+06 | 2.169534e+06 | +14.75%\n 35 | 35 | 1.8069352e+06 | 1.990085e+06 | +10.14%\n 36 | 36 | 1.7412926e+06 | 1.9940908e+06 | +14.52%\n 37 | 37 | 1.6956435e+06 | 1.8492525e+06 | +9.06%\n 38 | 38 | 1.6253338e+06 | 1.8493976e+06 | +13.79%\n 39 | 39 | 1.5734566e+06 | 1.9296996e+06 | +22.64%\n 40 | 40 | 1.6692021e+06 | 1.902438e+06 | +13.97%\n 50 | 50 | 1.1116885e+06 | 1.5319529e+06 | +37.80%\n 100 | 100 | 399552.38 | 722142.44 | +80.74%\n 250 | 250 | 81092.02 | 195967.67 | +141.66%\n 500 | 500 | 21654.633 | 58329.473 | +169.36%\n 1000 | 1000 | 5525.9775 | 16420.611 | +197.15%\n 2500 | 2500 | 907.98206 | 2825.7324 | +211.21%\n 5000 | 5000 | 230.26935 | 731.26105 | +217.57%\n 10000 | 10000 | 57.793922 | 179.12334 | +209.93%\n 16383 | 16383 | 21.446728 | 64.39028 | +200.23%\n\nUnbalanced inputs:\n\n ndigits1 | ndigits2 | PG17 rate | patch rate | % change\n----------+----------+-----------+------------+----------\n 1 | 10000 | 42816.89 | 52843.01 
| +23.42%\n 2 | 10000 | 41032.25 | 52111.957 | +27.00%\n 3 | 10000 | 39550.176 | 52262.477 | +32.14%\n 4 | 10000 | 38015.59 | 43962.535 | +15.64%\n 5 | 10000 | 36560.22 | 43736.305 | +19.63%\n 6 | 10000 | 35305.77 | 38204.2 | +8.21%\n 7 | 10000 | 33833.086 | 36533.387 | +7.98%\n 8 | 10000 | 32847.996 | 35774.715 | +8.91%\n 9 | 10000 | 31345.736 | 33831.926 | +7.93%\n 10 | 10000 | 30351.6 | 32715.969 | +7.79%\n 11 | 10000 | 29321.592 | 31478.398 | +7.36%\n 12 | 10000 | 28616.018 | 30861.885 | +7.85%\n 13 | 10000 | 28216.12 | 29510.95 | +4.59%\n 14 | 10000 | 27396.408 | 29077.73 | +6.14%\n 15 | 10000 | 26519.08 | 28235.08 | +6.47%\n 16 | 10000 | 25778.102 | 27538.908 | +6.83%\n 17 | 10000 | 26024.918 | 26677.498 | +2.51%\n 18 | 10000 | 25316.346 | 26729.395 | +5.58%\n 19 | 10000 | 24626.07 | 26076.979 | +5.89%\n 20 | 10000 | 23912.383 | 25709.967 | +7.52%\n 21 | 10000 | 23238.488 | 24761.57 | +6.55%\n 22 | 10000 | 22746.25 | 23925.934 | +5.19%\n 23 | 10000 | 22120.777 | 23442.34 | +5.97%\n 24 | 10000 | 21645.215 | 22771.193 | +5.20%\n 25 | 10000 | 21135.049 | 22185.893 | +4.97%\n 26 | 10000 | 20685.025 | 21831.74 | +5.54%\n 27 | 10000 | 20039.559 | 20854.793 | +4.07%\n 28 | 10000 | 19846.092 | 21072.758 | +6.18%\n 29 | 10000 | 19414.059 | 20289.414 | +4.51%\n 30 | 10000 | 18968.617 | 19774.797 | +4.25%\n 31 | 10000 | 18394.988 | 21307.074 | +15.83%\n 32 | 10000 | 18291.504 | 21349.691 | +16.72%\n 33 | 10000 | 17899.676 | 20885.299 | +16.68%\n 34 | 10000 | 17505.402 | 20620.105 | +17.79%\n 35 | 10000 | 17174.918 | 20383.594 | +18.68%\n 36 | 10000 | 16609.867 | 20342.18 | +22.47%\n 37 | 10000 | 16457.889 | 19953.91 | +21.24%\n 38 | 10000 | 15926.13 | 19783.203 | +24.22%\n 39 | 10000 | 15441.283 | 19288.654 | +24.92%\n 40 | 10000 | 15038.773 | 19415.52 | +29.10%\n 50 | 10000 | 11264.285 | 17608.648 | +56.32%\n 100 | 10000 | 6253.7637 | 11620.726 | +85.82%\n 250 | 10000 | 2696.207 | 5939.3857 | +120.29%\n 500 | 10000 | 1338.4141 | 3268.2004 | +144.18%\n 1000 | 10000 | 672.6717 | 1691.9614 | +151.53%\n 2500 | 10000 | 267.5996 | 708.20386 | +164.65%\n 5000 | 10000 | 131.50755 | 353.92822 | +169.13%\n\nnumeric_big regression test:\n\nok 224 - numeric_big 279 ms [PG17]\nok 224 - numeric_big 182 ms [v3 patch]\n\n\n64-bit build\n============\n\nBalanced inputs:\n\n ndigits1 | ndigits2 | PG17 rate | patch rate | % change\n----------+----------+---------------+---------------+----------\n 1 | 1 | 8.507485e+06 | 9.53221e+06 | +12.04%\n 2 | 2 | 8.0975715e+06 | 9.431853e+06 | +16.48%\n 3 | 3 | 6.461359e+06 | 7.3669945e+06 | +14.02%\n 4 | 4 | 6.1728355e+06 | 7.152418e+06 | +15.87%\n 5 | 5 | 6.500831e+06 | 7.6977115e+06 | +18.41%\n 6 | 6 | 6.1784075e+06 | 7.3765005e+06 | +19.39%\n 7 | 7 | 5.90117e+06 | 6.2799965e+06 | +6.42%\n 8 | 8 | 5.9217105e+06 | 6.147141e+06 | +3.81%\n 9 | 9 | 5.477262e+06 | 5.9889475e+06 | +9.34%\n 10 | 10 | 5.2147235e+06 | 5.858963e+06 | +12.35%\n 11 | 11 | 4.882895e+06 | 5.6766675e+06 | +16.26%\n 12 | 12 | 4.61105e+06 | 5.559544e+06 | +20.57%\n 13 | 13 | 4.382494e+06 | 5.3770165e+06 | +22.69%\n 14 | 14 | 4.134509e+06 | 5.256462e+06 | +27.14%\n 15 | 15 | 3.7595882e+06 | 5.0751355e+06 | +34.99%\n 16 | 16 | 4.3353435e+06 | 4.970363e+06 | +14.65%\n 17 | 17 | 3.9258755e+06 | 4.935394e+06 | +25.71%\n 18 | 18 | 3.7562495e+06 | 4.8723875e+06 | +29.71%\n 19 | 19 | 3.4890312e+06 | 4.723648e+06 | +35.39%\n 20 | 20 | 3.289758e+06 | 4.6569305e+06 | +41.56%\n 21 | 21 | 3.103908e+06 | 4.4747755e+06 | +44.17%\n 22 | 22 | 2.9545148e+06 | 4.4227305e+06 | +49.69%\n 23 | 23 | 2.7975982e+06 | 
4.5065035e+06 | +61.08%\n 24 | 24 | 3.2456168e+06 | 4.4578115e+06 | +37.35%\n 25 | 25 | 2.9515055e+06 | 4.0208335e+06 | +36.23%\n 26 | 26 | 2.8158568e+06 | 4.0437498e+06 | +43.61%\n 27 | 27 | 2.6376518e+06 | 3.8959785e+06 | +47.71%\n 28 | 28 | 2.5094318e+06 | 3.8648428e+06 | +54.01%\n 29 | 29 | 2.3714905e+06 | 3.67182e+06 | +54.83%\n 30 | 30 | 2.2456015e+06 | 3.6337285e+06 | +61.82%\n 31 | 31 | 2.169437e+06 | 3.7209152e+06 | +71.52%\n 32 | 32 | 2.5022498e+06 | 3.6609378e+06 | +46.31%\n 33 | 33 | 2.27133e+06 | 3.435459e+06 | +51.25%\n 34 | 34 | 2.1836462e+06 | 3.4042262e+06 | +55.90%\n 35 | 35 | 2.0753196e+06 | 3.2125422e+06 | +54.80%\n 36 | 36 | 1.9650498e+06 | 3.185525e+06 | +62.11%\n 37 | 37 | 1.8668318e+06 | 3.0366508e+06 | +62.66%\n 38 | 38 | 1.7678832e+06 | 3.014941e+06 | +70.54%\n 39 | 39 | 1.6764314e+06 | 3.1080448e+06 | +85.40%\n 40 | 40 | 1.9170026e+06 | 3.0942025e+06 | +61.41%\n 50 | 50 | 1.2242934e+06 | 2.3769868e+06 | +94.15%\n 100 | 100 | 401733.62 | 1.0854601e+06 | +170.19%\n 250 | 250 | 81861.45 | 249837.78 | +205.20%\n 500 | 500 | 21613.402 | 71239.04 | +229.61%\n 1000 | 1000 | 5551.617 | 18349.414 | +230.52%\n 2500 | 2500 | 906.501 | 3107.6462 | +242.82%\n 5000 | 5000 | 231.65045 | 794.86444 | +243.13%\n 10000 | 10000 | 58.372395 | 188.37387 | +222.71%\n 16383 | 16383 | 21.629467 | 73.58552 | +240.21%\n\nUnbalanced inputs:\n\n ndigits1 | ndigits2 | PG17 rate | patch rate | % change\n----------+----------+-----------+------------+----------\n 1 | 10000 | 44137.258 | 62292.844 | +41.13%\n 2 | 10000 | 42009.895 | 62705.445 | +49.26%\n 3 | 10000 | 40569.617 | 58555.727 | +44.33%\n 4 | 10000 | 38914.03 | 49439.008 | +27.05%\n 5 | 10000 | 37361.39 | 47173.445 | +26.26%\n 6 | 10000 | 35807.61 | 42609.203 | +18.99%\n 7 | 10000 | 34386.684 | 49325.582 | +43.44%\n 8 | 10000 | 33380.312 | 49298.59 | +47.69%\n 9 | 10000 | 32228.17 | 46869.844 | +45.43%\n 10 | 10000 | 31022.46 | 47015.98 | +51.55%\n 11 | 10000 | 29782.623 | 45074 | +51.34%\n 12 | 10000 | 29540.896 | 44712.23 | +51.36%\n 13 | 10000 | 28521.643 | 42589.98 | +49.33%\n 14 | 10000 | 27772.59 | 43325.863 | +56.00%\n 15 | 10000 | 26871.25 | 41443 | +54.23%\n 16 | 10000 | 26179.322 | 41245.508 | +57.55%\n 17 | 10000 | 26367.402 | 39348.543 | +49.23%\n 18 | 10000 | 25769.176 | 40105.402 | +55.63%\n 19 | 10000 | 24869.504 | 38316.44 | +54.07%\n 20 | 10000 | 24395.436 | 37647.33 | +54.32%\n 21 | 10000 | 23532.748 | 36552.914 | +55.33%\n 22 | 10000 | 23151.842 | 36824.04 | +59.05%\n 23 | 10000 | 22661.33 | 35556.918 | +56.91%\n 24 | 10000 | 22113.502 | 34923.83 | +57.93%\n 25 | 10000 | 21481.773 | 33601.785 | +56.42%\n 26 | 10000 | 20943.576 | 34277.58 | +63.67%\n 27 | 10000 | 20437.605 | 32957.406 | +61.26%\n 28 | 10000 | 20049.12 | 32413.64 | +61.67%\n 29 | 10000 | 19674.787 | 31537.846 | +60.30%\n 30 | 10000 | 19092.572 | 32252.404 | +68.93%\n 31 | 10000 | 18761.932 | 30825.836 | +64.30%\n 32 | 10000 | 18480.184 | 30616.389 | +65.67%\n 33 | 10000 | 18130.89 | 29493.594 | +62.67%\n 34 | 10000 | 17750.996 | 30054.01 | +69.31%\n 35 | 10000 | 17406.83 | 29090.297 | +67.12%\n 36 | 10000 | 17138.23 | 29117.42 | +69.90%\n 37 | 10000 | 16666.799 | 28429.32 | +70.57%\n 38 | 10000 | 16144.025 | 29082.398 | +80.14%\n 39 | 10000 | 15548.838 | 28195.258 | +81.33%\n 40 | 10000 | 15305.571 | 27273.215 | +78.19%\n 50 | 10000 | 11099.766 | 25494.129 | +129.68%\n 100 | 10000 | 6310.7827 | 14895.447 | +136.03%\n 250 | 10000 | 2687.7397 | 7149.1016 | +165.99%\n 500 | 10000 | 1354.7455 | 3608.8845 | +166.39%\n 1000 | 10000 | 677.3838 | 1852.1256 | 
+173.42%\n 2500 | 10000 | 269.74582 | 748.5785 | +177.51%\n 5000 | 10000 | 132.6432 | 377.23288 | +184.40%\n\nnumeric_big regression test:\n\nok 224 - numeric_big 256 ms [PG17]\nok 224 - numeric_big 161 ms [v3 patch]\n\nRegards,\nDean", "msg_date": "Sun, 28 Jul 2024 20:18:07 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Sun, Jul 28, 2024, at 21:18, Dean Rasheed wrote:\n> Attachments:\n> * v3-0002-Optimise-numeric-multiplication-using-base-NBASE-.patch\n> * v3-0001-Extend-mul_var_short-to-5-and-6-digit-inputs.patch\n\nVery nice.\n\nI've done some initial benchmarks on my Intel Core i9-14900K machine.\n\nTo reduce noise, I've isolated a single CPU core, specifically CPU core id 31, to not get any work scheduled by the kernel:\n\n$ cat /proc/cmdline\nBOOT_IMAGE=/vmlinuz-5.15.0-116-generic root=/dev/mapper/ubuntu--vg-ubuntu--lv ro quiet splash isolcpus=31 intel_pstate=disable vt.handoff=7\n\nThen, I've used sched_setaffinity() from <sched.h> to ensure the benchmark run on CPU core id 31.\n\nI've also fixed the CPU frequency to 3.20 GHz:\n\n$ sudo cpufreq-info -c 31\n...\n current CPU frequency is 3.20 GHz (asserted by call to hardware).\n\nI've benchmarked each (var1ndigits, var2ndigits) 10 times per commit, in random order.\n\nI've benchmarked all commits after \"SQL/JSON: Various improvements to SQL/JSON query function docs\"\nwhich is the parent commit to \"Optimise numeric multiplication for short inputs.\",\nincluding the two patches.\n\nI've benchmarked each commit affecting numeric.c, and each such commit's parent commit, for comparison.\n\nThe accum_change column shows the accumulative percentage change since the baseline commit (SQL/JSON: Various improvements).\n\nThere is at least single digit percentage noise in the measurements,\nwhich is apparent since the rate fluctuates even between commits\nfor cases we know can't be affected by the change.\nStill, even with this noise level, the improvements are very visible and consistent.\n\n ndigits | rate | accum_change | summary\n---------------+------------+--------------+--------------------------------\n (1,1) | 1.702e+07 | | SQL/JSON: Various improvements\n (1,1) | 2.201e+07 | +29.32 % | Optimise numeric multiplicatio\n (1,1) | 2.268e+07 | +33.30 % | Use diff's --strip-trailing-cr\n (1,1) | 2.228e+07 | +30.92 % | Improve the numeric width_buck\n (1,1) | 2.195e+07 | +29.01 % | Add missing pointer dereferenc\n (1,1) | 2.241e+07 | +31.68 % | Extend mul_var_short() to 5 an\n (1,1) | 2.130e+07 | +25.17 % | Optimise numeric multiplicatio\n (1,2) | 1.585e+07 | | SQL/JSON: Various improvements\n (1,2) | 2.227e+07 | +40.49 % | Optimise numeric multiplicatio\n (1,2) | 2.140e+07 | +35.00 % | Use diff's --strip-trailing-cr\n (1,2) | 2.227e+07 | +40.51 % | Improve the numeric width_buck\n (1,2) | 2.183e+07 | +37.75 % | Add missing pointer dereferenc\n (1,2) | 2.241e+07 | +41.41 % | Extend mul_var_short() to 5 an\n (1,2) | 2.223e+07 | +40.26 % | Optimise numeric multiplicatio\n (1,3) | 1.554e+07 | | SQL/JSON: Various improvements\n (1,3) | 2.155e+07 | +38.70 % | Optimise numeric multiplicatio\n (1,3) | 2.140e+07 | +37.74 % | Use diff's --strip-trailing-cr\n (1,3) | 2.139e+07 | +37.66 % | Improve the numeric width_buck\n (1,3) | 2.234e+07 | +43.76 % | Add missing pointer dereferenc\n (1,3) | 2.142e+07 | +37.83 % | Extend mul_var_short() to 5 an\n (1,3) | 2.066e+07 | +32.97 % | Optimise numeric multiplicatio\n (1,4) | 1.450e+07 | | 
SQL/JSON: Various improvements\n (1,4) | 2.113e+07 | +45.70 % | Optimise numeric multiplicatio\n (1,4) | 2.121e+07 | +46.30 % | Use diff's --strip-trailing-cr\n (1,4) | 2.115e+07 | +45.85 % | Improve the numeric width_buck\n (1,4) | 2.166e+07 | +49.37 % | Add missing pointer dereferenc\n (1,4) | 2.053e+07 | +41.56 % | Extend mul_var_short() to 5 an\n (1,4) | 2.085e+07 | +43.82 % | Optimise numeric multiplicatio\n (1,8) | 1.440e+07 | | SQL/JSON: Various improvements\n (1,8) | 1.963e+07 | +36.38 % | Optimise numeric multiplicatio\n (1,8) | 2.018e+07 | +40.19 % | Use diff's --strip-trailing-cr\n (1,8) | 2.045e+07 | +42.05 % | Improve the numeric width_buck\n (1,8) | 1.998e+07 | +38.79 % | Add missing pointer dereferenc\n (1,8) | 1.953e+07 | +35.68 % | Extend mul_var_short() to 5 an\n (1,8) | 1.992e+07 | +38.36 % | Optimise numeric multiplicatio\n (1,16) | 9.444e+06 | | SQL/JSON: Various improvements\n (1,16) | 1.235e+07 | +30.75 % | Optimise numeric multiplicatio\n (1,16) | 1.232e+07 | +30.47 % | Use diff's --strip-trailing-cr\n (1,16) | 1.239e+07 | +31.18 % | Improve the numeric width_buck\n (1,16) | 1.222e+07 | +29.35 % | Add missing pointer dereferenc\n (1,16) | 1.220e+07 | +29.14 % | Extend mul_var_short() to 5 an\n (1,16) | 1.271e+07 | +34.54 % | Optimise numeric multiplicatio\n (1,32) | 5.790e+06 | | SQL/JSON: Various improvements\n (1,32) | 8.392e+06 | +44.95 % | Optimise numeric multiplicatio\n (1,32) | 8.459e+06 | +46.10 % | Use diff's --strip-trailing-cr\n (1,32) | 8.325e+06 | +43.79 % | Improve the numeric width_buck\n (1,32) | 8.242e+06 | +42.35 % | Add missing pointer dereferenc\n (1,32) | 8.288e+06 | +43.14 % | Extend mul_var_short() to 5 an\n (1,32) | 8.448e+06 | +45.91 % | Optimise numeric multiplicatio\n (1,64) | 3.540e+06 | | SQL/JSON: Various improvements\n (1,64) | 4.684e+06 | +32.31 % | Optimise numeric multiplicatio\n (1,64) | 4.840e+06 | +36.74 % | Use diff's --strip-trailing-cr\n (1,64) | 4.794e+06 | +35.43 % | Improve the numeric width_buck\n (1,64) | 4.721e+06 | +33.38 % | Add missing pointer dereferenc\n (1,64) | 4.785e+06 | +35.18 % | Extend mul_var_short() to 5 an\n (1,64) | 4.767e+06 | +34.66 % | Optimise numeric multiplicatio\n (1,128) | 1.873e+06 | | SQL/JSON: Various improvements\n (1,128) | 2.459e+06 | +31.29 % | Optimise numeric multiplicatio\n (1,128) | 2.461e+06 | +31.42 % | Use diff's --strip-trailing-cr\n (1,128) | 2.539e+06 | +35.54 % | Improve the numeric width_buck\n (1,128) | 2.498e+06 | +33.38 % | Add missing pointer dereferenc\n (1,128) | 2.489e+06 | +32.91 % | Extend mul_var_short() to 5 an\n (1,128) | 2.498e+06 | +33.39 % | Optimise numeric multiplicatio\n (1,256) | 9.659e+05 | | SQL/JSON: Various improvements\n (1,256) | 1.326e+06 | +37.29 % | Optimise numeric multiplicatio\n (1,256) | 1.340e+06 | +38.75 % | Use diff's --strip-trailing-cr\n (1,256) | 1.292e+06 | +33.78 % | Improve the numeric width_buck\n (1,256) | 1.321e+06 | +36.75 % | Add missing pointer dereferenc\n (1,256) | 1.299e+06 | +34.44 % | Extend mul_var_short() to 5 an\n (1,256) | 1.324e+06 | +37.04 % | Optimise numeric multiplicatio\n (1,512) | 5.071e+05 | | SQL/JSON: Various improvements\n (1,512) | 6.814e+05 | +34.37 % | Optimise numeric multiplicatio\n (1,512) | 6.697e+05 | +32.05 % | Use diff's --strip-trailing-cr\n (1,512) | 6.770e+05 | +33.50 % | Improve the numeric width_buck\n (1,512) | 6.688e+05 | +31.88 % | Add missing pointer dereferenc\n (1,512) | 6.743e+05 | +32.97 % | Extend mul_var_short() to 5 an\n (1,512) | 6.700e+05 | +32.11 % | Optimise numeric multiplicatio\n 
(1,1024) | 2.541e+05 | | SQL/JSON: Various improvements\n (1,1024) | 3.351e+05 | +31.86 % | Optimise numeric multiplicatio\n (1,1024) | 3.401e+05 | +33.83 % | Use diff's --strip-trailing-cr\n (1,1024) | 3.373e+05 | +32.74 % | Improve the numeric width_buck\n (1,1024) | 3.313e+05 | +30.37 % | Add missing pointer dereferenc\n (1,1024) | 3.377e+05 | +32.88 % | Extend mul_var_short() to 5 an\n (1,1024) | 3.411e+05 | +34.23 % | Optimise numeric multiplicatio\n (1,2048) | 1.248e+05 | | SQL/JSON: Various improvements\n (1,2048) | 1.653e+05 | +32.46 % | Optimise numeric multiplicatio\n (1,2048) | 1.668e+05 | +33.64 % | Use diff's --strip-trailing-cr\n (1,2048) | 1.652e+05 | +32.35 % | Improve the numeric width_buck\n (1,2048) | 1.651e+05 | +32.26 % | Add missing pointer dereferenc\n (1,2048) | 1.681e+05 | +34.70 % | Extend mul_var_short() to 5 an\n (1,2048) | 1.662e+05 | +33.18 % | Optimise numeric multiplicatio\n (1,4096) | 6.417e+04 | | SQL/JSON: Various improvements\n (1,4096) | 8.533e+04 | +32.98 % | Optimise numeric multiplicatio\n (1,4096) | 8.715e+04 | +35.81 % | Use diff's --strip-trailing-cr\n (1,4096) | 8.475e+04 | +32.07 % | Improve the numeric width_buck\n (1,4096) | 8.627e+04 | +34.44 % | Add missing pointer dereferenc\n (1,4096) | 8.742e+04 | +36.23 % | Extend mul_var_short() to 5 an\n (1,4096) | 8.534e+04 | +33.00 % | Optimise numeric multiplicatio\n (1,8192) | 3.150e+04 | | SQL/JSON: Various improvements\n (1,8192) | 4.208e+04 | +33.58 % | Optimise numeric multiplicatio\n (1,8192) | 4.216e+04 | +33.81 % | Use diff's --strip-trailing-cr\n (1,8192) | 4.211e+04 | +33.67 % | Improve the numeric width_buck\n (1,8192) | 4.239e+04 | +34.56 % | Add missing pointer dereferenc\n (1,8192) | 4.155e+04 | +31.90 % | Extend mul_var_short() to 5 an\n (1,8192) | 4.166e+04 | +32.22 % | Optimise numeric multiplicatio\n (1,16384) | 1.563e+04 | | SQL/JSON: Various improvements\n (1,16384) | 2.114e+04 | +35.24 % | Optimise numeric multiplicatio\n (1,16384) | 2.057e+04 | +31.59 % | Use diff's --strip-trailing-cr\n (1,16384) | 2.094e+04 | +33.97 % | Improve the numeric width_buck\n (1,16384) | 2.123e+04 | +35.84 % | Add missing pointer dereferenc\n (1,16384) | 2.088e+04 | +33.57 % | Extend mul_var_short() to 5 an\n (1,16384) | 2.090e+04 | +33.74 % | Optimise numeric multiplicatio\n (2,2) | 1.437e+07 | | SQL/JSON: Various improvements\n (2,2) | 2.248e+07 | +56.42 % | Optimise numeric multiplicatio\n (2,2) | 2.103e+07 | +46.31 % | Use diff's --strip-trailing-cr\n (2,2) | 2.238e+07 | +55.74 % | Improve the numeric width_buck\n (2,2) | 2.217e+07 | +54.29 % | Add missing pointer dereferenc\n (2,2) | 2.096e+07 | +45.84 % | Extend mul_var_short() to 5 an\n (2,2) | 2.070e+07 | +44.05 % | Optimise numeric multiplicatio\n (2,3) | 1.332e+07 | | SQL/JSON: Various improvements\n (2,3) | 2.035e+07 | +52.78 % | Optimise numeric multiplicatio\n (2,3) | 2.041e+07 | +53.24 % | Use diff's --strip-trailing-cr\n (2,3) | 2.125e+07 | +59.59 % | Improve the numeric width_buck\n (2,3) | 2.183e+07 | +63.96 % | Add missing pointer dereferenc\n (2,3) | 2.124e+07 | +59.47 % | Extend mul_var_short() to 5 an\n (2,3) | 2.016e+07 | +51.36 % | Optimise numeric multiplicatio\n (2,4) | 1.339e+07 | | SQL/JSON: Various improvements\n (2,4) | 1.955e+07 | +45.99 % | Optimise numeric multiplicatio\n (2,4) | 2.004e+07 | +49.71 % | Use diff's --strip-trailing-cr\n (2,4) | 1.982e+07 | +48.04 % | Improve the numeric width_buck\n (2,4) | 1.942e+07 | +45.08 % | Add missing pointer dereferenc\n (2,4) | 1.990e+07 | +48.65 % | Extend mul_var_short() to 5 
an\n (2,4) | 1.893e+07 | +41.37 % | Optimise numeric multiplicatio\n (2,8) | 1.363e+07 | | SQL/JSON: Various improvements\n (2,8) | 1.855e+07 | +36.14 % | Optimise numeric multiplicatio\n (2,8) | 1.838e+07 | +34.92 % | Use diff's --strip-trailing-cr\n (2,8) | 1.873e+07 | +37.47 % | Improve the numeric width_buck\n (2,8) | 1.838e+07 | +34.91 % | Add missing pointer dereferenc\n (2,8) | 1.867e+07 | +36.98 % | Extend mul_var_short() to 5 an\n (2,8) | 1.773e+07 | +30.14 % | Optimise numeric multiplicatio\n (2,16) | 9.092e+06 | | SQL/JSON: Various improvements\n (2,16) | 1.213e+07 | +33.41 % | Optimise numeric multiplicatio\n (2,16) | 1.255e+07 | +37.99 % | Use diff's --strip-trailing-cr\n (2,16) | 1.168e+07 | +28.52 % | Improve the numeric width_buck\n (2,16) | 1.173e+07 | +29.07 % | Add missing pointer dereferenc\n (2,16) | 1.195e+07 | +31.48 % | Extend mul_var_short() to 5 an\n (2,16) | 1.174e+07 | +29.09 % | Optimise numeric multiplicatio\n (2,32) | 5.436e+06 | | SQL/JSON: Various improvements\n (2,32) | 7.685e+06 | +41.38 % | Optimise numeric multiplicatio\n (2,32) | 7.711e+06 | +41.87 % | Use diff's --strip-trailing-cr\n (2,32) | 7.787e+06 | +43.26 % | Improve the numeric width_buck\n (2,32) | 7.910e+06 | +45.53 % | Add missing pointer dereferenc\n (2,32) | 7.831e+06 | +44.06 % | Extend mul_var_short() to 5 an\n (2,32) | 7.939e+06 | +46.04 % | Optimise numeric multiplicatio\n (2,64) | 3.338e+06 | | SQL/JSON: Various improvements\n (2,64) | 4.689e+06 | +40.48 % | Optimise numeric multiplicatio\n (2,64) | 4.445e+06 | +33.16 % | Use diff's --strip-trailing-cr\n (2,64) | 4.569e+06 | +36.88 % | Improve the numeric width_buck\n (2,64) | 4.419e+06 | +32.38 % | Add missing pointer dereferenc\n (2,64) | 4.661e+06 | +39.62 % | Extend mul_var_short() to 5 an\n (2,64) | 4.497e+06 | +34.73 % | Optimise numeric multiplicatio\n (2,128) | 1.799e+06 | | SQL/JSON: Various improvements\n (2,128) | 2.348e+06 | +30.49 % | Optimise numeric multiplicatio\n (2,128) | 2.350e+06 | +30.60 % | Use diff's --strip-trailing-cr\n (2,128) | 2.457e+06 | +36.57 % | Improve the numeric width_buck\n (2,128) | 2.316e+06 | +28.71 % | Add missing pointer dereferenc\n (2,128) | 2.430e+06 | +35.07 % | Extend mul_var_short() to 5 an\n (2,128) | 2.401e+06 | +33.47 % | Optimise numeric multiplicatio\n (2,256) | 9.249e+05 | | SQL/JSON: Various improvements\n (2,256) | 1.249e+06 | +35.08 % | Optimise numeric multiplicatio\n (2,256) | 1.243e+06 | +34.38 % | Use diff's --strip-trailing-cr\n (2,256) | 1.243e+06 | +34.44 % | Improve the numeric width_buck\n (2,256) | 1.228e+06 | +32.73 % | Add missing pointer dereferenc\n (2,256) | 1.248e+06 | +34.88 % | Extend mul_var_short() to 5 an\n (2,256) | 1.262e+06 | +36.40 % | Optimise numeric multiplicatio\n (2,512) | 4.750e+05 | | SQL/JSON: Various improvements\n (2,512) | 6.210e+05 | +30.75 % | Optimise numeric multiplicatio\n (2,512) | 6.434e+05 | +35.47 % | Use diff's --strip-trailing-cr\n (2,512) | 6.387e+05 | +34.46 % | Improve the numeric width_buck\n (2,512) | 6.223e+05 | +31.03 % | Add missing pointer dereferenc\n (2,512) | 6.367e+05 | +34.06 % | Extend mul_var_short() to 5 an\n (2,512) | 6.524e+05 | +37.36 % | Optimise numeric multiplicatio\n (2,1024) | 2.411e+05 | | SQL/JSON: Various improvements\n (2,1024) | 3.227e+05 | +33.83 % | Optimise numeric multiplicatio\n (2,1024) | 3.249e+05 | +34.75 % | Use diff's --strip-trailing-cr\n (2,1024) | 3.278e+05 | +35.94 % | Improve the numeric width_buck\n (2,1024) | 3.162e+05 | +31.13 % | Add missing pointer dereferenc\n (2,1024) | 3.219e+05 | 
+33.49 % | Extend mul_var_short() to 5 an\n (2,1024) | 3.238e+05 | +34.30 % | Optimise numeric multiplicatio\n (2,2048) | 1.184e+05 | | SQL/JSON: Various improvements\n (2,2048) | 1.553e+05 | +31.15 % | Optimise numeric multiplicatio\n (2,2048) | 1.580e+05 | +33.47 % | Use diff's --strip-trailing-cr\n (2,2048) | 1.545e+05 | +30.55 % | Improve the numeric width_buck\n (2,2048) | 1.564e+05 | +32.12 % | Add missing pointer dereferenc\n (2,2048) | 1.564e+05 | +32.10 % | Extend mul_var_short() to 5 an\n (2,2048) | 1.603e+05 | +35.40 % | Optimise numeric multiplicatio\n (2,4096) | 6.244e+04 | | SQL/JSON: Various improvements\n (2,4096) | 8.198e+04 | +31.31 % | Optimise numeric multiplicatio\n (2,4096) | 8.268e+04 | +32.41 % | Use diff's --strip-trailing-cr\n (2,4096) | 8.200e+04 | +31.33 % | Improve the numeric width_buck\n (2,4096) | 8.366e+04 | +33.98 % | Add missing pointer dereferenc\n (2,4096) | 8.445e+04 | +35.26 % | Extend mul_var_short() to 5 an\n (2,4096) | 8.326e+04 | +33.35 % | Optimise numeric multiplicatio\n (2,8192) | 3.001e+04 | | SQL/JSON: Various improvements\n (2,8192) | 3.958e+04 | +31.89 % | Optimise numeric multiplicatio\n (2,8192) | 3.961e+04 | +32.00 % | Use diff's --strip-trailing-cr\n (2,8192) | 4.030e+04 | +34.30 % | Improve the numeric width_buck\n (2,8192) | 4.061e+04 | +35.31 % | Add missing pointer dereferenc\n (2,8192) | 4.075e+04 | +35.81 % | Extend mul_var_short() to 5 an\n (2,8192) | 4.147e+04 | +38.20 % | Optimise numeric multiplicatio\n (2,16384) | 1.583e+04 | | SQL/JSON: Various improvements\n (2,16384) | 1.989e+04 | +25.64 % | Optimise numeric multiplicatio\n (2,16384) | 1.967e+04 | +24.28 % | Use diff's --strip-trailing-cr\n (2,16384) | 1.966e+04 | +24.20 % | Improve the numeric width_buck\n (2,16384) | 1.954e+04 | +23.45 % | Add missing pointer dereferenc\n (2,16384) | 2.049e+04 | +29.45 % | Extend mul_var_short() to 5 an\n (2,16384) | 2.063e+04 | +30.37 % | Optimise numeric multiplicatio\n (3,3) | 1.248e+07 | | SQL/JSON: Various improvements\n (3,3) | 1.990e+07 | +59.48 % | Optimise numeric multiplicatio\n (3,3) | 2.096e+07 | +67.98 % | Use diff's --strip-trailing-cr\n (3,3) | 2.053e+07 | +64.47 % | Improve the numeric width_buck\n (3,3) | 2.084e+07 | +66.97 % | Add missing pointer dereferenc\n (3,3) | 2.029e+07 | +62.57 % | Extend mul_var_short() to 5 an\n (3,3) | 1.920e+07 | +53.88 % | Optimise numeric multiplicatio\n (3,4) | 1.270e+07 | | SQL/JSON: Various improvements\n (3,4) | 1.974e+07 | +55.39 % | Optimise numeric multiplicatio\n (3,4) | 1.976e+07 | +55.50 % | Use diff's --strip-trailing-cr\n (3,4) | 1.973e+07 | +55.31 % | Improve the numeric width_buck\n (3,4) | 1.926e+07 | +51.62 % | Add missing pointer dereferenc\n (3,4) | 1.931e+07 | +51.97 % | Extend mul_var_short() to 5 an\n (3,4) | 1.919e+07 | +51.02 % | Optimise numeric multiplicatio\n (3,8) | 1.244e+07 | | SQL/JSON: Various improvements\n (3,8) | 1.769e+07 | +42.24 % | Optimise numeric multiplicatio\n (3,8) | 1.709e+07 | +37.44 % | Use diff's --strip-trailing-cr\n (3,8) | 1.804e+07 | +45.04 % | Improve the numeric width_buck\n (3,8) | 1.772e+07 | +42.53 % | Add missing pointer dereferenc\n (3,8) | 1.699e+07 | +36.63 % | Extend mul_var_short() to 5 an\n (3,8) | 1.770e+07 | +42.30 % | Optimise numeric multiplicatio\n (3,16) | 7.919e+06 | | SQL/JSON: Various improvements\n (3,16) | 1.125e+07 | +42.09 % | Optimise numeric multiplicatio\n (3,16) | 1.123e+07 | +41.76 % | Use diff's --strip-trailing-cr\n (3,16) | 1.113e+07 | +40.48 % | Improve the numeric width_buck\n (3,16) | 1.124e+07 | +41.91 
% | Add missing pointer dereferenc\n (3,16) | 1.143e+07 | +44.30 % | Extend mul_var_short() to 5 an\n (3,16) | 1.147e+07 | +44.84 % | Optimise numeric multiplicatio\n (3,32) | 5.507e+06 | | SQL/JSON: Various improvements\n (3,32) | 7.149e+06 | +29.82 % | Optimise numeric multiplicatio\n (3,32) | 7.206e+06 | +30.85 % | Use diff's --strip-trailing-cr\n (3,32) | 7.526e+06 | +36.67 % | Improve the numeric width_buck\n (3,32) | 7.238e+06 | +31.43 % | Add missing pointer dereferenc\n (3,32) | 7.413e+06 | +34.61 % | Extend mul_var_short() to 5 an\n (3,32) | 7.613e+06 | +38.24 % | Optimise numeric multiplicatio\n (3,64) | 3.258e+06 | | SQL/JSON: Various improvements\n (3,64) | 4.338e+06 | +33.15 % | Optimise numeric multiplicatio\n (3,64) | 4.265e+06 | +30.90 % | Use diff's --strip-trailing-cr\n (3,64) | 4.292e+06 | +31.73 % | Improve the numeric width_buck\n (3,64) | 4.342e+06 | +33.27 % | Add missing pointer dereferenc\n (3,64) | 4.373e+06 | +34.22 % | Extend mul_var_short() to 5 an\n (3,64) | 4.365e+06 | +33.98 % | Optimise numeric multiplicatio\n (3,128) | 1.675e+06 | | SQL/JSON: Various improvements\n (3,128) | 2.220e+06 | +32.55 % | Optimise numeric multiplicatio\n (3,128) | 2.232e+06 | +33.28 % | Use diff's --strip-trailing-cr\n (3,128) | 2.276e+06 | +35.87 % | Improve the numeric width_buck\n (3,128) | 2.275e+06 | +35.84 % | Add missing pointer dereferenc\n (3,128) | 2.309e+06 | +37.87 % | Extend mul_var_short() to 5 an\n (3,128) | 2.324e+06 | +38.74 % | Optimise numeric multiplicatio\n (3,256) | 9.046e+05 | | SQL/JSON: Various improvements\n (3,256) | 1.198e+06 | +32.45 % | Optimise numeric multiplicatio\n (3,256) | 1.217e+06 | +34.49 % | Use diff's --strip-trailing-cr\n (3,256) | 1.221e+06 | +35.02 % | Improve the numeric width_buck\n (3,256) | 1.225e+06 | +35.43 % | Add missing pointer dereferenc\n (3,256) | 1.230e+06 | +36.03 % | Extend mul_var_short() to 5 an\n (3,256) | 1.218e+06 | +34.69 % | Optimise numeric multiplicatio\n (3,512) | 4.675e+05 | | SQL/JSON: Various improvements\n (3,512) | 6.195e+05 | +32.50 % | Optimise numeric multiplicatio\n (3,512) | 6.199e+05 | +32.59 % | Use diff's --strip-trailing-cr\n (3,512) | 6.475e+05 | +38.49 % | Improve the numeric width_buck\n (3,512) | 6.284e+05 | +34.40 % | Add missing pointer dereferenc\n (3,512) | 6.214e+05 | +32.90 % | Extend mul_var_short() to 5 an\n (3,512) | 6.306e+05 | +34.88 % | Optimise numeric multiplicatio\n (3,1024) | 2.393e+05 | | SQL/JSON: Various improvements\n (3,1024) | 3.049e+05 | +27.40 % | Optimise numeric multiplicatio\n (3,1024) | 3.233e+05 | +35.10 % | Use diff's --strip-trailing-cr\n (3,1024) | 3.150e+05 | +31.63 % | Improve the numeric width_buck\n (3,1024) | 3.152e+05 | +31.70 % | Add missing pointer dereferenc\n (3,1024) | 3.284e+05 | +37.20 % | Extend mul_var_short() to 5 an\n (3,1024) | 3.132e+05 | +30.85 % | Optimise numeric multiplicatio\n (3,2048) | 1.190e+05 | | SQL/JSON: Various improvements\n (3,2048) | 1.599e+05 | +34.37 % | Optimise numeric multiplicatio\n (3,2048) | 1.545e+05 | +29.84 % | Use diff's --strip-trailing-cr\n (3,2048) | 1.544e+05 | +29.75 % | Improve the numeric width_buck\n (3,2048) | 1.551e+05 | +30.36 % | Add missing pointer dereferenc\n (3,2048) | 1.602e+05 | +34.61 % | Extend mul_var_short() to 5 an\n (3,2048) | 1.570e+05 | +31.91 % | Optimise numeric multiplicatio\n (3,4096) | 5.937e+04 | | SQL/JSON: Various improvements\n (3,4096) | 8.109e+04 | +36.57 % | Optimise numeric multiplicatio\n (3,4096) | 8.114e+04 | +36.66 % | Use diff's --strip-trailing-cr\n (3,4096) | 8.169e+04 | 
+37.59 % | Improve the numeric width_buck\n (3,4096) | 8.166e+04 | +37.54 % | Add missing pointer dereferenc\n (3,4096) | 8.058e+04 | +35.71 % | Extend mul_var_short() to 5 an\n (3,4096) | 8.166e+04 | +37.54 % | Optimise numeric multiplicatio\n (3,8192) | 2.937e+04 | | SQL/JSON: Various improvements\n (3,8192) | 3.974e+04 | +35.29 % | Optimise numeric multiplicatio\n (3,8192) | 4.010e+04 | +36.53 % | Use diff's --strip-trailing-cr\n (3,8192) | 3.933e+04 | +33.90 % | Improve the numeric width_buck\n (3,8192) | 3.999e+04 | +36.14 % | Add missing pointer dereferenc\n (3,8192) | 3.998e+04 | +36.09 % | Extend mul_var_short() to 5 an\n (3,8192) | 3.985e+04 | +35.67 % | Optimise numeric multiplicatio\n (3,16384) | 1.491e+04 | | SQL/JSON: Various improvements\n (3,16384) | 1.978e+04 | +32.63 % | Optimise numeric multiplicatio\n (3,16384) | 1.996e+04 | +33.85 % | Use diff's --strip-trailing-cr\n (3,16384) | 1.995e+04 | +33.80 % | Improve the numeric width_buck\n (3,16384) | 2.027e+04 | +35.91 % | Add missing pointer dereferenc\n (3,16384) | 1.986e+04 | +33.17 % | Extend mul_var_short() to 5 an\n (3,16384) | 2.038e+04 | +36.70 % | Optimise numeric multiplicatio\n (4,4) | 1.134e+07 | | SQL/JSON: Various improvements\n (4,4) | 2.022e+07 | +78.31 % | Optimise numeric multiplicatio\n (4,4) | 2.004e+07 | +76.67 % | Use diff's --strip-trailing-cr\n (4,4) | 1.961e+07 | +72.88 % | Improve the numeric width_buck\n (4,4) | 1.885e+07 | +66.21 % | Add missing pointer dereferenc\n (4,4) | 1.829e+07 | +61.30 % | Extend mul_var_short() to 5 an\n (4,4) | 1.883e+07 | +66.03 % | Optimise numeric multiplicatio\n (4,8) | 1.149e+07 | | SQL/JSON: Various improvements\n (4,8) | 1.734e+07 | +50.90 % | Optimise numeric multiplicatio\n (4,8) | 1.703e+07 | +48.17 % | Use diff's --strip-trailing-cr\n (4,8) | 1.752e+07 | +52.44 % | Improve the numeric width_buck\n (4,8) | 1.761e+07 | +53.27 % | Add missing pointer dereferenc\n (4,8) | 1.711e+07 | +48.86 % | Extend mul_var_short() to 5 an\n (4,8) | 1.633e+07 | +42.09 % | Optimise numeric multiplicatio\n (4,16) | 7.330e+06 | | SQL/JSON: Various improvements\n (4,16) | 1.075e+07 | +46.69 % | Optimise numeric multiplicatio\n (4,16) | 1.120e+07 | +52.80 % | Use diff's --strip-trailing-cr\n (4,16) | 1.103e+07 | +50.52 % | Improve the numeric width_buck\n (4,16) | 1.049e+07 | +43.15 % | Add missing pointer dereferenc\n (4,16) | 1.093e+07 | +49.16 % | Extend mul_var_short() to 5 an\n (4,16) | 1.053e+07 | +43.63 % | Optimise numeric multiplicatio\n (4,32) | 5.220e+06 | | SQL/JSON: Various improvements\n (4,32) | 6.915e+06 | +32.47 % | Optimise numeric multiplicatio\n (4,32) | 7.030e+06 | +34.67 % | Use diff's --strip-trailing-cr\n (4,32) | 6.870e+06 | +31.61 % | Improve the numeric width_buck\n (4,32) | 6.972e+06 | +33.56 % | Add missing pointer dereferenc\n (4,32) | 6.953e+06 | +33.19 % | Extend mul_var_short() to 5 an\n (4,32) | 6.648e+06 | +27.35 % | Optimise numeric multiplicatio\n (4,64) | 3.100e+06 | | SQL/JSON: Various improvements\n (4,64) | 3.899e+06 | +25.76 % | Optimise numeric multiplicatio\n (4,64) | 4.072e+06 | +31.36 % | Use diff's --strip-trailing-cr\n (4,64) | 4.044e+06 | +30.44 % | Improve the numeric width_buck\n (4,64) | 3.995e+06 | +28.86 % | Add missing pointer dereferenc\n (4,64) | 4.129e+06 | +33.18 % | Extend mul_var_short() to 5 an\n (4,64) | 4.088e+06 | +31.86 % | Optimise numeric multiplicatio\n (4,128) | 1.636e+06 | | SQL/JSON: Various improvements\n (4,128) | 2.068e+06 | +26.38 % | Optimise numeric multiplicatio\n (4,128) | 2.140e+06 | +30.78 % | Use diff's 
--strip-trailing-cr\n (4,128) | 2.186e+06 | +33.57 % | Improve the numeric width_buck\n (4,128) | 2.088e+06 | +27.63 % | Add missing pointer dereferenc\n (4,128) | 2.121e+06 | +29.62 % | Extend mul_var_short() to 5 an\n (4,128) | 2.011e+06 | +22.88 % | Optimise numeric multiplicatio\n (4,256) | 8.487e+05 | | SQL/JSON: Various improvements\n (4,256) | 1.099e+06 | +29.45 % | Optimise numeric multiplicatio\n (4,256) | 1.108e+06 | +30.53 % | Use diff's --strip-trailing-cr\n (4,256) | 1.109e+06 | +30.71 % | Improve the numeric width_buck\n (4,256) | 1.115e+06 | +31.37 % | Add missing pointer dereferenc\n (4,256) | 1.114e+06 | +31.26 % | Extend mul_var_short() to 5 an\n (4,256) | 1.077e+06 | +26.85 % | Optimise numeric multiplicatio\n (4,512) | 4.397e+05 | | SQL/JSON: Various improvements\n (4,512) | 5.790e+05 | +31.69 % | Optimise numeric multiplicatio\n (4,512) | 5.995e+05 | +36.36 % | Use diff's --strip-trailing-cr\n (4,512) | 5.774e+05 | +31.33 % | Improve the numeric width_buck\n (4,512) | 5.573e+05 | +26.75 % | Add missing pointer dereferenc\n (4,512) | 5.779e+05 | +31.46 % | Extend mul_var_short() to 5 an\n (4,512) | 5.478e+05 | +24.59 % | Optimise numeric multiplicatio\n (4,1024) | 2.359e+05 | | SQL/JSON: Various improvements\n (4,1024) | 2.903e+05 | +23.04 % | Optimise numeric multiplicatio\n (4,1024) | 2.873e+05 | +21.78 % | Use diff's --strip-trailing-cr\n (4,1024) | 2.846e+05 | +20.64 % | Improve the numeric width_buck\n (4,1024) | 2.899e+05 | +22.89 % | Add missing pointer dereferenc\n (4,1024) | 2.815e+05 | +19.30 % | Extend mul_var_short() to 5 an\n (4,1024) | 2.793e+05 | +18.38 % | Optimise numeric multiplicatio\n (4,2048) | 1.132e+05 | | SQL/JSON: Various improvements\n (4,2048) | 1.438e+05 | +26.96 % | Optimise numeric multiplicatio\n (4,2048) | 1.453e+05 | +28.28 % | Use diff's --strip-trailing-cr\n (4,2048) | 1.407e+05 | +24.28 % | Improve the numeric width_buck\n (4,2048) | 1.432e+05 | +26.44 % | Add missing pointer dereferenc\n (4,2048) | 1.451e+05 | +28.10 % | Extend mul_var_short() to 5 an\n (4,2048) | 1.429e+05 | +26.22 % | Optimise numeric multiplicatio\n (4,4096) | 5.841e+04 | | SQL/JSON: Various improvements\n (4,4096) | 7.326e+04 | +25.43 % | Optimise numeric multiplicatio\n (4,4096) | 7.196e+04 | +23.20 % | Use diff's --strip-trailing-cr\n (4,4096) | 7.539e+04 | +29.07 % | Improve the numeric width_buck\n (4,4096) | 7.197e+04 | +23.23 % | Add missing pointer dereferenc\n (4,4096) | 7.391e+04 | +26.53 % | Extend mul_var_short() to 5 an\n (4,4096) | 7.060e+04 | +20.87 % | Optimise numeric multiplicatio\n (4,8192) | 2.825e+04 | | SQL/JSON: Various improvements\n (4,8192) | 3.679e+04 | +30.24 % | Optimise numeric multiplicatio\n (4,8192) | 3.617e+04 | +28.06 % | Use diff's --strip-trailing-cr\n (4,8192) | 3.685e+04 | +30.46 % | Improve the numeric width_buck\n (4,8192) | 3.645e+04 | +29.06 % | Add missing pointer dereferenc\n (4,8192) | 3.606e+04 | +27.68 % | Extend mul_var_short() to 5 an\n (4,8192) | 3.581e+04 | +26.78 % | Optimise numeric multiplicatio\n (4,16384) | 1.398e+04 | | SQL/JSON: Various improvements\n (4,16384) | 1.797e+04 | +28.54 % | Optimise numeric multiplicatio\n (4,16384) | 1.800e+04 | +28.73 % | Use diff's --strip-trailing-cr\n (4,16384) | 1.766e+04 | +26.33 % | Improve the numeric width_buck\n (4,16384) | 1.775e+04 | +26.96 % | Add missing pointer dereferenc\n (4,16384) | 1.827e+04 | +30.69 % | Extend mul_var_short() to 5 an\n (4,16384) | 1.735e+04 | +24.08 % | Optimise numeric multiplicatio\n (5,5) | 1.040e+07 | | SQL/JSON: Various improvements\n 
(5,5) | 1.015e+07 | -2.37 % | Optimise numeric multiplicatio\n (5,5) | 1.021e+07 | -1.80 % | Use diff's --strip-trailing-cr\n (5,5) | 1.099e+07 | +5.70 % | Improve the numeric width_buck\n (5,5) | 1.036e+07 | -0.31 % | Add missing pointer dereferenc\n (5,5) | 1.749e+07 | +68.21 % | Extend mul_var_short() to 5 an\n (5,5) | 1.657e+07 | +59.45 % | Optimise numeric multiplicatio\n (6,6) | 9.115e+06 | | SQL/JSON: Various improvements\n (6,6) | 1.030e+07 | +13.03 % | Optimise numeric multiplicatio\n (6,6) | 9.434e+06 | +3.50 % | Use diff's --strip-trailing-cr\n (6,6) | 8.876e+06 | -2.62 % | Improve the numeric width_buck\n (6,6) | 8.793e+06 | -3.53 % | Add missing pointer dereferenc\n (6,6) | 1.490e+07 | +63.49 % | Extend mul_var_short() to 5 an\n (6,6) | 1.589e+07 | +74.33 % | Optimise numeric multiplicatio\n (7,7) | 7.724e+06 | | SQL/JSON: Various improvements\n (7,7) | 7.446e+06 | -3.59 % | Optimise numeric multiplicatio\n (7,7) | 7.929e+06 | +2.66 % | Use diff's --strip-trailing-cr\n (7,7) | 7.481e+06 | -3.14 % | Improve the numeric width_buck\n (7,7) | 7.497e+06 | -2.93 % | Add missing pointer dereferenc\n (7,7) | 7.214e+06 | -6.60 % | Extend mul_var_short() to 5 an\n (7,7) | 1.024e+07 | +32.56 % | Optimise numeric multiplicatio\n (8,8) | 7.842e+06 | | SQL/JSON: Various improvements\n (8,8) | 7.827e+06 | -0.19 % | Optimise numeric multiplicatio\n (8,8) | 8.111e+06 | +3.44 % | Use diff's --strip-trailing-cr\n (8,8) | 8.156e+06 | +4.01 % | Improve the numeric width_buck\n (8,8) | 7.908e+06 | +0.85 % | Add missing pointer dereferenc\n (8,8) | 8.029e+06 | +2.40 % | Extend mul_var_short() to 5 an\n (8,8) | 9.644e+06 | +22.99 % | Optimise numeric multiplicatio\n (8,16) | 6.489e+06 | | SQL/JSON: Various improvements\n (8,16) | 6.276e+06 | -3.29 % | Optimise numeric multiplicatio\n (8,16) | 6.332e+06 | -2.42 % | Use diff's --strip-trailing-cr\n (8,16) | 6.463e+06 | -0.40 % | Improve the numeric width_buck\n (8,16) | 5.928e+06 | -8.65 % | Add missing pointer dereferenc\n (8,16) | 5.949e+06 | -8.32 % | Extend mul_var_short() to 5 an\n (8,16) | 8.349e+06 | +28.66 % | Optimise numeric multiplicatio\n (8,32) | 4.327e+06 | | SQL/JSON: Various improvements\n (8,32) | 4.324e+06 | -0.08 % | Optimise numeric multiplicatio\n (8,32) | 4.444e+06 | +2.68 % | Use diff's --strip-trailing-cr\n (8,32) | 4.335e+06 | +0.18 % | Improve the numeric width_buck\n (8,32) | 4.350e+06 | +0.52 % | Add missing pointer dereferenc\n (8,32) | 4.333e+06 | +0.13 % | Extend mul_var_short() to 5 an\n (8,32) | 6.288e+06 | +45.30 % | Optimise numeric multiplicatio\n (8,64) | 2.677e+06 | | SQL/JSON: Various improvements\n (8,64) | 2.674e+06 | -0.10 % | Optimise numeric multiplicatio\n (8,64) | 2.668e+06 | -0.31 % | Use diff's --strip-trailing-cr\n (8,64) | 2.704e+06 | +1.02 % | Improve the numeric width_buck\n (8,64) | 2.684e+06 | +0.28 % | Add missing pointer dereferenc\n (8,64) | 2.702e+06 | +0.96 % | Extend mul_var_short() to 5 an\n (8,64) | 3.876e+06 | +44.80 % | Optimise numeric multiplicatio\n (8,128) | 1.410e+06 | | SQL/JSON: Various improvements\n (8,128) | 1.418e+06 | +0.56 % | Optimise numeric multiplicatio\n (8,128) | 1.434e+06 | +1.69 % | Use diff's --strip-trailing-cr\n (8,128) | 1.452e+06 | +3.00 % | Improve the numeric width_buck\n (8,128) | 1.464e+06 | +3.79 % | Add missing pointer dereferenc\n (8,128) | 1.384e+06 | -1.87 % | Extend mul_var_short() to 5 an\n (8,128) | 2.224e+06 | +57.71 % | Optimise numeric multiplicatio\n (8,256) | 7.400e+05 | | SQL/JSON: Various improvements\n (8,256) | 7.473e+05 | +0.98 % | Optimise 
numeric multiplicatio\n (8,256) | 7.338e+05 | -0.85 % | Use diff's --strip-trailing-cr\n (8,256) | 7.401e+05 | +0.01 % | Improve the numeric width_buck\n (8,256) | 7.460e+05 | +0.80 % | Add missing pointer dereferenc\n (8,256) | 7.563e+05 | +2.20 % | Extend mul_var_short() to 5 an\n (8,256) | 1.190e+06 | +60.79 % | Optimise numeric multiplicatio\n (8,512) | 3.746e+05 | | SQL/JSON: Various improvements\n (8,512) | 3.834e+05 | +2.36 % | Optimise numeric multiplicatio\n (8,512) | 3.829e+05 | +2.21 % | Use diff's --strip-trailing-cr\n (8,512) | 3.840e+05 | +2.50 % | Improve the numeric width_buck\n (8,512) | 3.794e+05 | +1.27 % | Add missing pointer dereferenc\n (8,512) | 3.662e+05 | -2.25 % | Extend mul_var_short() to 5 an\n (8,512) | 6.290e+05 | +67.91 % | Optimise numeric multiplicatio\n (8,1024) | 2.036e+05 | | SQL/JSON: Various improvements\n (8,1024) | 2.070e+05 | +1.70 % | Optimise numeric multiplicatio\n (8,1024) | 2.011e+05 | -1.24 % | Use diff's --strip-trailing-cr\n (8,1024) | 2.011e+05 | -1.22 % | Improve the numeric width_buck\n (8,1024) | 2.032e+05 | -0.18 % | Add missing pointer dereferenc\n (8,1024) | 2.028e+05 | -0.38 % | Extend mul_var_short() to 5 an\n (8,1024) | 3.232e+05 | +58.76 % | Optimise numeric multiplicatio\n (8,2048) | 9.898e+04 | | SQL/JSON: Various improvements\n (8,2048) | 1.013e+05 | +2.37 % | Optimise numeric multiplicatio\n (8,2048) | 9.910e+04 | +0.12 % | Use diff's --strip-trailing-cr\n (8,2048) | 1.001e+05 | +1.09 % | Improve the numeric width_buck\n (8,2048) | 9.995e+04 | +0.98 % | Add missing pointer dereferenc\n (8,2048) | 9.741e+04 | -1.59 % | Extend mul_var_short() to 5 an\n (8,2048) | 1.544e+05 | +55.94 % | Optimise numeric multiplicatio\n (8,4096) | 5.071e+04 | | SQL/JSON: Various improvements\n (8,4096) | 5.104e+04 | +0.64 % | Optimise numeric multiplicatio\n (8,4096) | 5.118e+04 | +0.92 % | Use diff's --strip-trailing-cr\n (8,4096) | 5.123e+04 | +1.02 % | Improve the numeric width_buck\n (8,4096) | 5.072e+04 | +0.02 % | Add missing pointer dereferenc\n (8,4096) | 5.213e+04 | +2.80 % | Extend mul_var_short() to 5 an\n (8,4096) | 8.190e+04 | +61.49 % | Optimise numeric multiplicatio\n (8,8192) | 2.431e+04 | | SQL/JSON: Various improvements\n (8,8192) | 2.411e+04 | -0.80 % | Optimise numeric multiplicatio\n (8,8192) | 2.433e+04 | +0.10 % | Use diff's --strip-trailing-cr\n (8,8192) | 2.434e+04 | +0.14 % | Improve the numeric width_buck\n (8,8192) | 2.430e+04 | -0.04 % | Add missing pointer dereferenc\n (8,8192) | 2.520e+04 | +3.69 % | Extend mul_var_short() to 5 an\n (8,8192) | 3.958e+04 | +62.82 % | Optimise numeric multiplicatio\n (8,16384) | 1.222e+04 | | SQL/JSON: Various improvements\n (8,16384) | 1.224e+04 | +0.21 % | Optimise numeric multiplicatio\n (8,16384) | 1.211e+04 | -0.92 % | Use diff's --strip-trailing-cr\n (8,16384) | 1.202e+04 | -1.58 % | Improve the numeric width_buck\n (8,16384) | 1.232e+04 | +0.86 % | Add missing pointer dereferenc\n (8,16384) | 1.211e+04 | -0.92 % | Extend mul_var_short() to 5 an\n (8,16384) | 1.958e+04 | +60.24 % | Optimise numeric multiplicatio\n (16,16) | 4.325e+06 | | SQL/JSON: Various improvements\n (16,16) | 4.380e+06 | +1.28 % | Optimise numeric multiplicatio\n (16,16) | 4.258e+06 | -1.56 % | Use diff's --strip-trailing-cr\n (16,16) | 4.389e+06 | +1.48 % | Improve the numeric width_buck\n (16,16) | 4.265e+06 | -1.38 % | Add missing pointer dereferenc\n (16,16) | 4.266e+06 | -1.37 % | Extend mul_var_short() to 5 an\n (16,16) | 6.293e+06 | +45.50 % | Optimise numeric multiplicatio\n (16,32) | 3.289e+06 | | 
SQL/JSON: Various improvements\n (16,32) | 3.356e+06 | +2.04 % | Optimise numeric multiplicatio\n (16,32) | 3.226e+06 | -1.92 % | Use diff's --strip-trailing-cr\n (16,32) | 3.349e+06 | +1.83 % | Improve the numeric width_buck\n (16,32) | 3.307e+06 | +0.54 % | Add missing pointer dereferenc\n (16,32) | 3.212e+06 | -2.36 % | Extend mul_var_short() to 5 an\n (16,32) | 4.831e+06 | +46.89 % | Optimise numeric multiplicatio\n (16,64) | 2.060e+06 | | SQL/JSON: Various improvements\n (16,64) | 2.047e+06 | -0.66 % | Optimise numeric multiplicatio\n (16,64) | 2.005e+06 | -2.71 % | Use diff's --strip-trailing-cr\n (16,64) | 2.100e+06 | +1.93 % | Improve the numeric width_buck\n (16,64) | 2.062e+06 | +0.06 % | Add missing pointer dereferenc\n (16,64) | 1.814e+06 | -11.95 % | Extend mul_var_short() to 5 an\n (16,64) | 3.278e+06 | +59.07 % | Optimise numeric multiplicatio\n (16,128) | 1.174e+06 | | SQL/JSON: Various improvements\n (16,128) | 1.121e+06 | -4.52 % | Optimise numeric multiplicatio\n (16,128) | 1.142e+06 | -2.75 % | Use diff's --strip-trailing-cr\n (16,128) | 1.165e+06 | -0.79 % | Improve the numeric width_buck\n (16,128) | 1.163e+06 | -0.93 % | Add missing pointer dereferenc\n (16,128) | 1.049e+06 | -10.68 % | Extend mul_var_short() to 5 an\n (16,128) | 1.821e+06 | +55.05 % | Optimise numeric multiplicatio\n (16,256) | 5.786e+05 | | SQL/JSON: Various improvements\n (16,256) | 6.143e+05 | +6.15 % | Optimise numeric multiplicatio\n (16,256) | 6.141e+05 | +6.13 % | Use diff's --strip-trailing-cr\n (16,256) | 5.783e+05 | -0.06 % | Improve the numeric width_buck\n (16,256) | 5.837e+05 | +0.88 % | Add missing pointer dereferenc\n (16,256) | 5.725e+05 | -1.06 % | Extend mul_var_short() to 5 an\n (16,256) | 9.643e+05 | +66.64 % | Optimise numeric multiplicatio\n (16,512) | 2.984e+05 | | SQL/JSON: Various improvements\n (16,512) | 2.994e+05 | +0.33 % | Optimise numeric multiplicatio\n (16,512) | 3.016e+05 | +1.06 % | Use diff's --strip-trailing-cr\n (16,512) | 2.961e+05 | -0.77 % | Improve the numeric width_buck\n (16,512) | 2.972e+05 | -0.43 % | Add missing pointer dereferenc\n (16,512) | 2.967e+05 | -0.57 % | Extend mul_var_short() to 5 an\n (16,512) | 5.348e+05 | +79.21 % | Optimise numeric multiplicatio\n (16,1024) | 1.635e+05 | | SQL/JSON: Various improvements\n (16,1024) | 1.695e+05 | +3.66 % | Optimise numeric multiplicatio\n (16,1024) | 1.673e+05 | +2.28 % | Use diff's --strip-trailing-cr\n (16,1024) | 1.650e+05 | +0.87 % | Improve the numeric width_buck\n (16,1024) | 1.643e+05 | +0.48 % | Add missing pointer dereferenc\n (16,1024) | 1.617e+05 | -1.11 % | Extend mul_var_short() to 5 an\n (16,1024) | 2.789e+05 | +70.54 % | Optimise numeric multiplicatio\n (16,2048) | 7.988e+04 | | SQL/JSON: Various improvements\n (16,2048) | 8.323e+04 | +4.20 % | Optimise numeric multiplicatio\n (16,2048) | 8.180e+04 | +2.41 % | Use diff's --strip-trailing-cr\n (16,2048) | 8.048e+04 | +0.75 % | Improve the numeric width_buck\n (16,2048) | 8.065e+04 | +0.96 % | Add missing pointer dereferenc\n (16,2048) | 8.284e+04 | +3.72 % | Extend mul_var_short() to 5 an\n (16,2048) | 1.325e+05 | +65.90 % | Optimise numeric multiplicatio\n (16,4096) | 4.118e+04 | | SQL/JSON: Various improvements\n (16,4096) | 4.400e+04 | +6.84 % | Optimise numeric multiplicatio\n (16,4096) | 4.155e+04 | +0.89 % | Use diff's --strip-trailing-cr\n (16,4096) | 4.440e+04 | +7.81 % | Improve the numeric width_buck\n (16,4096) | 4.154e+04 | +0.88 % | Add missing pointer dereferenc\n (16,4096) | 4.274e+04 | +3.79 % | Extend mul_var_short() to 5 
an\n (16,4096) | 6.959e+04 | +68.97 % | Optimise numeric multiplicatio\n (16,8192) | 1.963e+04 | | SQL/JSON: Various improvements\n (16,8192) | 1.910e+04 | -2.65 % | Optimise numeric multiplicatio\n (16,8192) | 1.927e+04 | -1.79 % | Use diff's --strip-trailing-cr\n (16,8192) | 1.946e+04 | -0.87 % | Improve the numeric width_buck\n (16,8192) | 1.925e+04 | -1.92 % | Add missing pointer dereferenc\n (16,8192) | 1.890e+04 | -3.68 % | Extend mul_var_short() to 5 an\n (16,8192) | 3.280e+04 | +67.15 % | Optimise numeric multiplicatio\n (16,16384) | 9.497e+03 | | SQL/JSON: Various improvements\n (16,16384) | 9.499e+03 | +0.02 % | Optimise numeric multiplicatio\n (16,16384) | 9.721e+03 | +2.35 % | Use diff's --strip-trailing-cr\n (16,16384) | 9.586e+03 | +0.94 % | Improve the numeric width_buck\n (16,16384) | 9.559e+03 | +0.65 % | Add missing pointer dereferenc\n (16,16384) | 9.744e+03 | +2.59 % | Extend mul_var_short() to 5 an\n (16,16384) | 1.627e+04 | +71.30 % | Optimise numeric multiplicatio\n (32,32) | 2.032e+06 | | SQL/JSON: Various improvements\n (32,32) | 2.051e+06 | +0.91 % | Optimise numeric multiplicatio\n (32,32) | 2.013e+06 | -0.95 % | Use diff's --strip-trailing-cr\n (32,32) | 2.034e+06 | +0.06 % | Improve the numeric width_buck\n (32,32) | 2.048e+06 | +0.75 % | Add missing pointer dereferenc\n (32,32) | 1.807e+06 | -11.10 % | Extend mul_var_short() to 5 an\n (32,32) | 3.309e+06 | +62.80 % | Optimise numeric multiplicatio\n (32,64) | 1.382e+06 | | SQL/JSON: Various improvements\n (32,64) | 1.344e+06 | -2.75 % | Optimise numeric multiplicatio\n (32,64) | 1.356e+06 | -1.89 % | Use diff's --strip-trailing-cr\n (32,64) | 1.370e+06 | -0.88 % | Improve the numeric width_buck\n (32,64) | 1.394e+06 | +0.84 % | Add missing pointer dereferenc\n (32,64) | 1.165e+06 | -15.71 % | Extend mul_var_short() to 5 an\n (32,64) | 2.340e+06 | +69.33 % | Optimise numeric multiplicatio\n (32,128) | 8.215e+05 | | SQL/JSON: Various improvements\n (32,128) | 8.368e+05 | +1.87 % | Optimise numeric multiplicatio\n (32,128) | 8.372e+05 | +1.90 % | Use diff's --strip-trailing-cr\n (32,128) | 8.154e+05 | -0.75 % | Improve the numeric width_buck\n (32,128) | 8.291e+05 | +0.92 % | Add missing pointer dereferenc\n (32,128) | 7.009e+05 | -14.68 % | Extend mul_var_short() to 5 an\n (32,128) | 1.393e+06 | +69.61 % | Optimise numeric multiplicatio\n (32,256) | 4.550e+05 | | SQL/JSON: Various improvements\n (32,256) | 4.596e+05 | +1.01 % | Optimise numeric multiplicatio\n (32,256) | 4.724e+05 | +3.84 % | Use diff's --strip-trailing-cr\n (32,256) | 4.598e+05 | +1.07 % | Improve the numeric width_buck\n (32,256) | 4.677e+05 | +2.81 % | Add missing pointer dereferenc\n (32,256) | 4.115e+05 | -9.56 % | Extend mul_var_short() to 5 an\n (32,256) | 8.199e+05 | +80.22 % | Optimise numeric multiplicatio\n (32,512) | 2.350e+05 | | SQL/JSON: Various improvements\n (32,512) | 2.277e+05 | -3.09 % | Optimise numeric multiplicatio\n (32,512) | 2.250e+05 | -4.23 % | Use diff's --strip-trailing-cr\n (32,512) | 2.290e+05 | -2.53 % | Improve the numeric width_buck\n (32,512) | 2.214e+05 | -5.76 % | Add missing pointer dereferenc\n (32,512) | 2.126e+05 | -9.52 % | Extend mul_var_short() to 5 an\n (32,512) | 4.135e+05 | +75.99 % | Optimise numeric multiplicatio\n (32,1024) | 1.189e+05 | | SQL/JSON: Various improvements\n (32,1024) | 1.222e+05 | +2.75 % | Optimise numeric multiplicatio\n (32,1024) | 1.218e+05 | +2.46 % | Use diff's --strip-trailing-cr\n (32,1024) | 1.243e+05 | +4.56 % | Improve the numeric width_buck\n (32,1024) | 1.219e+05 | 
+2.53 % | Add missing pointer dereferenc\n (32,1024) | 1.187e+05 | -0.19 % | Extend mul_var_short() to 5 an\n (32,1024) | 2.153e+05 | +81.09 % | Optimise numeric multiplicatio\n (32,2048) | 5.867e+04 | | SQL/JSON: Various improvements\n (32,2048) | 5.829e+04 | -0.64 % | Optimise numeric multiplicatio\n (32,2048) | 5.943e+04 | +1.30 % | Use diff's --strip-trailing-cr\n (32,2048) | 5.863e+04 | -0.05 % | Improve the numeric width_buck\n (32,2048) | 5.811e+04 | -0.95 % | Add missing pointer dereferenc\n (32,2048) | 6.030e+04 | +2.78 % | Extend mul_var_short() to 5 an\n (32,2048) | 1.050e+05 | +79.02 % | Optimise numeric multiplicatio\n (32,4096) | 3.015e+04 | | SQL/JSON: Various improvements\n (32,4096) | 3.045e+04 | +1.01 % | Optimise numeric multiplicatio\n (32,4096) | 2.990e+04 | -0.81 % | Use diff's --strip-trailing-cr\n (32,4096) | 2.991e+04 | -0.78 % | Improve the numeric width_buck\n (32,4096) | 3.044e+04 | +0.96 % | Add missing pointer dereferenc\n (32,4096) | 3.046e+04 | +1.03 % | Extend mul_var_short() to 5 an\n (32,4096) | 5.518e+04 | +83.03 % | Optimise numeric multiplicatio\n (32,8192) | 1.360e+04 | | SQL/JSON: Various improvements\n (32,8192) | 1.336e+04 | -1.74 % | Optimise numeric multiplicatio\n (32,8192) | 1.349e+04 | -0.80 % | Use diff's --strip-trailing-cr\n (32,8192) | 1.400e+04 | +2.93 % | Improve the numeric width_buck\n (32,8192) | 1.398e+04 | +2.76 % | Add missing pointer dereferenc\n (32,8192) | 1.347e+04 | -0.96 % | Extend mul_var_short() to 5 an\n (32,8192) | 2.423e+04 | +78.16 % | Optimise numeric multiplicatio\n (32,16384) | 6.732e+03 | | SQL/JSON: Various improvements\n (32,16384) | 6.688e+03 | -0.65 % | Optimise numeric multiplicatio\n (32,16384) | 7.033e+03 | +4.49 % | Use diff's --strip-trailing-cr\n (32,16384) | 6.688e+03 | -0.65 % | Improve the numeric width_buck\n (32,16384) | 6.868e+03 | +2.02 % | Add missing pointer dereferenc\n (32,16384) | 6.929e+03 | +2.94 % | Extend mul_var_short() to 5 an\n (32,16384) | 1.193e+04 | +77.20 % | Optimise numeric multiplicatio\n (64,64) | 7.035e+05 | | SQL/JSON: Various improvements\n (64,64) | 6.919e+05 | -1.65 % | Optimise numeric multiplicatio\n (64,64) | 6.896e+05 | -1.98 % | Use diff's --strip-trailing-cr\n (64,64) | 6.838e+05 | -2.81 % | Improve the numeric width_buck\n (64,64) | 7.163e+05 | +1.82 % | Add missing pointer dereferenc\n (64,64) | 5.491e+05 | -21.95 % | Extend mul_var_short() to 5 an\n (64,64) | 1.455e+06 | +106.74 % | Optimise numeric multiplicatio\n (64,128) | 4.060e+05 | | SQL/JSON: Various improvements\n (64,128) | 3.897e+05 | -4.01 % | Optimise numeric multiplicatio\n (64,128) | 3.858e+05 | -4.97 % | Use diff's --strip-trailing-cr\n (64,128) | 3.977e+05 | -2.03 % | Improve the numeric width_buck\n (64,128) | 3.954e+05 | -2.61 % | Add missing pointer dereferenc\n (64,128) | 3.391e+05 | -16.48 % | Extend mul_var_short() to 5 an\n (64,128) | 9.534e+05 | +134.85 % | Optimise numeric multiplicatio\n (64,256) | 2.412e+05 | | SQL/JSON: Various improvements\n (64,256) | 2.394e+05 | -0.77 % | Optimise numeric multiplicatio\n (64,256) | 2.441e+05 | +1.19 % | Use diff's --strip-trailing-cr\n (64,256) | 2.393e+05 | -0.81 % | Improve the numeric width_buck\n (64,256) | 2.463e+05 | +2.10 % | Add missing pointer dereferenc\n (64,256) | 2.170e+05 | -10.05 % | Extend mul_var_short() to 5 an\n (64,256) | 5.368e+05 | +122.53 % | Optimise numeric multiplicatio\n (64,512) | 1.163e+05 | | SQL/JSON: Various improvements\n (64,512) | 1.174e+05 | +0.94 % | Optimise numeric multiplicatio\n (64,512) | 1.172e+05 | +0.79 % | 
Use diff's --strip-trailing-cr\n (64,512) | 1.195e+05 | +2.75 % | Improve the numeric width_buck\n (64,512) | 1.199e+05 | +3.10 % | Add missing pointer dereferenc\n (64,512) | 1.116e+05 | -4.07 % | Extend mul_var_short() to 5 an\n (64,512) | 2.836e+05 | +143.79 % | Optimise numeric multiplicatio\n (64,1024) | 6.084e+04 | | SQL/JSON: Various improvements\n (64,1024) | 6.026e+04 | -0.96 % | Optimise numeric multiplicatio\n (64,1024) | 5.970e+04 | -1.87 % | Use diff's --strip-trailing-cr\n (64,1024) | 5.911e+04 | -2.85 % | Improve the numeric width_buck\n (64,1024) | 5.913e+04 | -2.81 % | Add missing pointer dereferenc\n (64,1024) | 5.920e+04 | -2.69 % | Extend mul_var_short() to 5 an\n (64,1024) | 1.411e+05 | +131.88 % | Optimise numeric multiplicatio\n (64,2048) | 3.163e+04 | | SQL/JSON: Various improvements\n (64,2048) | 3.102e+04 | -1.91 % | Optimise numeric multiplicatio\n (64,2048) | 3.105e+04 | -1.81 % | Use diff's --strip-trailing-cr\n (64,2048) | 3.106e+04 | -1.79 % | Improve the numeric width_buck\n (64,2048) | 3.078e+04 | -2.69 % | Add missing pointer dereferenc\n (64,2048) | 3.077e+04 | -2.72 % | Extend mul_var_short() to 5 an\n (64,2048) | 7.339e+04 | +132.04 % | Optimise numeric multiplicatio\n (64,4096) | 1.619e+04 | | SQL/JSON: Various improvements\n (64,4096) | 1.604e+04 | -0.95 % | Optimise numeric multiplicatio\n (64,4096) | 1.561e+04 | -3.60 % | Use diff's --strip-trailing-cr\n (64,4096) | 1.561e+04 | -3.60 % | Improve the numeric width_buck\n (64,4096) | 1.634e+04 | +0.92 % | Add missing pointer dereferenc\n (64,4096) | 1.618e+04 | -0.05 % | Extend mul_var_short() to 5 an\n (64,4096) | 3.784e+04 | +133.70 % | Optimise numeric multiplicatio\n (64,8192) | 7.097e+03 | | SQL/JSON: Various improvements\n (64,8192) | 7.160e+03 | +0.90 % | Optimise numeric multiplicatio\n (64,8192) | 7.165e+03 | +0.97 % | Use diff's --strip-trailing-cr\n (64,8192) | 7.032e+03 | -0.90 % | Improve the numeric width_buck\n (64,8192) | 7.094e+03 | -0.04 % | Add missing pointer dereferenc\n (64,8192) | 7.431e+03 | +4.71 % | Extend mul_var_short() to 5 an\n (64,8192) | 1.593e+04 | +124.42 % | Optimise numeric multiplicatio\n (64,16384) | 3.557e+03 | | SQL/JSON: Various improvements\n (64,16384) | 3.519e+03 | -1.07 % | Optimise numeric multiplicatio\n (64,16384) | 3.520e+03 | -1.06 % | Use diff's --strip-trailing-cr\n (64,16384) | 3.519e+03 | -1.08 % | Improve the numeric width_buck\n (64,16384) | 3.587e+03 | +0.84 % | Add missing pointer dereferenc\n (64,16384) | 3.583e+03 | +0.71 % | Extend mul_var_short() to 5 an\n (64,16384) | 7.995e+03 | +124.76 % | Optimise numeric multiplicatio\n (128,128) | 2.134e+05 | | SQL/JSON: Various improvements\n (128,128) | 2.192e+05 | +2.75 % | Optimise numeric multiplicatio\n (128,128) | 2.175e+05 | +1.96 % | Use diff's --strip-trailing-cr\n (128,128) | 2.136e+05 | +0.11 % | Improve the numeric width_buck\n (128,128) | 2.130e+05 | -0.16 % | Add missing pointer dereferenc\n (128,128) | 1.831e+05 | -14.18 % | Extend mul_var_short() to 5 an\n (128,128) | 5.572e+05 | +161.13 % | Optimise numeric multiplicatio\n (128,256) | 1.303e+05 | | SQL/JSON: Various improvements\n (128,256) | 1.327e+05 | +1.89 % | Optimise numeric multiplicatio\n (128,256) | 1.291e+05 | -0.87 % | Use diff's --strip-trailing-cr\n (128,256) | 1.335e+05 | +2.51 % | Improve the numeric width_buck\n (128,256) | 1.291e+05 | -0.89 % | Add missing pointer dereferenc\n (128,256) | 1.176e+05 | -9.69 % | Extend mul_var_short() to 5 an\n (128,256) | 3.317e+05 | +154.62 % | Optimise numeric multiplicatio\n 
(128,512) | 7.007e+04 | | SQL/JSON: Various improvements\n (128,512) | 6.934e+04 | -1.03 % | Optimise numeric multiplicatio\n (128,512) | 6.976e+04 | -0.45 % | Use diff's --strip-trailing-cr\n (128,512) | 6.872e+04 | -1.93 % | Improve the numeric width_buck\n (128,512) | 6.662e+04 | -4.92 % | Add missing pointer dereferenc\n (128,512) | 6.579e+04 | -6.10 % | Extend mul_var_short() to 5 an\n (128,512) | 1.824e+05 | +160.38 % | Optimise numeric multiplicatio\n (128,1024) | 3.443e+04 | | SQL/JSON: Various improvements\n (128,1024) | 3.350e+04 | -2.70 % | Optimise numeric multiplicatio\n (128,1024) | 3.481e+04 | +1.11 % | Use diff's --strip-trailing-cr\n (128,1024) | 3.378e+04 | -1.89 % | Improve the numeric width_buck\n (128,1024) | 3.440e+04 | -0.10 % | Add missing pointer dereferenc\n (128,1024) | 3.379e+04 | -1.86 % | Extend mul_var_short() to 5 an\n (128,1024) | 8.564e+04 | +148.74 % | Optimise numeric multiplicatio\n (128,2048) | 1.667e+04 | | SQL/JSON: Various improvements\n (128,2048) | 1.683e+04 | +0.95 % | Optimise numeric multiplicatio\n (128,2048) | 1.685e+04 | +1.06 % | Use diff's --strip-trailing-cr\n (128,2048) | 1.639e+04 | -1.73 % | Improve the numeric width_buck\n (128,2048) | 1.687e+04 | +1.16 % | Add missing pointer dereferenc\n (128,2048) | 1.685e+04 | +1.05 % | Extend mul_var_short() to 5 an\n (128,2048) | 4.560e+04 | +173.45 % | Optimise numeric multiplicatio\n (128,4096) | 8.790e+03 | | SQL/JSON: Various improvements\n (128,4096) | 8.799e+03 | +0.10 % | Optimise numeric multiplicatio\n (128,4096) | 8.788e+03 | -0.03 % | Use diff's --strip-trailing-cr\n (128,4096) | 8.966e+03 | +2.00 % | Improve the numeric width_buck\n (128,4096) | 9.210e+03 | +4.78 % | Add missing pointer dereferenc\n (128,4096) | 8.635e+03 | -1.76 % | Extend mul_var_short() to 5 an\n (128,4096) | 2.281e+04 | +159.53 % | Optimise numeric multiplicatio\n (128,8192) | 3.853e+03 | | SQL/JSON: Various improvements\n (128,8192) | 3.920e+03 | +1.74 % | Optimise numeric multiplicatio\n (128,8192) | 3.929e+03 | +1.96 % | Use diff's --strip-trailing-cr\n (128,8192) | 3.853e+03 | 0.00 % | Improve the numeric width_buck\n (128,8192) | 3.883e+03 | +0.79 % | Add missing pointer dereferenc\n (128,8192) | 3.851e+03 | -0.06 % | Extend mul_var_short() to 5 an\n (128,8192) | 9.636e+03 | +150.08 % | Optimise numeric multiplicatio\n (128,16384) | 1.859e+03 | | SQL/JSON: Various improvements\n (128,16384) | 1.892e+03 | +1.80 % | Optimise numeric multiplicatio\n (128,16384) | 1.876e+03 | +0.92 % | Use diff's --strip-trailing-cr\n (128,16384) | 1.891e+03 | +1.71 % | Improve the numeric width_buck\n (128,16384) | 1.893e+03 | +1.83 % | Add missing pointer dereferenc\n (128,16384) | 1.857e+03 | -0.09 % | Extend mul_var_short() to 5 an\n (128,16384) | 4.837e+03 | +160.19 % | Optimise numeric multiplicatio\n (256,256) | 5.756e+04 | | SQL/JSON: Various improvements\n (256,256) | 6.032e+04 | +4.78 % | Optimise numeric multiplicatio\n (256,256) | 5.920e+04 | +2.84 % | Use diff's --strip-trailing-cr\n (256,256) | 5.874e+04 | +2.04 % | Improve the numeric width_buck\n (256,256) | 5.813e+04 | +0.99 % | Add missing pointer dereferenc\n (256,256) | 5.270e+04 | -8.45 % | Extend mul_var_short() to 5 an\n (256,256) | 1.739e+05 | +202.12 % | Optimise numeric multiplicatio\n (256,512) | 3.266e+04 | | SQL/JSON: Various improvements\n (256,512) | 3.261e+04 | -0.14 % | Optimise numeric multiplicatio\n (256,512) | 3.420e+04 | +4.73 % | Use diff's --strip-trailing-cr\n (256,512) | 3.325e+04 | +1.80 % | Improve the numeric width_buck\n (256,512) | 
3.127e+04 | -4.25 % | Add missing pointer dereferenc\n (256,512) | 3.081e+04 | -5.64 % | Extend mul_var_short() to 5 an\n (256,512) | 1.019e+05 | +212.01 % | Optimise numeric multiplicatio\n (256,1024) | 1.719e+04 | | SQL/JSON: Various improvements\n (256,1024) | 1.767e+04 | +2.83 % | Optimise numeric multiplicatio\n (256,1024) | 1.735e+04 | +0.93 % | Use diff's --strip-trailing-cr\n (256,1024) | 1.785e+04 | +3.86 % | Improve the numeric width_buck\n (256,1024) | 1.750e+04 | +1.80 % | Add missing pointer dereferenc\n (256,1024) | 1.718e+04 | -0.03 % | Extend mul_var_short() to 5 an\n (256,1024) | 4.776e+04 | +177.91 % | Optimise numeric multiplicatio\n (256,2048) | 8.793e+03 | | SQL/JSON: Various improvements\n (256,2048) | 8.750e+03 | -0.50 % | Optimise numeric multiplicatio\n (256,2048) | 8.587e+03 | -2.34 % | Use diff's --strip-trailing-cr\n (256,2048) | 8.712e+03 | -0.93 % | Improve the numeric width_buck\n (256,2048) | 8.551e+03 | -2.76 % | Add missing pointer dereferenc\n (256,2048) | 8.878e+03 | +0.96 % | Extend mul_var_short() to 5 an\n (256,2048) | 2.627e+04 | +198.77 % | Optimise numeric multiplicatio\n (256,4096) | 4.370e+03 | | SQL/JSON: Various improvements\n (256,4096) | 4.411e+03 | +0.92 % | Optimise numeric multiplicatio\n (256,4096) | 4.371e+03 | +0.02 % | Use diff's --strip-trailing-cr\n (256,4096) | 4.403e+03 | +0.76 % | Improve the numeric width_buck\n (256,4096) | 4.532e+03 | +3.70 % | Add missing pointer dereferenc\n (256,4096) | 4.583e+03 | +4.86 % | Extend mul_var_short() to 5 an\n (256,4096) | 1.320e+04 | +202.00 % | Optimise numeric multiplicatio\n (256,8192) | 1.963e+03 | | SQL/JSON: Various improvements\n (256,8192) | 1.956e+03 | -0.38 % | Optimise numeric multiplicatio\n (256,8192) | 1.938e+03 | -1.29 % | Use diff's --strip-trailing-cr\n (256,8192) | 1.957e+03 | -0.32 % | Improve the numeric width_buck\n (256,8192) | 1.942e+03 | -1.09 % | Add missing pointer dereferenc\n (256,8192) | 2.013e+03 | +2.53 % | Extend mul_var_short() to 5 an\n (256,8192) | 5.266e+03 | +168.21 % | Optimise numeric multiplicatio\n (256,16384) | 9.950e+02 | | SQL/JSON: Various improvements\n (256,16384) | 9.936e+02 | -0.15 % | Optimise numeric multiplicatio\n (256,16384) | 9.752e+02 | -2.00 % | Use diff's --strip-trailing-cr\n (256,16384) | 9.926e+02 | -0.24 % | Improve the numeric width_buck\n (256,16384) | 9.841e+02 | -1.10 % | Add missing pointer dereferenc\n (256,16384) | 1.011e+03 | +1.61 % | Extend mul_var_short() to 5 an\n (256,16384) | 2.661e+03 | +167.42 % | Optimise numeric multiplicatio\n (512,512) | 1.626e+04 | | SQL/JSON: Various improvements\n (512,512) | 1.602e+04 | -1.49 % | Optimise numeric multiplicatio\n (512,512) | 1.618e+04 | -0.51 % | Use diff's --strip-trailing-cr\n (512,512) | 1.602e+04 | -1.49 % | Improve the numeric width_buck\n (512,512) | 1.587e+04 | -2.44 % | Add missing pointer dereferenc\n (512,512) | 1.548e+04 | -4.79 % | Extend mul_var_short() to 5 an\n (512,512) | 5.094e+04 | +213.25 % | Optimise numeric multiplicatio\n (512,1024) | 8.460e+03 | | SQL/JSON: Various improvements\n (512,1024) | 8.611e+03 | +1.80 % | Optimise numeric multiplicatio\n (512,1024) | 8.456e+03 | -0.05 % | Use diff's --strip-trailing-cr\n (512,1024) | 8.381e+03 | -0.93 % | Improve the numeric width_buck\n (512,1024) | 8.692e+03 | +2.74 % | Add missing pointer dereferenc\n (512,1024) | 8.381e+03 | -0.93 % | Extend mul_var_short() to 5 an\n (512,1024) | 2.679e+04 | +216.68 % | Optimise numeric multiplicatio\n (512,2048) | 4.358e+03 | | SQL/JSON: Various improvements\n (512,2048) | 
4.485e+03 | +2.91 % | Optimise numeric multiplicatio\n (512,2048) | 4.324e+03 | -0.78 % | Use diff's --strip-trailing-cr\n (512,2048) | 4.323e+03 | -0.81 % | Improve the numeric width_buck\n (512,2048) | 4.361e+03 | +0.06 % | Add missing pointer dereferenc\n (512,2048) | 4.407e+03 | +1.12 % | Extend mul_var_short() to 5 an\n (512,2048) | 1.406e+04 | +222.72 % | Optimise numeric multiplicatio\n (512,4096) | 2.210e+03 | | SQL/JSON: Various improvements\n (512,4096) | 2.271e+03 | +2.75 % | Optimise numeric multiplicatio\n (512,4096) | 2.251e+03 | +1.85 % | Use diff's --strip-trailing-cr\n (512,4096) | 2.229e+03 | +0.84 % | Improve the numeric width_buck\n (512,4096) | 2.210e+03 | -0.01 % | Add missing pointer dereferenc\n (512,4096) | 2.231e+03 | +0.94 % | Extend mul_var_short() to 5 an\n (512,4096) | 7.011e+03 | +217.25 % | Optimise numeric multiplicatio\n (512,8192) | 1.020e+03 | | SQL/JSON: Various improvements\n (512,8192) | 1.031e+03 | +1.02 % | Optimise numeric multiplicatio\n (512,8192) | 1.012e+03 | -0.83 % | Use diff's --strip-trailing-cr\n (512,8192) | 1.051e+03 | +3.05 % | Improve the numeric width_buck\n (512,8192) | 9.928e+02 | -2.69 % | Add missing pointer dereferenc\n (512,8192) | 1.030e+03 | +0.92 % | Extend mul_var_short() to 5 an\n (512,8192) | 2.871e+03 | +181.41 % | Optimise numeric multiplicatio\n (512,16384) | 5.121e+02 | | SQL/JSON: Various improvements\n (512,16384) | 5.084e+02 | -0.72 % | Optimise numeric multiplicatio\n (512,16384) | 5.032e+02 | -1.72 % | Use diff's --strip-trailing-cr\n (512,16384) | 5.034e+02 | -1.68 % | Improve the numeric width_buck\n (512,16384) | 5.075e+02 | -0.88 % | Add missing pointer dereferenc\n (512,16384) | 4.952e+02 | -3.28 % | Extend mul_var_short() to 5 an\n (512,16384) | 1.397e+03 | +172.76 % | Optimise numeric multiplicatio\n (1024,1024) | 4.230e+03 | | SQL/JSON: Various improvements\n (1024,1024) | 4.164e+03 | -1.56 % | Optimise numeric multiplicatio\n (1024,1024) | 4.192e+03 | -0.91 % | Use diff's --strip-trailing-cr\n (1024,1024) | 4.134e+03 | -2.29 % | Improve the numeric width_buck\n (1024,1024) | 4.115e+03 | -2.73 % | Add missing pointer dereferenc\n (1024,1024) | 4.230e+03 | 0.00 % | Extend mul_var_short() to 5 an\n (1024,1024) | 1.372e+04 | +224.40 % | Optimise numeric multiplicatio\n (1024,2048) | 2.179e+03 | | SQL/JSON: Various improvements\n (1024,2048) | 2.206e+03 | +1.28 % | Optimise numeric multiplicatio\n (1024,2048) | 2.198e+03 | +0.91 % | Use diff's --strip-trailing-cr\n (1024,2048) | 2.179e+03 | +0.03 % | Improve the numeric width_buck\n (1024,2048) | 2.239e+03 | +2.79 % | Add missing pointer dereferenc\n (1024,2048) | 2.278e+03 | +4.59 % | Extend mul_var_short() to 5 an\n (1024,2048) | 7.093e+03 | +225.60 % | Optimise numeric multiplicatio\n (1024,4096) | 1.124e+03 | | SQL/JSON: Various improvements\n (1024,4096) | 1.124e+03 | +0.01 % | Optimise numeric multiplicatio\n (1024,4096) | 1.125e+03 | +0.05 % | Use diff's --strip-trailing-cr\n (1024,4096) | 1.111e+03 | -1.22 % | Improve the numeric width_buck\n (1024,4096) | 1.135e+03 | +0.95 % | Add missing pointer dereferenc\n (1024,4096) | 1.146e+03 | +1.91 % | Extend mul_var_short() to 5 an\n (1024,4096) | 3.714e+03 | +230.29 % | Optimise numeric multiplicatio\n (1024,8192) | 5.069e+02 | | SQL/JSON: Various improvements\n (1024,8192) | 5.087e+02 | +0.35 % | Optimise numeric multiplicatio\n (1024,8192) | 5.178e+02 | +2.14 % | Use diff's --strip-trailing-cr\n (1024,8192) | 5.132e+02 | +1.24 % | Improve the numeric width_buck\n (1024,8192) | 5.163e+02 | +1.85 % | Add 
missing pointer dereferenc\n (1024,8192) | 5.123e+02 | +1.06 % | Extend mul_var_short() to 5 an\n (1024,8192) | 1.449e+03 | +185.92 % | Optimise numeric multiplicatio\n (1024,16384) | 2.534e+02 | | SQL/JSON: Various improvements\n (1024,16384) | 2.489e+02 | -1.80 % | Optimise numeric multiplicatio\n (1024,16384) | 2.559e+02 | +0.98 % | Use diff's --strip-trailing-cr\n (1024,16384) | 2.559e+02 | +0.97 % | Improve the numeric width_buck\n (1024,16384) | 2.556e+02 | +0.88 % | Add missing pointer dereferenc\n (1024,16384) | 2.465e+02 | -2.72 % | Extend mul_var_short() to 5 an\n (1024,16384) | 7.249e+02 | +186.04 % | Optimise numeric multiplicatio\n (2048,2048) | 1.082e+03 | | SQL/JSON: Various improvements\n (2048,2048) | 1.097e+03 | +1.39 % | Optimise numeric multiplicatio\n (2048,2048) | 1.083e+03 | +0.16 % | Use diff's --strip-trailing-cr\n (2048,2048) | 1.076e+03 | -0.54 % | Improve the numeric width_buck\n (2048,2048) | 1.071e+03 | -0.95 % | Add missing pointer dereferenc\n (2048,2048) | 1.092e+03 | +0.95 % | Extend mul_var_short() to 5 an\n (2048,2048) | 3.709e+03 | +242.92 % | Optimise numeric multiplicatio\n (2048,4096) | 5.609e+02 | | SQL/JSON: Various improvements\n (2048,4096) | 5.522e+02 | -1.55 % | Optimise numeric multiplicatio\n (2048,4096) | 5.572e+02 | -0.66 % | Use diff's --strip-trailing-cr\n (2048,4096) | 5.525e+02 | -1.49 % | Improve the numeric width_buck\n (2048,4096) | 5.577e+02 | -0.57 % | Add missing pointer dereferenc\n (2048,4096) | 5.624e+02 | +0.26 % | Extend mul_var_short() to 5 an\n (2048,4096) | 1.889e+03 | +236.76 % | Optimise numeric multiplicatio\n (2048,8192) | 2.505e+02 | | SQL/JSON: Various improvements\n (2048,8192) | 2.529e+02 | +0.96 % | Optimise numeric multiplicatio\n (2048,8192) | 2.482e+02 | -0.91 % | Use diff's --strip-trailing-cr\n (2048,8192) | 2.526e+02 | +0.83 % | Improve the numeric width_buck\n (2048,8192) | 2.510e+02 | +0.20 % | Add missing pointer dereferenc\n (2048,8192) | 2.606e+02 | +4.03 % | Extend mul_var_short() to 5 an\n (2048,8192) | 7.282e+02 | +190.68 % | Optimise numeric multiplicatio\n (2048,16384) | 1.262e+02 | | SQL/JSON: Various improvements\n (2048,16384) | 1.289e+02 | +2.18 % | Optimise numeric multiplicatio\n (2048,16384) | 1.272e+02 | +0.83 % | Use diff's --strip-trailing-cr\n (2048,16384) | 1.253e+02 | -0.64 % | Improve the numeric width_buck\n (2048,16384) | 1.289e+02 | +2.17 % | Add missing pointer dereferenc\n (2048,16384) | 1.313e+02 | +4.10 % | Extend mul_var_short() to 5 an\n (2048,16384) | 3.616e+02 | +186.60 % | Optimise numeric multiplicatio\n (4096,4096) | 2.670e+02 | | SQL/JSON: Various improvements\n (4096,4096) | 2.695e+02 | +0.93 % | Optimise numeric multiplicatio\n (4096,4096) | 2.747e+02 | +2.87 % | Use diff's --strip-trailing-cr\n (4096,4096) | 2.695e+02 | +0.94 % | Improve the numeric width_buck\n (4096,4096) | 2.720e+02 | +1.87 % | Add missing pointer dereferenc\n (4096,4096) | 2.716e+02 | +1.73 % | Extend mul_var_short() to 5 an\n (4096,4096) | 9.636e+02 | +260.88 % | Optimise numeric multiplicatio\n (4096,8192) | 1.241e+02 | | SQL/JSON: Various improvements\n (4096,8192) | 1.253e+02 | +0.93 % | Optimise numeric multiplicatio\n (4096,8192) | 1.229e+02 | -0.99 % | Use diff's --strip-trailing-cr\n (4096,8192) | 1.264e+02 | +1.88 % | Improve the numeric width_buck\n (4096,8192) | 1.252e+02 | +0.90 % | Add missing pointer dereferenc\n (4096,8192) | 1.240e+02 | -0.10 % | Extend mul_var_short() to 5 an\n (4096,8192) | 3.785e+02 | +205.02 % | Optimise numeric multiplicatio\n (4096,16384) | 6.437e+01 | | 
SQL/JSON: Various improvements\n (4096,16384)  | 6.216e+01  | -3.43 %   | Optimise numeric multiplicatio\n (4096,16384)  | 6.221e+01  | -3.36 %   | Use diff's --strip-trailing-cr\n (4096,16384)  | 6.249e+01  | -2.91 %   | Improve the numeric width_buck\n (4096,16384)  | 6.285e+01  | -2.36 %   | Add missing pointer dereferenc\n (4096,16384)  | 6.276e+01  | -2.50 %   | Extend mul_var_short() to 5 an\n (4096,16384)  | 1.832e+02  | +184.59 % | Optimise numeric multiplicatio\n (8192,8192)   | 6.047e+01  |           | SQL/JSON: Various improvements\n (8192,8192)   | 6.052e+01  | +0.09 %   | Optimise numeric multiplicatio\n (8192,8192)   | 5.996e+01  | -0.84 %   | Use diff's --strip-trailing-cr\n (8192,8192)   | 6.059e+01  | +0.21 %   | Improve the numeric width_buck\n (8192,8192)   | 5.863e+01  | -3.03 %   | Add missing pointer dereferenc\n (8192,8192)   | 6.115e+01  | +1.13 %   | Extend mul_var_short() to 5 an\n (8192,8192)   | 1.858e+02  | +207.25 % | Optimise numeric multiplicatio\n (8192,16384)  | 3.197e+01  |           | SQL/JSON: Various improvements\n (8192,16384)  | 3.092e+01  | -3.29 %   | Optimise numeric multiplicatio\n (8192,16384)  | 3.101e+01  | -3.01 %   | Use diff's --strip-trailing-cr\n (8192,16384)  | 3.151e+01  | -1.44 %   | Improve the numeric width_buck\n (8192,16384)  | 3.055e+01  | -4.47 %   | Add missing pointer dereferenc\n (8192,16384)  | 3.095e+01  | -3.19 %   | Extend mul_var_short() to 5 an\n (8192,16384)  | 9.386e+01  | +193.53 % | Optimise numeric multiplicatio\n (16384,16384) | 1.518e+01  |           | SQL/JSON: Various improvements\n (16384,16384) | 1.497e+01  | -1.38 %   | Optimise numeric multiplicatio\n (16384,16384) | 1.476e+01  | -2.78 %   | Use diff's --strip-trailing-cr\n (16384,16384) | 1.486e+01  | -2.07 %   | Improve the numeric width_buck\n (16384,16384) | 1.500e+01  | -1.20 %   | Add missing pointer dereferenc\n (16384,16384) | 1.490e+01  | -1.84 %   | Extend mul_var_short() to 5 an\n (16384,16384) | 4.693e+01  | +209.15 % | Optimise numeric multiplicatio\n\n/Joel\n\n\n", "msg_date": "Mon, 29 Jul 2024 02:23:03 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Mon, Jul 29, 2024, at 02:23, Joel Jacobson wrote:\n> Then, I've used sched_setaffinity() from <sched.h> to ensure the \n> benchmark run on CPU core id 31.\n\nI fixed a bug in my measure function: I had forgotten to reset the affinity after each\nbenchmark, so the PostgreSQL backend continued to use the core even after\nnumeric_mul had finished.\n\nNew results with less noise below.\n\nPardon the lines exceeding 80 chars in width,\nbut I felt it was important to include the commit hash and relative delta.\n\n\n    ndigits    |    rate    |  change   |   accum   | commit  | summary\n---------------+------------+-----------+-----------+---------+----------------------------------------------------\n (1,1)         | 1.639e+07  |           |           | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1,1)         | 2.248e+07  | +37.16 %  | +37.16 %  | ca481d3 | Optimise numeric multiplication for short inputs.\n (1,1)         | 2.333e+07  | +3.77 %   | +42.32 %  | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1,1)         | 2.291e+07  | -1.81 %   | +39.75 %  | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (1,1)         | 2.276e+07  | -0.64 %   | +38.86 %  | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1,1)         | 2.256e+07  | -0.86 %   | +37.66 %  | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. 
Co\n (1,1) | 2.182e+07 | -3.32 % | +33.09 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (1,2) | 1.640e+07 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1,2) | 2.202e+07 | +34.28 % | +34.28 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (1,2) | 2.214e+07 | +0.58 % | +35.06 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1,2) | 2.173e+07 | -1.85 % | +32.55 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (1,2) | 2.260e+07 | +3.98 % | +37.83 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1,2) | 2.233e+07 | -1.19 % | +36.19 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (1,2) | 2.144e+07 | -3.97 % | +30.79 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (1,3) | 1.511e+07 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1,3) | 2.179e+07 | +44.20 % | +44.20 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (1,3) | 2.134e+07 | -2.05 % | +41.24 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1,3) | 2.198e+07 | +2.99 % | +45.47 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (1,3) | 2.190e+07 | -0.39 % | +44.91 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1,3) | 2.164e+07 | -1.16 % | +43.23 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (1,3) | 2.104e+07 | -2.79 % | +39.24 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (1,4) | 1.494e+07 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1,4) | 2.132e+07 | +42.71 % | +42.71 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (1,4) | 2.151e+07 | +0.91 % | +44.00 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1,4) | 2.190e+07 | +1.82 % | +46.62 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (1,4) | 2.172e+07 | -0.82 % | +45.41 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1,4) | 2.112e+07 | -2.75 % | +41.41 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (1,4) | 2.077e+07 | -1.67 % | +39.05 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (1,8) | 1.444e+07 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1,8) | 2.063e+07 | +42.85 % | +42.85 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (1,8) | 1.996e+07 | -3.25 % | +38.21 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1,8) | 2.039e+07 | +2.12 % | +41.14 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (1,8) | 2.020e+07 | -0.89 % | +39.87 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1,8) | 1.934e+07 | -4.28 % | +33.89 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (1,8) | 1.948e+07 | +0.73 % | +34.87 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (1,16) | 9.614e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1,16) | 1.215e+07 | +26.37 % | +26.37 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (1,16) | 1.223e+07 | +0.68 % | +27.23 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1,16) | 1.251e+07 | +2.26 % | +30.11 % | 0dcf753 | Improve the numeric width_bucket() computation. 
Fo\n (1,16) | 1.236e+07 | -1.17 % | +28.58 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1,16) | 1.293e+07 | +4.62 % | +34.53 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (1,16) | 1.240e+07 | -4.16 % | +28.94 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (1,32) | 5.675e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1,32) | 8.241e+06 | +45.22 % | +45.22 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (1,32) | 8.303e+06 | +0.74 % | +46.30 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1,32) | 8.352e+06 | +0.60 % | +47.17 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (1,32) | 8.200e+06 | -1.82 % | +44.49 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1,32) | 8.100e+06 | -1.22 % | +42.73 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (1,32) | 8.313e+06 | +2.62 % | +46.47 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (1,64) | 3.479e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1,64) | 4.763e+06 | +36.91 % | +36.91 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (1,64) | 4.677e+06 | -1.79 % | +34.46 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1,64) | 4.655e+06 | -0.48 % | +33.82 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (1,64) | 4.716e+06 | +1.31 % | +35.56 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1,64) | 4.766e+06 | +1.06 % | +37.00 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (1,64) | 4.795e+06 | +0.61 % | +37.84 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (1,128) | 1.879e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1,128) | 2.458e+06 | +30.81 % | +30.81 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (1,128) | 2.479e+06 | +0.88 % | +31.97 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1,128) | 2.483e+06 | +0.16 % | +32.18 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (1,128) | 2.555e+06 | +2.90 % | +36.01 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1,128) | 2.461e+06 | -3.70 % | +30.97 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (1,128) | 2.568e+06 | +4.35 % | +36.67 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (1,256) | 9.547e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1,256) | 1.310e+06 | +37.20 % | +37.20 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (1,256) | 1.302e+06 | -0.59 % | +36.39 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1,256) | 1.351e+06 | +3.72 % | +41.47 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (1,256) | 1.325e+06 | -1.88 % | +38.81 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1,256) | 1.338e+06 | +0.95 % | +40.13 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. 
Co\n (1,256) | 1.370e+06 | +2.44 % | +43.55 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (1,512) | 4.999e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1,512) | 6.564e+05 | +31.31 % | +31.31 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (1,512) | 6.640e+05 | +1.16 % | +32.83 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1,512) | 6.573e+05 | -1.01 % | +31.49 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (1,512) | 6.759e+05 | +2.83 % | +35.22 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1,512) | 6.578e+05 | -2.68 % | +31.59 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (1,512) | 6.615e+05 | +0.57 % | +32.34 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (1,1024) | 2.567e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1,1024) | 3.342e+05 | +30.17 % | +30.17 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (1,1024) | 3.343e+05 | +0.04 % | +30.23 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1,1024) | 3.435e+05 | +2.76 % | +33.82 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (1,1024) | 3.408e+05 | -0.81 % | +32.73 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1,1024) | 3.441e+05 | +0.98 % | +34.03 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (1,1024) | 3.340e+05 | -2.95 % | +30.08 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (1,2048) | 1.256e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1,2048) | 1.648e+05 | +31.19 % | +31.19 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (1,2048) | 1.624e+05 | -1.46 % | +29.27 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1,2048) | 1.648e+05 | +1.46 % | +31.16 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (1,2048) | 1.697e+05 | +2.98 % | +35.06 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1,2048) | 1.634e+05 | -3.67 % | +30.10 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (1,2048) | 1.649e+05 | +0.89 % | +31.27 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (1,4096) | 6.430e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1,4096) | 8.903e+04 | +38.46 % | +38.46 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (1,4096) | 8.379e+04 | -5.88 % | +30.32 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1,4096) | 8.536e+04 | +1.87 % | +32.76 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (1,4096) | 8.609e+04 | +0.85 % | +33.88 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1,4096) | 8.540e+04 | -0.80 % | +32.81 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (1,4096) | 8.616e+04 | +0.89 % | +34.00 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (1,8192) | 3.122e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1,8192) | 4.227e+04 | +35.41 % | +35.41 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (1,8192) | 4.149e+04 | -1.85 % | +32.90 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1,8192) | 4.221e+04 | +1.73 % | +35.21 % | 0dcf753 | Improve the numeric width_bucket() computation. 
Fo\n (1,8192) | 4.262e+04 | +0.97 % | +36.51 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1,8192) | 4.188e+04 | -1.74 % | +34.14 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (1,8192) | 4.147e+04 | -0.96 % | +32.85 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (1,16384) | 1.557e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1,16384) | 2.122e+04 | +36.29 % | +36.29 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (1,16384) | 2.104e+04 | -0.84 % | +35.14 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1,16384) | 2.081e+04 | -1.06 % | +33.70 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (1,16384) | 2.065e+04 | -0.80 % | +32.63 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1,16384) | 2.120e+04 | +2.68 % | +36.18 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (1,16384) | 2.099e+04 | -1.01 % | +34.80 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (2,2) | 1.450e+07 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (2,2) | 2.147e+07 | +48.08 % | +48.08 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (2,2) | 2.289e+07 | +6.63 % | +57.90 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (2,2) | 2.296e+07 | +0.29 % | +58.36 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (2,2) | 2.175e+07 | -5.28 % | +50.00 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (2,2) | 2.188e+07 | +0.63 % | +50.94 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (2,2) | 2.138e+07 | -2.33 % | +47.43 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (2,3) | 1.312e+07 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (2,3) | 2.127e+07 | +62.10 % | +62.10 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (2,3) | 2.068e+07 | -2.80 % | +57.57 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (2,3) | 2.135e+07 | +3.26 % | +62.71 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (2,3) | 2.207e+07 | +3.38 % | +68.21 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (2,3) | 2.106e+07 | -4.59 % | +60.49 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (2,3) | 2.143e+07 | +1.74 % | +63.28 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (2,4) | 1.387e+07 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (2,4) | 2.020e+07 | +45.66 % | +45.66 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (2,4) | 2.000e+07 | -0.96 % | +44.26 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (2,4) | 2.062e+07 | +3.08 % | +48.70 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (2,4) | 1.954e+07 | -5.21 % | +40.95 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (2,4) | 2.057e+07 | +5.25 % | +48.35 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. 
Co\n (2,4) | 1.974e+07 | -4.03 % | +42.37 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (2,8) | 1.313e+07 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (2,8) | 1.774e+07 | +35.19 % | +35.19 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (2,8) | 1.841e+07 | +3.76 % | +40.28 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (2,8) | 1.854e+07 | +0.67 % | +41.22 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (2,8) | 1.854e+07 | +0.03 % | +41.26 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (2,8) | 1.805e+07 | -2.63 % | +37.54 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (2,8) | 1.792e+07 | -0.76 % | +36.50 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (2,16) | 9.013e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (2,16) | 1.207e+07 | +33.91 % | +33.91 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (2,16) | 1.174e+07 | -2.77 % | +30.20 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (2,16) | 1.158e+07 | -1.32 % | +28.49 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (2,16) | 1.193e+07 | +3.04 % | +32.39 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (2,16) | 1.226e+07 | +2.75 % | +36.03 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (2,16) | 1.180e+07 | -3.78 % | +30.89 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (2,32) | 5.716e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (2,32) | 7.794e+06 | +36.35 % | +36.35 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (2,32) | 7.784e+06 | -0.12 % | +36.19 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (2,32) | 7.852e+06 | +0.87 % | +37.37 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (2,32) | 7.635e+06 | -2.76 % | +33.57 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (2,32) | 7.882e+06 | +3.24 % | +37.90 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (2,32) | 8.050e+06 | +2.13 % | +40.84 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (2,64) | 3.419e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (2,64) | 4.455e+06 | +30.30 % | +30.30 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (2,64) | 4.486e+06 | +0.70 % | +31.21 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (2,64) | 4.498e+06 | +0.27 % | +31.56 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (2,64) | 4.447e+06 | -1.14 % | +30.06 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (2,64) | 4.775e+06 | +7.37 % | +39.65 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (2,64) | 4.596e+06 | -3.75 % | +34.42 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (2,128) | 1.738e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (2,128) | 2.363e+06 | +35.95 % | +35.95 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (2,128) | 2.367e+06 | +0.16 % | +36.17 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (2,128) | 2.339e+06 | -1.16 % | +34.59 % | 0dcf753 | Improve the numeric width_bucket() computation. 
Fo\n (2,128) | 2.340e+06 | +0.05 % | +34.65 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (2,128) | 2.386e+06 | +1.98 % | +37.31 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (2,128) | 2.353e+06 | -1.41 % | +35.37 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (2,256) | 9.229e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (2,256) | 1.238e+06 | +34.15 % | +34.15 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (2,256) | 1.274e+06 | +2.92 % | +38.07 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (2,256) | 1.260e+06 | -1.12 % | +36.52 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (2,256) | 1.259e+06 | -0.04 % | +36.46 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (2,256) | 1.247e+06 | -0.98 % | +35.13 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (2,256) | 1.304e+06 | +4.54 % | +41.26 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (2,512) | 4.746e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (2,512) | 6.212e+05 | +30.87 % | +30.87 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (2,512) | 6.380e+05 | +2.71 % | +34.42 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (2,512) | 6.546e+05 | +2.59 % | +37.91 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (2,512) | 6.306e+05 | -3.65 % | +32.87 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (2,512) | 6.612e+05 | +4.85 % | +39.31 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (2,512) | 6.464e+05 | -2.25 % | +36.19 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (2,1024) | 2.446e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (2,1024) | 3.160e+05 | +29.22 % | +29.22 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (2,1024) | 3.278e+05 | +3.72 % | +34.03 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (2,1024) | 3.185e+05 | -2.85 % | +30.21 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (2,1024) | 3.190e+05 | +0.17 % | +30.44 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (2,1024) | 3.348e+05 | +4.94 % | +36.88 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (2,1024) | 3.260e+05 | -2.62 % | +33.29 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (2,2048) | 1.226e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (2,2048) | 1.551e+05 | +26.55 % | +26.55 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (2,2048) | 1.608e+05 | +3.66 % | +31.18 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (2,2048) | 1.576e+05 | -1.97 % | +28.60 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (2,2048) | 1.552e+05 | -1.50 % | +26.66 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (2,2048) | 1.577e+05 | +1.59 % | +28.67 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. 
Co\n (2,2048) | 1.630e+05 | +3.35 % | +32.99 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (2,4096) | 6.170e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (2,4096) | 8.192e+04 | +32.77 % | +32.77 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (2,4096) | 8.433e+04 | +2.94 % | +36.68 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (2,4096) | 8.166e+04 | -3.17 % | +32.34 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (2,4096) | 8.083e+04 | -1.01 % | +31.00 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (2,4096) | 8.296e+04 | +2.64 % | +34.46 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (2,4096) | 8.333e+04 | +0.44 % | +35.05 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (2,8192) | 3.015e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (2,8192) | 4.013e+04 | +33.09 % | +33.09 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (2,8192) | 4.006e+04 | -0.16 % | +32.88 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (2,8192) | 4.087e+04 | +2.01 % | +35.54 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (2,8192) | 4.010e+04 | -1.87 % | +33.01 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (2,8192) | 4.027e+04 | +0.42 % | +33.56 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (2,8192) | 4.090e+04 | +1.57 % | +35.66 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (2,16384) | 1.533e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (2,16384) | 2.053e+04 | +33.89 % | +33.89 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (2,16384) | 2.011e+04 | -2.04 % | +31.16 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (2,16384) | 2.031e+04 | +1.00 % | +32.48 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (2,16384) | 2.012e+04 | -0.96 % | +31.20 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (2,16384) | 2.008e+04 | -0.20 % | +30.94 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (2,16384) | 2.053e+04 | +2.26 % | +33.90 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (3,3) | 1.233e+07 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (3,3) | 2.077e+07 | +68.44 % | +68.44 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (3,3) | 2.123e+07 | +2.23 % | +72.19 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (3,3) | 2.061e+07 | -2.90 % | +67.20 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (3,3) | 2.073e+07 | +0.56 % | +68.14 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (3,3) | 2.040e+07 | -1.57 % | +65.49 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (3,3) | 1.912e+07 | -6.30 % | +55.06 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (3,4) | 1.261e+07 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (3,4) | 1.918e+07 | +52.08 % | +52.08 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (3,4) | 1.984e+07 | +3.46 % | +57.34 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (3,4) | 2.022e+07 | +1.91 % | +60.35 % | 0dcf753 | Improve the numeric width_bucket() computation. 
Fo\n (3,4) | 1.932e+07 | -4.48 % | +53.16 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (3,4) | 1.889e+07 | -2.21 % | +49.78 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (3,4) | 1.936e+07 | +2.47 % | +53.47 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (3,8) | 1.243e+07 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (3,8) | 1.813e+07 | +45.88 % | +45.88 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (3,8) | 1.755e+07 | -3.20 % | +41.22 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (3,8) | 1.798e+07 | +2.41 % | +44.62 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (3,8) | 1.737e+07 | -3.39 % | +39.73 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (3,8) | 1.716e+07 | -1.20 % | +38.05 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (3,8) | 1.755e+07 | +2.27 % | +41.19 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (3,16) | 7.347e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (3,16) | 1.105e+07 | +50.46 % | +50.46 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (3,16) | 1.128e+07 | +2.03 % | +53.52 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (3,16) | 1.101e+07 | -2.36 % | +49.90 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (3,16) | 1.106e+07 | +0.40 % | +50.50 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (3,16) | 1.098e+07 | -0.73 % | +49.41 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (3,16) | 1.157e+07 | +5.41 % | +57.50 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (3,32) | 5.398e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (3,32) | 7.399e+06 | +37.08 % | +37.08 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (3,32) | 7.170e+06 | -3.09 % | +32.85 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (3,32) | 7.263e+06 | +1.29 % | +34.56 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (3,32) | 7.283e+06 | +0.27 % | +34.93 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (3,32) | 7.515e+06 | +3.18 % | +39.22 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (3,32) | 7.556e+06 | +0.55 % | +39.99 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (3,64) | 3.279e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (3,64) | 4.306e+06 | +31.30 % | +31.30 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (3,64) | 4.180e+06 | -2.94 % | +27.45 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (3,64) | 4.352e+06 | +4.13 % | +32.72 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (3,64) | 4.228e+06 | -2.86 % | +28.92 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (3,64) | 4.320e+06 | +2.18 % | +31.73 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. 
Co\n (3,64) | 4.316e+06 | -0.10 % | +31.60 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (3,128) | 1.691e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (3,128) | 2.244e+06 | +32.71 % | +32.71 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (3,128) | 2.246e+06 | +0.09 % | +32.83 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (3,128) | 2.239e+06 | -0.29 % | +32.44 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (3,128) | 2.264e+06 | +1.09 % | +33.89 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (3,128) | 2.367e+06 | +4.54 % | +39.97 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (3,128) | 2.359e+06 | -0.32 % | +39.53 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (3,256) | 8.856e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (3,256) | 1.205e+06 | +36.04 % | +36.04 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (3,256) | 1.224e+06 | +1.57 % | +38.17 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (3,256) | 1.223e+06 | -0.07 % | +38.06 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (3,256) | 1.191e+06 | -2.60 % | +34.48 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (3,256) | 1.270e+06 | +6.61 % | +43.37 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (3,256) | 1.228e+06 | -3.26 % | +38.69 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (3,512) | 4.637e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (3,512) | 6.174e+05 | +33.14 % | +33.14 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (3,512) | 6.080e+05 | -1.53 % | +31.10 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (3,512) | 6.229e+05 | +2.45 % | +34.31 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (3,512) | 6.214e+05 | -0.24 % | +33.99 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (3,512) | 6.296e+05 | +1.33 % | +35.77 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (3,512) | 6.415e+05 | +1.89 % | +38.33 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (3,1024) | 2.389e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (3,1024) | 3.115e+05 | +30.41 % | +30.41 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (3,1024) | 3.144e+05 | +0.94 % | +31.64 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (3,1024) | 3.158e+05 | +0.44 % | +32.22 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (3,1024) | 3.241e+05 | +2.61 % | +35.67 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (3,1024) | 3.144e+05 | -2.98 % | +31.62 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (3,1024) | 3.162e+05 | +0.58 % | +32.39 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (3,2048) | 1.147e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (3,2048) | 1.549e+05 | +35.02 % | +35.02 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (3,2048) | 1.568e+05 | +1.25 % | +36.71 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (3,2048) | 1.519e+05 | -3.13 % | +32.42 % | 0dcf753 | Improve the numeric width_bucket() computation. 
Fo\n (3,2048) | 1.526e+05 | +0.44 % | +33.01 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (3,2048) | 1.567e+05 | +2.72 % | +36.62 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (3,2048) | 1.563e+05 | -0.28 % | +36.24 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (3,4096) | 5.982e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (3,4096) | 7.973e+04 | +33.29 % | +33.29 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (3,4096) | 8.063e+04 | +1.13 % | +34.80 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (3,4096) | 8.022e+04 | -0.51 % | +34.11 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (3,4096) | 8.249e+04 | +2.83 % | +37.90 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (3,4096) | 8.023e+04 | -2.74 % | +34.12 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (3,4096) | 8.141e+04 | +1.47 % | +36.09 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (3,8192) | 2.903e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (3,8192) | 3.987e+04 | +37.33 % | +37.33 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (3,8192) | 4.028e+04 | +1.05 % | +38.76 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (3,8192) | 4.098e+04 | +1.72 % | +41.16 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (3,8192) | 3.920e+04 | -4.34 % | +35.03 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (3,8192) | 3.915e+04 | -0.11 % | +34.88 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (3,8192) | 3.894e+04 | -0.54 % | +34.15 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (3,16384) | 1.448e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (3,16384) | 1.950e+04 | +34.71 % | +34.71 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (3,16384) | 1.967e+04 | +0.86 % | +35.87 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (3,16384) | 1.949e+04 | -0.95 % | +34.59 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (3,16384) | 1.950e+04 | +0.09 % | +34.71 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (3,16384) | 1.982e+04 | +1.63 % | +36.90 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (3,16384) | 1.973e+04 | -0.46 % | +36.28 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (4,4) | 1.172e+07 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (4,4) | 1.941e+07 | +65.61 % | +65.61 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (4,4) | 2.019e+07 | +4.02 % | +72.27 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (4,4) | 1.943e+07 | -3.74 % | +65.83 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (4,4) | 1.863e+07 | -4.15 % | +58.95 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (4,4) | 1.857e+07 | -0.31 % | +58.46 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. 
Co\n (4,4) | 1.899e+07 | +2.23 % | +61.99 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (4,8) | 1.213e+07 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (4,8) | 1.721e+07 | +41.92 % | +41.92 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (4,8) | 1.709e+07 | -0.67 % | +40.97 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (4,8) | 1.738e+07 | +1.69 % | +43.35 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (4,8) | 1.675e+07 | -3.62 % | +38.15 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (4,8) | 1.659e+07 | -0.97 % | +36.81 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (4,8) | 1.672e+07 | +0.77 % | +37.87 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (4,16) | 7.979e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (4,16) | 1.091e+07 | +36.69 % | +36.69 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (4,16) | 1.095e+07 | +0.39 % | +37.23 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (4,16) | 1.089e+07 | -0.54 % | +36.49 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (4,16) | 1.092e+07 | +0.25 % | +36.83 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (4,16) | 1.083e+07 | -0.83 % | +35.70 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (4,16) | 1.061e+07 | -2.00 % | +32.99 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (4,32) | 5.234e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (4,32) | 6.820e+06 | +30.30 % | +30.30 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (4,32) | 6.995e+06 | +2.57 % | +33.65 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (4,32) | 7.239e+06 | +3.49 % | +38.31 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (4,32) | 6.980e+06 | -3.57 % | +33.36 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (4,32) | 7.181e+06 | +2.88 % | +37.20 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (4,32) | 6.865e+06 | -4.40 % | +31.16 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (4,64) | 3.222e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (4,64) | 3.963e+06 | +22.99 % | +22.99 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (4,64) | 4.018e+06 | +1.39 % | +24.71 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (4,64) | 3.956e+06 | -1.54 % | +22.78 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (4,64) | 3.949e+06 | -0.18 % | +22.56 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (4,64) | 4.069e+06 | +3.05 % | +26.29 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (4,64) | 3.855e+06 | -5.26 % | +19.65 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (4,128) | 1.687e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (4,128) | 2.081e+06 | +23.34 % | +23.34 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (4,128) | 2.090e+06 | +0.43 % | +23.87 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (4,128) | 2.132e+06 | +1.99 % | +26.34 % | 0dcf753 | Improve the numeric width_bucket() computation. 
Fo\n (4,128) | 2.129e+06 | -0.11 % | +26.21 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (4,128) | 2.082e+06 | -2.23 % | +23.39 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (4,128) | 2.098e+06 | +0.77 % | +24.35 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (4,256) | 8.638e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (4,256) | 1.094e+06 | +26.67 % | +26.67 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (4,256) | 1.098e+06 | +0.35 % | +27.11 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (4,256) | 1.118e+06 | +1.82 % | +29.42 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (4,256) | 1.107e+06 | -0.94 % | +28.20 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (4,256) | 1.137e+06 | +2.70 % | +31.66 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (4,256) | 1.095e+06 | -3.72 % | +26.76 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (4,512) | 4.400e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (4,512) | 5.711e+05 | +29.78 % | +29.78 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (4,512) | 5.725e+05 | +0.25 % | +30.10 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (4,512) | 5.726e+05 | +0.01 % | +30.12 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (4,512) | 5.733e+05 | +0.13 % | +30.29 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (4,512) | 5.655e+05 | -1.36 % | +28.52 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (4,512) | 5.621e+05 | -0.60 % | +27.74 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (4,1024) | 2.275e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (4,1024) | 2.886e+05 | +26.83 % | +26.83 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (4,1024) | 2.895e+05 | +0.32 % | +27.23 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (4,1024) | 2.909e+05 | +0.50 % | +27.87 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (4,1024) | 2.892e+05 | -0.62 % | +27.08 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (4,1024) | 2.889e+05 | -0.08 % | +26.97 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (4,1024) | 2.851e+05 | -1.31 % | +25.31 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (4,2048) | 1.152e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (4,2048) | 1.431e+05 | +24.25 % | +24.25 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (4,2048) | 1.395e+05 | -2.54 % | +21.09 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (4,2048) | 1.421e+05 | +1.93 % | +23.42 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (4,2048) | 1.448e+05 | +1.88 % | +25.75 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (4,2048) | 1.426e+05 | -1.56 % | +23.78 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. 
Co\n (4,2048) | 1.405e+05 | -1.42 % | +22.02 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (4,4096) | 5.760e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (4,4096) | 7.459e+04 | +29.51 % | +29.51 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (4,4096) | 7.448e+04 | -0.16 % | +29.30 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (4,4096) | 7.590e+04 | +1.91 % | +31.77 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (4,4096) | 7.505e+04 | -1.12 % | +30.30 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (4,4096) | 7.665e+04 | +2.14 % | +33.08 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (4,4096) | 7.050e+04 | -8.02 % | +22.40 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (4,8192) | 2.765e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (4,8192) | 3.634e+04 | +31.44 % | +31.44 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (4,8192) | 3.666e+04 | +0.87 % | +32.59 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (4,8192) | 3.593e+04 | -2.00 % | +29.94 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (4,8192) | 3.572e+04 | -0.57 % | +29.20 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (4,8192) | 3.526e+04 | -1.30 % | +27.51 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (4,8192) | 3.502e+04 | -0.67 % | +26.65 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (4,16384) | 1.405e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (4,16384) | 1.859e+04 | +32.35 % | +32.35 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (4,16384) | 1.806e+04 | -2.85 % | +28.57 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (4,16384) | 1.807e+04 | +0.05 % | +28.63 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (4,16384) | 1.792e+04 | -0.83 % | +27.57 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (4,16384) | 1.841e+04 | +2.74 % | +31.07 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (4,16384) | 1.742e+04 | -5.39 % | +24.01 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (5,5) | 1.043e+07 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (5,5) | 1.035e+07 | -0.82 % | -0.82 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (5,5) | 1.051e+07 | +1.60 % | +0.77 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (5,5) | 1.034e+07 | -1.60 % | -0.84 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (5,5) | 1.017e+07 | -1.64 % | -2.46 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (5,5) | 1.795e+07 | +76.45 % | +72.10 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (5,5) | 1.843e+07 | +2.67 % | +76.69 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (6,6) | 9.775e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (6,6) | 9.497e+06 | -2.84 % | -2.84 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (6,6) | 9.515e+06 | +0.18 % | -2.66 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (6,6) | 9.484e+06 | -0.32 % | -2.97 % | 0dcf753 | Improve the numeric width_bucket() computation. 
Fo\n (6,6) | 9.739e+06 | +2.68 % | -0.37 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (6,6) | 1.661e+07 | +70.60 % | +69.98 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (6,6) | 1.661e+07 | -0.01 % | +69.95 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (7,7) | 7.308e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (7,7) | 7.449e+06 | +1.93 % | +1.93 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (7,7) | 7.465e+06 | +0.21 % | +2.14 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (7,7) | 7.482e+06 | +0.23 % | +2.38 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (7,7) | 7.295e+06 | -2.49 % | -0.18 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (7,7) | 7.395e+06 | +1.36 % | +1.18 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (7,7) | 1.017e+07 | +37.49 % | +39.12 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (8,8) | 7.916e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (8,8) | 8.206e+06 | +3.67 % | +3.67 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (8,8) | 8.135e+06 | -0.87 % | +2.77 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (8,8) | 7.981e+06 | -1.90 % | +0.82 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (8,8) | 8.065e+06 | +1.06 % | +1.88 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (8,8) | 8.048e+06 | -0.21 % | +1.68 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (8,8) | 9.559e+06 | +18.77 % | +20.76 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (8,16) | 6.325e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (8,16) | 6.449e+06 | +1.95 % | +1.95 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (8,16) | 6.367e+06 | -1.27 % | +0.66 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (8,16) | 6.396e+06 | +0.46 % | +1.12 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (8,16) | 6.409e+06 | +0.19 % | +1.32 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (8,16) | 6.500e+06 | +1.42 % | +2.76 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (8,16) | 8.506e+06 | +30.86 % | +34.47 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (8,32) | 4.313e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (8,32) | 4.489e+06 | +4.09 % | +4.09 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (8,32) | 4.369e+06 | -2.68 % | +1.30 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (8,32) | 4.350e+06 | -0.42 % | +0.87 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (8,32) | 4.246e+06 | -2.40 % | -1.55 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (8,32) | 4.323e+06 | +1.81 % | +0.23 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (8,32) | 6.039e+06 | +39.70 % | +40.02 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (8,64) | 2.722e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (8,64) | 2.701e+06 | -0.77 % | -0.77 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (8,64) | 2.696e+06 | -0.21 % | -0.97 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (8,64) | 2.624e+06 | -2.67 % | -3.61 % | 0dcf753 | Improve the numeric width_bucket() computation. 
Fo\n (8,64) | 2.648e+06 | +0.93 % | -2.72 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (8,64) | 2.661e+06 | +0.50 % | -2.23 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (8,64) | 3.850e+06 | +44.64 % | +41.42 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (8,128) | 1.408e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (8,128) | 1.395e+06 | -0.97 % | -0.97 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (8,128) | 1.459e+06 | +4.61 % | +3.59 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (8,128) | 1.494e+06 | +2.42 % | +6.10 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (8,128) | 1.423e+06 | -4.76 % | +1.05 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (8,128) | 1.381e+06 | -2.97 % | -1.95 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (8,128) | 2.222e+06 | +60.92 % | +57.78 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (8,256) | 7.400e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (8,256) | 7.553e+05 | +2.06 % | +2.06 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (8,256) | 7.425e+05 | -1.69 % | +0.34 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (8,256) | 7.503e+05 | +1.05 % | +1.39 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (8,256) | 7.493e+05 | -0.13 % | +1.26 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (8,256) | 7.172e+05 | -4.29 % | -3.08 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (8,256) | 1.145e+06 | +59.66 % | +54.74 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (8,512) | 3.836e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (8,512) | 3.803e+05 | -0.87 % | -0.87 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (8,512) | 3.805e+05 | +0.04 % | -0.83 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (8,512) | 3.765e+05 | -1.03 % | -1.85 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (8,512) | 3.936e+05 | +4.53 % | +2.59 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (8,512) | 3.657e+05 | -7.09 % | -4.69 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (8,512) | 6.337e+05 | +73.30 % | +65.18 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (8,1024) | 2.028e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (8,1024) | 2.089e+05 | +3.06 % | +3.06 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (8,1024) | 2.070e+05 | -0.95 % | +2.08 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (8,1024) | 2.010e+05 | -2.90 % | -0.88 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (8,1024) | 2.011e+05 | +0.09 % | -0.79 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (8,1024) | 2.087e+05 | +3.77 % | +2.95 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. 
Co\n (8,1024) | 3.206e+05 | +53.60 % | +58.13 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (8,2048) | 9.974e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (8,2048) | 9.833e+04 | -1.40 % | -1.40 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (8,2048) | 1.000e+05 | +1.72 % | +0.29 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (8,2048) | 1.006e+05 | +0.57 % | +0.87 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (8,2048) | 9.783e+04 | -2.76 % | -1.91 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (8,2048) | 1.022e+05 | +4.43 % | +2.43 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (8,2048) | 1.575e+05 | +54.22 % | +57.97 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (8,4096) | 5.160e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (8,4096) | 5.257e+04 | +1.89 % | +1.89 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (8,4096) | 5.111e+04 | -2.78 % | -0.94 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (8,4096) | 5.306e+04 | +3.82 % | +2.85 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (8,4096) | 5.112e+04 | -3.67 % | -0.93 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (8,4096) | 5.116e+04 | +0.08 % | -0.85 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (8,4096) | 8.478e+04 | +65.72 % | +64.32 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (8,8192) | 2.424e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (8,8192) | 2.380e+04 | -1.80 % | -1.80 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (8,8192) | 2.470e+04 | +3.75 % | +1.88 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (8,8192) | 2.407e+04 | -2.55 % | -0.72 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (8,8192) | 2.426e+04 | +0.80 % | +0.08 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (8,8192) | 2.402e+04 | -0.97 % | -0.89 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (8,8192) | 3.904e+04 | +62.50 % | +61.06 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (8,16384) | 1.232e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (8,16384) | 1.209e+04 | -1.81 % | -1.81 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (8,16384) | 1.207e+04 | -0.20 % | -2.01 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (8,16384) | 1.188e+04 | -1.60 % | -3.58 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (8,16384) | 1.210e+04 | +1.89 % | -1.76 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (8,16384) | 1.219e+04 | +0.78 % | -0.99 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (8,16384) | 1.986e+04 | +62.86 % | +61.24 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (16,16) | 4.209e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (16,16) | 4.381e+06 | +4.08 % | +4.08 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (16,16) | 4.240e+06 | -3.20 % | +0.75 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (16,16) | 4.261e+06 | +0.50 % | +1.25 % | 0dcf753 | Improve the numeric width_bucket() computation. 
Fo\n (16,16) | 4.344e+06 | +1.94 % | +3.22 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (16,16) | 4.390e+06 | +1.06 % | +4.32 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (16,16) | 6.024e+06 | +37.21 % | +43.13 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (16,32) | 3.234e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (16,32) | 3.386e+06 | +4.68 % | +4.68 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (16,32) | 3.328e+06 | -1.72 % | +2.89 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (16,32) | 3.351e+06 | +0.70 % | +3.61 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (16,32) | 3.288e+06 | -1.89 % | +1.65 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (16,32) | 3.239e+06 | -1.49 % | +0.14 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (16,32) | 4.868e+06 | +50.31 % | +50.51 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (16,64) | 2.044e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (16,64) | 2.044e+06 | 0.00 % | 0.00 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (16,64) | 2.044e+06 | +0.01 % | 0.00 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (16,64) | 2.009e+06 | -1.69 % | -1.68 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (16,64) | 2.085e+06 | +3.75 % | +2.00 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (16,64) | 1.808e+06 | -13.27 % | -11.53 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (16,64) | 3.306e+06 | +82.88 % | +61.79 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (16,128) | 1.130e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (16,128) | 1.133e+06 | +0.22 % | +0.22 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (16,128) | 1.140e+06 | +0.61 % | +0.83 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (16,128) | 1.144e+06 | +0.37 % | +1.21 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (16,128) | 1.153e+06 | +0.80 % | +2.02 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (16,128) | 1.019e+06 | -11.58 % | -9.79 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (16,128) | 1.905e+06 | +86.82 % | +68.53 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (16,256) | 5.782e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (16,256) | 5.903e+05 | +2.10 % | +2.10 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (16,256) | 6.019e+05 | +1.96 % | +4.10 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (16,256) | 5.733e+05 | -4.74 % | -0.84 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (16,256) | 6.001e+05 | +4.67 % | +3.79 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (16,256) | 5.447e+05 | -9.22 % | -5.78 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. 
Co\n (16,256) | 9.676e+05 | +77.62 % | +67.35 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (16,512) | 3.038e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (16,512) | 3.031e+05 | -0.22 % | -0.22 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (16,512) | 3.123e+05 | +3.01 % | +2.78 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (16,512) | 3.032e+05 | -2.91 % | -0.21 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (16,512) | 2.998e+05 | -1.13 % | -1.34 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (16,512) | 2.933e+05 | -2.16 % | -3.46 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (16,512) | 5.296e+05 | +80.58 % | +74.33 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (16,1024) | 1.662e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (16,1024) | 1.632e+05 | -1.83 % | -1.83 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (16,1024) | 1.665e+05 | +2.01 % | +0.14 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (16,1024) | 1.696e+05 | +1.90 % | +2.04 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (16,1024) | 1.650e+05 | -2.73 % | -0.74 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (16,1024) | 1.660e+05 | +0.62 % | -0.13 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (16,1024) | 2.755e+05 | +65.92 % | +65.71 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (16,2048) | 8.053e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (16,2048) | 8.282e+04 | +2.84 % | +2.84 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (16,2048) | 8.382e+04 | +1.21 % | +4.08 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (16,2048) | 8.044e+04 | -4.03 % | -0.12 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (16,2048) | 8.025e+04 | -0.24 % | -0.36 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (16,2048) | 8.147e+04 | +1.53 % | +1.17 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (16,2048) | 1.357e+05 | +66.59 % | +68.54 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (16,4096) | 4.231e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (16,4096) | 4.152e+04 | -1.87 % | -1.87 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (16,4096) | 4.190e+04 | +0.94 % | -0.95 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (16,4096) | 4.115e+04 | -1.80 % | -2.74 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (16,4096) | 4.117e+04 | +0.05 % | -2.69 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (16,4096) | 4.268e+04 | +3.67 % | +0.88 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (16,4096) | 7.145e+04 | +67.41 % | +68.88 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (16,8192) | 1.917e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (16,8192) | 1.923e+04 | +0.33 % | +0.33 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (16,8192) | 1.923e+04 | -0.01 % | +0.32 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (16,8192) | 1.905e+04 | -0.95 % | -0.63 % | 0dcf753 | Improve the numeric width_bucket() computation. 
Fo\n (16,8192) | 1.942e+04 | +1.95 % | +1.30 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (16,8192) | 1.976e+04 | +1.76 % | +3.09 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (16,8192) | 3.238e+04 | +63.88 % | +68.95 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (16,16384) | 9.644e+03 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (16,16384) | 9.647e+03 | +0.02 % | +0.02 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (16,16384) | 9.473e+03 | -1.80 % | -1.78 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (16,16384) | 1.002e+04 | +5.73 % | +3.85 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (16,16384) | 9.389e+03 | -6.26 % | -2.65 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (16,16384) | 9.645e+03 | +2.73 % | +0.01 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (16,16384) | 1.622e+04 | +68.14 % | +68.15 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (32,32) | 2.013e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (32,32) | 2.046e+06 | +1.65 % | +1.65 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (32,32) | 2.026e+06 | -0.96 % | +0.67 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (32,32) | 2.051e+06 | +1.19 % | +1.87 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (32,32) | 2.060e+06 | +0.44 % | +2.32 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (32,32) | 1.786e+06 | -13.27 % | -11.26 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (32,32) | 3.408e+06 | +90.80 % | +69.32 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (32,64) | 1.406e+06 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (32,64) | 1.354e+06 | -3.69 % | -3.69 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (32,64) | 1.395e+06 | +2.99 % | -0.81 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (32,64) | 1.370e+06 | -1.77 % | -2.56 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (32,64) | 1.343e+06 | -1.97 % | -4.48 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (32,64) | 1.119e+06 | -16.72 % | -20.45 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (32,64) | 2.356e+06 | +110.63 % | +67.56 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (32,128) | 7.979e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (32,128) | 8.295e+05 | +3.96 % | +3.96 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (32,128) | 8.132e+05 | -1.96 % | +1.92 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (32,128) | 8.153e+05 | +0.25 % | +2.18 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (32,128) | 8.377e+05 | +2.75 % | +4.98 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (32,128) | 7.242e+05 | -13.55 % | -9.24 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. 
Co\n (32,128) | 1.393e+06 | +92.39 % | +74.61 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (32,256) | 4.770e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (32,256) | 4.680e+05 | -1.89 % | -1.89 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (32,256) | 4.595e+05 | -1.82 % | -3.67 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (32,256) | 4.645e+05 | +1.09 % | -2.63 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (32,256) | 4.557e+05 | -1.88 % | -4.46 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (32,256) | 4.161e+05 | -8.69 % | -12.76 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (32,256) | 7.811e+05 | +87.71 % | +63.75 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (32,512) | 2.304e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (32,512) | 2.260e+05 | -1.94 % | -1.94 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (32,512) | 2.321e+05 | +2.73 % | +0.73 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (32,512) | 2.262e+05 | -2.55 % | -1.84 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (32,512) | 2.202e+05 | -2.64 % | -4.43 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (32,512) | 2.125e+05 | -3.50 % | -7.77 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (32,512) | 4.050e+05 | +90.56 % | +75.75 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (32,1024) | 1.178e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (32,1024) | 1.221e+05 | +3.65 % | +3.65 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (32,1024) | 1.167e+05 | -4.40 % | -0.92 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (32,1024) | 1.211e+05 | +3.74 % | +2.79 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (32,1024) | 1.196e+05 | -1.20 % | +1.56 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (32,1024) | 1.188e+05 | -0.68 % | +0.87 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (32,1024) | 2.097e+05 | +76.48 % | +78.02 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (32,2048) | 6.023e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (32,2048) | 5.920e+04 | -1.72 % | -1.72 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (32,2048) | 5.869e+04 | -0.85 % | -2.56 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (32,2048) | 5.969e+04 | +1.69 % | -0.91 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (32,2048) | 5.970e+04 | +0.02 % | -0.89 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (32,2048) | 5.813e+04 | -2.63 % | -3.49 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (32,2048) | 1.057e+05 | +81.75 % | +75.41 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (32,4096) | 3.015e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (32,4096) | 3.042e+04 | +0.92 % | +0.92 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (32,4096) | 3.015e+04 | -0.91 % | 0.00 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (32,4096) | 3.042e+04 | +0.91 % | +0.91 % | 0dcf753 | Improve the numeric width_bucket() computation. 
Fo\n (32,4096) | 3.101e+04 | +1.93 % | +2.85 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (32,4096) | 3.015e+04 | -2.77 % | 0.00 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (32,4096) | 5.671e+04 | +88.11 % | +88.12 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (32,8192) | 1.397e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (32,8192) | 1.358e+04 | -2.79 % | -2.79 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (32,8192) | 1.346e+04 | -0.91 % | -3.68 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (32,8192) | 1.371e+04 | +1.87 % | -1.88 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (32,8192) | 1.360e+04 | -0.78 % | -2.65 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (32,8192) | 1.371e+04 | +0.78 % | -1.89 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (32,8192) | 2.439e+04 | +77.94 % | +74.58 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (32,16384) | 6.677e+03 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (32,16384) | 6.734e+03 | +0.85 % | +0.85 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (32,16384) | 6.798e+03 | +0.94 % | +1.80 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (32,16384) | 6.858e+03 | +0.89 % | +2.70 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (32,16384) | 6.617e+03 | -3.51 % | -0.90 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (32,16384) | 6.991e+03 | +5.65 % | +4.70 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (32,16384) | 1.212e+04 | +73.37 % | +81.51 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (64,64) | 7.302e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (64,64) | 6.785e+05 | -7.08 % | -7.08 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (64,64) | 7.102e+05 | +4.67 % | -2.74 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (64,64) | 7.107e+05 | +0.07 % | -2.67 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (64,64) | 7.102e+05 | -0.07 % | -2.74 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (64,64) | 5.515e+05 | -22.34 % | -24.47 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (64,64) | 1.432e+06 | +159.69 % | +96.14 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (64,128) | 3.659e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (64,128) | 3.689e+05 | +0.82 % | +0.82 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (64,128) | 3.663e+05 | -0.71 % | +0.10 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (64,128) | 3.767e+05 | +2.86 % | +2.96 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (64,128) | 3.762e+05 | -0.15 % | +2.81 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (64,128) | 3.204e+05 | -14.83 % | -12.44 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. 
Co\n (64,128) | 9.630e+05 | +200.58 % | +163.19 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (64,256) | 2.509e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (64,256) | 2.396e+05 | -4.52 % | -4.52 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (64,256) | 2.440e+05 | +1.84 % | -2.77 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (64,256) | 2.372e+05 | -2.77 % | -5.47 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (64,256) | 2.394e+05 | +0.91 % | -4.60 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (64,256) | 2.194e+05 | -8.36 % | -12.57 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (64,256) | 5.368e+05 | +144.70 % | +113.94 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (64,512) | 1.193e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (64,512) | 1.203e+05 | +0.80 % | +0.80 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (64,512) | 1.213e+05 | +0.82 % | +1.63 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (64,512) | 1.201e+05 | -0.93 % | +0.68 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (64,512) | 1.196e+05 | -0.43 % | +0.25 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (64,512) | 1.121e+05 | -6.32 % | -6.09 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (64,512) | 2.865e+05 | +155.62 % | +140.06 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (64,1024) | 5.983e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (64,1024) | 6.092e+04 | +1.81 % | +1.81 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (64,1024) | 6.036e+04 | -0.91 % | +0.88 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (64,1024) | 6.092e+04 | +0.93 % | +1.82 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (64,1024) | 6.095e+04 | +0.05 % | +1.87 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (64,1024) | 6.044e+04 | -0.84 % | +1.02 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (64,1024) | 1.462e+05 | +141.83 % | +144.29 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (64,2048) | 3.104e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (64,2048) | 3.160e+04 | +1.80 % | +1.80 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (64,2048) | 3.160e+04 | -0.01 % | +1.79 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (64,2048) | 3.133e+04 | -0.87 % | +0.91 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (64,2048) | 3.103e+04 | -0.96 % | -0.06 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (64,2048) | 3.131e+04 | +0.91 % | +0.85 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (64,2048) | 7.198e+04 | +129.92 % | +131.88 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (64,4096) | 1.603e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (64,4096) | 1.618e+04 | +0.93 % | +0.93 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (64,4096) | 1.589e+04 | -1.76 % | -0.85 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (64,4096) | 1.589e+04 | -0.04 % | -0.89 % | 0dcf753 | Improve the numeric width_bucket() computation. 
Fo\n (64,4096) | 1.589e+04 | +0.01 % | -0.87 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (64,4096) | 1.560e+04 | -1.85 % | -2.70 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (64,4096) | 3.748e+04 | +140.29 % | +133.80 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (64,8192) | 7.092e+03 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (64,8192) | 7.085e+03 | -0.11 % | -0.11 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (64,8192) | 7.026e+03 | -0.83 % | -0.93 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (64,8192) | 7.170e+03 | +2.04 % | +1.09 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (64,8192) | 7.092e+03 | -1.09 % | -0.01 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (64,8192) | 7.081e+03 | -0.15 % | -0.15 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (64,8192) | 1.591e+04 | +124.70 % | +124.36 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (64,16384) | 3.516e+03 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (64,16384) | 3.548e+03 | +0.91 % | +0.91 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (64,16384) | 3.546e+03 | -0.05 % | +0.85 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (64,16384) | 3.581e+03 | +0.98 % | +1.84 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (64,16384) | 3.553e+03 | -0.79 % | +1.04 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (64,16384) | 3.579e+03 | +0.75 % | +1.80 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (64,16384) | 7.986e+03 | +123.12 % | +127.13 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (128,128) | 2.065e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (128,128) | 2.086e+05 | +1.01 % | +1.01 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (128,128) | 2.106e+05 | +0.96 % | +1.99 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (128,128) | 2.126e+05 | +0.97 % | +2.97 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (128,128) | 2.084e+05 | -1.99 % | +0.92 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (128,128) | 1.750e+05 | -16.01 % | -15.24 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (128,128) | 5.567e+05 | +218.07 % | +169.60 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (128,256) | 1.225e+05 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (128,256) | 1.247e+05 | +1.86 % | +1.86 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (128,256) | 1.281e+05 | +2.73 % | +4.64 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (128,256) | 1.297e+05 | +1.23 % | +5.93 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (128,256) | 1.247e+05 | -3.84 % | +1.86 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (128,256) | 1.142e+05 | -8.41 % | -6.71 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. 
Co\n (128,256) | 3.410e+05 | +198.51 % | +178.48 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (128,512) | 6.696e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (128,512) | 6.699e+04 | +0.05 % | +0.05 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (128,512) | 6.749e+04 | +0.74 % | +0.79 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (128,512) | 6.821e+04 | +1.07 % | +1.86 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (128,512) | 6.348e+04 | -6.94 % | -5.20 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (128,512) | 6.313e+04 | -0.54 % | -5.72 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (128,512) | 1.842e+05 | +191.83 % | +175.14 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (128,1024) | 3.443e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (128,1024) | 3.365e+04 | -2.27 % | -2.27 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (128,1024) | 3.350e+04 | -0.45 % | -2.71 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (128,1024) | 3.380e+04 | +0.91 % | -1.83 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (128,1024) | 3.354e+04 | -0.79 % | -2.60 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (128,1024) | 3.380e+04 | +0.79 % | -1.83 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (128,1024) | 8.632e+04 | +155.39 % | +150.71 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (128,2048) | 1.755e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (128,2048) | 1.741e+04 | -0.83 % | -0.83 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (128,2048) | 1.709e+04 | -1.80 % | -2.61 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (128,2048) | 1.738e+04 | +1.68 % | -0.97 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (128,2048) | 1.758e+04 | +1.14 % | +0.16 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (128,2048) | 1.742e+04 | -0.90 % | -0.74 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (128,2048) | 4.631e+04 | +165.82 % | +163.84 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (128,4096) | 8.514e+03 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (128,4096) | 8.674e+03 | +1.88 % | +1.88 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (128,4096) | 8.581e+03 | -1.07 % | +0.78 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (128,4096) | 8.433e+03 | -1.72 % | -0.95 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (128,4096) | 8.273e+03 | -1.90 % | -2.83 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (128,4096) | 8.338e+03 | +0.79 % | -2.06 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (128,4096) | 2.386e+04 | +186.19 % | +180.30 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (128,8192) | 3.891e+03 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (128,8192) | 4.037e+03 | +3.76 % | +3.76 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (128,8192) | 3.880e+03 | -3.90 % | -0.29 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (128,8192) | 3.920e+03 | +1.05 % | +0.76 % | 0dcf753 | Improve the numeric width_bucket() computation. 
Fo\n (128,8192) | 3.856e+03 | -1.65 % | -0.91 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (128,8192) | 3.916e+03 | +1.58 % | +0.66 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (128,8192) | 9.759e+03 | +149.19 % | +150.83 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (128,16384) | 1.895e+03 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (128,16384) | 1.931e+03 | +1.90 % | +1.90 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (128,16384) | 1.895e+03 | -1.88 % | -0.01 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (128,16384) | 1.911e+03 | +0.88 % | +0.87 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (128,16384) | 1.929e+03 | +0.95 % | +1.83 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (128,16384) | 1.912e+03 | -0.92 % | +0.89 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (128,16384) | 4.829e+03 | +152.60 % | +154.85 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (256,256) | 5.990e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (256,256) | 5.853e+04 | -2.29 % | -2.29 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (256,256) | 5.875e+04 | +0.37 % | -1.93 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (256,256) | 5.876e+04 | +0.02 % | -1.91 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (256,256) | 5.697e+04 | -3.05 % | -4.90 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (256,256) | 5.288e+04 | -7.17 % | -11.72 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (256,256) | 1.712e+05 | +223.70 % | +185.77 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (256,512) | 3.232e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (256,512) | 3.357e+04 | +3.88 % | +3.88 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (256,512) | 3.234e+04 | -3.66 % | +0.08 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (256,512) | 3.296e+04 | +1.92 % | +2.00 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (256,512) | 3.094e+04 | -6.12 % | -4.25 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (256,512) | 3.064e+04 | -0.99 % | -5.20 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (256,512) | 1.019e+05 | +232.50 % | +215.22 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (256,1024) | 1.711e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (256,1024) | 1.742e+04 | +1.82 % | +1.82 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (256,1024) | 1.742e+04 | -0.01 % | +1.81 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (256,1024) | 1.693e+04 | -2.78 % | -1.02 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (256,1024) | 1.742e+04 | +2.91 % | +1.85 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (256,1024) | 1.710e+04 | -1.84 % | -0.02 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. 
Co\n (256,1024) | 4.881e+04 | +185.38 % | +185.31 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (256,2048) | 9.051e+03 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (256,2048) | 8.710e+03 | -3.77 % | -3.77 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (256,2048) | 8.652e+03 | -0.67 % | -4.41 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (256,2048) | 8.796e+03 | +1.66 % | -2.82 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (256,2048) | 8.622e+03 | -1.98 % | -4.75 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (256,2048) | 8.616e+03 | -0.07 % | -4.81 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (256,2048) | 2.530e+04 | +193.63 % | +179.51 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (256,4096) | 4.409e+03 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (256,4096) | 4.455e+03 | +1.05 % | +1.05 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (256,4096) | 4.379e+03 | -1.71 % | -0.67 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (256,4096) | 4.348e+03 | -0.72 % | -1.39 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (256,4096) | 4.454e+03 | +2.45 % | +1.03 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (256,4096) | 4.373e+03 | -1.83 % | -0.82 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (256,4096) | 1.345e+04 | +207.51 % | +204.98 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (256,8192) | 1.986e+03 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (256,8192) | 1.934e+03 | -2.58 % | -2.58 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (256,8192) | 1.986e+03 | +2.68 % | +0.02 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (256,8192) | 1.959e+03 | -1.37 % | -1.35 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (256,8192) | 2.004e+03 | +2.32 % | +0.94 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (256,8192) | 1.966e+03 | -1.90 % | -0.98 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (256,8192) | 5.460e+03 | +177.66 % | +174.94 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (256,16384) | 1.003e+03 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (256,16384) | 9.851e+02 | -1.79 % | -1.79 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (256,16384) | 9.836e+02 | -0.14 % | -1.94 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (256,16384) | 9.753e+02 | -0.85 % | -2.77 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (256,16384) | 1.037e+03 | +6.35 % | +3.40 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (256,16384) | 9.946e+02 | -4.11 % | -0.85 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (256,16384) | 2.661e+03 | +167.57 % | +165.30 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (512,512) | 1.685e+04 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (512,512) | 1.653e+04 | -1.86 % | -1.86 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (512,512) | 1.668e+04 | +0.90 % | -0.97 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (512,512) | 1.684e+04 | +0.93 % | -0.06 % | 0dcf753 | Improve the numeric width_bucket() computation. 
Fo\n (512,512) | 1.600e+04 | -4.96 % | -5.01 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (512,512) | 1.555e+04 | -2.80 % | -7.67 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (512,512) | 5.216e+04 | +235.33 % | +209.61 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (512,1024) | 8.525e+03 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (512,1024) | 8.730e+03 | +2.41 % | +2.41 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (512,1024) | 8.568e+03 | -1.87 % | +0.50 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (512,1024) | 8.566e+03 | -0.02 % | +0.48 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (512,1024) | 8.566e+03 | -0.01 % | +0.47 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (512,1024) | 8.697e+03 | +1.53 % | +2.01 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (512,1024) | 2.679e+04 | +208.07 % | +214.26 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (512,2048) | 4.402e+03 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (512,2048) | 4.362e+03 | -0.91 % | -0.91 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (512,2048) | 4.401e+03 | +0.91 % | -0.01 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (512,2048) | 4.398e+03 | -0.07 % | -0.08 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (512,2048) | 4.359e+03 | -0.90 % | -0.98 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (512,2048) | 4.362e+03 | +0.08 % | -0.90 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (512,2048) | 1.383e+04 | +216.94 % | +214.08 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (512,4096) | 2.316e+03 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (512,4096) | 2.232e+03 | -3.61 % | -3.61 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (512,4096) | 2.215e+03 | -0.77 % | -4.35 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (512,4096) | 2.188e+03 | -1.21 % | -5.50 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (512,4096) | 2.230e+03 | +1.89 % | -3.72 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (512,4096) | 2.252e+03 | +0.98 % | -2.77 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (512,4096) | 7.158e+03 | +217.91 % | +209.10 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (512,8192) | 9.847e+02 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (512,8192) | 1.015e+03 | +3.04 % | +3.04 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (512,8192) | 1.008e+03 | -0.62 % | +2.40 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (512,8192) | 1.022e+03 | +1.33 % | +3.77 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (512,8192) | 1.011e+03 | -1.09 % | +2.64 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (512,8192) | 9.940e+02 | -1.65 % | +0.94 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. 
Co\n (512,8192) | 2.849e+03 | +186.61 % | +189.31 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (512,16384) | 5.133e+02 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (512,16384) | 5.032e+02 | -1.97 % | -1.97 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (512,16384) | 4.949e+02 | -1.64 % | -3.58 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (512,16384) | 5.035e+02 | +1.73 % | -1.91 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (512,16384) | 5.130e+02 | +1.90 % | -0.05 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (512,16384) | 4.992e+02 | -2.69 % | -2.74 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (512,16384) | 1.464e+03 | +193.16 % | +185.13 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (1024,1024) | 4.277e+03 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1024,1024) | 4.232e+03 | -1.04 % | -1.04 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (1024,1024) | 4.194e+03 | -0.90 % | -1.93 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1024,1024) | 4.195e+03 | +0.03 % | -1.91 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (1024,1024) | 4.341e+03 | +3.48 % | +1.51 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1024,1024) | 4.155e+03 | -4.28 % | -2.84 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (1024,1024) | 1.360e+04 | +227.21 % | +217.91 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (1024,2048) | 2.189e+03 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1024,2048) | 2.168e+03 | -0.93 % | -0.93 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (1024,2048) | 2.169e+03 | +0.04 % | -0.89 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1024,2048) | 2.272e+03 | +4.73 % | +3.80 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (1024,2048) | 2.189e+03 | -3.66 % | 0.00 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1024,2048) | 2.185e+03 | -0.18 % | -0.18 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (1024,2048) | 7.159e+03 | +227.65 % | +227.07 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (1024,4096) | 1.125e+03 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1024,4096) | 1.125e+03 | +0.01 % | +0.01 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (1024,4096) | 1.115e+03 | -0.91 % | -0.89 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1024,4096) | 1.125e+03 | +0.91 % | +0.01 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (1024,4096) | 1.157e+03 | +2.81 % | +2.83 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1024,4096) | 1.136e+03 | -1.81 % | +0.97 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (1024,4096) | 3.707e+03 | +226.42 % | +229.59 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (1024,8192) | 5.046e+02 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1024,8192) | 5.134e+02 | +1.73 % | +1.73 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (1024,8192) | 5.141e+02 | +0.14 % | +1.88 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1024,8192) | 5.135e+02 | -0.11 % | +1.76 % | 0dcf753 | Improve the numeric width_bucket() computation. 
Fo\n (1024,8192) | 5.045e+02 | -1.76 % | -0.03 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1024,8192) | 5.041e+02 | -0.09 % | -0.12 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (1024,8192) | 1.464e+03 | +190.46 % | +190.12 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (1024,16384) | 2.511e+02 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (1024,16384) | 2.532e+02 | +0.83 % | +0.83 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (1024,16384) | 2.511e+02 | -0.82 % | 0.00 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (1024,16384) | 2.488e+02 | -0.92 % | -0.92 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (1024,16384) | 2.490e+02 | +0.05 % | -0.87 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (1024,16384) | 2.487e+02 | -0.10 % | -0.96 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (1024,16384) | 7.248e+02 | +191.41 % | +188.60 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (2048,2048) | 1.093e+03 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (2048,2048) | 1.114e+03 | +1.90 % | +1.90 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (2048,2048) | 1.064e+03 | -4.54 % | -2.72 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (2048,2048) | 1.073e+03 | +0.91 % | -1.84 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (2048,2048) | 1.083e+03 | +0.87 % | -0.99 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (2048,2048) | 1.077e+03 | -0.52 % | -1.51 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (2048,2048) | 3.743e+03 | +247.54 % | +242.31 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (2048,4096) | 5.569e+02 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (2048,4096) | 5.471e+02 | -1.77 % | -1.77 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (2048,4096) | 5.575e+02 | +1.91 % | +0.11 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (2048,4096) | 5.473e+02 | -1.82 % | -1.72 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (2048,4096) | 5.628e+02 | +2.82 % | +1.05 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (2048,4096) | 5.520e+02 | -1.90 % | -0.87 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (2048,4096) | 1.889e+03 | +242.23 % | +239.23 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (2048,8192) | 2.523e+02 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (2048,8192) | 2.521e+02 | -0.04 % | -0.04 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (2048,8192) | 2.545e+02 | +0.94 % | +0.90 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (2048,8192) | 2.569e+02 | +0.92 % | +1.83 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (2048,8192) | 2.477e+02 | -3.59 % | -1.82 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (2048,8192) | 2.521e+02 | +1.79 % | -0.06 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. 
Co\n (2048,8192) | 7.424e+02 | +194.50 % | +194.31 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (2048,16384) | 1.251e+02 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (2048,16384) | 1.274e+02 | +1.83 % | +1.83 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (2048,16384) | 1.312e+02 | +3.03 % | +4.92 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (2048,16384) | 1.298e+02 | -1.09 % | +3.77 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (2048,16384) | 1.263e+02 | -2.71 % | +0.96 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (2048,16384) | 1.262e+02 | -0.09 % | +0.87 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (2048,16384) | 3.753e+02 | +197.41 % | +199.99 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (4096,4096) | 2.645e+02 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (4096,4096) | 2.645e+02 | -0.01 % | -0.01 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (4096,4096) | 2.669e+02 | +0.89 % | +0.88 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (4096,4096) | 2.646e+02 | -0.84 % | +0.03 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (4096,4096) | 2.705e+02 | +2.21 % | +2.24 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (4096,4096) | 2.743e+02 | +1.41 % | +3.68 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (4096,4096) | 9.454e+02 | +244.67 % | +257.36 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (4096,8192) | 1.258e+02 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (4096,8192) | 1.234e+02 | -1.89 % | -1.89 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (4096,8192) | 1.244e+02 | +0.77 % | -1.14 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (4096,8192) | 1.257e+02 | +1.09 % | -0.06 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (4096,8192) | 1.236e+02 | -1.69 % | -1.75 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (4096,8192) | 1.245e+02 | +0.76 % | -1.00 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (4096,8192) | 3.863e+02 | +210.20 % | +207.09 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (4096,16384) | 6.339e+01 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (4096,16384) | 6.442e+01 | +1.62 % | +1.62 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (4096,16384) | 6.339e+01 | -1.59 % | 0.00 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (4096,16384) | 6.288e+01 | -0.81 % | -0.80 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (4096,16384) | 6.312e+01 | +0.38 % | -0.43 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (4096,16384) | 6.536e+01 | +3.55 % | +3.10 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (4096,16384) | 1.842e+02 | +181.84 % | +190.58 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (8192,8192) | 5.904e+01 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (8192,8192) | 5.957e+01 | +0.91 % | +0.91 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (8192,8192) | 6.031e+01 | +1.24 % | +2.16 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (8192,8192) | 5.898e+01 | -2.21 % | -0.10 % | 0dcf753 | Improve the numeric width_bucket() computation. 
Fo\n (8192,8192) | 6.206e+01 | +5.22 % | +5.11 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (8192,8192) | 6.157e+01 | -0.79 % | +4.29 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (8192,8192) | 1.950e+02 | +216.66 % | +230.24 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (8192,16384) | 3.029e+01 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (8192,16384) | 3.095e+01 | +2.19 % | +2.19 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (8192,16384) | 3.057e+01 | -1.22 % | +0.94 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (8192,16384) | 3.077e+01 | +0.63 % | +1.57 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (8192,16384) | 3.117e+01 | +1.31 % | +2.90 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (8192,16384) | 3.147e+01 | +0.98 % | +3.91 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (8192,16384) | 9.908e+01 | +214.79 % | +227.10 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (16384,16384) | 1.515e+01 | | | 42de72f | SQL/JSON: Various improvements to SQL/JSON query f\n (16384,16384) | 1.481e+01 | -2.19 % | -2.19 % | ca481d3 | Optimise numeric multiplication for short inputs.\n (16384,16384) | 1.474e+01 | -0.51 % | -2.69 % | 628c1d1 | Use diff's --strip-trailing-cr flag where appropri\n (16384,16384) | 1.485e+01 | +0.75 % | -1.96 % | 0dcf753 | Improve the numeric width_bucket() computation. Fo\n (16384,16384) | 1.542e+01 | +3.84 % | +1.80 % | da87dc0 | Add missing pointer dereference in pg_backend_memo\n (16384,16384) | 1.538e+01 | -0.27 % | +1.53 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (16384,16384) | 4.689e+01 | +204.93 % | +209.58 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n\n/Joel\n\n\n", "msg_date": "Mon, 29 Jul 2024 16:42:17 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Mon, Jul 29, 2024, at 16:42, Joel Jacobson wrote:\n> New results with less noise below.\n>\n> Pardon the exceeding of 80 chars line width,\n> but felt important to include commit hash and relative delta.\n>\n>\n> ndigits | rate | change | accum | commit | \n> summary\n> ---------------+------------+-----------+-----------+---------+----------------------------------------------------\n\nI've reviewed the benchmark results, and it looks like v3-0001 made some cases a bit slower:\n\n (32,32) | 1.786e+06 | -13.27 % | -11.26 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (32,64) | 1.119e+06 | -16.72 % | -20.45 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (32,128) | 7.242e+05 | -13.55 % | -9.24 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (64,64) | 5.515e+05 | -22.34 % | -24.47 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (64,128) | 3.204e+05 | -14.83 % | -12.44 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. Co\n (128,128) | 1.750e+05 | -16.01 % | -15.24 % | v3-0001 | Extend mul_var_short() to 5 and 6-digit inputs. 
Co\n\nThanks to v3-0002, they are all still significantly faster when both patches have been applied,\nbut I wonder if it is expected or not, that v3-0001 temporarily made them a bit slower?\n\nSame cases with v3-0002 applied:\n\n (32,32) | 3.408e+06 | +90.80 % | +69.32 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (32,64) | 2.356e+06 | +110.63 % | +67.56 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (32,128) | 1.393e+06 | +92.39 % | +74.61 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (64,64) | 1.432e+06 | +159.69 % | +96.14 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n (128,128) | 5.567e+05 | +218.07 % | +169.60 % | v3-0002 | Optimise numeric multiplication using base-NBASE^2\n\n/Joel\n\n\n", "msg_date": "Mon, 29 Jul 2024 19:57:21 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Mon, 29 Jul 2024 at 18:57, Joel Jacobson <[email protected]> wrote:\n>\n> Thanks to v3-0002, they are all still significantly faster when both patches have been applied,\n> but I wonder if it is expected or not, that v3-0001 temporarily made them a bit slower?\n>\n\nThere's no obvious reason why 0001 would make those cases slower, but\nthe fact that, together with 0002, it's a significant net win, and the\ngains for 5 and 6-digit inputs make it worthwhile, in my opinion.\n\nSomething I did notice in my tests was that if ndigits was a small\nmultiple of 8, the old code was disproportionately faster, which can\nbe explained by the fact that the computation fits exactly into a\nwhole number of XMM register operations, with no remaining digits to\nprocess. For example, these results from above:\n\n ndigits1 | ndigits2 | PG17 rate | patch rate | % change\n----------+----------+---------------+---------------+----------\n 15 | 15 | 3.7595882e+06 | 5.0751355e+06 | +34.99%\n 16 | 16 | 4.3353435e+06 | 4.970363e+06 | +14.65%\n 17 | 17 | 3.9258755e+06 | 4.935394e+06 | +25.71%\n\n 23 | 23 | 2.7975982e+06 | 4.5065035e+06 | +61.08%\n 24 | 24 | 3.2456168e+06 | 4.4578115e+06 | +37.35%\n 25 | 25 | 2.9515055e+06 | 4.0208335e+06 | +36.23%\n\n 31 | 31 | 2.169437e+06 | 3.7209152e+06 | +71.52%\n 32 | 32 | 2.5022498e+06 | 3.6609378e+06 | +46.31%\n 33 | 33 | 2.27133e+06 | 3.435459e+06 | +51.25%\n\n(Note how 16x16 was much faster than 15x15, for example.)\n\nThe patched code seems to do a better job at levelling out and coping\nwith arbitrary-sized inputs, not just those that fit exactly into a\nwhole number of loops using SSE2 operations.\n\nSomething else I noticed was that the relative gains for large numbers\nof digits were much higher with clang than with gcc:\n\ngcc 13.3.0:\n\n 16383 | 16383 | 21.629467 | 73.58552 | +240.21%\n\nclang 15.0.7:\n\n 16383 | 16383 | 11.562384 | 73.00517 | +531.40%\n\nThat seems to be because clang doesn't do a good job of generating\nefficient SSE2 code in the old case of 16-bit x 16-bit\nmultiplications. Looking on godbolt.org, it generates\noverly-complicated code using PMULUDQ, which actually does 32-bit x\n32-bit multiplications. Gcc, on the other hand, generates a much more\ncompact loop, using PMULHW and PMULLW, which is much faster. 
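\n\n(To make the comparison concrete, the shapes of the two inner loops being\ncompared are roughly as follows -- purely an illustrative sketch with made-up\nhelper names and stdint types, not the code from the patch:\n\n    #include <stdint.h>\n\n    /* old style: one base-10000 digit at a time, 16 x 16 -> 32-bit products */\n    static void\n    accum_digits16(int32_t *acc, const int16_t *d1, int n1, int16_t d2)\n    {\n        for (int i = 0; i < n1; i++)\n            acc[i] += (int32_t) d1[i] * d2;\n    }\n\n    /* new style: one base-10000^2 digit pair at a time, 32 x 32 -> 64-bit products */\n    static void\n    accum_pairs32(uint64_t *acc, const uint32_t *p1, int n1, uint32_t p2)\n    {\n        for (int i = 0; i < n1; i++)\n            acc[i] += (uint64_t) p1[i] * p2;\n    }\n\nThe 16-bit form leaves the compiler several choices about how to widen the\nproducts, which is where gcc and clang end up diverging.)\n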
With the\npatch, they both generate the same SSE2 code, so the results are\npretty consistent.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 29 Jul 2024 21:01:55 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Mon, Jul 29, 2024, at 22:01, Dean Rasheed wrote:\n> On Mon, 29 Jul 2024 at 18:57, Joel Jacobson <[email protected]> wrote:\n>>\n>> Thanks to v3-0002, they are all still significantly faster when both patches have been applied,\n>> but I wonder if it is expected or not, that v3-0001 temporarily made them a bit slower?\n>>\n>\n> There's no obvious reason why 0001 would make those cases slower, but\n> the fact that, together with 0002, it's a significant net win, and the\n> gains for 5 and 6-digit inputs make it worthwhile, in my opinion.\n\nYes, I agree, I just thought it was noteworthy, but not a problem per se.\n\n> Something I did notice in my tests was that if ndigits was a small\n> multiple of 8, the old code was disproportionately faster, which can\n> be explained by the fact that the computation fits exactly into a\n> whole number of XMM register operations, with no remaining digits to\n> process. For example, these results from above:\n>\n> ndigits1 | ndigits2 | PG17 rate | patch rate | % change\n> ----------+----------+---------------+---------------+----------\n> 15 | 15 | 3.7595882e+06 | 5.0751355e+06 | +34.99%\n> 16 | 16 | 4.3353435e+06 | 4.970363e+06 | +14.65%\n> 17 | 17 | 3.9258755e+06 | 4.935394e+06 | +25.71%\n>\n> 23 | 23 | 2.7975982e+06 | 4.5065035e+06 | +61.08%\n> 24 | 24 | 3.2456168e+06 | 4.4578115e+06 | +37.35%\n> 25 | 25 | 2.9515055e+06 | 4.0208335e+06 | +36.23%\n>\n> 31 | 31 | 2.169437e+06 | 3.7209152e+06 | +71.52%\n> 32 | 32 | 2.5022498e+06 | 3.6609378e+06 | +46.31%\n> 33 | 33 | 2.27133e+06 | 3.435459e+06 | +51.25%\n>\n> (Note how 16x16 was much faster than 15x15, for example.)\n>\n> The patched code seems to do a better job at levelling out and coping\n> with arbitrary-sized inputs, not just those that fit exactly into a\n> whole number of loops using SSE2 operations.\n\nThat's nice.\n\n> Something else I noticed was that the relative gains for large numbers\n> of digits were much higher with clang than with gcc:\n>\n> gcc 13.3.0:\n>\n> 16383 | 16383 | 21.629467 | 73.58552 | +240.21%\n>\n> clang 15.0.7:\n>\n> 16383 | 16383 | 11.562384 | 73.00517 | +531.40%\n>\n> That seems to be because clang doesn't do a good job of generating\n> efficient SSE2 code in the old case of 16-bit x 16-bit\n> multiplications. Looking on godbolt.org, it generates\n> overly-complicated code using PMULUDQ, which actually does 32-bit x\n> 32-bit multiplications. Gcc, on the other hand, generates a much more\n> compact loop, using PMULHW and PMULLW, which is much faster. 
With the\n> patch, they both generate the same SSE2 code, so the results are\n> pretty consistent.\n\nVery nice.\n\nI've now also had an initial look at the actual code of the patches:\n\n* v3-0001\n\nLooks pretty straight forward, nice with the PRODSUM macros,\nthat really improved readability a lot.\n\nI like these simplifications, how `var2ndigits` is used instead of `res_ndigits`:\n-\t\t\tfor (int i = res_ndigits - 3; i >= 1; i--)\n+\t\t\tfor (int i = var2ndigits - 1; i >= 1; i--)\n\nBut I wonder why does `case 1:` not follow the same pattern?\n\t\t\tfor (int i = res_ndigits - 2; i >= 0; i--)\n\n* v3-0002\n\nI think it's non-obvious if the separate code paths for 32-bit and 64-bit,\nusing `#if SIZEOF_DATUM < 8`, to get *fast* 32-bit support, outweighs\nthe benefits of simpler code.\n\nYou brought up the question if 32-bit systems should be regarded\nas legacy previously in this thread.\n\nUnfortunately, we didn't get any feedback, so I'm starting a separate\nthread, with subject \"Is fast 32-bit code still important?\", hoping to get\nmore input to help us make judgement calls.\n\n/Joel\n\n\n", "msg_date": "Mon, 29 Jul 2024 22:38:45 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Mon, 29 Jul 2024 at 21:39, Joel Jacobson <[email protected]> wrote:\n>\n> I like these simplifications, how `var2ndigits` is used instead of `res_ndigits`:\n> - for (int i = res_ndigits - 3; i >= 1; i--)\n> + for (int i = var2ndigits - 1; i >= 1; i--)\n>\n> But I wonder why does `case 1:` not follow the same pattern?\n> for (int i = res_ndigits - 2; i >= 0; i--)\n>\n\nAh yes, that should be made the same. (I think I did do that at one\npoint, but then accidentally reverted it during a code refactoring.)\n\n> * v3-0002\n>\n> I think it's non-obvious if the separate code paths for 32-bit and 64-bit,\n> using `#if SIZEOF_DATUM < 8`, to get *fast* 32-bit support, outweighs\n> the benefits of simpler code.\n>\n> You brought up the question if 32-bit systems should be regarded\n> as legacy previously in this thread.\n>\n> Unfortunately, we didn't get any feedback, so I'm starting a separate\n> thread, with subject \"Is fast 32-bit code still important?\", hoping to get\n> more input to help us make judgement calls.\n>\n\nLooking at that other thread that you found [1], I think it's entirely\npossible that there are people who care about 32-bit systems, which\nmeans that we might well get complaints, if we make it slower for\nthem. Unfortunately, I don't have any way to test that (I doubt that\nrunning a 32-bit executable on my x86-64 system is a realistic test).\n\nRegards,\nDean\n\n[1] https://postgr.es/m/[email protected]\n\n\n", "msg_date": "Mon, 29 Jul 2024 22:59:57 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "Dean Rasheed <[email protected]> writes:\n> On Mon, 29 Jul 2024 at 21:39, Joel Jacobson <[email protected]> wrote:\n>> I think it's non-obvious if the separate code paths for 32-bit and 64-bit,\n>> using `#if SIZEOF_DATUM < 8`, to get *fast* 32-bit support, outweighs\n>> the benefits of simpler code.\n\n> Looking at that other thread that you found [1], I think it's entirely\n> possible that there are people who care about 32-bit systems, which\n> means that we might well get complaints, if we make it slower for\n> them. 
Unfortunately, I don't have any way to test that (I doubt that\n> running a 32-bit executable on my x86-64 system is a realistic test).\n\nI think we've already done things that might impact 32-bit systems\nnegatively (5e1f3b9eb for instance), and not heard a lot of pushback.\nI would argue that anyone still running PG on 32-bit must have pretty\nminimal performance requirements, so that they're unlikely to care if\nnumeric_mul gets slightly faster or slower. Obviously a *big*\nperformance drop might get pushback.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Jul 2024 18:31:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Tue, Jul 30, 2024, at 00:31, Tom Lane wrote:\n> Dean Rasheed <[email protected]> writes:\n>> On Mon, 29 Jul 2024 at 21:39, Joel Jacobson <[email protected]> wrote:\n>>> I think it's non-obvious if the separate code paths for 32-bit and 64-bit,\n>>> using `#if SIZEOF_DATUM < 8`, to get *fast* 32-bit support, outweighs\n>>> the benefits of simpler code.\n>\n>> Looking at that other thread that you found [1], I think it's entirely\n>> possible that there are people who care about 32-bit systems, which\n>> means that we might well get complaints, if we make it slower for\n>> them. Unfortunately, I don't have any way to test that (I doubt that\n>> running a 32-bit executable on my x86-64 system is a realistic test).\n>\n> I think we've already done things that might impact 32-bit systems\n> negatively (5e1f3b9eb for instance), and not heard a lot of pushback.\n> I would argue that anyone still running PG on 32-bit must have pretty\n> minimal performance requirements, so that they're unlikely to care if\n> numeric_mul gets slightly faster or slower. Obviously a *big*\n> performance drop might get pushback.\n\nThanks for guidance. Sounds reasonable to me.\n\nNoted from 5e1f3b9eb:\n\"While it adds some space on 32-bit machines, we aren't optimizing for that case anymore.\"\n\nIn this case, the extra 32-bit numeric_mul code seems to be 89 lines of code, excluding comments.\nTo me, this seems like quite a lot, so I lean on thinking we should omit that code for now.\nWe can always add it later if we get pushback.\n\n/Joel\n\n\n", "msg_date": "Mon, 05 Aug 2024 14:34:31 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Mon, 5 Aug 2024 at 13:34, Joel Jacobson <[email protected]> wrote:\n>\n> Noted from 5e1f3b9eb:\n> \"While it adds some space on 32-bit machines, we aren't optimizing for that case anymore.\"\n>\n> In this case, the extra 32-bit numeric_mul code seems to be 89 lines of code, excluding comments.\n> To me, this seems like quite a lot, so I lean on thinking we should omit that code for now.\n> We can always add it later if we get pushback.\n>\n\nOK, I guess that's reasonable. There is no clear-cut right answer\nhere, but I don't really want to have a lot of 32-bit-specific code\nthat significantly complicates this function, making it harder to\nmaintain. 
Without that code, the patch becomes much simpler, which\nseems like a decent justification for any performance tradeoffs on\n32-bit machines that are unlikely to affect many people anyway.\n\nRegards,\nDean", "msg_date": "Tue, 6 Aug 2024 12:52:32 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Tue, Aug 6, 2024, at 13:52, Dean Rasheed wrote:\n> On Mon, 5 Aug 2024 at 13:34, Joel Jacobson <[email protected]> wrote:\n>>\n>> Noted from 5e1f3b9eb:\n>> \"While it adds some space on 32-bit machines, we aren't optimizing for that case anymore.\"\n>>\n>> In this case, the extra 32-bit numeric_mul code seems to be 89 lines of code, excluding comments.\n>> To me, this seems like quite a lot, so I lean on thinking we should omit that code for now.\n>> We can always add it later if we get pushback.\n>>\n>\n> OK, I guess that's reasonable. There is no clear-cut right answer\n> here, but I don't really want to have a lot of 32-bit-specific code\n> that significantly complicates this function, making it harder to\n> maintain. Without that code, the patch becomes much simpler, which\n> seems like a decent justification for any performance tradeoffs on\n> 32-bit machines that are unlikely to affect many people anyway.\n>\n> Regards,\n> Dean\n>\n> Attachments:\n> * v4-0001-Extend-mul_var_short-to-5-and-6-digit-inputs.patch\n> * v4-0002-Optimise-numeric-multiplication-using-base-NBASE-.patch\n\nI've reviewed and tested both patches and think they are ready to be committed.\n\nNeat with the pairs variables, really improved readability a lot,\ncompared to my first version.\n\nAlso neat you found a way to adjust the res_weight in a simpler way\nthan my quite lengthy expression.\n\nRegards,\nJoel\n\n\n", "msg_date": "Sun, 11 Aug 2024 22:04:26 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Sun, Aug 11, 2024, at 22:04, Joel Jacobson wrote:\n>> Attachments:\n>> * v4-0001-Extend-mul_var_short-to-5-and-6-digit-inputs.patch\n>> * v4-0002-Optimise-numeric-multiplication-using-base-NBASE-.patch\n>\n> I've reviewed and tested both patches and think they are ready to be committed.\n\nIn addition, I've also tested reduced rscale specifically, due to what you wrote earlier:\n\n> 2). Attempt to fix the formulae incorporating maxdigits mentioned\n> above. This part really made my brain hurt, and I'm still not quite\n> sure that I've got it right. 
In particular, it needs double-checking\n> to ensure that it's not losing accuracy in the reduced-rscale case.\n\nTo test if there are any differences that actually matter in the result,\nI patched mul_var to log what combinations that occur when running\nthe test suite:\n\n```\n\tif (rscale != var1->dscale + var2->dscale)\n\t{\n\t\tprintf(\"NUMERIC_REDUCED_RSCALE %d,%d,%d,%d,%d\\n\", var1ndigits, var2ndigits, var1->dscale, var2->dscale, rscale - (var1->dscale + var2->dscale));\n\t}\n```\n\nI also added a SQL-callable numeric_mul_rscale(var1, var2, rscale_adjustment) function,\nto be able to check for differences for the reduced rscale combinations.\n\nI then ran the test suite against my db and extracted the seen combinations:\n\n```\nmake installcheck\ngrep -E \"^NUMERIC_REDUCED_RSCALE \\d+,\\d+,\\d+,\\d+,-\\d+$\" logfile | sort -u | awk '{print $2}' > plausible_rscale_adjustments.csv\n```\n\nThis test didn't produce any differences between HEAD and the two patches applied.\n\n% psql -f test-mul_var-verify.sql\nCREATE TABLE\nCOPY 1413\n var1ndigits | var2ndigits | var1dscale | var2dscale | rscale_adjustment | var1 | var2 | expected | numeric_mul_rscale\n-------------+-------------+------------+------------+-------------------+------+------+----------+--------------------\n(0 rows)\n\nAttaching patch as .txt to not confuse cfbot.\n\nRegards,\nJoel", "msg_date": "Mon, 12 Aug 2024 12:47:08 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Mon, Aug 12, 2024, at 12:47, Joel Jacobson wrote:\n>> 2). Attempt to fix the formulae incorporating maxdigits mentioned\n>> above. This part really made my brain hurt, and I'm still not quite\n>> sure that I've got it right. 
In particular, it needs double-checking\n>> to ensure that it's not losing accuracy in the reduced-rscale case.\n>\n> To test if there are any differences that actually matter in the result,\n> I patched mul_var to log what combinations that occur when running\n> the test suite:\n\nI expanded the test to generate 10k different random numerics\nfor each of the reduced rscale cases.\n\nThis actually found some differences,\nwhere the last decimal digit differs by one,\nexcept in one case where the last decimal digit differs by two.\n\nNot sure if this is a real problem though,\nsince these differences might not affect the result of the SQL-callable functions.\n\nThe case found with the smallest rscale adjustment was this one:\n-[ RECORD 1 ]------+--------------------------------\nvar1 | 0.0000000000009873307197037692\nvar2 | 0.426697279270850\nrscale_adjustment | -15\nexpected | 0.0000000000004212913318381285\nnumeric_mul_rscale | 0.0000000000004212913318381284\ndiff | -0.0000000000000000000000000001\n\nHere is a count grouped by diff:\n\n diff | count\n--------------+----------\n 0.000e+00 | 14114384\n 1.000e-108 | 1\n 1.000e-211 | 1\n 1.000e-220 | 2\n 1.000e-228 | 6\n 1.000e-232 | 2\n 1.000e-235 | 1\n 1.000e-28 | 13\n 1.000e-36 | 1\n 1.000e-51 | 2\n 1.000e-67 | 1\n 1.000e-68 | 1\n 1.000e-80 | 1\n -1.000e-1024 | 2485\n -1.000e-108 | 3\n -1.000e-144 | 2520\n -1.000e-16 | 2514\n -1.000e-228 | 4\n -1.000e-232 | 1\n -1.000e-27 | 36\n -1.000e-28 | 538\n -1.000e-32 | 2513\n -1.000e-48 | 2473\n -1.000e-68 | 1\n -1.000e-80 | 2494\n -2.000e-16 | 2\n(26 rows)\n\nShould I investigate where each reduced rscale case originates from,\nand then try to test the actual SQL-callable functions with values\nthat cause the same inputs to mul_var as the cases found,\nor do we feel confident these differences are not problematic?\n\nRegards,\nJoel", "msg_date": "Mon, 12 Aug 2024 17:14:23 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Mon, Aug 12, 2024, at 17:14, Joel Jacobson wrote:\n> The case found with the smallest rscale adjustment was this one:\n> -[ RECORD 1 ]------+--------------------------------\n> var1 | 0.0000000000009873307197037692\n> var2 | 0.426697279270850\n> rscale_adjustment | -15\n> expected | 0.0000000000004212913318381285\n> numeric_mul_rscale | 0.0000000000004212913318381284\n> diff | -0.0000000000000000000000000001\n\nTo avoid confusion, correction: I mean \"largest\", since rscale_adjustment is less than or equal to zero.\n\nHere is a group by rscale_adjustment to get a better picture:\n\nSELECT\n rscale_adjustment,\n COUNT(*)\nFROM\n test_numeric_mul_rscale,\n numeric_mul_rscale(var1, var2, rscale_adjustment)\nWHERE numeric_mul_rscale IS DISTINCT FROM expected\nGROUP BY rscale_adjustment\nORDER BY rscale_adjustment;\n\n rscale_adjustment | count\n-------------------+-------\n -237 | 2\n -235 | 1\n -232 | 3\n -229 | 2\n -228 | 8\n -218 | 1\n -108 | 4\n -77 | 1\n -67 | 1\n -51 | 2\n -38 | 3\n -36 | 1\n -28 | 5\n -22 | 42\n -17 | 7\n -16 | 14959\n -15 | 574\n(17 rows)\n\nRegards,\nJoel\n\n\n", "msg_date": "Mon, 12 Aug 2024 17:17:29 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Mon, 12 Aug 2024 at 16:17, Joel Jacobson <[email protected]> wrote:\n>\n> On Mon, Aug 12, 2024, at 17:14, Joel Jacobson wrote:\n> > The case found with the smallest rscale 
adjustment was this one:\n> > -[ RECORD 1 ]------+--------------------------------\n> > var1 | 0.0000000000009873307197037692\n> > var2 | 0.426697279270850\n> > rscale_adjustment | -15\n> > expected | 0.0000000000004212913318381285\n> > numeric_mul_rscale | 0.0000000000004212913318381284\n> > diff | -0.0000000000000000000000000001\n>\n\nHmm, interesting example. There will of course always be cases where\nthe result isn't exact, but HEAD does produce the expected result in\nthis case, and the intention is to always produce a result at least as\naccurate as HEAD, so it isn't working as expected.\n\nLooking more closely, the problem is that to fully compute the\nrequired guard digits, it is necessary to compute at least one extra\noutput base-NBASE digit, because the product of base-NBASE^2 digits\ncontributes to the next base-NBASE digit up. So instead of\n\n maxdigitpairs = (maxdigits + 1) / 2;\n\nwe should do\n\n maxdigitpairs = maxdigits / 2 + 1;\n\nAdditionally, since maxdigits is based on res_weight, we should\nactually do the res_weight adjustments for odd-length inputs before\ncomputing maxdigits. (Otherwise we're actually computing more digits\nthan strictly necessary for odd-length inputs, so this is a minor\noptimisation.)\n\nUpdated patch attached, which fixes the above example and all the\nother differences produced by your test. I think, with a little\nthought, it ought to be possible to produce examples that round\nincorrectly in a more systematic (less brute-force) way. It should\nthen be possible to construct examples where the patch differs from\nHEAD, but hopefully only by being more accurate, not less.\n\nRegards,\nDean", "msg_date": "Mon, 12 Aug 2024 23:56:38 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Tue, Aug 13, 2024, at 00:56, Dean Rasheed wrote:\n> On Mon, 12 Aug 2024 at 16:17, Joel Jacobson <[email protected]> wrote:\n>>\n>> On Mon, Aug 12, 2024, at 17:14, Joel Jacobson wrote:\n>> > The case found with the smallest rscale adjustment was this one:\n>> > -[ RECORD 1 ]------+--------------------------------\n>> > var1 | 0.0000000000009873307197037692\n>> > var2 | 0.426697279270850\n>> > rscale_adjustment | -15\n>> > expected | 0.0000000000004212913318381285\n>> > numeric_mul_rscale | 0.0000000000004212913318381284\n>> > diff | -0.0000000000000000000000000001\n>>\n>\n> Hmm, interesting example. There will of course always be cases where\n> the result isn't exact, but HEAD does produce the expected result in\n> this case, and the intention is to always produce a result at least as\n> accurate as HEAD, so it isn't working as expected.\n>\n..\n> Updated patch attached, which fixes the above example and all the\n> other differences produced by your test. I think, with a little\n> thought, it ought to be possible to produce examples that round\n> incorrectly in a more systematic (less brute-force) way. 
It should\n> then be possible to construct examples where the patch differs from\n> HEAD, but hopefully only by being more accurate, not less.\n\nI reran the tests and v5 produces much fewer diffs than v4.\nNot sure if the remaining ones are problematic or not.\n\njoel@Joels-MBP postgresql % ./test-mul_var-init.sh\nHEAD is now at a67a49648d Rename C23 keyword\nSET\nDROP TABLE\nCREATE TABLE\nCOPY 1413\nDROP TABLE\nCREATE TABLE\n setseed\n---------\n\n(1 row)\n\nINSERT 0 14130000\nCOPY 14130000\n\njoel@Joels-MBP postgresql % ./test-mul_var-verify-v4.sh\nHEAD is now at a67a49648d Rename C23 keyword\nSET\nDROP TABLE\nCREATE TABLE\nCOPY 14130000\nExpanded display is on.\n-[ RECORD 1 ]------+--------------------------------\nvar1 | 0.0000000000009873307197037692\nvar2 | 0.426697279270850\nrscale_adjustment | -15\nexpected | 0.0000000000004212913318381285\nnumeric_mul_rscale | 0.0000000000004212913318381284\ndiff | -0.0000000000000000000000000001\n\nExpanded display is off.\n diff | count\n--------------+----------\n 0.000e+00 | 14114384\n 1.000e-108 | 1\n 1.000e-211 | 1\n 1.000e-220 | 2\n 1.000e-228 | 6\n 1.000e-232 | 2\n 1.000e-235 | 1\n 1.000e-28 | 13\n 1.000e-36 | 1\n 1.000e-51 | 2\n 1.000e-67 | 1\n 1.000e-68 | 1\n 1.000e-80 | 1\n -1.000e-1024 | 2485\n -1.000e-108 | 3\n -1.000e-144 | 2520\n -1.000e-16 | 2514\n -1.000e-228 | 4\n -1.000e-232 | 1\n -1.000e-27 | 36\n -1.000e-28 | 538\n -1.000e-32 | 2513\n -1.000e-48 | 2473\n -1.000e-68 | 1\n -1.000e-80 | 2494\n -2.000e-16 | 2\n(26 rows)\n\n rscale_adjustment | count\n-------------------+-------\n -237 | 2\n -235 | 1\n -232 | 3\n -229 | 2\n -228 | 8\n -218 | 1\n -108 | 4\n -77 | 1\n -67 | 1\n -51 | 2\n -38 | 3\n -36 | 1\n -28 | 5\n -22 | 42\n -17 | 7\n -16 | 14959\n -15 | 574\n(17 rows)\n\njoel@Joels-MBP postgresql % ./test-mul_var-verify-v5.sh\nHEAD is now at a67a49648d Rename C23 keyword\nSET\nDROP TABLE\nCREATE TABLE\nCOPY 14130000\nExpanded display is on.\n-[ RECORD 1 ]------+-------------------------------\nvar1 | 0.0000000000000000489673392928\nvar2 | 6.713030439846337\nrscale_adjustment | -15\nexpected | 0.0000000000000003287192392308\nnumeric_mul_rscale | 0.0000000000000003287192392309\ndiff | 0.0000000000000000000000000001\n\nExpanded display is off.\n diff | count\n--------------+----------\n 0.000e+00 | 14129971\n 1.000e-1024 | 1\n 1.000e-144 | 1\n 1.000e-16 | 1\n 1.000e-211 | 1\n 1.000e-220 | 2\n 1.000e-228 | 5\n 1.000e-232 | 1\n 1.000e-235 | 1\n 1.000e-28 | 8\n 1.000e-32 | 2\n 1.000e-36 | 1\n 1.000e-51 | 2\n 1.000e-67 | 1\n 1.000e-68 | 1\n 1.000e-80 | 1\n(16 rows)\n\n rscale_adjustment | count\n-------------------+-------\n -237 | 1\n -235 | 1\n -232 | 1\n -229 | 2\n -228 | 4\n -218 | 1\n -77 | 1\n -67 | 1\n -51 | 2\n -38 | 1\n -36 | 1\n -28 | 2\n -17 | 4\n -16 | 5\n -15 | 2\n(15 rows)\n\nRegards,\nJoel\n\n\n", "msg_date": "Tue, 13 Aug 2024 09:49:37 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Tue, Aug 13, 2024, at 09:49, Joel Jacobson wrote:\n> I reran the tests and v5 produces much fewer diffs than v4.\n> Not sure if the remaining ones are problematic or not.\n\nAttaching scripts if needed.\n\nRegards,\nJoel", "msg_date": "Tue, 13 Aug 2024 09:50:44 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Tue, 13 Aug 2024 at 08:49, Joel Jacobson <[email protected]> wrote:\n>\n> I reran the tests and v5 
produces much fewer diffs than v4.\n> Not sure if the remaining ones are problematic or not.\n>\n> joel@Joels-MBP postgresql % ./test-mul_var-verify-v5.sh\n> HEAD is now at a67a49648d Rename C23 keyword\n> SET\n> DROP TABLE\n> CREATE TABLE\n> COPY 14130000\n> Expanded display is on.\n> -[ RECORD 1 ]------+-------------------------------\n> var1 | 0.0000000000000000489673392928\n> var2 | 6.713030439846337\n> rscale_adjustment | -15\n> expected | 0.0000000000000003287192392308\n> numeric_mul_rscale | 0.0000000000000003287192392309\n> diff | 0.0000000000000000000000000001\n>\n\nYes, that's exactly the sort of thing you'd expect to see. The exact\nproduct of var1 and var2 in that case is\n\n 0.0000_0000_0000_0003_2871_9239_2308_5000_4574_2504_736\n\nso numeric_mul_rscale() with the patch is producing the correctly\nrounded result, and \"expected\" is the result from HEAD, which is off\nby 1 in the final digit.\n\nTo make it easier to hit such cases, I tested with the attached test\nscript, which intentionally produces pairs of numbers whose product\ncontains '5' followed by 5 zeros, and rounds at the digit before the\n'5', so the correct answer should round up, but the truncated product\nis quite likely not to do so.\n\nWith HEAD, this gives 710,017 out of 1,000,000 cases that are off by 1\nin the final digit (always 1 too low in the final digit), and with the\nv5 patch, it gives 282,595 cases. Furthermore, it's an exact subset:\n\nselect count(*) from diffs1; -- HEAD\n count\n--------\n 710017\n(1 row)\n\npgdevel=# select count(*) from diffs2; -- v5 patch\n count\n--------\n 282595\n(1 row)\n\nselect * from diffs2 except select * from diffs1;\n n | z | m | w | x | y | expected | numeric_mul_rscale\n---+---+---+---+---+---+----------+--------------------\n(0 rows)\n\nwhich is exactly what I was hoping to see (no cases where the patch\nmade it less accurate).\n\nRegards,\nDean", "msg_date": "Tue, 13 Aug 2024 11:23:32 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Tue, Aug 13, 2024, at 12:23, Dean Rasheed wrote:\n> On Tue, 13 Aug 2024 at 08:49, Joel Jacobson <[email protected]> wrote:\n>>\n>> I reran the tests and v5 produces much fewer diffs than v4.\n>> Not sure if the remaining ones are problematic or not.\n...\n>\n> Yes, that's exactly the sort of thing you'd expect to see. The exact\n> product of var1 and var2 in that case is\n>\n> 0.0000_0000_0000_0003_2871_9239_2308_5000_4574_2504_736\n>\n> so numeric_mul_rscale() with the patch is producing the correctly\n> rounded result, and \"expected\" is the result from HEAD, which is off\n> by 1 in the final digit.\n>\n> To make it easier to hit such cases, I tested with the attached test\n> script, which intentionally produces pairs of numbers whose product\n> contains '5' followed by 5 zeros, and rounds at the digit before the\n> '5', so the correct answer should round up, but the truncated product\n> is quite likely not to do so.\n>\n> With HEAD, this gives 710,017 out of 1,000,000 cases that are off by 1\n> in the final digit (always 1 too low in the final digit), and with the\n> v5 patch, it gives 282,595 cases. 
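\n\n(To spell out the failure mode these pairs are designed to trigger, using\nmade-up digits:\n\n    true product          1.2345000004...  -> rounds (3 dp) to 1.235\n    slightly-low estimate 1.2344999999...  -> rounds (3 dp) to 1.234\n\nso a computation that carries too few guard digits and comes out slightly\nlow ends up one too low in the final digit, which is the pattern in the\ncounts above.)\n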
Furthermore, it's an exact subset:\n>\n> select count(*) from diffs1; -- HEAD\n> count\n> --------\n> 710017\n> (1 row)\n>\n> pgdevel=# select count(*) from diffs2; -- v5 patch\n> count\n> --------\n> 282595\n> (1 row)\n>\n> select * from diffs2 except select * from diffs1;\n> n | z | m | w | x | y | expected | numeric_mul_rscale\n> ---+---+---+---+---+---+----------+--------------------\n> (0 rows)\n>\n> which is exactly what I was hoping to see (no cases where the patch\n> made it less accurate).\n\nNice. I got the same results:\n\nselect count(*) from diffs_head;\n count\n--------\n 710017\n(1 row)\n\nselect count(*) from diffs_v4;\n count\n--------\n 344045\n(1 row)\n\nselect count(*) from diffs_v5;\n count\n--------\n 282595\n(1 row)\n\nselect count(*) from (select * from diffs_v4 except select * from diffs_head) as q;\n count\n-------\n 37236\n(1 row)\n\nselect count(*) from (select * from diffs_v5 except select * from diffs_head) as q;\n count\n-------\n 0\n(1 row)\n\nI think this is acceptable, since it produces more correct results.\n\nRegards,\nJoel\n\n\n", "msg_date": "Tue, 13 Aug 2024 13:01:51 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Tue, Aug 13, 2024, at 13:01, Joel Jacobson wrote:\n> I think this is acceptable, since it produces more correct results.\n\nIn addition, I've traced the rscale_adjustment -15 mul_var() calls to originate\nfrom numeric_exp() and numeric_power(), so I thought it would be good to\nbrute-force test those as well, to get an idea of the probability of different\nresults from those functions.\n\nBrute-force testing of course doesn't prove it's impossible to happen,\nbut millions of inputs didn't cause any observable differences in the\nreturned results, so I think it's at least very improbable to\nhappen in practice.\n\nRegards,\nJoel", "msg_date": "Wed, 14 Aug 2024 08:30:55 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Wed, 14 Aug 2024 at 07:31, Joel Jacobson <[email protected]> wrote:\n>\n> I think this is acceptable, since it produces more correct results.\n>\n\nThanks for checking. I did a bit more testing myself and didn't see\nany problems, so I have committed both these patches.\n\n> In addition, I've traced the rscale_adjustment -15 mul_var() calls to originate\n> from numeric_exp() and numeric_power(), so I thought it would be good to\n> brute-force test those as well, to get an idea of the probability of different\n> results from those functions.\n>\n> Brute-force testing of course doesn't prove it's impossible to happen,\n> but millions of inputs didn't cause any observable differences in the\n> returned results, so I think it's at least very improbable to\n> happen in practice.\n>\n\nIndeed, there certainly will be cases where the result changes. I saw\nsome with ln(), for which HEAD rounded the final digit the wrong way,\nand the result is now correct, but the opposite cannot be ruled out\neither, since these functions are inherently inexact. The aim is to\nhave them generate the correctly rounded result in the vast majority\nof cases, while accepting an occasional off-by-one error in the final\ndigit. 
Having them generate the correct result in all cases is\ncertainly possible, but would require a fair bit of additional code\nthat probably isn't worth the effort.\n\nIn my testing, exp() rounded the final digit incorrectly with a\nprobability of roughly 1 in 50-100 million when computing results with\na handful of digits (consistent with the \"+8\" digits added to\n\"sig_digits\"), rising to roughly 1 in 5-10 million when computing\naround 1000 digits (presumably because we don't take into account the\nnumber of Taylor series terms when deciding on the local rscale). That\nwasn't affected significantly by the patch, and it's not surprising\nthat you saw nothing with brute-force testing.\n\nIn any case, I'm pretty content with those results.\n\nRegards,\nDean\n\n\n", "msg_date": "Thu, 15 Aug 2024 10:56:09 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "Dean Rasheed <[email protected]> writes:\n> On Wed, 14 Aug 2024 at 07:31, Joel Jacobson <[email protected]> wrote:\n>> I think this is acceptable, since it produces more correct results.\n\n> Thanks for checking. I did a bit more testing myself and didn't see\n> any problems, so I have committed both these patches.\n\nAbout a dozen buildfarm members are complaining thus (eg [1]):\n\ngcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -O2 -ftree-vectorize -I. -I. -I../../../../src/include -D_GNU_SOURCE -I/usr/include/libxml2 -c -o numeric.o numeric.c\nnumeric.c: In function \\342\\200\\230mul_var\\342\\200\\231:\nnumeric.c:9209:9: warning: \\342\\200\\230carry\\342\\200\\231 may be used uninitialized in this function [-Wmaybe-uninitialized]\n term = PRODSUM1(var1digits, 0, var2digits, 0) + carry;\n ^\nnumeric.c:8972:10: note: \\342\\200\\230carry\\342\\200\\231 was declared here\n uint32 carry;\n ^\n\nI guess these compilers aren't able to convince themselves that the\nfirst switch must initialize \"carry\".\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=arowana&dt=2024-08-24%2004%3A19%3A29&stg=build\n\n\n", "msg_date": "Sat, 24 Aug 2024 14:17:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Sat, 24 Aug 2024 at 19:17, Tom Lane <[email protected]> wrote:\n>\n> About a dozen buildfarm members are complaining\n>\n\nAh, OK. I've pushed a fix.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 26 Aug 2024 11:48:03 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "Dean Rasheed <[email protected]> writes:\n> Ah, OK. I've pushed a fix.\n\nThere is an open CF entry pointing at this thread [1].\nShouldn't it be marked committed now?\n\n\t\t\tregards, tom lane\n\n[1] https://commitfest.postgresql.org/49/5115/\n\n\n", "msg_date": "Tue, 03 Sep 2024 16:30:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Tue, 3 Sept 2024 at 21:31, Tom Lane <[email protected]> wrote:\n>\n> Dean Rasheed <[email protected]> writes:\n> > Ah, OK. 
I've pushed a fix.\n>\n> There is an open CF entry pointing at this thread [1].\n> Shouldn't it be marked committed now?\n>\n\nOops, yes I missed that CF entry. I've closed it now.\n\nJoel, are you still planning to work on the Karatsuba multiplication\npatch? If not, we should close that CF entry too.\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 4 Sep 2024 08:22:40 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" }, { "msg_contents": "On Wed, Sep 4, 2024, at 09:22, Dean Rasheed wrote:\n> On Tue, 3 Sept 2024 at 21:31, Tom Lane <[email protected]> wrote:\n>>\n>> Dean Rasheed <[email protected]> writes:\n>> > Ah, OK. I've pushed a fix.\n>>\n>> There is an open CF entry pointing at this thread [1].\n>> Shouldn't it be marked committed now?\n>>\n>\n> Oops, yes I missed that CF entry. I've closed it now.\n>\n> Joel, are you still planning to work on the Karatsuba multiplication\n> patch? If not, we should close that CF entry too.\n\nNo, I think it's probably not worth it given that we have now optimises mul_var() in other ways. Will maybe have a look at it again in the future. Patch withdrawn for now.\n\nThanks for really good guidance and help on the numeric, much fun and I've learned a lot.\n\n/Joel\n\n\n", "msg_date": "Wed, 04 Sep 2024 14:52:53 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize mul_var() for var1ndigits >= 8" } ]
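
A side note on the rounding behaviour discussed in the thread above: the final-digit differences come from rounding a product whose low-order partial products were never computed. The standalone C sketch below is only an illustration (it is not mul_var() or the thread's test script, and the base-100000 split, the rounding unit and the example factors are all invented); it shows one hand-picked pair of factors for which the exact product rounds up while the truncated product rounds down.

#include <stdio.h>
#include <stdint.h>

/* Round v to the nearest multiple of unit, halves rounding up. */
static uint64_t round_half_up(uint64_t v, uint64_t unit)
{
    return (v + unit / 2) / unit;
}

int main(void)
{
    const uint64_t BASE = 100000;    /* split each factor into two halves */
    const uint64_t UNIT = 1000000;   /* keep the result in units of 10^6 */
    uint64_t a = 102000, b = 100300; /* invented inputs, chosen to sit on the edge */
    uint64_t a1 = a / BASE, a0 = a % BASE;
    uint64_t b1 = b / BASE, b0 = b % BASE;

    uint64_t exact = a * b;
    /* truncated schoolbook product: the a0*b0 contribution is never computed */
    uint64_t truncated = a1 * b1 * BASE * BASE + (a1 * b0 + a0 * b1) * BASE;

    printf("exact     = %llu -> rounds to %llu\n",
           (unsigned long long) exact,
           (unsigned long long) round_half_up(exact, UNIT));
    printf("truncated = %llu -> rounds to %llu\n",
           (unsigned long long) truncated,
           (unsigned long long) round_half_up(truncated, UNIT));
    return 0;
}

Built with any C99 compiler, this should print 10231 for the exact product and 10230 for the truncated one, the same kind of off-by-one in the last kept digit that the test scripts above are hunting for.
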
[ { "msg_contents": "Hello hackers,\n\nI'm not hopeful this idea will be fruitful, but maybe we can find solutions\nto the problems together.\n\nThe idea is to increase the numeric NBASE from 1e4 to 1e8, which could possibly\ngive a significant performance boost of all operations across the board,\non 64-bit architectures, for many inputs.\n\nLast time numeric's base was changed was back in 2003, when d72f6c75038 changed\nit from 10 to 10000. Back then, 32-bit architectures were still dominant,\nso base-10000 was clearly the best choice at this time.\n\nToday, since 64-bit architectures are dominant, NBASE=1e8 seems like it would\nhave been the best choice, since the square of that still fits in\na 64-bit signed int.\n\nChanging NBASE might seem impossible at first, due to the existing numeric data\non disk, and incompatibility issues when numeric data is transferred on the\nwire.\n\nHere are some ideas on how to work around some of these:\n\n- Incrementally changing the data on disk, e.g. upon UPDATE/INSERT\nand supporting both NBASE=1e4 (16-bit) and NBASE=1e8 (32-bit)\nwhen reading data.\n\n- Due to the lack of a version field in the NumericVar struct,\nwe need a way to detect if a Numeric value on disk uses\nthe existing NBASE=1e4, or NBASE=1e8.\nOne hack I've thought about is to exploit the fact that NUMERIC_NBYTES,\ndefined as:\n #define NUMERIC_NBYTES(num) (VARSIZE(num) - NUMERIC_HEADER_SIZE(num))\nwill always be divisible by two, since a NumericDigit is an int16 (2 bytes).\nThe idea is then to let \"NUMERIC_NBYTES divisible by three\"\nindicate NBASE=1e8, at the cost of one to three extra padding bytes.\n\nAnother important aspect is disk space utilization, which is of course better\nfor NBASE=1e4, since it packs the data more tightly.\nI think this is the main disadvantage of NBASE=1e8, but perhaps users would be\nwilling to sacrifice some disk, if they would get better run-time performance.\n\nAs said initially, this might be completely unrealistic,\nbut interested to hear if anyone else have had similar dreams.\n\nRegards,\nJoel\n\n\n", "msg_date": "Sun, 07 Jul 2024 22:39:46 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Thoughts on NBASE=100000000" }, { "msg_contents": "On Sun, 7 Jul 2024, 22:40 Joel Jacobson, <[email protected]> wrote:\n>\n> Hello hackers,\n>\n> I'm not hopeful this idea will be fruitful, but maybe we can find solutions\n> to the problems together.\n>\n> The idea is to increase the numeric NBASE from 1e4 to 1e8, which could possibly\n> give a significant performance boost of all operations across the board,\n> on 64-bit architectures, for many inputs.\n>\n> Last time numeric's base was changed was back in 2003, when d72f6c75038 changed\n> it from 10 to 10000. 
Back then, 32-bit architectures were still dominant,\n> so base-10000 was clearly the best choice at this time.\n>\n> Today, since 64-bit architectures are dominant, NBASE=1e8 seems like it would\n> have been the best choice, since the square of that still fits in\n> a 64-bit signed int.\n\nBack then 64-bit was by far not as dominant (server and consumer chips\nwith AMD64 ISA only got released that year after the commit), so I\ndon't think 1e8 would have been the best choice at that point in time.\nWould be better now, yes, but not back then.\n\n> Changing NBASE might seem impossible at first, due to the existing numeric data\n> on disk, and incompatibility issues when numeric data is transferred on the\n> wire.\n>\n> Here are some ideas on how to work around some of these:\n>\n> - Incrementally changing the data on disk, e.g. upon UPDATE/INSERT\n> and supporting both NBASE=1e4 (16-bit) and NBASE=1e8 (32-bit)\n> when reading data.\n\nI think that a dynamic decision would make more sense here. At low\nprecision, the overhead of 4+1 bytes vs 2 bytes is quite significant.\nThis sounds important for overall storage concerns, especially if the\npadding bytes (mentioned below) are added to indicate types.\n\n> - Due to the lack of a version field in the NumericVar struct,\n> we need a way to detect if a Numeric value on disk uses\n> the existing NBASE=1e4, or NBASE=1e8.\n> One hack I've thought about is to exploit the fact that NUMERIC_NBYTES,\n> defined as:\n> #define NUMERIC_NBYTES(num) (VARSIZE(num) - NUMERIC_HEADER_SIZE(num))\n> will always be divisible by two, since a NumericDigit is an int16 (2 bytes).\n> The idea is then to let \"NUMERIC_NBYTES divisible by three\"\n> indicate NBASE=1e8, at the cost of one to three extra padding bytes.\n\nDo you perhaps mean NUMERIC_NBYTES *not divisible by 2*, i.e. an\nuneven NUMERIC_NBYTES as indicator for NBASE=1e8, rather than only\nmultiples of 3? I'm asking because there are many integers divisible\nby both 2 and 3 (all integer multiples of 6; that's 50% of the\nmultiples of 3), so with the multiple-of-3 scheme we might need up to\n5 pad bytes to get to the next multiple of 3 that isn't also a\nmultiple of 2. 
Additionally, if the last digit woud've fit in\nNBASE_1e4, then the 1e8-based numeric value could even be 7 bytes\nlarger than the equivalent 1e4-based numeric.\n\nWhile I don't think this is worth implementing for general usage, it\ncould be worth exploring for the larger numeric values, where the\nrelative overhead of the larger representation is lower.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 8 Jul 2024 12:45:13 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Thoughts on NBASE=100000000" }, { "msg_contents": "On Mon, Jul 8, 2024, at 12:45, Matthias van de Meent wrote:\n> On Sun, 7 Jul 2024, 22:40 Joel Jacobson, <[email protected]> wrote:\n>> Today, since 64-bit architectures are dominant, NBASE=1e8 seems like it would\n>> have been the best choice, since the square of that still fits in\n>> a 64-bit signed int.\n>\n> Back then 64-bit was by far not as dominant (server and consumer chips\n> with AMD64 ISA only got released that year after the commit), so I\n> don't think 1e8 would have been the best choice at that point in time.\n> Would be better now, yes, but not back then.\n\nOh, grammar mistake by me!\nI meant to say it \"would be the best choice\", in line with what I wrote above:\n\n>> Last time numeric's base was changed was back in 2003, when d72f6c75038 changed\n>> it from 10 to 10000. Back then, 32-bit architectures were still dominant,\n>> so base-10000 was clearly the best choice at this time.\n\n>> Changing NBASE might seem impossible at first, due to the existing numeric data\n>> on disk, and incompatibility issues when numeric data is transferred on the\n>> wire.\n>>\n>> Here are some ideas on how to work around some of these:\n>>\n>> - Incrementally changing the data on disk, e.g. upon UPDATE/INSERT\n>> and supporting both NBASE=1e4 (16-bit) and NBASE=1e8 (32-bit)\n>> when reading data.\n>\n> I think that a dynamic decision would make more sense here. At low\n> precision, the overhead of 4+1 bytes vs 2 bytes is quite significant.\n> This sounds important for overall storage concerns, especially if the\n> padding bytes (mentioned below) are added to indicate types.\n\nRight, I agree.\n\nAnother idea: It seems possible to reduce the disk space for numerics\nthat fit into one byte, i.e. 0 <= val <= 255, which could be communicated\nvia NUMERIC_NBYTES=1.\nAt least the value 0 should be quite common.\n\n>> - Due to the lack of a version field in the NumericVar struct,\n>> we need a way to detect if a Numeric value on disk uses\n>> the existing NBASE=1e4, or NBASE=1e8.\n>> One hack I've thought about is to exploit the fact that NUMERIC_NBYTES,\n>> defined as:\n>> #define NUMERIC_NBYTES(num) (VARSIZE(num) - NUMERIC_HEADER_SIZE(num))\n>> will always be divisible by two, since a NumericDigit is an int16 (2 bytes).\n>> The idea is then to let \"NUMERIC_NBYTES divisible by three\"\n>> indicate NBASE=1e8, at the cost of one to three extra padding bytes.\n>\n> Do you perhaps mean NUMERIC_NBYTES *not divisible by 2*, i.e. an\n> uneven NUMERIC_NBYTES as indicator for NBASE=1e8, rather than only\n> multiples of 3?\n\nOh, yes of course! 
Thinko.\n\n> While I don't think this is worth implementing for general usage, it\n> could be worth exploring for the larger numeric values, where the\n> relative overhead of the larger representation is lower.\n\nYes, I agree it definitely seems like a win for larger numeric values.\nNot sure about smaller numeric values, maybe it's possible\nto improve upon.\n\nRegards,\nJoel\n\n\n", "msg_date": "Mon, 08 Jul 2024 13:42:51 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Thoughts on NBASE=100000000" } ]
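
To make the 64-bit argument in this thread concrete: with NBASE = 10^8 every digit is below 10^8, so the product of two digits is below 10^16 and fits comfortably in a 64-bit accumulator, just as base-10^4 digit products fit in 32 bits. The toy schoolbook multiplication below is a sketch only; the digit layout, the names and the inputs are made up for illustration and have nothing to do with the on-disk numeric format.

#include <stdio.h>
#include <stdint.h>

#define NBASE   100000000U   /* 10^8 */
#define NDIGITS 2            /* digits per input, most significant first */

int main(void)
{
    /* 12345678 * 10^8 + 90123456 = 1234567890123456 */
    uint32_t a[NDIGITS] = {12345678, 90123456};
    /* 87654321 * 10^8 +  9876543 = 8765432109876543 */
    uint32_t b[NDIGITS] = {87654321, 9876543};
    uint64_t acc[2 * NDIGITS] = {0};
    uint32_t res[2 * NDIGITS];
    uint64_t carry = 0;

    /* each partial product is < 10^16, so it fits easily in uint64 */
    for (int i = NDIGITS - 1; i >= 0; i--)
        for (int j = NDIGITS - 1; j >= 0; j--)
            acc[i + j + 1] += (uint64_t) a[i] * b[j];

    /* propagate carries from the least significant position upward */
    for (int k = 2 * NDIGITS - 1; k >= 0; k--)
    {
        uint64_t v = acc[k] + carry;

        res[k] = (uint32_t) (v % NBASE);
        carry = v / NBASE;
    }

    /* crude print: each base-10^8 digit zero-padded to 8 decimal digits */
    for (int k = 0; k < 2 * NDIGITS; k++)
        printf("%08u", (unsigned) res[k]);
    printf("\n");
    return 0;
}

The same loop written with NBASE = 10^4 only needs 32-bit partial sums, which is essentially the trade-off the 2003 change made for 32-bit hardware.
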
[ { "msg_contents": "NextCopyFrom, currently, it is:\n--------------\nif (cstate->defaults[m])\n{\n......\n}\n\n/*\n* If ON_ERROR is specified with IGNORE, skip rows with soft\n* errors\n*/\nelse if (!InputFunctionCallSafe(&in_functions[m],\nstring,\ntypioparams[m],\natt->atttypmod,\n(Node *) cstate->escontext,\n&values[m]))\n{\nAssert(cstate->opts.on_error != COPY_ON_ERROR_STOP);\n\n---------------\nshould it be:\n\nif (cstate->defaults[m])\n{\n....\n}\nelse if (!InputFunctionCallSafe(&in_functions[m],\nstring,\ntypioparams[m],\natt->atttypmod,\n(Node *) cstate->escontext,\n&values[m]))\n{\n/*\n* If ON_ERROR is specified with IGNORE, skip rows with soft\n* errors\n*/\nAssert(cstate->opts.on_error != COPY_ON_ERROR_STOP);\n\n?\n\n\nhttps://www.postgresql.org/docs/devel/source-format.html\ndidn't mention it, so I came to ask.\n(Personally, I think no empty line between the \"if\" and the \"else if\" makes it more\nreadable.)\n\n\n", "msg_date": "Mon, 8 Jul 2024 08:22:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "codeing Conventions in NextCopyFrom" } ]
[ { "msg_contents": "In [1], Andres reported a -Wuse-after-free bug in the\nATExecAttachPartition() function. I've created a patch to address it\nwith pointers from Amit offlist.\n\nThe issue was that the partBoundConstraint variable was utilized after\nthe list_concat() function. This could potentially lead to accessing\nthe partBoundConstraint variable after its memory has been freed.\n\nThe issue was resolved by using the return value of the list_concat()\nfunction, instead of using the list1 argument of list_concat(). I\ncopied the partBoundConstraint variable to a new variable named\npartConstraint and used it for the previous references before invoking\nget_proposed_default_constraint(). I confirmed that the\neval_const_expressions(), make_ands_explicit(),\nmap_partition_varattnos(), QueuePartitionConstraintValidation()\nfunctions do not modify the memory location pointed to by the\npartBoundConstraint variable. Therefore, it is safe to use it for the\nnext reference in get_proposed_default_constraint()\n\nAttaching the patch. Please review and share the comments if any.\nThanks to Andres for spotting the bug and some off-list advice on how\nto reproduce it.\n\n[1]: https://www.postgresql.org/message-id/flat/202311151802.ngj2la66jwgi%40alvherre.pgsql#4fc5622772ba0244c1ad203f5fc56701\n\nBest Regards,\nNitin Jadhav\nAzure Database for PostgreSQL\nMicrosoft", "msg_date": "Mon, 8 Jul 2024 12:51:16 +0530", "msg_from": "Nitin Jadhav <[email protected]>", "msg_from_op": true, "msg_subject": "Address the -Wuse-after-free warning in ATExecAttachPartition()" }, { "msg_contents": "On Mon, Jul 8, 2024 at 3:22 PM Nitin Jadhav\n<[email protected]> wrote:\n>\n> In [1], Andres reported a -Wuse-after-free bug in the\n> ATExecAttachPartition() function. I've created a patch to address it\n> with pointers from Amit offlist.\n>\n> The issue was that the partBoundConstraint variable was utilized after\n> the list_concat() function. This could potentially lead to accessing\n> the partBoundConstraint variable after its memory has been freed.\n>\n> The issue was resolved by using the return value of the list_concat()\n> function, instead of using the list1 argument of list_concat(). I\n> copied the partBoundConstraint variable to a new variable named\n> partConstraint and used it for the previous references before invoking\n> get_proposed_default_constraint(). I confirmed that the\n> eval_const_expressions(), make_ands_explicit(),\n> map_partition_varattnos(), QueuePartitionConstraintValidation()\n> functions do not modify the memory location pointed to by the\n> partBoundConstraint variable. Therefore, it is safe to use it for the\n> next reference in get_proposed_default_constraint()\n>\n> Attaching the patch. 
Please review and share the comments if any.\n> Thanks to Andres for spotting the bug and some off-list advice on how\n> to reproduce it.\n\nThe patch LGTM.\n\nCurious how to reproduce the bug ;)\n\n>\n> [1]: https://www.postgresql.org/message-id/flat/202311151802.ngj2la66jwgi%40alvherre.pgsql#4fc5622772ba0244c1ad203f5fc56701\n>\n> Best Regards,\n> Nitin Jadhav\n> Azure Database for PostgreSQL\n> Microsoft\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Mon, 8 Jul 2024 18:08:21 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Address the -Wuse-after-free warning in ATExecAttachPartition()" }, { "msg_contents": "Hi Junwang,\n\nOn Mon, Jul 8, 2024 at 7:08 PM Junwang Zhao <[email protected]> wrote:\n> On Mon, Jul 8, 2024 at 3:22 PM Nitin Jadhav\n> <[email protected]> wrote:\n> >\n> > In [1], Andres reported a -Wuse-after-free bug in the\n> > ATExecAttachPartition() function. I've created a patch to address it\n> > with pointers from Amit offlist.\n> >\n> > The issue was that the partBoundConstraint variable was utilized after\n> > the list_concat() function. This could potentially lead to accessing\n> > the partBoundConstraint variable after its memory has been freed.\n> >\n> > The issue was resolved by using the return value of the list_concat()\n> > function, instead of using the list1 argument of list_concat(). I\n> > copied the partBoundConstraint variable to a new variable named\n> > partConstraint and used it for the previous references before invoking\n> > get_proposed_default_constraint(). I confirmed that the\n> > eval_const_expressions(), make_ands_explicit(),\n> > map_partition_varattnos(), QueuePartitionConstraintValidation()\n> > functions do not modify the memory location pointed to by the\n> > partBoundConstraint variable. Therefore, it is safe to use it for the\n> > next reference in get_proposed_default_constraint()\n> >\n> > Attaching the patch. 
Please review and share the comments if any.\n> > Thanks to Andres for spotting the bug and some off-list advice on how\n> > to reproduce it.\n>\n> The patch LGTM.\n>\n> Curious how to reproduce the bug ;)\n\nDownload and apply (`git am`) Andres' patch to \"add allocator\nattributes\" here (note it's not merged into the tree yet!):\nhttps://github.com/anarazel/postgres/commit/99067d5c944e7bd29a8702689f470f623723f4e7.patch\n\nThen configure using meson with -Dc_args=\"-Wuse-after-free=3\"\n--buildtype=release\n\nCompiling with gcc-12 or gcc-13 should give you the warning that looks\nas follows:\n\n[713/2349] Compiling C object\nsrc/backend/postgres_lib.a.p/commands_tablecmds.c.o\n../src/backend/commands/tablecmds.c: In function ‘ATExecAttachPartition’:\n../src/backend/commands/tablecmds.c:18593:25: warning: pointer\n‘partBoundConstraint’ may be used after ‘list_concat’\n[-Wuse-after-free]\n18593 |\nget_proposed_default_constraint(partBoundConstraint);\n |\n^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n../src/backend/commands/tablecmds.c:18546:26: note: call to ‘list_concat’ here\n18546 | partConstraint = list_concat(partBoundConstraint,\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n18547 |\n RelationGetPartitionQual(rel));\n |\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Tue, 9 Jul 2024 19:18:28 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Address the -Wuse-after-free warning in ATExecAttachPartition()" }, { "msg_contents": "On Tue, Jul 9, 2024 at 6:18 PM Amit Langote <[email protected]> wrote:\n>\n> Hi Junwang,\n>\n> On Mon, Jul 8, 2024 at 7:08 PM Junwang Zhao <[email protected]> wrote:\n> > On Mon, Jul 8, 2024 at 3:22 PM Nitin Jadhav\n> > <[email protected]> wrote:\n> > >\n> > > In [1], Andres reported a -Wuse-after-free bug in the\n> > > ATExecAttachPartition() function. I've created a patch to address it\n> > > with pointers from Amit offlist.\n> > >\n> > > The issue was that the partBoundConstraint variable was utilized after\n> > > the list_concat() function. This could potentially lead to accessing\n> > > the partBoundConstraint variable after its memory has been freed.\n> > >\n> > > The issue was resolved by using the return value of the list_concat()\n> > > function, instead of using the list1 argument of list_concat(). I\n> > > copied the partBoundConstraint variable to a new variable named\n> > > partConstraint and used it for the previous references before invoking\n> > > get_proposed_default_constraint(). I confirmed that the\n> > > eval_const_expressions(), make_ands_explicit(),\n> > > map_partition_varattnos(), QueuePartitionConstraintValidation()\n> > > functions do not modify the memory location pointed to by the\n> > > partBoundConstraint variable. Therefore, it is safe to use it for the\n> > > next reference in get_proposed_default_constraint()\n> > >\n> > > Attaching the patch. 
Please review and share the comments if any.\n> > > Thanks to Andres for spotting the bug and some off-list advice on how\n> > > to reproduce it.\n> >\n> > The patch LGTM.\n> >\n> > Curious how to reproduce the bug ;)\n>\n> Download and apply (`git am`) Andres' patch to \"add allocator\n> attributes\" here (note it's not merged into the tree yet!):\n> https://github.com/anarazel/postgres/commit/99067d5c944e7bd29a8702689f470f623723f4e7.patch\n>\n> Then configure using meson with -Dc_args=\"-Wuse-after-free=3\"\n> --buildtype=release\n>\n> Compiling with gcc-12 or gcc-13 should give you the warning that looks\n> as follows:\n>\n> [713/2349] Compiling C object\n> src/backend/postgres_lib.a.p/commands_tablecmds.c.o\n> ../src/backend/commands/tablecmds.c: In function ‘ATExecAttachPartition’:\n> ../src/backend/commands/tablecmds.c:18593:25: warning: pointer\n> ‘partBoundConstraint’ may be used after ‘list_concat’\n> [-Wuse-after-free]\n> 18593 |\n> get_proposed_default_constraint(partBoundConstraint);\n> |\n> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> ../src/backend/commands/tablecmds.c:18546:26: note: call to ‘list_concat’ here\n> 18546 | partConstraint = list_concat(partBoundConstraint,\n> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> 18547 |\n> RelationGetPartitionQual(rel));\n> |\n> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n>\n> --\n> Thanks, Amit Langote\n\nThanks Amit,\n\nGood to know.\n\nWhen Nitin says:\n\n```Thanks to Andres for spotting the bug and some off-list advice on how\nto reproduce it.```\n\nI thought maybe there is some test case that can really trigger the\nuse-after-free bug, I might get it wrong though ;)\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Tue, 9 Jul 2024 18:57:48 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Address the -Wuse-after-free warning in ATExecAttachPartition()" }, { "msg_contents": "On Tue, Jul 9, 2024 at 19:58 Junwang Zhao <[email protected]> wrote:\n\n> On Tue, Jul 9, 2024 at 6:18 PM Amit Langote <[email protected]>\n> wrote:\n> >\n> > Hi Junwang,\n> >\n> > On Mon, Jul 8, 2024 at 7:08 PM Junwang Zhao <[email protected]> wrote:\n> > > On Mon, Jul 8, 2024 at 3:22 PM Nitin Jadhav\n> > > <[email protected]> wrote:\n> > > >\n> > > > In [1], Andres reported a -Wuse-after-free bug in the\n> > > > ATExecAttachPartition() function. I've created a patch to address it\n> > > > with pointers from Amit offlist.\n> > > >\n> > > > The issue was that the partBoundConstraint variable was utilized\n> after\n> > > > the list_concat() function. This could potentially lead to accessing\n> > > > the partBoundConstraint variable after its memory has been freed.\n> > > >\n> > > > The issue was resolved by using the return value of the list_concat()\n> > > > function, instead of using the list1 argument of list_concat(). I\n> > > > copied the partBoundConstraint variable to a new variable named\n> > > > partConstraint and used it for the previous references before\n> invoking\n> > > > get_proposed_default_constraint(). I confirmed that the\n> > > > eval_const_expressions(), make_ands_explicit(),\n> > > > map_partition_varattnos(), QueuePartitionConstraintValidation()\n> > > > functions do not modify the memory location pointed to by the\n> > > > partBoundConstraint variable. Therefore, it is safe to use it for the\n> > > > next reference in get_proposed_default_constraint()\n> > > >\n> > > > Attaching the patch. 
Please review and share the comments if any.\n> > > > Thanks to Andres for spotting the bug and some off-list advice on how\n> > > > to reproduce it.\n> > >\n> > > The patch LGTM.\n> > >\n> > > Curious how to reproduce the bug ;)\n> >\n> > Download and apply (`git am`) Andres' patch to \"add allocator\n> > attributes\" here (note it's not merged into the tree yet!):\n> >\n> https://github.com/anarazel/postgres/commit/99067d5c944e7bd29a8702689f470f623723f4e7.patch\n> >\n> > Then configure using meson with -Dc_args=\"-Wuse-after-free=3\"\n> > --buildtype=release\n> >\n> > Compiling with gcc-12 or gcc-13 should give you the warning that looks\n> > as follows:\n> >\n> > [713/2349] Compiling C object\n> > src/backend/postgres_lib.a.p/commands_tablecmds.c.o\n> > ../src/backend/commands/tablecmds.c: In function ‘ATExecAttachPartition’:\n> > ../src/backend/commands/tablecmds.c:18593:25: warning: pointer\n> > ‘partBoundConstraint’ may be used after ‘list_concat’\n> > [-Wuse-after-free]\n> > 18593 |\n> > get_proposed_default_constraint(partBoundConstraint);\n> > |\n> > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> > ../src/backend/commands/tablecmds.c:18546:26: note: call to\n> ‘list_concat’ here\n> > 18546 | partConstraint = list_concat(partBoundConstraint,\n> > | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> > 18547 |\n> > RelationGetPartitionQual(rel));\n> > |\n> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> >\n> > --\n> > Thanks, Amit Langote\n>\n> Thanks Amit,\n>\n> Good to know.\n>\n> When Nitin says:\n>\n> ```Thanks to Andres for spotting the bug and some off-list advice on how\n> to reproduce it.```\n>\n> I thought maybe there is some test case that can really trigger the\n> use-after-free bug, I might get it wrong though ;)\n\n\nI doubt one could write a regression test to demonstrate the use-after-free\nbug, though I may of course be wrong. By “reproduce it”, I think Nitin\nmeant the warning that suggests that use-after-free bug may occur.\n\n>\n\nOn Tue, Jul 9, 2024 at 19:58 Junwang Zhao <[email protected]> wrote:On Tue, Jul 9, 2024 at 6:18 PM Amit Langote <[email protected]> wrote:\n>\n> Hi Junwang,\n>\n> On Mon, Jul 8, 2024 at 7:08 PM Junwang Zhao <[email protected]> wrote:\n> > On Mon, Jul 8, 2024 at 3:22 PM Nitin Jadhav\n> > <[email protected]> wrote:\n> > >\n> > > In [1], Andres reported a -Wuse-after-free bug in the\n> > > ATExecAttachPartition() function.  I've created a patch to address it\n> > > with pointers from Amit offlist.\n> > >\n> > > The issue was that the partBoundConstraint variable was utilized after\n> > > the list_concat() function. This could potentially lead to accessing\n> > > the partBoundConstraint variable after its memory has been freed.\n> > >\n> > > The issue was resolved by using the return value of the list_concat()\n> > > function, instead of using the list1 argument of list_concat(). I\n> > > copied the partBoundConstraint variable to a new variable named\n> > > partConstraint and used it for the previous references before invoking\n> > > get_proposed_default_constraint(). I confirmed that the\n> > > eval_const_expressions(), make_ands_explicit(),\n> > > map_partition_varattnos(), QueuePartitionConstraintValidation()\n> > > functions do not modify the memory location pointed to by the\n> > > partBoundConstraint variable. Therefore, it is safe to use it for the\n> > > next reference in get_proposed_default_constraint()\n> > >\n> > > Attaching the patch. 
Please review and share the comments if any.\n> > > Thanks to Andres for spotting the bug and some off-list advice on how\n> > > to reproduce it.\n> >\n> > The patch LGTM.\n> >\n> > Curious how to reproduce the bug ;)\n>\n> Download and apply (`git am`) Andres' patch to \"add allocator\n> attributes\" here (note it's not merged into the tree yet!):\n> https://github.com/anarazel/postgres/commit/99067d5c944e7bd29a8702689f470f623723f4e7.patch\n>\n> Then configure using meson with -Dc_args=\"-Wuse-after-free=3\"\n> --buildtype=release\n>\n> Compiling with gcc-12 or gcc-13 should give you the warning that looks\n> as follows:\n>\n> [713/2349] Compiling C object\n> src/backend/postgres_lib.a.p/commands_tablecmds.c.o\n> ../src/backend/commands/tablecmds.c: In function ‘ATExecAttachPartition’:\n> ../src/backend/commands/tablecmds.c:18593:25: warning: pointer\n> ‘partBoundConstraint’ may be used after ‘list_concat’\n> [-Wuse-after-free]\n> 18593 |\n> get_proposed_default_constraint(partBoundConstraint);\n>       |\n> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> ../src/backend/commands/tablecmds.c:18546:26: note: call to ‘list_concat’ here\n> 18546 |         partConstraint = list_concat(partBoundConstraint,\n>       |                          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> 18547 |\n>   RelationGetPartitionQual(rel));\n>       |\n>   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n>\n> --\n> Thanks, Amit Langote\n\nThanks Amit,\n\nGood to know.\n\nWhen Nitin says:\n\n```Thanks to Andres for spotting the bug and some off-list advice on how\nto reproduce it.```\n\nI thought maybe there is some test case that can really trigger the\nuse-after-free bug, I might get it wrong though ;)I doubt one could write a regression test to demonstrate the use-after-free bug, though I may of course be wrong. By “reproduce it”, I think Nitin meant the warning that suggests that use-after-free bug may occur.", "msg_date": "Tue, 9 Jul 2024 20:39:31 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Address the -Wuse-after-free warning in ATExecAttachPartition()" }, { "msg_contents": "On Tue, Jul 9, 2024 at 7:39 PM Amit Langote <[email protected]> wrote:\n>\n> On Tue, Jul 9, 2024 at 19:58 Junwang Zhao <[email protected]> wrote:\n>>\n>> On Tue, Jul 9, 2024 at 6:18 PM Amit Langote <[email protected]> wrote:\n>> >\n>> > Hi Junwang,\n>> >\n>> > On Mon, Jul 8, 2024 at 7:08 PM Junwang Zhao <[email protected]> wrote:\n>> > > On Mon, Jul 8, 2024 at 3:22 PM Nitin Jadhav\n>> > > <[email protected]> wrote:\n>> > > >\n>> > > > In [1], Andres reported a -Wuse-after-free bug in the\n>> > > > ATExecAttachPartition() function. I've created a patch to address it\n>> > > > with pointers from Amit offlist.\n>> > > >\n>> > > > The issue was that the partBoundConstraint variable was utilized after\n>> > > > the list_concat() function. This could potentially lead to accessing\n>> > > > the partBoundConstraint variable after its memory has been freed.\n>> > > >\n>> > > > The issue was resolved by using the return value of the list_concat()\n>> > > > function, instead of using the list1 argument of list_concat(). I\n>> > > > copied the partBoundConstraint variable to a new variable named\n>> > > > partConstraint and used it for the previous references before invoking\n>> > > > get_proposed_default_constraint(). 
I confirmed that the\n>> > > > eval_const_expressions(), make_ands_explicit(),\n>> > > > map_partition_varattnos(), QueuePartitionConstraintValidation()\n>> > > > functions do not modify the memory location pointed to by the\n>> > > > partBoundConstraint variable. Therefore, it is safe to use it for the\n>> > > > next reference in get_proposed_default_constraint()\n>> > > >\n>> > > > Attaching the patch. Please review and share the comments if any.\n>> > > > Thanks to Andres for spotting the bug and some off-list advice on how\n>> > > > to reproduce it.\n>> > >\n>> > > The patch LGTM.\n>> > >\n>> > > Curious how to reproduce the bug ;)\n>> >\n>> > Download and apply (`git am`) Andres' patch to \"add allocator\n>> > attributes\" here (note it's not merged into the tree yet!):\n>> > https://github.com/anarazel/postgres/commit/99067d5c944e7bd29a8702689f470f623723f4e7.patch\n>> >\n>> > Then configure using meson with -Dc_args=\"-Wuse-after-free=3\"\n>> > --buildtype=release\n>> >\n>> > Compiling with gcc-12 or gcc-13 should give you the warning that looks\n>> > as follows:\n>> >\n>> > [713/2349] Compiling C object\n>> > src/backend/postgres_lib.a.p/commands_tablecmds.c.o\n>> > ../src/backend/commands/tablecmds.c: In function ‘ATExecAttachPartition’:\n>> > ../src/backend/commands/tablecmds.c:18593:25: warning: pointer\n>> > ‘partBoundConstraint’ may be used after ‘list_concat’\n>> > [-Wuse-after-free]\n>> > 18593 |\n>> > get_proposed_default_constraint(partBoundConstraint);\n>> > |\n>> > ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n>> > ../src/backend/commands/tablecmds.c:18546:26: note: call to ‘list_concat’ here\n>> > 18546 | partConstraint = list_concat(partBoundConstraint,\n>> > | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n>> > 18547 |\n>> > RelationGetPartitionQual(rel));\n>> > |\n>> > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n>> >\n>> > --\n>> > Thanks, Amit Langote\n>>\n>> Thanks Amit,\n>>\n>> Good to know.\n>>\n>> When Nitin says:\n>>\n>> ```Thanks to Andres for spotting the bug and some off-list advice on how\n>> to reproduce it.```\n>>\n>> I thought maybe there is some test case that can really trigger the\n>> use-after-free bug, I might get it wrong though ;)\n>\n>\n> I doubt one could write a regression test to demonstrate the use-after-free bug, though I may of course be wrong. By “reproduce it”, I think Nitin meant the warning that suggests that use-after-free bug may occur.\n\ngot it, thanks~\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Tue, 9 Jul 2024 20:48:48 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Address the -Wuse-after-free warning in ATExecAttachPartition()" } ]
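
For readers who have not hit this class of warning before, the pattern behind it is generic to any function that may move its first argument with realloc() (or repalloc()) and returns the new location. The sketch below is plain C rather than the PostgreSQL List API; IntList, intlist_make() and intlist_concat() are invented stand-ins, and error checking is omitted. It mirrors the situation in the thread: keep using the pointer you passed in and you may read freed memory; keep the function's return value, as the committed fix does with list_concat(), and the access is safe.

#include <stdio.h>
#include <stdlib.h>

typedef struct IntList
{
    int     len;
    int     cap;
    int     vals[];     /* flexible array member */
} IntList;

static IntList *
intlist_make(int n)
{
    IntList *l = malloc(sizeof(IntList) + n * sizeof(int));

    l->len = l->cap = n;
    for (int i = 0; i < n; i++)
        l->vals[i] = i;
    return l;
}

/*
 * Append src to dst.  May move dst via realloc(), so callers must use the
 * return value and stop touching the pointer they passed in.
 */
static IntList *
intlist_concat(IntList *dst, const IntList *src)
{
    if (dst->len + src->len > dst->cap)
    {
        int newcap = (dst->len + src->len) * 2;

        dst = realloc(dst, sizeof(IntList) + newcap * sizeof(int));
        dst->cap = newcap;
    }
    for (int i = 0; i < src->len; i++)
        dst->vals[dst->len++] = src->vals[i];
    return dst;
}

int main(void)
{
    IntList *a = intlist_make(3);
    IntList *b = intlist_make(5);

    /*
     * Risky pattern (the spirit of what -Wuse-after-free flags):
     *     intlist_concat(a, b);
     *     printf("%d\n", a->vals[0]);    <- "a" may already have been freed
     *
     * Safe pattern (what the fix does): keep the return value.
     */
    IntList *merged = intlist_concat(a, b);

    printf("merged length = %d, first value = %d\n", merged->len, merged->vals[0]);

    /* note: "a" must not be freed here; it now lives on only via "merged" */
    free(merged);
    free(b);
    return 0;
}
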
[ { "msg_contents": "Not 100% sure, sorry if this doesn't make sense.\n\n--- a/src/common/jsonapi.c\n+++ b/src/common/jsonapi.c\n@@ -514,7 +514,7 @@ freeJsonLexContext(JsonLexContext *lex)\n *\n * If FORCE_JSON_PSTACK is defined then the routine will call the non-recursive\n * JSON parser. This is a useful way to validate that it's doing the right\n- * think at least for non-incremental cases. If this is on we expect to see\n+ * thing at least for non-incremental cases. If this is on we expect to see\n * regression diffs relating to error messages about stack depth, but no\n * other differences.\n */\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Mon, 8 Jul 2024 16:24:49 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": true, "msg_subject": "a potential typo in comments of pg_parse_json" }, { "msg_contents": "On Mon, Jul 8, 2024 at 5:25 PM Junwang Zhao <[email protected]> wrote:\n> Not 100% sure, sorry if this doesn't make sense.\n>\n> --- a/src/common/jsonapi.c\n> +++ b/src/common/jsonapi.c\n> @@ -514,7 +514,7 @@ freeJsonLexContext(JsonLexContext *lex)\n> *\n> * If FORCE_JSON_PSTACK is defined then the routine will call the non-recursive\n> * JSON parser. This is a useful way to validate that it's doing the right\n> - * think at least for non-incremental cases. If this is on we expect to see\n> + * thing at least for non-incremental cases. If this is on we expect to see\n> * regression diffs relating to error messages about stack depth, but no\n> * other differences.\n> */\n\nGood catch. Fixed.\n\n\n--\nThanks, Amit Langote\n\n\n", "msg_date": "Mon, 8 Jul 2024 22:16:33 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: a potential typo in comments of pg_parse_json" } ]
[ { "msg_contents": "while reviewing the json query doc,\nI found out the following error message was not quite right.\n\nselect '[1,2]'::int[];\nERROR: malformed array literal: \"[1,2]\"\nLINE 1: select '[1,2]'::int[];\n ^\nDETAIL: Missing \"]\" after array dimensions.\nshould it be:\n\"Missing delimiter \":\" while specifying array dimensions.\"\n?\nBecause here, the first problem is the comma should be colon.\n\n\nalso\n\"DETAIL: Missing \"]\" after array dimensions.\"\nshould be\nDETAIL: Missing \"]\" while specifying array dimensions.\n?\n\n\n", "msg_date": "Mon, 8 Jul 2024 18:37:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "array_in sub function ReadArrayDimensions error message" }, { "msg_contents": "jian he <[email protected]> writes:\n> while reviewing the json query doc,\n> I found out the following error message was not quite right.\n\n> select '[1,2]'::int[];\n> ERROR: malformed array literal: \"[1,2]\"\n> LINE 1: select '[1,2]'::int[];\n> ^\n> DETAIL: Missing \"]\" after array dimensions.\n\n> should it be:\n> \"Missing delimiter \":\" while specifying array dimensions.\"\n\nThat's presuming quite a lot about the nature of the error.\nAll the code knows is that what follows the \"1\" should be\neither \":\" or \"]\", and when it sees \",\" instead it throws\nthis error. I agree the existing message isn't great, but\ntrying to be more specific could confuse people even more\nif the more-specific message doesn't apply either.\n\nOne possibility could be\n\n if (*p != ']')\n ereturn(escontext, false,\n (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n errmsg(\"malformed array literal: \\\"%s\\\"\", origStr),\n+ (strchr(p, ']') != NULL) ?\n+ errdetail(\"Array dimensions have invalid syntax.\") :\n errdetail(\"Missing \\\"%s\\\" after array dimensions.\",\n \"]\")));\n\nthat is, only say \"Missing \"]\"\" if there's no ']' anywhere, and\notherwise just say the dimensions are wrong. This could be fooled\nby a ']' that's part of some string in the data, but even then the\nerrdetail isn't wrong.\n\nOr we could just say \"Array dimensions have invalid syntax.\"\nunconditionally.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Jul 2024 10:42:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: array_in sub function ReadArrayDimensions error message" }, { "msg_contents": "On Mon, Jul 8, 2024 at 10:42 PM Tom Lane <[email protected]> wrote:\n>\n> jian he <[email protected]> writes:\n> > while reviewing the json query doc,\n> > I found out the following error message was not quite right.\n>\n> > select '[1,2]'::int[];\n> > ERROR: malformed array literal: \"[1,2]\"\n> > LINE 1: select '[1,2]'::int[];\n> > ^\n> > DETAIL: Missing \"]\" after array dimensions.\n>\n> > should it be:\n> > \"Missing delimiter \":\" while specifying array dimensions.\"\n>\n> That's presuming quite a lot about the nature of the error.\n> All the code knows is that what follows the \"1\" should be\n> either \":\" or \"]\", and when it sees \",\" instead it throws\n> this error. 
I agree the existing message isn't great, but\n> trying to be more specific could confuse people even more\n> if the more-specific message doesn't apply either.\n>\n> One possibility could be\n>\n> if (*p != ']')\n> ereturn(escontext, false,\n> (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n> errmsg(\"malformed array literal: \\\"%s\\\"\", origStr),\n> + (strchr(p, ']') != NULL) ?\n> + errdetail(\"Array dimensions have invalid syntax.\") :\n> errdetail(\"Missing \\\"%s\\\" after array dimensions.\",\n> \"]\")));\n>\n> that is, only say \"Missing \"]\"\" if there's no ']' anywhere, and\n> otherwise just say the dimensions are wrong. This could be fooled\n> by a ']' that's part of some string in the data, but even then the\n> errdetail isn't wrong.\n>\n> Or we could just say \"Array dimensions have invalid syntax.\"\n> unconditionally.\n>\n> regards, tom lane\n\n\nwe can\nif (*p == ':')\n{\n....\n\nif (*p != ']')\nereturn(escontext, false,\n(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\nerrmsg(\"malformed array literal: \\\"%s\\\"\", origStr),\nerrdetail(\"Missing \\\"%s\\\" after array dimensions.\",\n\"]\")));\n}\nelse\n{\nif (*p != ']')\nereturn(escontext, false,\n(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\nerrmsg(\"malformed array literal: \\\"%s\\\"\", origStr),\nerrdetail(\"Missing delimiter \\\"%s\\\" while specifying array dimensions.\",\n\":\")));\n}\n\n\ndifferent error message in IF ELSE blocks.\nplease check attached.", "msg_date": "Tue, 9 Jul 2024 09:53:17 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: array_in sub function ReadArrayDimensions error message" }, { "msg_contents": "jian he <[email protected]> writes:\n> On Mon, Jul 8, 2024 at 10:42 PM Tom Lane <[email protected]> wrote:\n>> One possibility could be\n>> ...\n>> that is, only say \"Missing \"]\"\" if there's no ']' anywhere, and\n>> otherwise just say the dimensions are wrong. This could be fooled\n>> by a ']' that's part of some string in the data, but even then the\n>> errdetail isn't wrong.\n>> \n>> Or we could just say \"Array dimensions have invalid syntax.\"\n>> unconditionally.\n\n> please check attached.\n\nMeh. This just replaces one possibly-off-point error message\nwith a different one that will be off-point in a different set\nof cases. I think we should simply reduce the specificity of\nthe message.\n\nBTW, I think we have pretty much the same issue with respect\nto the check for \"=\" later on. 
For instance, if someone\nis confused about whether to insert commas:\n\n=# select '[1:1],[1:2]={{3,4}}'::int[];\nERROR: malformed array literal: \"[1:1],[1:2]={{3,4}}\"\nLINE 1: select '[1:1],[1:2]={{3,4}}'::int[];\n ^\nDETAIL: Missing \"=\" after array dimensions.\n\nHere again, the problem is not a missing \"=\", it's invalid\nsyntax somewhere before that.\n\nAnother thing we could consider doing here (and similarly\nfor your original case) is\n\nDETAIL: Expected \"=\" not \",\" after array dimensions.\n\nThe advantage of this is that it provides a little more\nclarity as to exactly where things went wrong.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jul 2024 11:31:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: array_in sub function ReadArrayDimensions error message" }, { "msg_contents": "On Tue, Jul 9, 2024 at 8:31 AM Tom Lane <[email protected]> wrote:\n\n>\n> Here again, the problem is not a missing \"=\", it's invalid\n> syntax somewhere before that.\n>\n> Another thing we could consider doing here (and similarly\n> for your original case) is\n>\n> DETAIL: Expected \"=\" not \",\" after array dimensions.\n>\n> The advantage of this is that it provides a little more\n> clarity as to exactly where things went wrong.\n>\n>\nOne possibility all this ignores is that what we are calling\narray-dimensions are in reality most likely a user using json array syntax\nin our SQL arrays. That seems eminently more likely than someone\nmis-typing this niche incantation of building an array literal nowadays.\nI'd add a hint if the first symbol is [ and we fail to get to the point of\nactually seeing the equal sign or the first subsequent unquoted symbol is a\ncomma instead of a colon.\n\nDavid J.\n\nOn Tue, Jul 9, 2024 at 8:31 AM Tom Lane <[email protected]> wrote:\nHere again, the problem is not a missing \"=\", it's invalid\nsyntax somewhere before that.\n\nAnother thing we could consider doing here (and similarly\nfor your original case) is\n\nDETAIL:  Expected \"=\" not \",\" after array dimensions.\n\nThe advantage of this is that it provides a little more\nclarity as to exactly where things went wrong.One possibility all this ignores is that what we are calling array-dimensions are in reality most likely a user using json array syntax in our SQL arrays.  That seems eminently more likely than someone mis-typing this niche incantation of building an array literal nowadays.  I'd add a hint if the first symbol is [ and we fail to get to the point of actually seeing the equal sign or the first subsequent unquoted symbol is a comma instead of a colon.David J.", "msg_date": "Tue, 9 Jul 2024 08:40:08 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: array_in sub function ReadArrayDimensions error message" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> One possibility all this ignores is that what we are calling\n> array-dimensions are in reality most likely a user using json array syntax\n> in our SQL arrays. 
That seems eminently more likely than someone\n> mis-typing this niche incantation of building an array literal nowadays.\n\nYeah, that's a good point.\n\n> I'd add a hint if the first symbol is [ and we fail to get to the point of\n> actually seeing the equal sign or the first subsequent unquoted symbol is a\n> comma instead of a colon.\n\nThat seems closely related to my suggestion of applying strchr() to\nsee if the character we are expecting to see actually appears\nanywhere. Or did you have something else in mind? Please be\nspecific, both about the test you are thinking of and how the\nmessage should be worded.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jul 2024 11:59:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: array_in sub function ReadArrayDimensions error message" }, { "msg_contents": "On Tue, Jul 9, 2024 at 8:59 AM Tom Lane <[email protected]> wrote:\n\n> \"David G. Johnston\" <[email protected]> writes:\n>\n> > I'd add a hint if the first symbol is [ and we fail to get to the point\n> of\n> > actually seeing the equal sign or the first subsequent unquoted symbol\n> is a\n> > comma instead of a colon.\n>\n> That seems closely related to my suggestion of applying strchr() to\n> see if the character we are expecting to see actually appears\n> anywhere. Or did you have something else in mind? Please be\n> specific, both about the test you are thinking of and how the\n> message should be worded.\n>\n>\nPseudo-code, the syntax for adding a conditional errhint I didn't try to\nlearn along with figuring out the logic. Also not totally sure on\nthe ReadDimensionInt behavior interplay here.\n\nIn short, the ambiguous first dimension [n] case is cleared by seeing\neither a second dimension or the equal sign separator. (ToDo: see how\nend-of-string get resolved)\n\nThe [n:m] first dimension case clears as soon as the colon is seen.\n\nThe normal, undimensioned case clears once the first non-blank character is\nnot a [\n\nIf we error before clearing we assume the query author provided an SQL\narray using json syntax and point out that the delimiters for SQL arrays\nare {}.\nWe also assume, in the single bounds case, that a too-large upper-bound\nvalue means they did intend to supply a single number in a json array\nformat. We would need to modify these tests so they occur after checking\nwhether the next part of the string is [ or = but, in the [ case, before\nprocessing the next dimension.\n\ndiff --git a/src/backend/utils/adt/arrayfuncs.c\nb/src/backend/utils/adt/arrayfuncs.c\nindex d6641b570d..0ac1eabba1 100644\n--- a/src/backend/utils/adt/arrayfuncs.c\n+++ b/src/backend/utils/adt/arrayfuncs.c\n@@ -404,7 +404,7 @@ ReadArrayDimensions(char **srcptr, int *ndim_p, int\n*dim, int *lBound,\n {\n char *p = *srcptr;\n int ndim;\n-\n+ bool couldBeJson = true;\n /*\n * Dimension info takes the form of one or more [n] or [m:n]\nitems. 
This\n * loop iterates once per dimension item.\n@@ -422,8 +422,19 @@ ReadArrayDimensions(char **srcptr, int *ndim_p, int\n*dim, int *lBound,\n */\n while (scanner_isspace(*p))\n p++;\n+ if (*p == '=') {\n+ //if we have an equals sign we are not dealing with\na json array\n+ couldBeJson = false;\n+ break; // and we are at the end of the bounds\nspecification for the SQL array literal we do have\n+ }\n if (*p != '[')\n- break; /* no more\ndimension items */\n+ {\n+ couldBeJson = false; // json arrays will start with\n[\n+ break;\n+ // on subsequent passes we better have either this\nor an equal sign and the later is covered above\n+ } /* no (more?) dimension items */\n+ if (ndim > 0)\n+ couldBeJson = false; //multi-dimensional arrays\nspecs are invalid json arrays\n p++;\n if (ndim >= MAXDIM)\n ereturn(escontext, false,\n@@ -438,11 +449,18 @@ ReadArrayDimensions(char **srcptr, int *ndim_p, int\n*dim, int *lBound,\n ereturn(escontext, false,\n\n(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n errmsg(\"malformed array literal:\n\\\"%s\\\"\", origStr),\n- errdetail(\"\\\"[\\\" must introduce\nexplicitly-specified array dimensions.\")));\n+ errdetail(\"\\\"[\\\" must introduce\nexplicitly-specified array dimensions.\")),\n+ //if it isn't a digit and we might\nstill have a json array its a good bet it is one\n+ //with non-numeric content\n+ couldBeJson ? errhint(\"SQL array\nvaLues are delimited by {}\" : null));\n\n if (*p == ':')\n {\n /* [m:n] format */\n+ //if we have numbers as the first entry, the\npresence of a colon,\n+ //which is not a valid json separator, in a number\nliteral or an array,\n+ //means we have a bounds specification in an SQL\narray\n+ couldBeJson = false;\n lBound[ndim] = i;\n p++;\n q = p;\n@@ -454,18 +472,22 @@ ReadArrayDimensions(char **srcptr, int *ndim_p, int\n*dim, int *lBound,\n errmsg(\"malformed array\nliteral: \\\"%s\\\"\", origStr),\n errdetail(\"Missing array\ndimension value.\")));\n }\n- else\n+ else if (*p == ']')\n {\n /* [n] format */\n+ //single digit doesn't disprove json with single\nnumber element\n lBound[ndim] = 1;\n ub = i;\n }\n- if (*p != ']')\n+ else\n+ {\n ereturn(escontext, false,\n\n(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n errmsg(\"malformed array literal:\n\\\"%s\\\"\", origStr),\n errdetail(\"Missing \\\"%s\\\" after\narray dimensions.\",\n- \"]\")));\n+ \"]\")),\n+ couldBeJson ? errhint(\"SQL array\nvalues are delimited by {}\" : null));\n+ }\n p++;\n\n /*\n@@ -475,29 +497,37 @@ ReadArrayDimensions(char **srcptr, int *ndim_p, int\n*dim, int *lBound,\n * would be equivalent. Given the lack of field demand,\nthere seems\n * little point in allowing such cases.\n */\n+ //not possible in the single bound case so cannot be json\n if (ub < lBound[ndim])\n ereturn(escontext, false,\n\n(errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR),\n errmsg(\"upper bound cannot be less\nthan lower bound\")));\n\n /* Upper bound of INT_MAX must be disallowed, cf\nArrayCheckBounds() */\n+ // the singular json number may very well be larger than an\ninteger...\n if (ub == INT_MAX)\n ereturn(escontext, false,\n\n(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n- errmsg(\"array upper bound is too\nlarge: %d\", ub)));\n+ errmsg(\"array upper bound is too\nlarge: %d\", ub),\n+ couldBeJson ? 
errhint(\"SQL array\nvalues are delimited by {}\" : null)));\n\n /* Compute \"ub - lBound[ndim] + 1\", detecting overflow */\n+ //a whatever this threshold is a random number passed in a\njson array likely will exceed it\n if (pg_sub_s32_overflow(ub, lBound[ndim], &ub) ||\n pg_add_s32_overflow(ub, 1, &ub))\n ereturn(escontext, false,\n\n(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n errmsg(\"array size exceeds the\nmaximum allowed (%d)\",\n- (int)\nMaxArraySize)));\n+ (int) MaxArraySize,\n+ couldBeJson ? errhint(\"SQL array\nvalues are delimited by {}\" : null))));\n\n dim[ndim] = ub;\n ndim++;\n }\n\n+ if (couldBeJson == true)\n+ assert('not reachable, need to disprove json array literal\nprior to returning');\n+\n *srcptr = p;\n *ndim_p = ndim;\n return true;\n\n\nDavid J.\n\nOn Tue, Jul 9, 2024 at 8:59 AM Tom Lane <[email protected]> wrote:\"David G. Johnston\" <[email protected]> writes:\r\n> I'd add a hint if the first symbol is [ and we fail to get to the point of\r\n> actually seeing the equal sign or the first subsequent unquoted symbol is a\r\n> comma instead of a colon.\n\r\nThat seems closely related to my suggestion of applying strchr() to\r\nsee if the character we are expecting to see actually appears\r\nanywhere.  Or did you have something else in mind?  Please be\r\nspecific, both about the test you are thinking of and how the\r\nmessage should be worded.Pseudo-code, the syntax for adding a conditional errhint I didn't try to learn along with figuring out the logic.  Also not totally sure on the ReadDimensionInt behavior interplay here.In short, the ambiguous first dimension [n] case is cleared by seeing either a second dimension or the equal sign separator. (ToDo: see how end-of-string get resolved)The [n:m] first dimension case clears as soon as the colon is seen.The normal, undimensioned case clears once the first non-blank character is not a [If we error before clearing we assume the query author provided an SQL array using json syntax and point out that the delimiters for SQL arrays are {}.We also assume, in the single bounds case, that a too-large upper-bound value means they did intend to supply a single number in a json array format.  We would need to modify these tests so they occur after checking whether the next part of the string is [ or = but, in the [ case, before processing the next dimension.diff --git a/src/backend/utils/adt/arrayfuncs.c b/src/backend/utils/adt/arrayfuncs.cindex d6641b570d..0ac1eabba1 100644--- a/src/backend/utils/adt/arrayfuncs.c+++ b/src/backend/utils/adt/arrayfuncs.c@@ -404,7 +404,7 @@ ReadArrayDimensions(char **srcptr, int *ndim_p, int *dim, int *lBound, {        char       *p = *srcptr;        int                     ndim;-+    bool couldBeJson = true;        /*         * Dimension info takes the form of one or more [n] or [m:n] items.  
This         * loop iterates once per dimension item.@@ -422,8 +422,19 @@ ReadArrayDimensions(char **srcptr, int *ndim_p, int *dim, int *lBound,                 */                while (scanner_isspace(*p))                        p++;+               if (*p == '=') {+                       //if we have an equals sign we are not dealing with a json array+                       couldBeJson = false;+                       break; // and we are at the end of the bounds specification for the SQL array literal we do have+               }                if (*p != '[')-                       break;                          /* no more dimension items */+               {+                       couldBeJson = false; // json arrays will start with [+                       break;  +                       // on subsequent passes we better have either this or an equal sign and the later is covered above+               }                       /* no (more?) dimension items */+               if (ndim > 0)+                       couldBeJson = false; //multi-dimensional arrays specs are invalid json arrays                p++;                if (ndim >= MAXDIM)                        ereturn(escontext, false,@@ -438,11 +449,18 @@ ReadArrayDimensions(char **srcptr, int *ndim_p, int *dim, int *lBound,                        ereturn(escontext, false,                                        (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),                                         errmsg(\"malformed array literal: \\\"%s\\\"\", origStr),-                                        errdetail(\"\\\"[\\\" must introduce explicitly-specified array dimensions.\")));+                                        errdetail(\"\\\"[\\\" must introduce explicitly-specified array dimensions.\")),+                                        //if it isn't a digit and we might still have a json array its a good bet it is one+                                        //with non-numeric content+                                        couldBeJson ? 
errhint(\"SQL array vaLues are delimited by {}\" : null));                 if (*p == ':')                {                        /* [m:n] format */+                       //if we have numbers as the first entry, the presence of a colon, +                       //which is not a valid json separator, in a number literal or an array, +                       //means we have a bounds specification in an SQL array+                       couldBeJson = false;                         lBound[ndim] = i;                        p++;                        q = p;@@ -454,18 +472,22 @@ ReadArrayDimensions(char **srcptr, int *ndim_p, int *dim, int *lBound,                                                 errmsg(\"malformed array literal: \\\"%s\\\"\", origStr),                                                 errdetail(\"Missing array dimension value.\")));                }-               else+               else if (*p == ']')                {                        /* [n] format */+                       //single digit doesn't disprove json with single number element                        lBound[ndim] = 1;                        ub = i;                }-               if (*p != ']')+               else+               {                        ereturn(escontext, false,                                        (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),                                         errmsg(\"malformed array literal: \\\"%s\\\"\", origStr),                                         errdetail(\"Missing \\\"%s\\\" after array dimensions.\",-                                                          \"]\")));+                                                          \"]\")),+                                       couldBeJson ? errhint(\"SQL array values are delimited by {}\" : null));+               }                p++;                 /*@@ -475,29 +497,37 @@ ReadArrayDimensions(char **srcptr, int *ndim_p, int *dim, int *lBound,                 * would be equivalent.  Given the lack of field demand, there seems                 * little point in allowing such cases.                 */+               //not possible in the single bound case so cannot be json                if (ub < lBound[ndim])                        ereturn(escontext, false,                                        (errcode(ERRCODE_ARRAY_SUBSCRIPT_ERROR),                                         errmsg(\"upper bound cannot be less than lower bound\")));                 /* Upper bound of INT_MAX must be disallowed, cf ArrayCheckBounds() */+               // the singular json number may very well be larger than an integer...                if (ub == INT_MAX)                        ereturn(escontext, false,                                        (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),-                                        errmsg(\"array upper bound is too large: %d\", ub)));+                                        errmsg(\"array upper bound is too large: %d\", ub),+                                        couldBeJson ? 
errhint(\"SQL array values are delimited by {}\" : null)));                 /* Compute \"ub - lBound[ndim] + 1\", detecting overflow */+               //a whatever this threshold is a random number passed in a json array likely will exceed it                if (pg_sub_s32_overflow(ub, lBound[ndim], &ub) ||                        pg_add_s32_overflow(ub, 1, &ub))                        ereturn(escontext, false,                                        (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),                                         errmsg(\"array size exceeds the maximum allowed (%d)\",-                                                       (int) MaxArraySize)));+                                                       (int) MaxArraySize,+                                       couldBeJson ? errhint(\"SQL array values are delimited by {}\" : null))));                 dim[ndim] = ub;                ndim++;        } +    if (couldBeJson == true)+           assert('not reachable, need to disprove json array literal prior to returning');+        *srcptr = p;        *ndim_p = ndim;        return true;David J.", "msg_date": "Tue, 9 Jul 2024 10:33:31 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: array_in sub function ReadArrayDimensions error message" }, { "msg_contents": "On Tue, Jul 9, 2024 at 11:31 PM Tom Lane <[email protected]> wrote:\n>\n> jian he <[email protected]> writes:\n> > On Mon, Jul 8, 2024 at 10:42 PM Tom Lane <[email protected]> wrote:\n> >> One possibility could be\n> >> ...\n> >> that is, only say \"Missing \"]\"\" if there's no ']' anywhere, and\n> >> otherwise just say the dimensions are wrong. This could be fooled\n> >> by a ']' that's part of some string in the data, but even then the\n> >> errdetail isn't wrong.\n> >>\n> >> Or we could just say \"Array dimensions have invalid syntax.\"\n> >> unconditionally.\n>\n> > please check attached.\n>\n> Meh. This just replaces one possibly-off-point error message\n> with a different one that will be off-point in a different set\n> of cases. I think we should simply reduce the specificity of\n> the message.\n\nAfter playing around, I agree with you that reducing the specificity of\nthe message here is better.\n\n\n> BTW, I think we have pretty much the same issue with respect\n> to the check for \"=\" later on. 
For instance, if someone\n> is confused about whether to insert commas:\n>\n> =# select '[1:1],[1:2]={{3,4}}'::int[];\n> ERROR: malformed array literal: \"[1:1],[1:2]={{3,4}}\"\n> LINE 1: select '[1:1],[1:2]={{3,4}}'::int[];\n> ^\n> DETAIL: Missing \"=\" after array dimensions.\n>\n> Here again, the problem is not a missing \"=\", it's invalid\n> syntax somewhere before that.\n>\n> Another thing we could consider doing here (and similarly\n> for your original case) is\n>\n> DETAIL: Expected \"=\" not \",\" after array dimensions.\n>\n> The advantage of this is that it provides a little more\n> clarity as to exactly where things went wrong.\n>\nthis looks good to me.\n\n\nI also found other issues, like.\nselect '[-2:2147483644]={3}'::int[];\nERROR: malformed array literal: \"[-2:2147483644]={3}\"\nLINE 1: select '[-2:2147483644]={3}'::int[];\n ^\nDETAIL: Specified array dimensions do not match array contents.\n\nselect '[-2:2147483645]={3}'::int[];\nERROR: array size exceeds the maximum allowed (134217727)\nLINE 1: select '[-2:2147483645]={3}'::int[];\n ^\n\nBoth cases exceed the maximum allowed (134217727), but error is different.\nalso\n\"ERROR: array size exceeds the maximum allowed (134217727)\"\nis weird. we are still validating the dimension info function, we\ndon't even know the actual size of the array.\nmaybe\n\"ERROR: array dimensions specified array size exceeds the maximum\nallowed (134217727)\"\n\n\nThis customized dimension info is kind of obscure, even the\ndocumentation of it is buried\nin the third paragraph of\nhttps://www.postgresql.org/docs/current/arrays.html#ARRAYS-IO\n\n\nto address David G. Johnston concern,\nin ReadArrayDimensions, some error message can unconditionally mention\nerrhint like:\nerrhint(\"array dimension template is \\\"[lower_bound:upper_bound]\\\", we\ncan optional have 6 of this\"\n\n\n", "msg_date": "Wed, 10 Jul 2024 12:25:56 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: array_in sub function ReadArrayDimensions error message" }, { "msg_contents": "On Tuesday, July 9, 2024, jian he <[email protected]> wrote:\n\n>\n> I also found other issues, like.\n> select '[-2:2147483644]={3}'::int[];\n> ERROR: malformed array literal: \"[-2:2147483644]={3}\"\n> LINE 1: select '[-2:2147483644]={3}'::int[];\n> ^\n> DETAIL: Specified array dimensions do not match array contents.\n>\n> select '[-2:2147483645]={3}'::int[];\n> ERROR: array size exceeds the maximum allowed (134217727)\n> LINE 1: select '[-2:2147483645]={3}'::int[];\n> ^\n>\n> Both cases exceed the maximum allowed (134217727), but error is different.\n> also\n\n\n\nThey do not both exceed (i.e., strictly greater than) 134217727 - the first\ncase equals it which is why it gets past the dimension specification parser.\n\n\"ERROR: array size exceeds the maximum allowed (134217727)\"\n> is weird. we are still validating the dimension info function, we\n> don't even know the actual size of the array.\n\nmaybe\n> \"ERROR: array dimensions specified array size exceeds the maximum\n> allowed (134217727)\"\n\n\nHow about:\n\n“array dimension %d declared size exceeds the maximum allowed %d”, ndim,\n(int) MaxArraySize\n\nBut while that message fits the code aren’t we supposed to be checking,\nafter processing all dimensions, whether the combined number of cells is\ngreater than MaxArraySize? Obviously if any one dimension is the whole\nthing will be, so this specific check and error is still useful.\n\nto address David G. 
Johnston concern,\n> in ReadArrayDimensions, some error message can unconditionally mention\n> errhint like:\n> errhint(\"array dimension template is \\\"[lower_bound:upper_bound]\\\", we\n> can optional have 6 of this\"\n>\n\nI suppose, but if they are writing json array syntax that hint is going to\nmean little to them. Pointing out that their use of [] brackets is wrong\nand that {} should be used seems more helpful. The extent we need to\nconsider people writing literal dimensions to set their lower bound to\nsomething other than one seems close to none and not needing a hint, IMO.\n\nThat said, it isn’t making it back to us if our users are actually having\nthis confusion and would benefit meaningfully from such a hint.\n\nDavid J.\n\nOn Tuesday, July 9, 2024, jian he <[email protected]> wrote:\nI also found other issues, like.\nselect '[-2:2147483644]={3}'::int[];\nERROR:  malformed array literal: \"[-2:2147483644]={3}\"\nLINE 1: select '[-2:2147483644]={3}'::int[];\n               ^\nDETAIL:  Specified array dimensions do not match array contents.\n\nselect '[-2:2147483645]={3}'::int[];\nERROR:  array size exceeds the maximum allowed (134217727)\nLINE 1: select '[-2:2147483645]={3}'::int[];\n               ^\n\nBoth cases exceed the maximum allowed (134217727), but error is different.\nalsoThey do not both exceed (i.e., strictly greater than) 134217727 - the first case equals it which is why it gets past the dimension specification parser.\n\"ERROR:  array size exceeds the maximum allowed (134217727)\"\nis weird. we are still validating the dimension info function, we\ndon't even know the actual size of the array. \nmaybe\n\"ERROR:  array dimensions specified array size exceeds the maximum\nallowed (134217727)\"How about: “array dimension %d declared size exceeds the maximum allowed %d”, ndim, (int) MaxArraySizeBut while that message fits the code aren’t we supposed to be checking, after processing all dimensions, whether the combined number of cells is greater than MaxArraySize?  Obviously if any one dimension is the whole thing will be, so this specific check and error is still useful.to address David G. Johnston concern,\nin ReadArrayDimensions, some error message can unconditionally mention\nerrhint like:\nerrhint(\"array dimension template is \\\"[lower_bound:upper_bound]\\\", we\ncan optional have 6 of this\"\nI suppose, but if they are writing json array syntax that hint is going to mean little to them. Pointing out that their use of [] brackets is wrong and that {} should be used seems more helpful.  The extent we need to consider people writing literal dimensions to set their lower bound to something other than one seems close to none and not needing a hint, IMO.  That said, it isn’t making it back to us if our users are actually having this confusion and would benefit meaningfully from such a hint.David J.", "msg_date": "Tue, 9 Jul 2024 22:31:47 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: array_in sub function ReadArrayDimensions error message" } ]
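
To make the suggestions in the thread above concrete: below is a stand-alone sketch — not a patch to ReadArrayDimensions, and the function name and sample inputs are invented for illustration — of the two wordings Tom Lane proposes: use strchr() to check whether the expected character occurs anywhere, keep the "Missing" detail only in that case, and otherwise name the character actually found.

#include <stdio.h>
#include <string.h>

static void
report_dim_error(const char *dims, char expected, char found)
{
	if (strchr(dims, expected) == NULL)
		printf("DETAIL:  Missing \"%c\" after array dimensions.\n", expected);
	else
		printf("DETAIL:  Expected \"%c\" not \"%c\" after array dimensions.\n",
			   expected, found);
}

int
main(void)
{
	/* no "]" anywhere in the dimension part: keep the "Missing" wording */
	report_dim_error("[1:2={3,4}", ']', '=');
	/* "=" does appear later, so name the character actually found instead */
	report_dim_error("[1:1],[1:2]={{3,4}}", '=', ',');
	return 0;
}

Either wording stays accurate even when the expected character happens to appear inside quoted element data, which is the caveat raised in the discussion above.
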
[ { "msg_contents": "Hi,\n\nWhile I'm researching about [1], I found there are inconsistent EXPLAIN outputs.\nHere is an example which shows \" OPERATOR(pg_catalog.\". Though it's not wrong,\nI feel like there is no consistency in the output format.\n\n-- A reproduce procedure\ncreate temp table btree_bpchar (f1 text collate \"C\");\ncreate index on btree_bpchar(f1 bpchar_ops);\ninsert into btree_bpchar values ('foo'), ('fool'), ('bar'), ('quux');\nset enable_seqscan to false;\nset enable_bitmapscan to false;\nset enable_indexonlyscan to false; -- or true\nexplain (costs off)\nselect * from btree_bpchar where f1::bpchar like 'foo';\n\n-- Index Scan result\n QUERY PLAN \n------------------------------------------------------\n Index Scan using btree_bpchar_f1_idx on btree_bpchar\n Index Cond: ((f1)::bpchar = 'foo'::bpchar)\n Filter: ((f1)::bpchar ~~ 'foo'::text)\n(3 rows)\n\n-- Index Only Scan result which has 'OPERATOR'\n QUERY PLAN \n-----------------------------------------------------------\n Index Only Scan using btree_bpchar_f1_idx on btree_bpchar\n Index Cond: (f1 OPERATOR(pg_catalog.=) 'foo'::bpchar) -- Here is the point.\n Filter: ((f1)::bpchar ~~ 'foo'::text)\n(3 rows)\n\n\nIIUC, the index only scan use fixed_indexquals, which is removed \"RelabelType\" nodes,\nfor EXPLAIN so that get_rule_expr() could not understand the left argument of the operator\n(f1 if the above case) can be displayed with arg::resulttype and it doesn't need to\nshow \"OPERATOR(pg_catalog.)\".\n\nI've attached PoC patch to show a simple solution. It just adds a new member \"indexqualorig\"\nto the index only scan node like the index scan and the bitmap index scan. But, since I'm\na beginner about the planner, I might have misunderstood something or there should be better\nways.\n\n\n\nBTW, I'm trying to add a new index AM interface for EXPLAIN on the thread([1]). As the aspect,\nmy above solution might not be ideal because AMs can only know index column ids (varattno)\nfrom fixed_indexquals. In that case, to support \"fixed_indexquals\" as argument of deparse_expression()\nis better.\n\n[1] Improve EXPLAIN output for multicolumn B-Tree Index\nhttps://www.postgresql.org/message-id/flat/TYWPR01MB1098260B694D27758FE2BA46FB1C92%40TYWPR01MB10982.jpnprd01.prod.outlook.com\n\nRegards,\n--\nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Mon, 8 Jul 2024 11:03:51 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Is it expected behavior index only scan shows \"OPERATOR(pg_catalog.\"\n for EXPLAIN?" }, { "msg_contents": "On 7/8/24 13:03, [email protected] wrote:\n> Hi,\n> \n> While I'm researching about [1], I found there are inconsistent EXPLAIN outputs.\n> Here is an example which shows \" OPERATOR(pg_catalog.\". 
Though it's not wrong,\n> I feel like there is no consistency in the output format.\n> \n> -- A reproduce procedure\n> create temp table btree_bpchar (f1 text collate \"C\");\n> create index on btree_bpchar(f1 bpchar_ops);\n> insert into btree_bpchar values ('foo'), ('fool'), ('bar'), ('quux');\n> set enable_seqscan to false;\n> set enable_bitmapscan to false;\n> set enable_indexonlyscan to false; -- or true\n> explain (costs off)\n> select * from btree_bpchar where f1::bpchar like 'foo';\n> \n> -- Index Scan result\n> QUERY PLAN \n> ------------------------------------------------------\n> Index Scan using btree_bpchar_f1_idx on btree_bpchar\n> Index Cond: ((f1)::bpchar = 'foo'::bpchar)\n> Filter: ((f1)::bpchar ~~ 'foo'::text)\n> (3 rows)\n> \n> -- Index Only Scan result which has 'OPERATOR'\n> QUERY PLAN \n> -----------------------------------------------------------\n> Index Only Scan using btree_bpchar_f1_idx on btree_bpchar\n> Index Cond: (f1 OPERATOR(pg_catalog.=) 'foo'::bpchar) -- Here is the point.\n> Filter: ((f1)::bpchar ~~ 'foo'::text)\n> (3 rows)\n> \n\nThis apparently comes from generate_operator_name() in ruleutils.c,\nwhere the OPERATOR() decorator is added if:\n\n /*\n * The idea here is to schema-qualify only if the parser would fail to\n * resolve the correct operator given the unqualified op name with the\n * specified argtypes.\n */\n\nSo clearly, the code believes just the operator name could be ambiguous,\nso it adds the namespace too. Why exactly it is considered ambiguous I\ndon't know, but perhaps you have other applicable operators in the\nsearch_path, or something like that?\n\n> \n> IIUC, the index only scan use fixed_indexquals, which is removed \"RelabelType\" nodes,\n> for EXPLAIN so that get_rule_expr() could not understand the left argument of the operator\n> (f1 if the above case) can be displayed with arg::resulttype and it doesn't need to\n> show \"OPERATOR(pg_catalog.)\".\n> \n> I've attached PoC patch to show a simple solution. It just adds a new member \"indexqualorig\"\n> to the index only scan node like the index scan and the bitmap index scan. But, since I'm\n> a beginner about the planner, I might have misunderstood something or there should be better\n> ways.\n> \n\nI honestly don't know if this is the correct solution. It seems to me\nhandling this at the EXPLAIN level might just mask the issue - it's not\nclear to me why adding \"indexqualorig\" would remove the ambiguity (if\nthere's one). Perhaps it might be better to find why the ruleutils.c\ncode thinks the OPERATOR() is necessary, and then improve/fix that?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 8 Jul 2024 13:32:31 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it expected behavior index only scan shows\n \"OPERATOR(pg_catalog.\" for EXPLAIN?" }, { "msg_contents": "Tomas Vondra <[email protected]> writes:\n> I honestly don't know if this is the correct solution. It seems to me\n> handling this at the EXPLAIN level might just mask the issue - it's not\n> clear to me why adding \"indexqualorig\" would remove the ambiguity (if\n> there's one). Perhaps it might be better to find why the ruleutils.c\n> code thinks the OPERATOR() is necessary, and then improve/fix that?\n\nAs Masahiro-san already said, the root cause is that the planner\nremoves the RelabelType that is on the indexed variable. 
So ruleutils\nsees that the operator is not the one that would be chosen by the\nparser given those input expressions (cf generate_operator_name)\nand it concludes it'd better schema-qualify the operator. While\nthat doesn't really make any difference in this particular case,\nit is the right thing in typical rule-deparsing cases.\n\nI don't think this output is really wrong, and I'm not in favor\nof adding nontrivial overhead to make it better, so I don't like\nthe proposed patch. Maybe generate_operator_name could use some\nother heuristic, but I'm unsure what. The case it wants to cover\nis that the operator is in the search path but is masked by some\noperator in an earlier schema, so simply testing if the operator's\nschema is in the path at all would be insufficient.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Jul 2024 08:44:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it expected behavior index only scan shows\n \"OPERATOR(pg_catalog.\" for EXPLAIN?" }, { "msg_contents": "Thanks for your comments.\n\nTom Lane <[email protected]> writes: \n> Tomas Vondra <[email protected]> writes:\n> > I honestly don't know if this is the correct solution. It seems to me\n> > handling this at the EXPLAIN level might just mask the issue - it's\n> > not clear to me why adding \"indexqualorig\" would remove the ambiguity\n> > (if there's one). Perhaps it might be better to find why the\n> > ruleutils.c code thinks the OPERATOR() is necessary, and then improve/fix that?\n> \n> As Masahiro-san already said, the root cause is that the planner removes the\n> RelabelType that is on the indexed variable. So ruleutils sees that the operator is not\n> the one that would be chosen by the parser given those input expressions (cf\n> generate_operator_name) and it concludes it'd better schema-qualify the operator.\n> While that doesn't really make any difference in this particular case, it is the right thing\n> in typical rule-deparsing cases.\n\nYes.\n\nThe plan of index scan has a \"RELABELTYPE\" node, and it has \"resulttype\" so that\ngenerate_operator_name() gets \"operid =1054(\"=\"), arg1=1042(bpchar from resulttype),\narg2=1042(bpchar)\". The plan of index only scan is removed it so that\ngenerate_operator_name() gets \"operid =1054(\"=\"), arg1=25(*text*), arg2=1042(bpchar)\". \n\nThough there is no entry \"operid=1054(\"=\"), arg1=25(*text*), arg2=1042(bpchar)\" in\npg_operator, it decided to check with the operator \"=\" for (text, text) because it is\ncoercion-compatible and preferred than operator \"=\" for (bpchar, bpchar). \n\nBut since the caller expected to use operator \"=\" for (bpchar, bpchar), it plus\nOPERATOR() decoration sadly.\n\n# the partial output of Index Scan plan\n :indexqualorig (\n {OPEXPR \n :opno 1054 -- \"=\" operator\n :opfuncid 1048 \n :opresulttype 16 \n :opretset false \n :opcollid 0 \n :inputcollid 950 \n :args (\n {RELABELTYPE \n :arg \n {VAR \n :varno 1\n :varattno 1 \n :vartype 25 -- text. 
But it will be evaluated as 1042(bpchar) from resulttype\n :vartypmod -1 \n :varcollid 950 \n :varnullingrels (b)\n :varlevelsup 0 \n :varnosyn 1 \n :varattnosyn 1 \n :location 33\n }\n :resulttype 1042 -- bpchar\n :resulttypmod -1 \n :resultcollid 950 \n :relabelformat 1 \n :location 35\n }\n {CONST \n :consttype 1042 -- bpchar\n :consttypmod -1 \n :constcollid 100 \n :constlen -1 \n :constbyval false \n :constisnull false \n :location 46 \n :constvalue 7 [ 28 0 0 0 102 111 111 ]\n }\n ) \n\n# the partial output of Index Only Scan plan\n :indexqual (\n {OPEXPR \n :opno 1054 -- \"=\" operator\n :opfuncid 1048 \n :opresulttype 16 \n :opretset false \n :opcollid 0 \n :inputcollid 950 \n :args (\n {VAR \n :varno -3\n :varattno 1 \n :vartype 25 -- text\n :vartypmod -1 \n :varcollid 950 \n :varnullingrels (b)\n :varlevelsup 0 \n :varnosyn 1 \n :varattnosyn 1 \n :location 33\n }\n {CONST \n :consttype 1042 -- bpchar\n :consttypmod -1 \n :constcollid 100 \n :constlen -1 \n :constbyval false \n :constisnull false \n :location 46 \n :constvalue 7 [ 28 0 0 0 102 111 111 ]\n }\n )\n :location 44\n }\n )\n\n> I don't think this output is really wrong, and I'm not in favor of adding nontrivial overhead\n> to make it better, so I don't like the proposed patch. Maybe generate_operator_name\n> could use some other heuristic, but I'm unsure what. The case it wants to cover is that\n> the operator is in the search path but is masked by some operator in an earlier schema,\n> so simply testing if the operator's schema is in the path at all would be insufficient.\n\nYes, that's one of my concerns. \n\nIIUC, the above case is not related to the search path but the arguments don't match. If so,\nI think there are two ways to solve the issue.\n\n1. make callers of generate_operator_name() check the arguments first.\nThe callers (e.g., get_oper_expr()) of generate_operator_name() decide whether they\nshould/can cast the arguments. The planner already decided what operator should be used\nso that get_oper_expr() can cast always on the explain context if the arguments don't match\nthe operator's one, doesn't it?\n\n2. make generate_operator_name() checks all operator candidate.\nCurrently generate_operator_name() checks only one operator which matches the operator's name\neven if although there are other candidates (e.g., to call oper()->oper_select_candidate()). \nfunc_select_candidate() selects operator \"=\" for (text, text) in the above case because \"=\" for\n(btchar, btchar) is typispreferred=false. So, it seems to me that it can solve the issue \nif generate_operator_name() checks all candidates.\n\nI think the second solution is better because callers might expect to use specified operator \neven if there are other candidates' operator which can handle the arguments.\n\nRegards,\n--\nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 9 Jul 2024 05:19:49 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "RE: Is it expected behavior index only scan shows\n \"OPERATOR(pg_catalog.\" for EXPLAIN?" } ]
[ { "msg_contents": "Hi,\n\nCompiling postgres on windows with tab-completion support fails either with\n\"fatal error C1026: parser stack overflow, program too complex”.\nor (in recent versions) with\n\"…/src/bin/psql/tab-complete.c(4023): fatal error C1001: Internal compiler error.\"\n\nI've reported this to the visual studio folks at [1] and supposedly a fix is\nwaiting to be released. I don't know if that's just for the internal compiler\nerror in 2022 or lifting the limitation.\n\n\nIt's pretty easy to work around the error [2]. I wonder if we should just do\nthat, then we don't have to insist on a very new msvc version being used and\nit'll work even if they just decide to fix the internal compiler error.\n\n\nBesides just breaking the if-else-if chain at some arbitrary point, we could\nalternatively make a more general efficiency improvement, and add a bit of\nnesting at a few places. E.g. instead of having ~33 checks for COMMENT being\nthe first word, we could use \"outer\" else-if for COMMENT and check the\nsub-variants inside that branch:\n\nelse if (HeadMatches(\"COMMENT\"))\n{\n\tif (Matches(\"COMMENT\"))\n\t\tCOMPLETE_WITH(\"ON\");\n\telse if (Matches(\"COMMENT\", \"ON\"))\n ...\n}\n\nif we do that to one or two common keywords (COMMENT and DROP seems easiest)\nwe'd be out from that limitation, and would probably reduce the number of\ncycles for completions noticeably.\n\nOTOH, that'd be a more noisy change and it'd be less defensible to apply it to\n17 - which IMO would be nice to do.\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://developercommunity.visualstudio.com/t/tab-completec4023:-fatal-error-C1001:/10685868\n[2] https://postgr.es/m/CACLU5mRRLAW2kca3k2gVDM8kow6wgvT_BCaeg37jz%3DKyj1afvw%40mail.gmail.com\n\n\n", "msg_date": "Mon, 8 Jul 2024 10:48:04 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Should we work around msvc failing to compile tab-complete.c?" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> Compiling postgres on windows with tab-completion support fails either with\n> \"fatal error C1026: parser stack overflow, program too complex”.\n> or (in recent versions) with\n> \"…/src/bin/psql/tab-complete.c(4023): fatal error C1001: Internal compiler error.\"\n\n> It's pretty easy to work around the error [2]. I wonder if we should just do\n> that, then we don't have to insist on a very new msvc version being used and\n> it'll work even if they just decide to fix the internal compiler error.\n\nI'm on board with doing something here, but wouldn't we want to\nback-patch at least a minimal fix to all supported branches?\n\nAs for the specific thing to do, that long if-chain seems horrid\nfrom an efficiency standpoint as well as stressing compiler\nimplementations. I realize that this pretty much only needs to run\nat human-reaction-time speed, but it still offends my inner nerd.\nI also wonder when we are going to hit problems with some earlier\ntest unexpectedly pre-empting some later one.\n\nCould we perhaps have a table of first words of each interesting\nmatch, and do a lookup in that before dispatching to code segments\nthat are individually similar to what's there now?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Jul 2024 14:18:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we work around msvc failing to compile tab-complete.c?" 
}, { "msg_contents": "Hi,\n\nOn 2024-07-08 14:18:03 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > Compiling postgres on windows with tab-completion support fails either with\n> > \"fatal error C1026: parser stack overflow, program too complex”.\n> > or (in recent versions) with\n> > \"…/src/bin/psql/tab-complete.c(4023): fatal error C1001: Internal compiler error.\"\n>\n> > It's pretty easy to work around the error [2]. I wonder if we should just do\n> > that, then we don't have to insist on a very new msvc version being used and\n> > it'll work even if they just decide to fix the internal compiler error.\n>\n> I'm on board with doing something here, but wouldn't we want to\n> back-patch at least a minimal fix to all supported branches?\n\nI think we'd need to backpatch more for older branches. At least\n\ncommit 3f28bd7337d\nAuthor: Thomas Munro <[email protected]>\nDate: 2022-12-22 17:14:23 +1300\n\n Add work-around for VA_ARGS_NARGS() on MSVC.\n\n\nGiven that - afaict - tab completion never used to work with msvc, I think\nit'd be ok to just do it in 17 or 16+17 or such. Obviously nobody is currently\nbuilding with readline support for windows - not sure if any packager is going\nto go back and add support for it in older branches.\n\n\n\n> As for the specific thing to do, that long if-chain seems horrid\n> from an efficiency standpoint as well as stressing compiler\n> implementations.\n\nIndeed.\n\nEven with gcc it's one of the slowest files to compile in our codebase. At -O2\ntab-complete.c takes 7.3s with gcc 14. Just adding an\nelse if (HeadMatches(\"ALTER\"))\n{\n}\naround all the ALTER branches reduces that to 4.5s to me. Doing that for\nCOMMENT and CREATE gets down to 3.2s.\n\n\n> I realize that this pretty much only needs to run\n> at human-reaction-time speed, but it still offends my inner nerd.\n\nSame.\n\n\n> Could we perhaps have a table of first words of each interesting\n> match, and do a lookup in that before dispatching to code segments\n> that are individually similar to what's there now?\n\nEventually it seems yet, it ought to be table driven in some form. But I\nwonder if that's setting the bar too high for now. Even just doing some manual\nre-nesting seems like it could get us quite far?\n\nI'm thinking of something like an outer if-else-if chain for CREATE, ALTER,\nDROP, COMMENT and everything that doesn't fit within those\n(e.g. various TailMatches(), backslash command etc) we'd have reduced the\nnumber of redundant checks a lot.\n\nThe amount of whitespace changes that'd imply isn't great, but I don't really\nsee a way around that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 8 Jul 2024 13:07:53 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should we work around msvc failing to compile tab-complete.c?" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> Given that - afaict - tab completion never used to work with msvc, I think\n> it'd be ok to just do it in 17 or 16+17 or such. Obviously nobody is currently\n> building with readline support for windows - not sure if any packager is going\n> to go back and add support for it in older branches.\n\nIf it doesn't even compile in existing branches, that seems to me to\nbe plenty enough reason to call it a new HEAD-only feature. 
That\ngives us some leeway for experimentation and a more invasive patch\nthan we might want to risk in stable branches.\n\n> On 2024-07-08 14:18:03 -0400, Tom Lane wrote:\n>> Could we perhaps have a table of first words of each interesting\n>> match, and do a lookup in that before dispatching to code segments\n>> that are individually similar to what's there now?\n\n> Eventually it seems yet, it ought to be table driven in some form. But I\n> wonder if that's setting the bar too high for now. Even just doing some manual\n> re-nesting seems like it could get us quite far?\n\nWhat I'm thinking is that (as you say) any fix that's not as ugly\nas Kirk's hack is going to involve massive reindentation, with\ncorresponding breakage of any pending patches. It would be a\nshame to do that twice, so I'd rather look for a long-term fix\ninstead of putting in a stop-gap.\n\nLet me play with the \"table\" idea a little bit and see what\nI get.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Jul 2024 16:30:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we work around msvc failing to compile tab-complete.c?" }, { "msg_contents": "On Mon, 8 Jul 2024 at 21:08, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2024-07-08 14:18:03 -0400, Tom Lane wrote:\n> > Andres Freund <[email protected]> writes:\n> > > Compiling postgres on windows with tab-completion support fails either\n> with\n> > > \"fatal error C1026: parser stack overflow, program too complex”.\n> > > or (in recent versions) with\n> > > \"…/src/bin/psql/tab-complete.c(4023): fatal error C1001: Internal\n> compiler error.\"\n> >\n> > > It's pretty easy to work around the error [2]. I wonder if we should\n> just do\n> > > that, then we don't have to insist on a very new msvc version being\n> used and\n> > > it'll work even if they just decide to fix the internal compiler error.\n> >\n> > I'm on board with doing something here, but wouldn't we want to\n> > back-patch at least a minimal fix to all supported branches?\n>\n> I think we'd need to backpatch more for older branches. At least\n>\n> commit 3f28bd7337d\n> Author: Thomas Munro <[email protected]>\n> Date: 2022-12-22 17:14:23 +1300\n>\n> Add work-around for VA_ARGS_NARGS() on MSVC.\n>\n>\n> Given that - afaict - tab completion never used to work with msvc, I think\n> it'd be ok to just do it in 17 or 16+17 or such. Obviously nobody is\n> currently\n> building with readline support for windows - not sure if any packager is\n> going\n> to go back and add support for it in older branches.\n\n\nPackagers aren't likely to be using readline, as it's GPL and it would have\nto be shipped with packages on Windows. They are more likely to be using\nlibedit if anything. Not sure if that has any bearing on the compilation\nfailure.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nOn Mon, 8 Jul 2024 at 21:08, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2024-07-08 14:18:03 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > Compiling postgres on windows with tab-completion support fails either with\n> > \"fatal error C1026: parser stack overflow, program too complex”.\n> > or (in recent versions) with\n> > \"…/src/bin/psql/tab-complete.c(4023): fatal error C1001: Internal compiler error.\"\n>\n> > It's pretty easy to work around the error [2]. 
I wonder if we should just do\n> > that, then we don't have to insist on a very new msvc version being used and\n> > it'll work even if they just decide to fix the internal compiler error.\n>\n> I'm on board with doing something here, but wouldn't we want to\n> back-patch at least a minimal fix to all supported branches?\n\nI think we'd need to backpatch more for older branches. At least\n\ncommit 3f28bd7337d\nAuthor: Thomas Munro <[email protected]>\nDate:   2022-12-22 17:14:23 +1300\n\n    Add work-around for VA_ARGS_NARGS() on MSVC.\n\n\nGiven that - afaict - tab completion never used to work with msvc, I think\nit'd be ok to just do it in 17 or 16+17 or such. Obviously nobody is currently\nbuilding with readline support for windows - not sure if any packager is going\nto go back and add support for it in older branches.Packagers aren't likely to be using readline, as it's GPL and it would have to be shipped with packages on Windows. They are more likely to be using libedit if anything. Not sure if that has any bearing on the compilation failure. -- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Tue, 9 Jul 2024 09:14:33 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we work around msvc failing to compile tab-complete.c?" }, { "msg_contents": "Hi,\n\nOn 2024-07-09 09:14:33 +0100, Dave Page wrote:\n> On Mon, 8 Jul 2024 at 21:08, Andres Freund <[email protected]> wrote:\n> > I think we'd need to backpatch more for older branches. At least\n> >\n> > commit 3f28bd7337d\n> > Author: Thomas Munro <[email protected]>\n> > Date: 2022-12-22 17:14:23 +1300\n> >\n> > Add work-around for VA_ARGS_NARGS() on MSVC.\n> >\n> >\n> > Given that - afaict - tab completion never used to work with msvc, I think\n> > it'd be ok to just do it in 17 or 16+17 or such. Obviously nobody is\n> > currently\n> > building with readline support for windows - not sure if any packager is\n> > going\n> > to go back and add support for it in older branches.\n>\n>\n> Packagers aren't likely to be using readline, as it's GPL and it would have\n> to be shipped with packages on Windows.\n\nI'm not sure (I mean that literally, not as a way to state that I think it's\nnot as you say) it'd actually be *that* big a problem to use readline - sure\nit'd make psql GPL, but that's the only thing using readline. Since psql\ndoesn't link to extensible / user provided code, it might actually be ok.\n\nWith openssl < 3.0 the mix between openssl and readline would be problematic,\nIIRC the licenses aren't really compatible. But luckily openssl relicensed to\napache v2.\n\n\n\nBut that is pretty orthogonal to the issue at hand:\n\n> They are more likely to be using libedit if anything. Not sure if that has\n> any bearing on the compilation failure.\n\nThe compilation issue is entirely independent of readline vs libedit, the\ntoo-long if-else-if-else-if chain is purely in our code. While the error\nmessage (particularly the compiler crash) is suboptimal, I can't entirely\nfault the compiler for not dealing with 700 else-ifs.\n\nIn fact the else-if chain is a problem for other compilers too, it seems to be\nhitting something in the vicinity of O(n^2) in gcc.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 9 Jul 2024 09:23:34 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should we work around msvc failing to compile tab-complete.c?" 
}, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-07-09 09:14:33 +0100, Dave Page wrote:\n>> Packagers aren't likely to be using readline, as it's GPL and it would have\n>> to be shipped with packages on Windows.\n\n> I'm not sure (I mean that literally, not as a way to state that I think it's\n> not as you say) it'd actually be *that* big a problem to use readline - sure\n> it'd make psql GPL, but that's the only thing using readline. Since psql\n> doesn't link to extensible / user provided code, it might actually be ok.\n\n> With openssl < 3.0 the mix between openssl and readline would be problematic,\n> IIRC the licenses aren't really compatible. But luckily openssl relicensed to\n> apache v2.\n\nOne thing that struck me while looking at tab-complete.c just now is\nthat there are aspects of the readline API that require strings to be\nmalloc'd by the client (tab-complete.c) and later free'd within\nlibreadline. I wonder how that will play with Windows' weird rules\nabout when one DLL's malloc pool will interoperate with another's\n(cf PQfreemem). Worst case: the reason no one uses readline under\nWindows is that it flat out doesn't work.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jul 2024 17:44:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we work around msvc failing to compile tab-complete.c?" }, { "msg_contents": "Hi,\n\nOn 2024-07-09 17:44:27 -0400, Tom Lane wrote:\n> Worst case: the reason no one uses readline under Windows is that it flat\n> out doesn't work.\n\nI've just tried it again, it works after splitting the else-if chain.\n\n\n\n> One thing that struck me while looking at tab-complete.c just now is\n> that there are aspects of the readline API that require strings to be\n> malloc'd by the client (tab-complete.c) and later free'd within\n> libreadline. I wonder how that will play with Windows' weird rules\n> about when one DLL's malloc pool will interoperate with another's\n> (cf PQfreemem).\n\nIt seems to work fine as long as a debug-readline is paired with a debug-psql\nor a release-readline is paired with a release-psql.\n\n\nIntentionally cross-matching the two does indeed quickly crash, with stack\ntrace that looks like exactly the issue you describe. 
This is a release\nreadline in a debug psql, but it shouldn't matter which way round.\n\nJust doing \"tab\":\n\n # Child-SP RetAddr Call Site\n00 00000017`db5ff430 00007ff9`1fa18182 ntdll!RtlReportCriticalFailure+0x56\n01 00000017`db5ff520 00007ff9`1fa1846a ntdll!RtlpHeapHandleError+0x12\n02 00000017`db5ff550 00007ff9`1fa1e0f1 ntdll!RtlpHpHeapHandleError+0x7a\n03 00000017`db5ff580 00007ff9`1f9b79d2 ntdll!RtlpLogHeapFailure+0x45\n04 00000017`db5ff5b0 00007ff9`1f9347b1 ntdll!RtlpFreeHeapInternal+0x822c2\n05 00000017`db5ff670 00007ff9`1d7df05b ntdll!RtlFreeHeap+0x51\n*** WARNING: Unable to verify checksum for C:\\dev\\postgres-meson\\build-ninja-2022\\tmp_install\\usr\\local\\pgsql\\bin\\readline.dll\n06 00000017`db5ff6b0 00007ff8`f7ab637c ucrtbase!_free_base+0x1b\n07 (Inline Function) --------`-------- readline!_rl_free_match_list+0x19 [C:\\dev\\vcpkg\\buildtrees\\readline-win32\\src\\e6f798e014-dba5d3560f.clean\\complete.c @ 1627]\n08 00000017`db5ff6e0 00007ff8`f7ab12fc readline!rl_complete_internal+0x4dc [C:\\dev\\vcpkg\\buildtrees\\readline-win32\\src\\e6f798e014-dba5d3560f.clean\\complete.c @ 1755]\n09 00000017`db5ff750 00007ff8`f7ab1725 readline!_rl_dispatch_subseq+0x2dc [C:\\dev\\vcpkg\\buildtrees\\readline-win32\\src\\e6f798e014-dba5d3560f.clean\\readline.c @ 582]\n0a (Inline Function) --------`-------- readline!_rl_dispatch+0x18 [C:\\dev\\vcpkg\\buildtrees\\readline-win32\\src\\e6f798e014-dba5d3560f.clean\\readline.c @ 530]\n0b 00000017`db5ff7a0 00007ff8`f7ab1625 readline!readline_internal_char+0xd5 [C:\\dev\\vcpkg\\buildtrees\\readline-win32\\src\\e6f798e014-dba5d3560f.clean\\readline.c @ 449]\n0c (Inline Function) --------`-------- readline!readline_internal_charloop+0x13 [C:\\dev\\vcpkg\\buildtrees\\readline-win32\\src\\e6f798e014-dba5d3560f.clean\\readline.c @ 490]\n0d (Inline Function) --------`-------- readline!readline_internal+0x18 [C:\\dev\\vcpkg\\buildtrees\\readline-win32\\src\\e6f798e014-dba5d3560f.clean\\readline.c @ 504]\n0e 00000017`db5ff7e0 00007ff7`6f1e9dba readline!readline+0xb5 [C:\\dev\\vcpkg\\buildtrees\\readline-win32\\src\\e6f798e014-dba5d3560f.clean\\readline.c @ 300]\n0f 00000017`db5ff810 00007ff7`6f1eb211 psql!gets_interactive+0x3a [C:\\dev\\postgres-meson\\src\\bin\\psql\\input.c @ 91]\n10 00000017`db5ff850 00007ff7`6f1ee7ec psql!MainLoop+0x3a1 [C:\\dev\\postgres-meson\\src\\bin\\psql\\mainloop.c @ 166]\n11 00000017`db5ff980 00007ff7`6f287a99 psql!main+0xcfc [C:\\dev\\postgres-meson\\src\\bin\\psql\\startup.c @ 462]\n12 00000017`db5ffaa0 00007ff7`6f2879e2 psql!invoke_main+0x39 [D:\\a\\_work\\1\\s\\src\\vctools\\crt\\vcstartup\\src\\startup\\exe_common.inl @ 79]\n13 00000017`db5ffaf0 00007ff7`6f28789e psql!__scrt_common_main_seh+0x132 [D:\\a\\_work\\1\\s\\src\\vctools\\crt\\vcstartup\\src\\startup\\exe_common.inl @ 288]\n14 00000017`db5ffb60 00007ff7`6f287b0e psql!__scrt_common_main+0xe [D:\\a\\_work\\1\\s\\src\\vctools\\crt\\vcstartup\\src\\startup\\exe_common.inl @ 331]\n15 00000017`db5ffb90 00007ff9`1e9a7374 psql!mainCRTStartup+0xe [D:\\a\\_work\\1\\s\\src\\vctools\\crt\\vcstartup\\src\\startup\\exe_main.cpp @ 17]\n16 00000017`db5ffbc0 00007ff9`1f95cc91 KERNEL32!BaseThreadInitThunk+0x14\n17 00000017`db5ffbf0 00000000`00000000 ntdll!RtlUserThreadStart+0x21\n\n\nexecuting a statement (crashes after execution):\n\n # Child-SP RetAddr Call Site\n00 000000db`ae3ff898 00007ff9`1f9cd088 ntdll!RtlpBreakPointHeap+0x16\n01 000000db`ae3ff8a0 00007ff9`1f96f6f5 ntdll!RtlpValidateHeapEntry+0x5d858\n02 000000db`ae3ff8e0 00007ff9`1d2b6edb ntdll!RtlValidateHeap+0x95\n03 
000000db`ae3ff930 00007ff8`ba38dc52 KERNELBASE!HeapValidate+0xb\n04 000000db`ae3ff960 00007ff8`ba390a76 ucrtbased!_CrtIsValidHeapPointer+0x42 [minkernel\\crts\\ucrt\\src\\appcrt\\heap\\debug_heap.cpp @ 1407]\n05 000000db`ae3ff9a0 00007ff8`ba38f565 ucrtbased!free_dbg_nolock+0x136 [minkernel\\crts\\ucrt\\src\\appcrt\\heap\\debug_heap.cpp @ 904]\n06 000000db`ae3ffaa0 00007ff8`ba392118 ucrtbased!_free_dbg+0x55 [minkernel\\crts\\ucrt\\src\\appcrt\\heap\\debug_heap.cpp @ 1030]\n07 000000db`ae3ffae0 00007ff7`6f1ebd00 ucrtbased!free+0x28 [minkernel\\crts\\ucrt\\src\\appcrt\\heap\\free.cpp @ 39]\n08 000000db`ae3ffb20 00007ff7`6f1ee7ec psql!MainLoop+0xe90 [C:\\dev\\postgres-meson\\src\\bin\\psql\\mainloop.c @ 579]\n09 000000db`ae3ffc50 00007ff7`6f287a99 psql!main+0xcfc [C:\\dev\\postgres-meson\\src\\bin\\psql\\startup.c @ 462]\n0a 000000db`ae3ffd70 00007ff7`6f2879e2 psql!invoke_main+0x39 [D:\\a\\_work\\1\\s\\src\\vctools\\crt\\vcstartup\\src\\startup\\exe_common.inl @ 79]\n0b 000000db`ae3ffdc0 00007ff7`6f28789e psql!__scrt_common_main_seh+0x132 [D:\\a\\_work\\1\\s\\src\\vctools\\crt\\vcstartup\\src\\startup\\exe_common.inl @ 288]\n0c 000000db`ae3ffe30 00007ff7`6f287b0e psql!__scrt_common_main+0xe [D:\\a\\_work\\1\\s\\src\\vctools\\crt\\vcstartup\\src\\startup\\exe_common.inl @ 331]\n0d 000000db`ae3ffe60 00007ff9`1e9a7374 psql!mainCRTStartup+0xe [D:\\a\\_work\\1\\s\\src\\vctools\\crt\\vcstartup\\src\\startup\\exe_main.cpp @ 17]\n0e 000000db`ae3ffe90 00007ff9`1f95cc91 KERNEL32!BaseThreadInitThunk+0x14\n0f 000000db`ae3ffec0 00000000`00000000 ntdll!RtlUserThreadStart+0x21\n\nNote that the line numbers seem to commonly point to where the next frame\nwould return to (i.e. mainloop.c:579 is the call to free, but on return the if\n(slashCmdStatus == PSQL_CMD_TERMINATE) would be reached, so that's displayed -\nwhy I don't know).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 9 Jul 2024 15:59:34 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should we work around msvc failing to compile tab-complete.c?" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-07-09 17:44:27 -0400, Tom Lane wrote:\n>> Worst case: the reason no one uses readline under Windows is that it flat\n>> out doesn't work.\n\n> It seems to work fine as long as a debug-readline is paired with a debug-psql\n> or a release-readline is paired with a release-psql.\n\nOK, cool. I was concerned about the number of options that have to\nmatch according to our PQfreemem docs. But I guess in practice\nWindows packagers would likely compile libreadline as part of their\nPG build, so that the build options would always match.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jul 2024 19:07:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we work around msvc failing to compile tab-complete.c?" }, { "msg_contents": "On Tue, 9 Jul 2024 at 17:23, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2024-07-09 09:14:33 +0100, Dave Page wrote:\n> > On Mon, 8 Jul 2024 at 21:08, Andres Freund <[email protected]> wrote:\n> > > I think we'd need to backpatch more for older branches. 
At least\n> > >\n> > > commit 3f28bd7337d\n> > > Author: Thomas Munro <[email protected]>\n> > > Date: 2022-12-22 17:14:23 +1300\n> > >\n> > > Add work-around for VA_ARGS_NARGS() on MSVC.\n> > >\n> > >\n> > > Given that - afaict - tab completion never used to work with msvc, I\n> think\n> > > it'd be ok to just do it in 17 or 16+17 or such. Obviously nobody is\n> > > currently\n> > > building with readline support for windows - not sure if any packager\n> is\n> > > going\n> > > to go back and add support for it in older branches.\n> >\n> >\n> > Packagers aren't likely to be using readline, as it's GPL and it would\n> have\n> > to be shipped with packages on Windows.\n>\n> I'm not sure (I mean that literally, not as a way to state that I think\n> it's\n> not as you say) it'd actually be *that* big a problem to use readline -\n> sure\n> it'd make psql GPL, but that's the only thing using readline. Since psql\n> doesn't link to extensible / user provided code, it might actually be ok.\n>\n> With openssl < 3.0 the mix between openssl and readline would be\n> problematic,\n> IIRC the licenses aren't really compatible. But luckily openssl relicensed\n> to\n> apache v2.\n>\n\nCertainly in the case of the packages produced at EDB, we didn't want any\npotential surprises for users/redistributors/embedders, so *any* GPL was a\nproblem.\n\n\n> But that is pretty orthogonal to the issue at hand:\n>\n\nRight.\n\nWhat is more relevant is that as far as I can see, we've never actually\nsupported either libedit or readline in MSVC++ builds - which kinda makes\nsense because Windows terminals are very different from traditional *nix\nones. Windows isn't supported by either libedit or readline as far as I can\nsee, except through a couple of forks.\n\nI imagine from mingw/cygwin builds, readline would only actually work\nproperly in their respective terminals.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nOn Tue, 9 Jul 2024 at 17:23, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2024-07-09 09:14:33 +0100, Dave Page wrote:\n> On Mon, 8 Jul 2024 at 21:08, Andres Freund <[email protected]> wrote:\n> > I think we'd need to backpatch more for older branches. At least\n> >\n> > commit 3f28bd7337d\n> > Author: Thomas Munro <[email protected]>\n> > Date:   2022-12-22 17:14:23 +1300\n> >\n> >     Add work-around for VA_ARGS_NARGS() on MSVC.\n> >\n> >\n> > Given that - afaict - tab completion never used to work with msvc, I think\n> > it'd be ok to just do it in 17 or 16+17 or such. Obviously nobody is\n> > currently\n> > building with readline support for windows - not sure if any packager is\n> > going\n> > to go back and add support for it in older branches.\n>\n>\n> Packagers aren't likely to be using readline, as it's GPL and it would have\n> to be shipped with packages on Windows.\n\nI'm not sure (I mean that literally, not as a way to state that I think it's\nnot as you say) it'd actually be *that* big a problem to use readline - sure\nit'd make psql GPL, but that's the only thing using readline. Since psql\ndoesn't link to extensible / user provided code, it might actually be ok.\n\nWith openssl < 3.0 the mix between openssl and readline would be problematic,\nIIRC the licenses aren't really compatible. But luckily openssl relicensed to\napache v2.Certainly in the case of the packages produced at EDB, we didn't want any potential surprises for users/redistributors/embedders, so *any* GPL was a problem. 
\nBut that is pretty orthogonal to the issue at hand:Right.What is more relevant is that as far as I can see, we've never actually supported either libedit or readline in MSVC++ builds - which kinda makes sense because Windows terminals are very different from traditional *nix ones. Windows isn't supported by either libedit or readline as far as I can see, except through a couple of forks.I imagine from mingw/cygwin builds, readline would only actually work properly in their respective terminals.-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Wed, 10 Jul 2024 10:55:50 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we work around msvc failing to compile tab-complete.c?" }, { "msg_contents": "On 2024-07-10 We 5:55 AM, Dave Page wrote:\n>\n>\n>\n> What is more relevant is that as far as I can see, we've never \n> actually supported either libedit or readline in MSVC++ builds - which \n> kinda makes sense because Windows terminals are very different from \n> traditional *nix ones. Windows isn't supported by either libedit or \n> readline as far as I can see, except through a couple of forks.\n>\n> I imagine from mingw/cygwin builds, readline would only actually work \n> properly in their respective terminals.\n>\n\nconfigure.ac explicitly forces --without-readline on win32, so no for \nmingw. configure.ac claims there are issues especially with non-US code \npages.\n\nOne of the reasons I've continued to support building with Cygwin is \nthat its readline does seem to work, making its psql the best I know of \non Windows.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-07-10 We 5:55 AM, Dave Page\n wrote:\n\n\n\n\n\n\n\n\n\nWhat is more relevant is that as far as I can see, we've\n never actually supported either libedit or readline in\n MSVC++ builds - which kinda makes sense because Windows\n terminals are very different from traditional *nix ones.\n Windows isn't supported by either libedit or readline as far\n as I can see, except through a couple of forks.\n\n\nI imagine from mingw/cygwin builds, readline would only\n actually work properly in their respective terminals.\n\n\n\n\n\n\n\nconfigure.ac explicitly forces --without-readline on win32, so no\n for mingw. configure.ac claims there are issues especially with\n non-US code pages.\n\nOne of the reasons I've continued to support building with Cygwin\n is that its readline does seem to work, making its psql the best I\n know of on Windows.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Wed, 10 Jul 2024 07:33:23 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we work around msvc failing to compile tab-complete.c?" }, { "msg_contents": "Hi,\n\nOn 2024-07-10 10:55:50 +0100, Dave Page wrote:\n> What is more relevant is that as far as I can see, we've never actually\n> supported either libedit or readline in MSVC++ builds - which kinda makes\n> sense because Windows terminals are very different from traditional *nix\n> ones. 
Windows isn't supported by either libedit or readline as far as I can\n> see, except through a couple of forks.\n> \n> I imagine from mingw/cygwin builds, readline would only actually work\n> properly in their respective terminals.\n\nFWIW, if you can get readline to build (using something like [1] or one of the\nforks that add the support) these days it does work even in the plain windows\n\"command prompt\".\n\nThe only obvious thing that doesn't seem to work is ctrl-c cancelling queries\nrather than terminating psql. But I think that's an issue independent of\nreadline.\n\nGreetings,\n\nAndres Freund\n\n[1] https://github.com/msys2/MINGW-packages/tree/master/mingw-w64-readline", "msg_date": "Wed, 10 Jul 2024 10:05:50 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should we work around msvc failing to compile tab-complete.c?" } ]
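
As a stand-alone illustration of where both replies in the thread above point — dispatch on the first word of the input, whether by nesting the else-if chain or by a lookup table — here is a toy in plain C. It is not the real tab-complete.c code; the table entries, handler names, and word parsing are assumptions made for the example.

#include <stdio.h>
#include <string.h>
#include <strings.h>			/* strcasecmp; _stricmp would be the MSVC spelling */

typedef void (*completion_handler) (const char *line);

static void
complete_create(const char *line)
{
	printf("CREATE completions for: %s\n", line);
}

static void
complete_alter(const char *line)
{
	printf("ALTER completions for: %s\n", line);
}

static void
complete_comment(const char *line)
{
	printf("COMMENT completions for: %s\n", line);
}

static const struct
{
	const char *first_word;
	completion_handler handler;
}			dispatch_table[] =
{
	{"CREATE", complete_create},
	{"ALTER", complete_alter},
	{"COMMENT", complete_comment},
};

static void
complete(const char *line)
{
	char		first[64];
	size_t		n = 0;

	/* grab the first space-delimited word */
	while (line[n] && line[n] != ' ' && n < sizeof(first) - 1)
	{
		first[n] = line[n];
		n++;
	}
	first[n] = '\0';

	/* one lookup replaces hundreds of leading first-word tests */
	for (size_t i = 0; i < sizeof(dispatch_table) / sizeof(dispatch_table[0]); i++)
	{
		if (strcasecmp(first, dispatch_table[i].first_word) == 0)
		{
			dispatch_table[i].handler(line);
			return;
		}
	}
	printf("generic completions for: %s\n", line);
}

int
main(void)
{
	complete("COMMENT ON");
	complete("ALTER TABLE foo");
	complete("SELECT * FROM");
	return 0;
}

With a table like this, adding a new top-level keyword is a one-line entry rather than another arm in a ~700-branch chain, which is what keeps both the compiler's parser stack and the per-keystroke scan cost bounded.
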
[ { "msg_contents": "Hi,\n\nSince\n\ncommit f7431bca8b0138bdbce7025871560d39119565a0\nAuthor: Stephen Frost <[email protected]>\nDate: 2023-04-13 08:55:13 -0400\n\n Explicitly require MIT Kerberos for GSSAPI\n\n WHen building with GSSAPI support, explicitly require MIT Kerberos and\n check for gssapi_ext.h in configure.ac and meson.build. Also add\n documentation explicitly stating that we now require MIT Kerberos when\n building with GSSAPI support.\n\nconfigure/meson define HAVE_GSSAPI_EXT_H / HAVE_GSSAPI_GSSAPI_EXT_H - but\nafaict we don't use those anywhere?\n\nIt makes sense to test for the presence of gssapi_ext.h, but given that we\nrequire it to be present, I think it's not worth emitting HAVE_GSSAPI_EXT_H\netc.\n\nAs f7431bca8b is in 16, it seems best to just change this in 18.\n\n\nWhile looking at this I also found an argument omission present in the commit\nadding meson support. I plan to fix that with the attached commit.\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 8 Jul 2024 15:56:59 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Why do we define HAVE_GSSAPI_EXT_H?" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> configure/meson define HAVE_GSSAPI_EXT_H / HAVE_GSSAPI_GSSAPI_EXT_H - but\n> afaict we don't use those anywhere?\n\nIt looks to me like it's just a byproduct of the autoconf macros\nwe use to verify that you have a sane installation:\n\nif test \"$with_gssapi\" = yes ; then\n AC_CHECK_HEADERS(gssapi/gssapi.h, [],\n\t[AC_CHECK_HEADERS(gssapi.h, [], [AC_MSG_ERROR([gssapi.h header file is required for GSSAPI])])])\n AC_CHECK_HEADERS(gssapi/gssapi_ext.h, [],\n\t[AC_CHECK_HEADERS(gssapi_ext.h, [], [AC_MSG_ERROR([gssapi_ext.h header file is required for GSSAPI])])])\nfi\n\nThere might be a variant of AC_CHECK_HEADERS that doesn't have\nthe default define-a-symbol action, not sure.\n\nMaybe it's not really necessary to check both gssapi.h and\ngssapi_ext.h, but I'm not very familiar with all the variants of\nGSSAPI that are out there.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Jul 2024 19:05:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why do we define HAVE_GSSAPI_EXT_H?" }, { "msg_contents": "Hi,\n\nOn 2024-07-08 19:05:32 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > configure/meson define HAVE_GSSAPI_EXT_H / HAVE_GSSAPI_GSSAPI_EXT_H - but\n> > afaict we don't use those anywhere?\n> \n> It looks to me like it's just a byproduct of the autoconf macros\n> we use to verify that you have a sane installation:\n> \n> if test \"$with_gssapi\" = yes ; then\n> AC_CHECK_HEADERS(gssapi/gssapi.h, [],\n> \t[AC_CHECK_HEADERS(gssapi.h, [], [AC_MSG_ERROR([gssapi.h header file is required for GSSAPI])])])\n> AC_CHECK_HEADERS(gssapi/gssapi_ext.h, [],\n> \t[AC_CHECK_HEADERS(gssapi_ext.h, [], [AC_MSG_ERROR([gssapi_ext.h header file is required for GSSAPI])])])\n> fi\n> \n> There might be a variant of AC_CHECK_HEADERS that doesn't have\n> the default define-a-symbol action, not sure.\n\nYep, the singular version doesn't. That's what my attached patch uses...\n\n\n> Maybe it's not really necessary to check both gssapi.h and\n> gssapi_ext.h, but I'm not very familiar with all the variants of\n> GSSAPI that are out there.\n\nMe neither. 
I think it's fine to check both, I am just suggesting not to\ndefine a pg_config.h symbol for both...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 8 Jul 2024 17:05:07 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why do we define HAVE_GSSAPI_EXT_H?" }, { "msg_contents": "On 2024-07-08 15:56:59 -0700, Andres Freund wrote:\n> While looking at this I also found an argument omission present in the commit\n> adding meson support. I plan to fix that with the attached commit.\n\nPushed that portion.\n\n\n", "msg_date": "Sat, 20 Jul 2024 13:57:10 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why do we define HAVE_GSSAPI_EXT_H?" } ]
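To make the point concrete: because configure/meson already fail hard when gssapi_ext.h is missing, the code that consumes it can include the header unconditionally, keyed only off where gssapi.h itself was found. A minimal sketch of such an include block, assuming the existing ENABLE_GSS / HAVE_GSSAPI_H conventions (the exact include site in the tree may differ):

    #ifdef ENABLE_GSS
    /*
     * gssapi_ext.h is a hard build requirement, so no HAVE_ test is needed;
     * only the location of the headers (top level vs. gssapi/) varies.
     */
    #if defined(HAVE_GSSAPI_H)
    #include <gssapi.h>
    #include <gssapi_ext.h>
    #else
    #include <gssapi/gssapi.h>
    #include <gssapi/gssapi_ext.h>
    #endif
    #endif                          /* ENABLE_GSS */

The singular AC_CHECK_HEADER form mentioned above performs the same configure-time sanity check but does not define a pg_config.h symbol, which is what makes dropping HAVE_GSSAPI_EXT_H / HAVE_GSSAPI_GSSAPI_EXT_H possible while keeping the "is the installation sane" test.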
[ { "msg_contents": "So anyways I talked last week about lock-free vacuum. Since then I almost\nfinished creating a plugin that works on all Jetbrains products. It hooks\nto the internal database tool and displays the internals of the database\nand some stats. I would use this for my next phase: autovacuum with\ncompaction.\nThe thing is, after reading the code a million times, I still don't\nunderstand why lock-free (or minimum locking) is such a big problem! Is it\nthat hard to lazily move tuples from one page to the other after\ndefragmenting it lazily?\nI even made (a super simple equation) to calculate how much disk space can\nbe reclaimed, and created a full list of the advantages including\nmaintaining clustered indexes. This would of course be followed by\nbenchmarks.\nHere is the wip plugin btw\nhttps://github.com/ahmedyarub/database_internals_plugin/tree/2.0.0 which\nshould be released this week.\nLike I don't even trust ChatGPT at all but I kept on trying to find reasons\nfor not doing it but I couldn't find any!\nSuch a trivial change that can bring a huge advantage. I just hope that\nsomebody could find me a reason for why it wouldn't work.\n\nReally appreciate your patience,\nAhmed\n\nSo anyways I talked last week about lock-free vacuum. Since then I almost finished creating a plugin that works on all Jetbrains products. It hooks to the internal database tool and displays the internals of the database and some stats. I would use this for my next phase: autovacuum with compaction.The thing is, after reading the code a million times, I still don't understand why lock-free (or minimum locking) is such a big problem! Is it that hard to lazily move tuples from one page to the other after defragmenting it lazily?I even made (a super simple equation) to calculate how much disk space can be reclaimed, and created a full list of the advantages including maintaining clustered indexes. This would of course be followed by benchmarks.Here is the wip plugin btw https://github.com/ahmedyarub/database_internals_plugin/tree/2.0.0 which should be released this week.Like I don't even trust ChatGPT at all but I kept on trying to find reasons for not doing it but I couldn't find any!Such a trivial change that can bring a huge advantage. I just hope that somebody could find me a reason for why it wouldn't work.Really appreciate your patience,Ahmed", "msg_date": "Tue, 9 Jul 2024 01:58:02 -0300", "msg_from": "Ahmed Yarub Hani Al Nuaimi <[email protected]>", "msg_from_op": true, "msg_subject": "Lock-free compaction. Why not?" }, { "msg_contents": "On Tue, 9 Jul 2024 at 16:58, Ahmed Yarub Hani Al Nuaimi\n<[email protected]> wrote:\n> The thing is, after reading the code a million times, I still don't understand why lock-free (or minimum locking) is such a big problem! Is it that hard to lazily move tuples from one page to the other after defragmenting it lazily?\n\nI think there are a few things to think about. You may have thought of\nsome of these already.\n\n1. moving rows could cause deadlocking issues. Users might struggle to\naccept that some background process is causing their transaction to\nrollback.\n2. transaction size: How large to make the batches of tuples to move\nat once? One transaction sounds much more prone to deadlocking.\n3. xid consumption. Doing lots of small transactions to move tuples\ncould consume lots of xids.\n4. 
moving tuples around to other pages needs indexes to be updated and\ncould cause index bloat.\n\nFor #1, maybe there's something that can be done to ensure it's always\nvacuum that's the deadlock victim.\n\nYou might be interested in [1]. There's also an older discussion in\n[2] that you might find interesting.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/CAFj8pRDNDOrg90hLMmbo_hiWpgBm%2B73gmWMRUHRkTKwrGnvYJQ%40mail.gmail.com#cc4f8d730d2c5203f53c50260053fec5\n[2] https://www.postgresql.org/message-id/flat/CANTTaev-LdgYj4uZoy67catS5SF5u_X-dTHiLH7OKwU6Gv3MFA%40mail.gmail.com\n\n\n", "msg_date": "Wed, 10 Jul 2024 21:49:49 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock-free compaction. Why not?" }, { "msg_contents": "On 7/10/24 11:49, David Rowley wrote:\n> On Tue, 9 Jul 2024 at 16:58, Ahmed Yarub Hani Al Nuaimi\n> <[email protected]> wrote:\n>> The thing is, after reading the code a million times, I still don't understand why lock-free (or minimum locking) is such a big problem! Is it that hard to lazily move tuples from one page to the other after defragmenting it lazily?\n> \n> I think there are a few things to think about. You may have thought of\n> some of these already.\n> \n> 1. moving rows could cause deadlocking issues. Users might struggle to\n> accept that some background process is causing their transaction to\n> rollback.\n> 2. transaction size: How large to make the batches of tuples to move\n> at once? One transaction sounds much more prone to deadlocking.\n> 3. xid consumption. Doing lots of small transactions to move tuples\n> could consume lots of xids.\n> 4. moving tuples around to other pages needs indexes to be updated and\n> could cause index bloat.\n> \n> For #1, maybe there's something that can be done to ensure it's always\n> vacuum that's the deadlock victim.\n> \n> You might be interested in [1]. There's also an older discussion in\n> [2] that you might find interesting.\n> \n\nIIRC long time ago VACUUM FULL actually worked in a similar way, i.e. by\nmoving rows around. I'm not sure if it did the lock-free thing as\nproposed here (probably not), but I guess at least some of the reasons\nwhy it was replaced by CLUSTER would still apply to this new thing.\n\nMaybe it's a good trade off for some use cases (after all, people do\nthat using pg_repack/pg_squeeze/... so it clearly has value for them),\nbut it'd be a bit unfortunate to rediscover those old issues later.\n\nThe cluster vacuum was introduced by commit 946cf229e89 in 2010, and\nthen the \"inplace\" variant was removed by 0a469c87692 shortly after. I\nhaven't looked for the threads discussing those changes, but I guess it\nshould not be hard to find in the archives.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 10 Jul 2024 12:57:53 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock-free compaction. Why not?" }, { "msg_contents": "On Wed, 10 Jul 2024 at 22:58, Tomas Vondra\n<[email protected]> wrote:\n> IIRC long time ago VACUUM FULL actually worked in a similar way, i.e. by\n> moving rows around. I'm not sure if it did the lock-free thing as\n> proposed here (probably not), but I guess at least some of the reasons\n> why it was replaced by CLUSTER would still apply to this new thing.\n\nYeah, that changed in 9.0. 
The old version still obtained an AEL on the table.\n\nI think the primary issue with the old way was index bloat wasn't\nfixed. The release notes for 9.0 do claim the CLUSTER method \"is\nsubstantially faster in most cases\", however, I imagine there are\nplenty of cases where it wouldn't be. e.g, it's hard to imagine\nrewriting the entire 1TB table and indexes is cheaper than moving 1\nrow out of place row.\n\nDavid\n\n\n", "msg_date": "Thu, 18 Jul 2024 23:07:54 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock-free compaction. Why not?" }, { "msg_contents": "On Thu, Jul 18, 2024 at 7:08 AM David Rowley <[email protected]> wrote:\n> On Wed, 10 Jul 2024 at 22:58, Tomas Vondra\n> <[email protected]> wrote:\n> > IIRC long time ago VACUUM FULL actually worked in a similar way, i.e. by\n> > moving rows around. I'm not sure if it did the lock-free thing as\n> > proposed here (probably not), but I guess at least some of the reasons\n> > why it was replaced by CLUSTER would still apply to this new thing.\n>\n> Yeah, that changed in 9.0. The old version still obtained an AEL on the table.\n>\n> I think the primary issue with the old way was index bloat wasn't\n> fixed. The release notes for 9.0 do claim the CLUSTER method \"is\n> substantially faster in most cases\", however, I imagine there are\n> plenty of cases where it wouldn't be. e.g, it's hard to imagine\n> rewriting the entire 1TB table and indexes is cheaper than moving 1\n> row out of place row.\n\nThe other thing I remember besides index bloat is that it was\ncrushingly slow. My memory is pretty fuzzy after this long, but I feel\nlike it was on the order of minutes to do VACUUM FULL when you could\nhave done CLUSTER in seconds -- and then on top of the long wait you\noften ended up using more disk space at the end than you had at the\nbeginning due to the index bloat. I remember being surprised by the\ndecision to remove it entirely, but it sure was painful to use.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Jul 2024 14:07:18 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock-free compaction. Why not?" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Jul 18, 2024 at 7:08 AM David Rowley <[email protected]> wrote:\n>> I think the primary issue with the old way was index bloat wasn't\n>> fixed. The release notes for 9.0 do claim the CLUSTER method \"is\n>> substantially faster in most cases\", however, I imagine there are\n>> plenty of cases where it wouldn't be. e.g, it's hard to imagine\n>> rewriting the entire 1TB table and indexes is cheaper than moving 1\n>> row out of place row.\n\n> The other thing I remember besides index bloat is that it was\n> crushingly slow.\n\nYeah. The old way was great if there really were just a few tuples\nneeding to be moved ... but by the time you decide you need VACUUM\nFULL rather than plain VACUUM, that's unlikely to be the case. \n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Jul 2024 14:21:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock-free compaction. Why not?" }, { "msg_contents": "Wow I was busy for a controle of days and now I’m again fully committed to\nthis initiative. These ideas are extremely useful to my. 
I’ll first start\nby reading the old in-place implementation, but meanwhile I have the\nfollowing questions:\n1- I’m thinking of adding only one simple step to be auto-vacuum. This\nmeans that there will neither be excessive locking nor resource\nutilization. I guess my question is: does that simple step make the current\nlazy auto-vacuum much worse?\n2- Can you point me to a resource explaining why this might lead to index\nbloating?\n\nEm qui., 18 de jul. de 2024 às 15:21, Tom Lane <[email protected]> escreveu:\n\n> Robert Haas <[email protected]> writes:\n> > On Thu, Jul 18, 2024 at 7:08 AM David Rowley <[email protected]>\n> wrote:\n> >> I think the primary issue with the old way was index bloat wasn't\n> >> fixed. The release notes for 9.0 do claim the CLUSTER method \"is\n> >> substantially faster in most cases\", however, I imagine there are\n> >> plenty of cases where it wouldn't be. e.g, it's hard to imagine\n> >> rewriting the entire 1TB table and indexes is cheaper than moving 1\n> >> row out of place row.\n>\n> > The other thing I remember besides index bloat is that it was\n> > crushingly slow.\n>\n> Yeah. The old way was great if there really were just a few tuples\n> needing to be moved ... but by the time you decide you need VACUUM\n> FULL rather than plain VACUUM, that's unlikely to be the case.\n>\n> regards, tom lane\n>\n\nWow I was busy for a controle of days and now I’m again fully committed to this initiative. These ideas are extremely useful to my. I’ll first start by reading the old in-place implementation, but meanwhile I have the following questions:1- I’m thinking of adding only one simple step to be auto-vacuum. This means that there will neither be excessive locking nor resource utilization. I guess my question is: does that simple step make the current lazy auto-vacuum much worse?2- Can you point me to a resource explaining why this might lead to index bloating?Em qui., 18 de jul. de 2024 às 15:21, Tom Lane <[email protected]> escreveu:Robert Haas <[email protected]> writes:\n> On Thu, Jul 18, 2024 at 7:08 AM David Rowley <[email protected]> wrote:\n>> I think the primary issue with the old way was index bloat wasn't\n>> fixed. The release notes for 9.0 do claim the CLUSTER method \"is\n>> substantially faster in most cases\", however, I imagine there are\n>> plenty of cases where it wouldn't be. e.g, it's hard to imagine\n>> rewriting the entire 1TB table and indexes is cheaper than moving 1\n>> row out of place row.\n\n> The other thing I remember besides index bloat is that it was\n> crushingly slow.\n\nYeah.  The old way was great if there really were just a few tuples\nneeding to be moved ... but by the time you decide you need VACUUM\nFULL rather than plain VACUUM, that's unlikely to be the case.  \n\n                        regards, tom lane", "msg_date": "Sat, 20 Jul 2024 13:00:05 -0300", "msg_from": "Ahmed Yarub Hani Al Nuaimi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lock-free compaction. Why not?" }, { "msg_contents": "On Sun, 21 Jul 2024 at 04:00, Ahmed Yarub Hani Al Nuaimi\n<[email protected]> wrote:\n> 2- Can you point me to a resource explaining why this might lead to index bloating?\n\nNo resource links, but if you move a tuple to another page then you\nmust also adjust the index. 
If you have no exclusive lock on the\ntable, then you must assume older transactions still need the old\ntuple version, so you need to create another index entry rather than\nre-pointing the existing index entry's ctid to the new tuple version.\nIt's not hard to imagine that would cause the index to become larger\nif you had to move some decent portion of the tuples to other pages.\n\nFWIW, I think it would be good if we had some easier way to compact\ntables without blocking concurrent users. My primary interest in TID\nRange Scans was to allow easier identification of tuples near the end\nof the heap that could be manually UPDATEd after a vacuum to allow the\nheap to be shrunk during the next vacuum.\n\nDavid\n\n\n", "msg_date": "Sun, 21 Jul 2024 13:52:24 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock-free compaction. Why not?" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> No resource links, but if you move a tuple to another page then you\n> must also adjust the index. If you have no exclusive lock on the\n> table, then you must assume older transactions still need the old\n> tuple version, so you need to create another index entry rather than\n> re-pointing the existing index entry's ctid to the new tuple version.\n\nThe actually tricky part about that is that you have to ensure that\nany concurrent scan will see one of the two copies --- not both,\nand not neither. This is fairly hard when the concurrent query\nmight be using any of several scan methods, and might or might not\nhave visited the original tuple before you commenced the move.\nYou can solve it by treating the move more like an UPDATE, that\nis the new tuple gets a new XID, but that has its own downsides;\nnotably that it must block/be blocked by concurrent real UPDATEs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Jul 2024 22:13:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock-free compaction. Why not?" }, { "msg_contents": "On 7/21/24 04:13, Tom Lane wrote:\n> David Rowley <[email protected]> writes:\n>> No resource links, but if you move a tuple to another page then you\n>> must also adjust the index. If you have no exclusive lock on the\n>> table, then you must assume older transactions still need the old\n>> tuple version, so you need to create another index entry rather than\n>> re-pointing the existing index entry's ctid to the new tuple version.\n> \n> The actually tricky part about that is that you have to ensure that\n> any concurrent scan will see one of the two copies --- not both,\n> and not neither. This is fairly hard when the concurrent query\n> might be using any of several scan methods, and might or might not\n> have visited the original tuple before you commenced the move.\n> You can solve it by treating the move more like an UPDATE, that\n> is the new tuple gets a new XID, but that has its own downsides;\n> notably that it must block/be blocked by concurrent real UPDATEs.\n> \n\nTrue, but the UPDATE approach probably comes with it's own set of\nissues. For example, it likely breaks tracking of commit timestamps, and\nif an application depends on that e.g. for conflict resolution ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 21 Jul 2024 11:35:18 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock-free compaction. 
Why not?" }, { "msg_contents": "That clearly explains the problem. But this got me thinking: what if we do\nboth index and heap optimization at the same time?\nMeaning that the newly move heap tuple which is used to compact/defragment\nheap pages would be followed by moving the index (creating and then\ndeleting) a new index tuple at the right place in the index data files (the\none that had its dead tuples removed and internally defragmented, aka\nvacuumed). Deleting the old index could be done immediately after moving\nthe heap tuple. I think that this can both solve the bloating problem and\nmake sure that both the table and index heaps are in optimum shape, all of\nthis being done lazily to make sure that these operations would only be\ndone when the servers are not overwhelmed (or just using whatever logic our\nlazy vacuuming uses). What do you think?\n\nOn Sat, Jul 20, 2024 at 10:52 PM David Rowley <[email protected]> wrote:\n\n> On Sun, 21 Jul 2024 at 04:00, Ahmed Yarub Hani Al Nuaimi\n> <[email protected]> wrote:\n> > 2- Can you point me to a resource explaining why this might lead to\n> index bloating?\n>\n> No resource links, but if you move a tuple to another page then you\n> must also adjust the index. If you have no exclusive lock on the\n> table, then you must assume older transactions still need the old\n> tuple version, so you need to create another index entry rather than\n> re-pointing the existing index entry's ctid to the new tuple version.\n> It's not hard to imagine that would cause the index to become larger\n> if you had to move some decent portion of the tuples to other pages.\n>\n> FWIW, I think it would be good if we had some easier way to compact\n> tables without blocking concurrent users. My primary interest in TID\n> Range Scans was to allow easier identification of tuples near the end\n> of the heap that could be manually UPDATEd after a vacuum to allow the\n> heap to be shrunk during the next vacuum.\n>\n> David\n>\n\nThat clearly explains the problem. But this got me thinking: what if we do both index and heap optimization at the same time?Meaning that the newly move heap tuple which is used to compact/defragment heap pages would be followed by moving the index (creating and then deleting) a new index tuple at the right place in the index data files (the one that had its dead tuples removed and internally defragmented, aka vacuumed). Deleting the old index could be done immediately after moving the heap tuple. I think that this can both solve the bloating problem and make sure that both the table and index heaps are in optimum shape, all of this being done lazily to make sure that these operations would only be done when the servers are not overwhelmed (or just using whatever logic our lazy vacuuming uses). What do you think?On Sat, Jul 20, 2024 at 10:52 PM David Rowley <[email protected]> wrote:On Sun, 21 Jul 2024 at 04:00, Ahmed Yarub Hani Al Nuaimi\n<[email protected]> wrote:\n> 2- Can you point me to a resource explaining why this might lead to index bloating?\n\nNo resource links, but if you move a tuple to another page then you\nmust also adjust the index.  
If you have no exclusive lock on the\ntable, then you must assume older transactions still need the old\ntuple version, so you need to create another index entry rather than\nre-pointing the existing index entry's ctid to the new tuple version.\nIt's not hard to imagine that would cause the index to become larger\nif you had to move some decent portion of the tuples to other pages.\n\nFWIW, I think it would be good if we had some easier way to compact\ntables without blocking concurrent users.  My primary interest in TID\nRange Scans was to allow easier identification of tuples near the end\nof the heap that could be manually UPDATEd after a vacuum to allow the\nheap to be shrunk during the next vacuum.\n\nDavid", "msg_date": "Sun, 21 Jul 2024 11:42:12 -0300", "msg_from": "Ahmed Yarub Hani Al Nuaimi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lock-free compaction. Why not?" }, { "msg_contents": "Please don't top-post ...\n\nOn 7/21/24 16:42, Ahmed Yarub Hani Al Nuaimi wrote:\n> That clearly explains the problem. But this got me thinking: what if we do\n> both index and heap optimization at the same time?\n> Meaning that the newly move heap tuple which is used to compact/defragment\n> heap pages would be followed by moving the index (creating and then\n> deleting) a new index tuple at the right place in the index data files (the\n> one that had its dead tuples removed and internally defragmented, aka\n> vacuumed). Deleting the old index could be done immediately after moving\n> the heap tuple. I think that this can both solve the bloating problem and\n> make sure that both the table and index heaps are in optimum shape, all of\n> this being done lazily to make sure that these operations would only be\n> done when the servers are not overwhelmed (or just using whatever logic our\n> lazy vacuuming uses). What do you think?\n> \n\nI think this would run directly into the problems mentioned by Tom [1].\nYou say \"immediately\", but what does that mean? You need to explain how\nwould you ensure a scan (of arbitrary type) sees *exactly( one of the\nheap/index tuples.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 21 Jul 2024 22:14:46 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock-free compaction. Why not?" }, { "msg_contents": "On Sat, Jul 20, 2024 at 10:13 PM Tom Lane <[email protected]> wrote:\n> The actually tricky part about that is that you have to ensure that\n> any concurrent scan will see one of the two copies --- not both,\n> and not neither. This is fairly hard when the concurrent query\n> might be using any of several scan methods, and might or might not\n> have visited the original tuple before you commenced the move.\n> You can solve it by treating the move more like an UPDATE, that\n> is the new tuple gets a new XID, but that has its own downsides;\n> notably that it must block/be blocked by concurrent real UPDATEs.\n\nYeah, this is always the part that has stumped me. I think there's\nprobably some way to find bit space to indicate whether an update is a\n\"real\" update or a \"move-the-tuple\" update, although it might take a\nbunch of engineering to get the job done, or otherwise be kind of\nawkward in some way. But then what? 
You're still left with writing a\nbunch of new index entries for the tuple which will lead to bloating\nthe indexes as you try to organize the heap, so I imagine that you\nhave to move relatively small batches of tuples and then vacuum and\nthen repeat, which seems like it will take forever on a big table.\n\nIf you imagine a hypothetical world in which the block number in the\nindex entry is a logical block number rather than a physical block\nnumber, then you could imagine shuffling the physical position of the\nblocks around to put the non-empty logical blocks at the beginning of\nthe relation and then throw away the empty ones. You might still want\nto migrate some rows from partially-filled blocks to empty ones, but\nthat seems like it would often require reindexing vastly fewer rows.\nImagine for example a relation with ten half-empty blocks at the\nbeginning followed by a million completely full blocks. But now you've\nalso turned sequential scans into random I/O. Probably it works better\nif the logical->physical mapping works in multi-megabyte chunks rather\nthan block by block, but that seems like an awful lot of engineering\nto do as foundational work, especially given that even after you do\nall of that and build some kind of tuple relocator on top of it, you\nstill need a way to move around individual tuples when relocating\nchunks isn't good enough.\n\nWhat the extensions that are out there seem to do is, as I understand\nit, an online table rewrite with concurrent change capture, and then\nyou apply the changes to the output table afterward. That has the\nproblem that if the changes are happening faster than you can apply\nthem, the operation does not terminate. But, enough people seem to be\nhappy with this kind of solution that we should perhaps look harder at\ndoing something along these lines in core.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Jul 2024 08:39:23 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock-free compaction. Why not?" }, { "msg_contents": "Hi,\n\nOn Mon, Jul 22, 2024 at 08:39:23AM -0400, Robert Haas wrote:\n> What the extensions that are out there seem to do is, as I understand\n> it, an online table rewrite with concurrent change capture, and then\n> you apply the changes to the output table afterward. That has the\n> problem that if the changes are happening faster than you can apply\n> them, the operation does not terminate. But, enough people seem to be\n> happy with this kind of solution that we should perhaps look harder at\n> doing something along these lines in core.\n\nI believe this is being discussed here:\n\nhttps://commitfest.postgresql.org/49/5117/\nhttps://www.postgresql.org/message-id/5186.1706694913%40antos\n\n\nMichael\n\n\n", "msg_date": "Mon, 22 Jul 2024 14:42:41 +0200", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock-free compaction. Why not?" }, { "msg_contents": "That is a very useful thread and I'll keep on following it but it is not\nexactly what I'm trying to achieve here.\nYou see, there is a great difference between VACUUM FULL CONCURRENTLY and\nadding compaction to lazy vacuuming. The main factor here is resource\nutilization: a lot of companies have enough data that would need days to be\nvacuumed concurrently. Is the implementation discussed there pausable or at\nleast cancellable? 
Does it take into account periods of high resource\nutilization by user-generated queries?\n\nOn Mon, Jul 22, 2024 at 9:42 AM Michael Banck <[email protected]> wrote:\n\n> Hi,\n>\n> On Mon, Jul 22, 2024 at 08:39:23AM -0400, Robert Haas wrote:\n> > What the extensions that are out there seem to do is, as I understand\n> > it, an online table rewrite with concurrent change capture, and then\n> > you apply the changes to the output table afterward. That has the\n> > problem that if the changes are happening faster than you can apply\n> > them, the operation does not terminate. But, enough people seem to be\n> > happy with this kind of solution that we should perhaps look harder at\n> > doing something along these lines in core.\n>\n> I believe this is being discussed here:\n>\n> https://commitfest.postgresql.org/49/5117/\n> https://www.postgresql.org/message-id/5186.1706694913%40antos\n>\n>\n> Michael\n>\n\nThat is a very useful thread and I'll keep on following it but it is not exactly what I'm trying to achieve here.You see, there is a great difference between VACUUM FULL CONCURRENTLY and adding compaction to lazy vacuuming. The main factor here is resource utilization: a lot of companies have enough data that would need days to be vacuumed concurrently. Is the implementation discussed there pausable or at least cancellable? Does it take into account periods of high resource utilization by user-generated queries?On Mon, Jul 22, 2024 at 9:42 AM Michael Banck <[email protected]> wrote:Hi,\n\nOn Mon, Jul 22, 2024 at 08:39:23AM -0400, Robert Haas wrote:\n> What the extensions that are out there seem to do is, as I understand\n> it, an online table rewrite with concurrent change capture, and then\n> you apply the changes to the output table afterward. That has the\n> problem that if the changes are happening faster than you can apply\n> them, the operation does not terminate. But, enough people seem to be\n> happy with this kind of solution that we should perhaps look harder at\n> doing something along these lines in core.\n\nI believe this is being discussed here:\n\nhttps://commitfest.postgresql.org/49/5117/\nhttps://www.postgresql.org/message-id/5186.1706694913%40antos\n\n\nMichael", "msg_date": "Mon, 22 Jul 2024 13:59:56 -0300", "msg_from": "Ahmed Yarub Hani Al Nuaimi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lock-free compaction. Why not?" }, { "msg_contents": "On Mon, Jul 22, 2024 at 1:00 PM Ahmed Yarub Hani Al Nuaimi\n<[email protected]> wrote:\n> That is a very useful thread and I'll keep on following it but it is not exactly what I'm trying to achieve here.\n> You see, there is a great difference between VACUUM FULL CONCURRENTLY and adding compaction to lazy vacuuming. The main factor here is resource utilization: a lot of companies have enough data that would need days to be vacuumed concurrently. Is the implementation discussed there pausable or at least cancellable? Does it take into account periods of high resource utilization by user-generated queries?\n\nIf you want to discuss the patch on the other thread, you should go\nread that thread and perhaps reply there, rather than replying to this\nmessage. 
It's important to keep all of the discussion of a certain\npatch together, which doesn't happen if you reply like this.\n\nAlso, you've already been asked not to top-post and you just did it\nagain, so I'm guessing that you don't know what is meant by the term.\nSo please read this:\n\nhttps://web.archive.org/web/20230608210806/idallen.com/topposting.html\n\nIf you're going to post to this mailing list, it is important to\nunderstand the conventions and expectations that people have here. If\nyou insist on doing things differently than what everyone else does,\nyou're going to annoy a lot of people.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Jul 2024 13:20:38 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lock-free compaction. Why not?" }, { "msg_contents": "On Mon, Jul 22, 2024 at 2:20 PM Robert Haas <[email protected]> wrote:\n\n> On Mon, Jul 22, 2024 at 1:00 PM Ahmed Yarub Hani Al Nuaimi\n> <[email protected]> wrote:\n> > That is a very useful thread and I'll keep on following it but it is not\n> exactly what I'm trying to achieve here.\n> > You see, there is a great difference between VACUUM FULL CONCURRENTLY\n> and adding compaction to lazy vacuuming. The main factor here is resource\n> utilization: a lot of companies have enough data that would need days to be\n> vacuumed concurrently. Is the implementation discussed there pausable or at\n> least cancellable? Does it take into account periods of high resource\n> utilization by user-generated queries?\n>\n> If you want to discuss the patch on the other thread, you should go\n> read that thread and perhaps reply there, rather than replying to this\n> message. It's important to keep all of the discussion of a certain\n> patch together, which doesn't happen if you reply like this.\n>\n> Also, you've already been asked not to top-post and you just did it\n> again, so I'm guessing that you don't know what is meant by the term.\n> So please read this:\n>\n> https://web.archive.org/web/20230608210806/idallen.com/topposting.html\n>\n> If you're going to post to this mailing list, it is important to\n> understand the conventions and expectations that people have here. If\n> you insist on doing things differently than what everyone else does,\n> you're going to annoy a lot of people.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n\n\nOh I'm so sorry for the top-posting. I didn't even notice the warning\nbefore. I'm not discussing exactly what is in that thread but rather an\nalternative implementation. That being said, I'll do my own research, try\nto get a working implementation and then come back to this thread.\nSorry again :)\n\nOn Mon, Jul 22, 2024 at 2:20 PM Robert Haas <[email protected]> wrote:On Mon, Jul 22, 2024 at 1:00 PM Ahmed Yarub Hani Al Nuaimi\n<[email protected]> wrote:\n> That is a very useful thread and I'll keep on following it but it is not exactly what I'm trying to achieve here.\n> You see, there is a great difference between VACUUM FULL CONCURRENTLY and adding compaction to lazy vacuuming. The main factor here is resource utilization: a lot of companies have enough data that would need days to be vacuumed concurrently. Is the implementation discussed there pausable or at least cancellable? 
Does it take into account periods of high resource utilization by user-generated queries?\n\nIf you want to discuss the patch on the other thread, you should go\nread that thread and perhaps reply there, rather than replying to this\nmessage. It's important to keep all of the discussion of a certain\npatch together, which doesn't happen if you reply like this.\n\nAlso, you've already been asked not to top-post and you just did it\nagain, so I'm guessing that you don't know what is meant by the term.\nSo please read this:\n\nhttps://web.archive.org/web/20230608210806/idallen.com/topposting.html\n\nIf you're going to post to this mailing list, it is important to\nunderstand the conventions and expectations that people have here. If\nyou insist on doing things differently than what everyone else does,\nyou're going to annoy a lot of people.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.comOh I'm so sorry for the top-posting. I didn't even notice the warning before. I'm not discussing exactly what is in that thread but rather an alternative implementation. That being said, I'll do my own research, try to get a working implementation and then come back to this thread.Sorry again :)", "msg_date": "Mon, 22 Jul 2024 14:48:28 -0300", "msg_from": "Ahmed Yarub Hani Al Nuaimi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Lock-free compaction. Why not?" } ]
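To make the logical-versus-physical block idea from Robert's message earlier in this thread a bit more tangible: index entries would store logical block numbers, and a small per-relation map would translate those to physical positions, so whole chunks of the heap could be shuffled toward the front of the file without touching index entries at all. A purely hypothetical sketch — none of these types, names, or constants exist in PostgreSQL, and the chunk size is an arbitrary assumption:

    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical: map the heap in 128 MB chunks of 8 kB blocks. */
    #define CHUNK_BLOCKS 16384

    /* Hypothetical per-relation map: logical chunk -> physical chunk. */
    typedef struct LogicalChunkMap
    {
        uint32_t    nchunks;            /* number of logical chunks mapped */
        uint32_t   *chunk_physical;     /* physical chunk for each logical chunk */
    } LogicalChunkMap;

    /*
     * Translate the logical block number an index entry would carry into the
     * physical block where the tuple currently lives.  Compacting the table
     * then means relocating whole chunks and updating this map, instead of
     * re-pointing or duplicating index entries tuple by tuple.
     */
    static inline uint32_t
    logical_to_physical(const LogicalChunkMap *map, uint32_t logical_blkno)
    {
        uint32_t    chunk = logical_blkno / CHUNK_BLOCKS;
        uint32_t    offset = logical_blkno % CHUNK_BLOCKS;

        assert(chunk < map->nchunks);
        return map->chunk_physical[chunk] * CHUNK_BLOCKS + offset;
    }

As that message notes, the trade-offs remain: sequential scans become dependent on the map's physical layout, and individual out-of-place tuples still need some move mechanism — the mapping only cheapens bulk relocation.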
[ { "msg_contents": "Hi all,\n(cc-ing Tom as committer of 52585f8f072a)\n\nAs Ashutosh has mentioned here we are not installing AdjustUpgrade.pm:\nhttp://postgresql.org/message-id/CAExHW5sja9YqZhin+UOp4DuHJwmgZc86YGDkXeEEW+HVyCvRnA@mail.gmail.com\n\nThe same thing can be said for Kerberos.pm.\n\nCould there be an argument for updating src/test/perl/meson.build and\nsrc/test/perl/Makefile with install and uninstall rules for both pm\nfiles?\n\nI cannot say much in favor of Kerberos.pm, but AdjustUpgrade.pm could\nbe really useful IMO when running the main regression test suite in an\nout-of-core module for upgrade scenarios. Not to mention that it\nincludes a set of filtering regexes that are rather generic.\n\nComments?\n--\nMichael", "msg_date": "Tue, 9 Jul 2024 16:46:23 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Missing installation of Kerberos.pm and AdjustUpgrade.pm" } ]
[ { "msg_contents": "Hi,\n\nCurrently, pg_verifybackup only works with plain (directory) format backups.\nThis proposal aims to support tar-format backups too. We will read the tar\nfiles from start to finish and verify each file inside against the\nbackup_manifest information, similar to how it verifies plain files.\n\nWe are introducing new options to pg_verifybackup: -F, --format=p|t and -Z,\n--compress=METHOD, which allow users to specify the backup format and\ncompression type, similar to the options available in pg_basebackup. If these\noptions are not provided, the backup format and compression type will be\nautomatically detected. To determine the format, we will search for PG_VERSION\nfile in the backup directory — if found, it indicates a plain backup;\notherwise, it\nis a tar-format backup. For the compression type, we will check the extension\nof base.tar.xxx file of tar-format backup. Refer to patch 0008 for the details.\n\nThe main challenge is to structure the code neatly. For plain-format backups,\nwe verify bytes directly from the files. For tar-format backups, we read bytes\nfrom the tar file of the specific file we care about. We need an abstraction\nto handle both formats smoothly, without using many if statements or special\ncases.\n\nTo achieve this goal, we need to reuse existing infrastructure without\nduplicating code, and for that, the major work involved here is the code\nrefactoring. Here is a breakdown of the work:\n\n1. BBSTREAMER Rename and Relocate:\nBBSTREAMER, currently used in pg_basebackup for reading and decompressing TAR\nfiles; can also be used for pg_verifybackup. In the future, it could support\nother tools like pg_combinebackup for merging TAR backups without extraction,\nand pg_waldump for verifying WAL files from the tar backup. For that\naccessibility,\nBBSTREAMER needs to be relocated to a shared directory.\n\nMoreover, renaming BBSTREAMER to ASTREAMER (short for Archive Streamer) would\nbetter indicate its general application across multiple tools. Moving it to\nsrc/fe_utils directory is appropriate, given its frontend infrastructure use.\n\n2. pg_verifybackup Code Refactoring:\nThe existing code for plain backup verification will be split into separate\nfiles or functions, so it can also be reused for tar backup verification.\n\n3. Adding TAR Backup Verification:\nFinally, patches will be added to implement TAR backup verification, along with\ntests and documentation.\n\nPatches 0001-0003 focus on renaming and relocating BBSTREAMER, patches\n0004-0007 on splitting the existing verification code, and patches 0008-0010 on\nadding TAR backup verification capabilities, tests, and documentation. The last\nset could be a single patch but is split to make the review easier.\n\nPlease take a look at the attached patches and share your comments,\nsuggestions, or any ways to enhance them. 
Your feedback is greatly\nappreciated.\n\nThank you !\n\n--\nRegards,\nAmul Sul\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 9 Jul 2024 15:23:39 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "pg_verifybackup: TAR format backup verification" }, { "msg_contents": "Hi Amul,\n thanks for working on this.\n\n+ file_name_len = strlen(relpath);\n> + if (file_name_len < file_extn_len ||\n> + strcmp(relpath + file_name_len - file_extn_len, file_extn) != 0)\n> + {\n> + if (compress_algorithm == PG_COMPRESSION_NONE)\n> + report_backup_error(context,\n> + \"\\\"%s\\\" is not a valid file, expecting tar file\",\n> + relpath);\n> + else\n> + report_backup_error(context,\n> + \"\\\"%s\\\" is not a valid file, expecting \\\"%s\\\" compressed tar file\",\n> + relpath,\n> + get_compress_algorithm_name(compress_algorithm));\n> + return;\n> + }\n>\n\nI believe pg_verifybackup needs to exit after reporting a failure here\nsince it could not figure out a streamer to allocate.\n\nAlso, v1-0002 removes #include \"pqexpbuffer.h\" from astreamer.h and adds it\nto the new .h file and in v1-0004 it\nreverts the change. So this can be avoided altogether.\n\nOn Tue, Jul 9, 2024 at 3:24 PM Amul Sul <[email protected]> wrote:\n\n> Hi,\n>\n> Currently, pg_verifybackup only works with plain (directory) format\n> backups.\n> This proposal aims to support tar-format backups too. We will read the tar\n> files from start to finish and verify each file inside against the\n> backup_manifest information, similar to how it verifies plain files.\n>\n> We are introducing new options to pg_verifybackup: -F, --format=p|t and -Z,\n> --compress=METHOD, which allow users to specify the backup format and\n> compression type, similar to the options available in pg_basebackup. If\n> these\n> options are not provided, the backup format and compression type will be\n> automatically detected. To determine the format, we will search for\n> PG_VERSION\n> file in the backup directory — if found, it indicates a plain backup;\n> otherwise, it\n> is a tar-format backup. For the compression type, we will check the\n> extension\n> of base.tar.xxx file of tar-format backup. Refer to patch 0008 for the\n> details.\n>\n> The main challenge is to structure the code neatly. For plain-format\n> backups,\n> we verify bytes directly from the files. For tar-format backups, we read\n> bytes\n> from the tar file of the specific file we care about. We need an\n> abstraction\n> to handle both formats smoothly, without using many if statements or\n> special\n> cases.\n>\n> To achieve this goal, we need to reuse existing infrastructure without\n> duplicating code, and for that, the major work involved here is the code\n> refactoring. Here is a breakdown of the work:\n>\n> 1. BBSTREAMER Rename and Relocate:\n> BBSTREAMER, currently used in pg_basebackup for reading and decompressing\n> TAR\n> files; can also be used for pg_verifybackup. In the future, it could\n> support\n> other tools like pg_combinebackup for merging TAR backups without\n> extraction,\n> and pg_waldump for verifying WAL files from the tar backup. For that\n> accessibility,\n> BBSTREAMER needs to be relocated to a shared directory.\n>\n> Moreover, renaming BBSTREAMER to ASTREAMER (short for Archive Streamer)\n> would\n> better indicate its general application across multiple tools. Moving it to\n> src/fe_utils directory is appropriate, given its frontend infrastructure\n> use.\n>\n> 2. 
pg_verifybackup Code Refactoring:\n> The existing code for plain backup verification will be split into separate\n> files or functions, so it can also be reused for tar backup verification.\n>\n> 3. Adding TAR Backup Verification:\n> Finally, patches will be added to implement TAR backup verification, along\n> with\n> tests and documentation.\n>\n> Patches 0001-0003 focus on renaming and relocating BBSTREAMER, patches\n> 0004-0007 on splitting the existing verification code, and patches\n> 0008-0010 on\n> adding TAR backup verification capabilities, tests, and documentation. The\n> last\n> set could be a single patch but is split to make the review easier.\n>\n> Please take a look at the attached patches and share your comments,\n> suggestions, or any ways to enhance them. Your feedback is greatly\n> appreciated.\n>\n> Thank you !\n>\n> --\n> Regards,\n> Amul Sul\n> EDB: http://www.enterprisedb.com\n>\n\n\n-- \nThanks & Regards,\nSravan Velagandula\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nHi Amul,     thanks for working on this.+ file_name_len = strlen(relpath);+ if (file_name_len < file_extn_len ||+ strcmp(relpath + file_name_len - file_extn_len, file_extn) != 0)+ {+ if (compress_algorithm == PG_COMPRESSION_NONE)+ report_backup_error(context,+ \"\\\"%s\\\" is not a valid file, expecting tar file\",+ relpath);+ else+ report_backup_error(context,+ \"\\\"%s\\\" is not a valid file, expecting \\\"%s\\\" compressed tar file\",+ relpath,+ get_compress_algorithm_name(compress_algorithm));+ return;+ }I believe pg_verifybackup needs to exit after reporting a failure here since it could not figure out a streamer to allocate.Also, v1-0002 removes #include \"pqexpbuffer.h\" from astreamer.h and adds it to the new .h file and in v1-0004 itreverts the change. So this can be avoided altogether. On Tue, Jul 9, 2024 at 3:24 PM Amul Sul <[email protected]> wrote:Hi,\n\nCurrently, pg_verifybackup only works with plain (directory) format backups.\nThis proposal aims to support tar-format backups too.  We will read the tar\nfiles from start to finish and verify each file inside against the\nbackup_manifest information, similar to how it verifies plain files.\n\nWe are introducing new options to pg_verifybackup: -F, --format=p|t and -Z,\n--compress=METHOD, which allow users to specify the backup format and\ncompression type, similar to the options available in pg_basebackup. If these\noptions are not provided, the backup format and compression type will be\nautomatically detected.  To determine the format, we will search for PG_VERSION\nfile in the backup directory — if found, it indicates a plain backup;\notherwise, it\nis a tar-format backup.  For the compression type, we will check the extension\nof base.tar.xxx file of tar-format backup. Refer to patch 0008 for the details.\n\nThe main challenge is to structure the code neatly.  For plain-format backups,\nwe verify bytes directly from the files.  For tar-format backups, we read bytes\nfrom the tar file of the specific file we care about.  We need an abstraction\nto handle both formats smoothly, without using many if statements or special\ncases.\n\nTo achieve this goal, we need to reuse existing infrastructure without\nduplicating code, and for that, the major work involved here is the code\nrefactoring. Here is a breakdown of the work:\n\n1. BBSTREAMER Rename and Relocate:\nBBSTREAMER, currently used in pg_basebackup for reading and decompressing TAR\nfiles; can also be used for pg_verifybackup. 
In the future, it could support\nother tools like pg_combinebackup for merging TAR backups without extraction,\nand pg_waldump for verifying WAL files from the tar backup.  For that\naccessibility,\nBBSTREAMER needs to be relocated to a shared directory.\n\nMoreover, renaming BBSTREAMER to ASTREAMER (short for Archive Streamer) would\nbetter indicate its general application across multiple tools. Moving it to\nsrc/fe_utils directory is appropriate, given its frontend infrastructure use.\n\n2. pg_verifybackup Code Refactoring:\nThe existing code for plain backup verification will be split into separate\nfiles or functions, so it can also be reused for tar backup verification.\n\n3. Adding TAR Backup Verification:\nFinally, patches will be added to implement TAR backup verification, along with\ntests and documentation.\n\nPatches 0001-0003 focus on renaming and relocating BBSTREAMER, patches\n0004-0007 on splitting the existing verification code, and patches 0008-0010 on\nadding TAR backup verification capabilities, tests, and documentation. The last\nset could be a single patch but is split to make the review easier.\n\nPlease take a look at the attached patches and share your comments,\nsuggestions, or any ways to enhance them. Your feedback is greatly\nappreciated.\n\nThank you !\n\n--\nRegards,\nAmul Sul\nEDB: http://www.enterprisedb.com\n-- Thanks & Regards,Sravan VelagandulaEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company", "msg_date": "Mon, 22 Jul 2024 08:29:12 +0530", "msg_from": "Sravan Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Mon, Jul 22, 2024 at 8:29 AM Sravan Kumar <[email protected]> wrote:\n>\n> Hi Amul,\n> thanks for working on this.\n>\n\nThanks, for your review.\n\n>> + file_name_len = strlen(relpath);\n>> + if (file_name_len < file_extn_len ||\n>> + strcmp(relpath + file_name_len - file_extn_len, file_extn) != 0)\n>> + {\n>> + if (compress_algorithm == PG_COMPRESSION_NONE)\n>> + report_backup_error(context,\n>> + \"\\\"%s\\\" is not a valid file, expecting tar file\",\n>> + relpath);\n>> + else\n>> + report_backup_error(context,\n>> + \"\\\"%s\\\" is not a valid file, expecting \\\"%s\\\" compressed tar file\",\n>> + relpath,\n>> + get_compress_algorithm_name(compress_algorithm));\n>> + return;\n>> + }\n>\n>\n> I believe pg_verifybackup needs to exit after reporting a failure here since it could not figure out a streamer to allocate.\n>\nThe intention here is to continue the verification of the remaining tar files\ninstead of exiting immediately in case of an error. If the user prefers an\nimmediate exit, they can use the --exit-on-error option of pg_verifybackup.\n\n\n> Also, v1-0002 removes #include \"pqexpbuffer.h\" from astreamer.h and adds it to the new .h file and in v1-0004 it\n> reverts the change. So this can be avoided altogether.\n>\nFix in the attached version.\n\nRegards,\nAmul", "msg_date": "Mon, 22 Jul 2024 17:22:07 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Mon, Jul 22, 2024 at 7:53 AM Amul Sul <[email protected]> wrote:\n> Fix in the attached version.\n\nFirst of all, in the interest of full disclosure, I suggested this\nproject to Amul, so I'm +1 on the concept. 
I think making more of our\nbackup-related tools work with tar and compressed tar formats -- and\nperhaps eventually data not stored locally -- will make them a lot\nmore usable. If, for example, you take a full backup and an\nincremental backup, each in tar format, store them in the cloud\nsomeplace, and then want to verify and afterwards restore the\nincremental backup, you would need to download the tar files from the\ncloud, then extract all the tar files, then run pg_verifybackup and\npg_combinebackup over the results. With this patch set, and similar\nwork for pg_combinebackup, you could skip the step where you need to\nextract the tar files, saving significant amounts of time and disk\nspace. If the tools also had the ability to access data remotely, you\ncould save even more, but that's a much harder project, so it makes\nsense to me to start with this.\n\nSecond, I think this patch set is quite well-organized and easy to\nread. That's not to say there is nothing in these patches to which\nsomeone might object, but it seems to me that it should at least be\nsimple for anyone who wants to review to find the things to which they\nobject in the patch set without having to spend too much time on it,\nwhich is fantastic.\n\nThird, I think the general approach that these patches take to the\nproblem - namely, renaming bbstreamer to astreamer and moving it\nsomewhere that permits it to be reused - makes a lot of sense. To be\nhonest, I don't think I had it in mind that bbstreamer would be a\nreusable component when I wrote it, or if I did have it in mind, it\nwas off in some dusty corner of my mind that doesn't get visited very\noften. I was imagining that you would need to build new infrastructure\nto deal with reading the tar file, but I think what you've done here\nis better. Reusing the bbstreamer stuff gives you tar file parsing,\nand decompression if necessary, basically for free, and IMHO the\nresult looks rather elegant.\n\nHowever, I'm not very convinced by 0003. The handling between the\nmeson and make-based build systems doesn't seem consistent. On the\nmeson side, you just add the objects to the same list that contains\nall of the other files (but not in alphabetical order, which should be\nfixed). But on the make side, you for some reason invent a separate\nAOBJS list instead of just adding the files to OBJS. I don't think it\nshould be necessary to treat these objects any differently from any\nother objects, so they should be able to just go in OBJS: but if it\nwere necessary, then I feel like the meson side would need something\nsimilar.\n\nAlso, I'm not so sure about this change to src/fe_utils/meson.build:\n\n- dependencies: frontend_common_code,\n+ dependencies: [frontend_common_code, lz4, zlib, zstd],\n\nfrontend_common_code already includes dependencies on zlib and zstd,\nso we probably don't need to add those again here. I checked the\nresult of otool -L src/bin/pg_controldata/pg_controldata from the\nmeson build directory, and I find that currently it links against libz\nand libzstd but not liblz4. However, if I either make this line say\ndependencies: [frontend_common_code, lz4] or if I just update\nfrontend_common_code to include lz4, then it starts linking against\nliblz4 as well. 
I'm not entirely sure if there's any reason to do one\nor the other of those things, but I think I'd be inclined to make\nfrontend_common_code just include lz4 since it already includes zlib\nand zstd anyway, and then you don't need this change.\n\nAlternatively, we could take the position that we need to avoid having\nrandom front-end tools that don't do anything with compression at all,\nlike pg_controldata for example, to link with compression libraries at\nall. But then we'd need to rethink a bunch of things that have not\nmuch to do with this patch.\n\nRegarding 0004, I would rather not move show_progress and\nskip_checksums to the new header file. I suppose I was a bit lazy in\nmaking these file-level global variables instead of passing them down\nusing function arguments and/or a context object, but at least right\nnow they're only global within a single file. Can we think of\ninserting a preparatory patch that moves these into verifier_context?\n\nRegarding 0005, the comment /* Check whether there's an entry in the\nmanifest hash. */ should move inside verify_manifest_entry, where\nmanifest_files_lookup is called. The call to the new function\nverify_manifest_entry() needs its own, proper comment. Also, I think\nthere's a null-pointer deference hazard here, because\nverify_manifest_entry() can return NULL but the \"Validate the manifest\nsystem identifier\" chunk assumes it isn't. I think you could hit this\n- and presumably seg fault - if pg_control is on disk but not in the\nmanifest. Seems like just adding an m != NULL test is easiest, but\nsee also below comments about 0006.\n\nRegarding 0006, suppose that the member file within the tar archive is\nlonger than expected. With the unpatched code, we'll feed all of the\ndata to the checksum context, but then, after the read-loop\nterminates, we'll complain about the file being the wrong length. With\nthe patched code, we'll complain about the checksum mismatch before\nreturning from verify_content_checksum(). I think that's an unintended\nbehavior change, and I think the new behavior is worse than the old\nbehavior. But also, I think that in the case of a tar file, the\ndesired behavior is quite different. In that case, we know the length\nof the file from the member header, so we can check whether the length\nis as expected before we read any of the data bytes. If we discover\nthat the size is wrong, we can complain about that and need not feed\nthe checksum bytes to the checksum context at all -- we can just skip\nthem, which will be faster. That approach doesn't really make sense\nfor a file, because even if we were to stat() the file before we\nstarted reading it, the length could theoretically change as we are\nreading it, if someone is concurrently modifying it, but for a tar\nfile I think it does.\n\nI would suggest abandoning this refactoring. There's very little logic\nin verify_file_checksum() that you can actually reuse. I think you\nshould just duplicate the code. If you really want, you could arrange\nto reuse the error-reporting code that checks for checksumlen !=\nm->checksum_length and memcmp(checksumbuf, m->checksum_payload,\nchecksumlen) != 0, but even that I think is little enough that it's\nfine to just duplicate it. The rest is either (1) OS calls like\nopen(), read(), etc. 
which won't be applicable to the\nread-from-archive case or (2) calls to pg_checksum_WHATEVER, which are\nfine to just duplicate, IMHO.\n\nMy eyes are starting to glaze over a bit here so expect comments below\nthis point to be only a partial review of the corresponding patch.\n\nRegarding 0007, I think that the should_verify_sysid terminology is\nproblematic. I made all the code and identifier names talk only about\nthe control file, not the specific thing in the control file that we\nare going to verify, in case in the future we want to verify\nadditional things. This breaks that abstraction.\n\nRegarding 0009, I feel like astreamer_verify_content() might want to\ngrow some subroutines. One idea could be to move the\nASTREAMER_MEMBER_HEADER case and likewise ASTREAMER_MEMBER_CONTENTS\ncases into a new function for each; another idea could be to move\nsmaller chunks of logic, e.g. under the ASTREAMER_MEMBER_CONTENTS\ncase, the verify_checksums could be one subroutine and the ill-named\nverify_sysid stuff could be another. I'm not certain exactly what's\nbest here, but some of this code is as deeply as six levels nested,\nwhich is not such a terrible thing that nobody should ever do it, but\nit is bad enough that we should at least look around for a better way.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 30 Jul 2024 11:33:48 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Tue, Jul 30, 2024 at 9:04 PM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Jul 22, 2024 at 7:53 AM Amul Sul <[email protected]> wrote:\n> > Fix in the attached version.\n>\n> First of all, in the interest of full disclosure, I suggested this\n> project to Amul, so I'm +1 on the concept. I think making more of our\n> backup-related tools work with tar and compressed tar formats -- and\n> perhaps eventually data not stored locally -- will make them a lot\n> more usable. If, for example, you take a full backup and an\n> incremental backup, each in tar format, store them in the cloud\n> someplace, and then want to verify and afterwards restore the\n> incremental backup, you would need to download the tar files from the\n> cloud, then extract all the tar files, then run pg_verifybackup and\n> pg_combinebackup over the results. With this patch set, and similar\n> work for pg_combinebackup, you could skip the step where you need to\n> extract the tar files, saving significant amounts of time and disk\n> space. If the tools also had the ability to access data remotely, you\n> could save even more, but that's a much harder project, so it makes\n> sense to me to start with this.\n>\n> Second, I think this patch set is quite well-organized and easy to\n> read. That's not to say there is nothing in these patches to which\n> someone might object, but it seems to me that it should at least be\n> simple for anyone who wants to review to find the things to which they\n> object in the patch set without having to spend too much time on it,\n> which is fantastic.\n>\n> Third, I think the general approach that these patches take to the\n> problem - namely, renaming bbstreamer to astreamer and moving it\n> somewhere that permits it to be reused - makes a lot of sense. 
To be\n> honest, I don't think I had it in mind that bbstreamer would be a\n> reusable component when I wrote it, or if I did have it in mind, it\n> was off in some dusty corner of my mind that doesn't get visited very\n> often. I was imagining that you would need to build new infrastructure\n> to deal with reading the tar file, but I think what you've done here\n> is better. Reusing the bbstreamer stuff gives you tar file parsing,\n> and decompression if necessary, basically for free, and IMHO the\n> result looks rather elegant.\n>\n\nThank you so much for the summary and the review.\n\n> However, I'm not very convinced by 0003. The handling between the\n> meson and make-based build systems doesn't seem consistent. On the\n> meson side, you just add the objects to the same list that contains\n> all of the other files (but not in alphabetical order, which should be\n> fixed). But on the make side, you for some reason invent a separate\n> AOBJS list instead of just adding the files to OBJS. I don't think it\n> should be necessary to treat these objects any differently from any\n> other objects, so they should be able to just go in OBJS: but if it\n> were necessary, then I feel like the meson side would need something\n> similar.\n>\n\nFixed -- I did that because it was part of a separate group in pg_basebackup.\n\n> Also, I'm not so sure about this change to src/fe_utils/meson.build:\n>\n> - dependencies: frontend_common_code,\n> + dependencies: [frontend_common_code, lz4, zlib, zstd],\n>\n> frontend_common_code already includes dependencies on zlib and zstd,\n> so we probably don't need to add those again here. I checked the\n> result of otool -L src/bin/pg_controldata/pg_controldata from the\n> meson build directory, and I find that currently it links against libz\n> and libzstd but not liblz4. However, if I either make this line say\n> dependencies: [frontend_common_code, lz4] or if I just update\n> frontend_common_code to include lz4, then it starts linking against\n> liblz4 as well. I'm not entirely sure if there's any reason to do one\n> or the other of those things, but I think I'd be inclined to make\n> frontend_common_code just include lz4 since it already includes zlib\n> and zstd anyway, and then you don't need this change.\n>\n\nFixed -- frontend_common_code now includes lz4 as well.\n\n> Alternatively, we could take the position that we need to avoid having\n> random front-end tools that don't do anything with compression at all,\n> like pg_controldata for example, to link with compression libraries at\n> all. But then we'd need to rethink a bunch of things that have not\n> much to do with this patch.\n>\n\nNoted. I might give it a try another day, unless someone else beats\nme, perhaps in a separate thread.\n\n> Regarding 0004, I would rather not move show_progress and\n> skip_checksums to the new header file. I suppose I was a bit lazy in\n> making these file-level global variables instead of passing them down\n> using function arguments and/or a context object, but at least right\n> now they're only global within a single file. Can we think of\n> inserting a preparatory patch that moves these into verifier_context?\n>\n\nDone -- added a new patch as 0004, and the subsequent patch numbers\nhave been incremented accordingly.\n\n> Regarding 0005, the comment /* Check whether there's an entry in the\n> manifest hash. */ should move inside verify_manifest_entry, where\n> manifest_files_lookup is called. 
The call to the new function\n> verify_manifest_entry() needs its own, proper comment. Also, I think\n> there's a null-pointer deference hazard here, because\n> verify_manifest_entry() can return NULL but the \"Validate the manifest\n> system identifier\" chunk assumes it isn't. I think you could hit this\n> - and presumably seg fault - if pg_control is on disk but not in the\n> manifest. Seems like just adding an m != NULL test is easiest, but\n> see also below comments about 0006.\n>\n\nFixed -- I did the NULL check in the earlier 0007 patch, but it should\nhave been done in this patch.\n\n> Regarding 0006, suppose that the member file within the tar archive is\n> longer than expected. With the unpatched code, we'll feed all of the\n> data to the checksum context, but then, after the read-loop\n> terminates, we'll complain about the file being the wrong length. With\n> the patched code, we'll complain about the checksum mismatch before\n> returning from verify_content_checksum(). I think that's an unintended\n> behavior change, and I think the new behavior is worse than the old\n> behavior. But also, I think that in the case of a tar file, the\n> desired behavior is quite different. In that case, we know the length\n> of the file from the member header, so we can check whether the length\n> is as expected before we read any of the data bytes. If we discover\n> that the size is wrong, we can complain about that and need not feed\n> the checksum bytes to the checksum context at all -- we can just skip\n> them, which will be faster. That approach doesn't really make sense\n> for a file, because even if we were to stat() the file before we\n> started reading it, the length could theoretically change as we are\n> reading it, if someone is concurrently modifying it, but for a tar\n> file I think it does.\n>\n\nIn the case of a file size mismatch, we never reach the point where\nchecksum calculation is performed, because verify_manifest_entry()\nencounters an error and sets manifest_file->bad to true, which causes\nskip_checksum to be set to false. For that reason, I didn’t include\nthe size check again in the checksum calculation part. This behavior\nis the same for plain backups, but the additional file size check was\nadded as a precaution (per comment in verify_file_checksum()),\npossibly for the same reasons you mentioned.\n\nI agree, changing the order of errors could create confusion.\nPreviously, a file size mismatch was a clear and appropriate error\nthat was reported before the checksum failure error.\n\nHowever, this can be fixed by delaying the checksum calculation until\nthe expected file content size is received. Specifically, return from\nverify_content_checksum(), if (*computed_len != m->size). If the file\nsize is incorrect, the checksum calculation won't be performed, and\nthe caller's loop reading file (I mean in verify_file_checksum()) will\nexit at some point which later encounters the size mismatch error.\n\n> I would suggest abandoning this refactoring. There's very little logic\n> in verify_file_checksum() that you can actually reuse. I think you\n> should just duplicate the code. If you really want, you could arrange\n> to reuse the error-reporting code that checks for checksumlen !=\n> m->checksum_length and memcmp(checksumbuf, m->checksum_payload,\n> checksumlen) != 0, but even that I think is little enough that it's\n> fine to just duplicate it. The rest is either (1) OS calls like\n> open(), read(), etc. 
which won't be applicable to the\n> read-from-archive case or (2) calls to pg_checksum_WHATEVER, which are\n> fine to just duplicate, IMHO.\n>\n\nI kept the refactoring as it is by fixing verify_content_checksum() as\nmentioned in the previous paragraph. Please let me know if this fix\nand the explanation makes sense to you. I’m okay with abandoning this\nrefactor patch if you think.\n\n> My eyes are starting to glaze over a bit here so expect comments below\n> this point to be only a partial review of the corresponding patch.\n>\n> Regarding 0007, I think that the should_verify_sysid terminology is\n> problematic. I made all the code and identifier names talk only about\n> the control file, not the specific thing in the control file that we\n> are going to verify, in case in the future we want to verify\n> additional things. This breaks that abstraction.\n>\n\nAgreed, changed to should_verify_control_data.\n\n> Regarding 0009, I feel like astreamer_verify_content() might want to\n> grow some subroutines. One idea could be to move the\n> ASTREAMER_MEMBER_HEADER case and likewise ASTREAMER_MEMBER_CONTENTS\n> cases into a new function for each; another idea could be to move\n> smaller chunks of logic, e.g. under the ASTREAMER_MEMBER_CONTENTS\n> case, the verify_checksums could be one subroutine and the ill-named\n> verify_sysid stuff could be another. I'm not certain exactly what's\n> best here, but some of this code is as deeply as six levels nested,\n> which is not such a terrible thing that nobody should ever do it, but\n> it is bad enough that we should at least look around for a better way.\n>\n\nOkay, I added the verify_checksums() and verify_controldata()\nfunctions to the astreamer_verify.c file. I also updated related\nvariables that were clashing with these function names:\nverify_checksums has been renamed to verifyChecksums, and verify_sysid\nhas been renamed to verifyControlData.\n\nThanks again for the review comments. Please have a look at the\nattached version.\n\nRegards,\nAmul", "msg_date": "Wed, 31 Jul 2024 18:58:16 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Wed, Jul 31, 2024 at 9:28 AM Amul Sul <[email protected]> wrote:\n> Fixed -- I did that because it was part of a separate group in pg_basebackup.\n\nWell, that's because pg_basebackup builds multiple executables, and\nthese files needed to be linked with some but not others. It looks\nlike when Andres added meson support, instead of linking each object\nfile into the binaries that need it, he had it just build a static\nlibrary and link every executable to that. That puts the linker in\ncharge of sorting out which binaries need which files, instead of\nhaving the makefile do it. In any case, this consideration doesn't\napply when we're putting the object files into a library, so there was\nno need to preserve the separate makefile variable. I think this looks\ngood now.\n\n> Fixed -- frontend_common_code now includes lz4 as well.\n\nCool. 0003 overall looks good to me now, unless Andres wants to object.\n\n> Noted. 
I might give it a try another day, unless someone else beats\n> me, perhaps in a separate thread.\n\nProbably not too important, since nobody has complained.\n\n> Done -- added a new patch as 0004, and the subsequent patch numbers\n> have been incremented accordingly.\n\nI think I would have made this pass context->show_progress to\nprogress_report() instead of the whole verifier_context, but that's an\narguable stylistic choice, so I'll defer to you if you prefer it the\nway you have it. Other than that, this LGTM.\n\nHowever, what is now 0005 does something rather evil. The commit\nmessage claims that it's just rearranging code, and that's almost\nentirely true, except that it also changes manifest_file's pathname\nmember to be char * instead of const char *. I do not think that is a\ngood idea, and I definitely do not think you should do it in a patch\nthat purports to just be doing code movement, and I even more\ndefinitely think that you should not do it without even mentioning\nthat you did it, and why you did it.\n\n> Fixed -- I did the NULL check in the earlier 0007 patch, but it should\n> have been done in this patch.\n\nThis is now 0006. struct stat's st_size is of type off_t -- or maybe\nssize_t on some platforms? - not type size_t. I suggest making the\nfilesize argument use int64 as we do in some other places. size_t is,\nI believe, defined to be the right width to hold the size of an object\nin memory, not the size of a file on disk, so it isn't really relevant\nhere.\n\nOther than that, my only comment on this patch is that I think I would\nfind it more natural to write the check in verify_backup_file() in a\ndifferent order: I'd put context->manifest->version != 1 && m != NULL\n&& m->matched && !m->bad && strcmp() because (a) that way the most\nexpensive test is last and (b) it feels weird to think about whether\nwe have the right pathname if we don't even have a valid manifest\nentry. But this is minor and just a stylistic preference, so it's also\nOK as you have it if you prefer.\n\n> I agree, changing the order of errors could create confusion.\n> Previously, a file size mismatch was a clear and appropriate error\n> that was reported before the checksum failure error.\n\nIn my opinion, this patch (currently 0007) creates a rather confusing\nsituation that I can't easily reason about. Post-patch,\nverify_content_checksum() is a mix of two different things: it ends up\ncontaining all of the logic that needs to be performed on every chunk\nof bytes read from the file plus some but not all of the end-of-file\nerror-checks from verify_file_checksum(). That's really weird. I'm not\nvery convinced that the test for whether we've reached the end of the\nfile is 100% correct, but even if it is, the stuff before that point\nis stuff that is supposed to happen many times and the stuff after\nthat is only supposed to happen once, and I don't see any good reason\nto smush those two categories of things into a single function. Plus,\nchanging the order in which those end-of-file checks happen doesn't\nseem like the right idea either: the current ordering is good the way\nit is. Maybe you want to think of refactoring to create TWO new\nfunctions, one to do the per-hunk work and a second to do the\nend-of-file \"is the checksum OK?\" stuff, or maybe you can just open\ncode it, but I'm not willing to commit this the way it is.\n\nRegarding 0008, I don't really see a reason why the m != NULL\nshouldn't also move inside should_verify_control_data(). 
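[A minimal, self-contained sketch of the idea taken up in the next few sentences -- letting both helpers tolerate m == NULL themselves so callers need no separate guard. The struct and the should_verify_control_data() condition below are simplified stand-ins for illustration, not definitions from the patch; only should_verify_checksum() follows the macro quoted elsewhere in this thread.]

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Simplified stand-in; the real manifest_file in pg_verifybackup has more fields. */
typedef enum
{
    CHECKSUM_TYPE_NONE,
    CHECKSUM_TYPE_CRC32C,
    CHECKSUM_TYPE_SHA256
} checksum_kind;

typedef struct manifest_file
{
    const char *pathname;
    bool        matched;
    bool        bad;
    checksum_kind checksum_type;
} manifest_file;

/*
 * Both helpers test m != NULL themselves, so a caller that failed to find a
 * manifest entry can pass the NULL result straight through and both answers
 * naturally come out false.
 */
#define should_verify_checksum(m) \
    ((m) != NULL && (m)->matched && !(m)->bad && \
     (m)->checksum_type != CHECKSUM_TYPE_NONE)

/* Condition assumed for illustration only. */
#define should_verify_control_data(m) \
    ((m) != NULL && (m)->matched && !(m)->bad)

int
main(void)
{
    manifest_file ok = {"global/pg_control", true, false, CHECKSUM_TYPE_SHA256};
    manifest_file *missing = NULL;  /* e.g. file on disk but not in the manifest */

    printf("ok: checksum=%d control=%d\n",
           should_verify_checksum(&ok), should_verify_control_data(&ok));
    printf("missing: checksum=%d control=%d\n",
           should_verify_checksum(missing), should_verify_control_data(missing));
    return 0;
}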
Yeah, the\ncaller added in 0010 might not need the check, but it won't really\ncost anything. Also, it seems to me that the logic in 0010 is actually\nwrong. If m == NULL, we'll keep the values of verifyChecksums and\nverifyControlData from the previous iteration, whereas what we should\ndo is make them both false. How about removing the if m == NULL guard\nhere and making both should_verify_checksum() and\nshould_verify_control_data() test m != NULL internally? Then it all\nworks out nicely, I think. Or alternatively you need an else clause\nthat resets both values to false when m == NULL.\n\n> Okay, I added the verify_checksums() and verify_controldata()\n> functions to the astreamer_verify.c file. I also updated related\n> variables that were clashing with these function names:\n> verify_checksums has been renamed to verifyChecksums, and verify_sysid\n> has been renamed to verifyControlData.\n\nMaybe think of doing something with the ASTREAMER_MEMBER_HEADER case also.\n\nOut of time for today, will look again soon. I think the first few of\nthese are probably pretty much ready for commit already, and with a\nlittle more adjustment they'll probably be ready up through about\n0006.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 31 Jul 2024 16:07:03 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "Hi,\n\nOn 2024-07-31 16:07:03 -0400, Robert Haas wrote:\n> On Wed, Jul 31, 2024 at 9:28 AM Amul Sul <[email protected]> wrote:\n> > Fixed -- I did that because it was part of a separate group in pg_basebackup.\n> \n> Well, that's because pg_basebackup builds multiple executables, and\n> these files needed to be linked with some but not others. It looks\n> like when Andres added meson support, instead of linking each object\n> file into the binaries that need it, he had it just build a static\n> library and link every executable to that. That puts the linker in\n> charge of sorting out which binaries need which files, instead of\n> having the makefile do it.\n\nRight. Meson supports using the same file with different compilation flags,\ndepending on the context its used (i.e. as part of an executable or a shared\nlibrary). But that also ends up compiling files multiple times when using the\nsame file in multiple binaries. Which wasn't desirable here -> hence moving it\nto a static lib.\n\n\n> > Fixed -- frontend_common_code now includes lz4 as well.\n> \n> Cool. 0003 overall looks good to me now, unless Andres wants to object.\n\nNope.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 31 Jul 2024 13:57:55 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Thu, Aug 1, 2024 at 1:37 AM Robert Haas <[email protected]> wrote:\n>\n> On Wed, Jul 31, 2024 at 9:28 AM Amul Sul <[email protected]> wrote:\n> > Fixed -- I did that because it was part of a separate group in pg_basebackup.\n>\n> Well, that's because pg_basebackup builds multiple executables, and\n> these files needed to be linked with some but not others. It looks\n> like when Andres added meson support, instead of linking each object\n> file into the binaries that need it, he had it just build a static\n> library and link every executable to that. That puts the linker in\n> charge of sorting out which binaries need which files, instead of\n> having the makefile do it. 
In any case, this consideration doesn't\n> apply when we're putting the object files into a library, so there was\n> no need to preserve the separate makefile variable. I think this looks\n> good now.\n>\n\nUnderstood.\n\n> > Fixed -- frontend_common_code now includes lz4 as well.\n>\n> Cool. 0003 overall looks good to me now, unless Andres wants to object.\n>\n> > Noted. I might give it a try another day, unless someone else beats\n> > me, perhaps in a separate thread.\n>\n> Probably not too important, since nobody has complained.\n>\n> > Done -- added a new patch as 0004, and the subsequent patch numbers\n> > have been incremented accordingly.\n>\n> I think I would have made this pass context->show_progress to\n> progress_report() instead of the whole verifier_context, but that's an\n> arguable stylistic choice, so I'll defer to you if you prefer it the\n> way you have it. Other than that, this LGTM.\n>\n\nAdditionally, I moved total_size and done_size to verifier_context\nbecause done_size needs to be accessed in astreamer_verify.c.\nWith this change, verifier_context is now more suitable.\n\n> However, what is now 0005 does something rather evil. The commit\n> message claims that it's just rearranging code, and that's almost\n> entirely true, except that it also changes manifest_file's pathname\n> member to be char * instead of const char *. I do not think that is a\n> good idea, and I definitely do not think you should do it in a patch\n> that purports to just be doing code movement, and I even more\n> definitely think that you should not do it without even mentioning\n> that you did it, and why you did it.\n>\n\nTrue, that was a mistake on my part during the rebase. Fixed in the\nattached version.\n\n> > Fixed -- I did the NULL check in the earlier 0007 patch, but it should\n> > have been done in this patch.\n>\n> This is now 0006. struct stat's st_size is of type off_t -- or maybe\n> ssize_t on some platforms? - not type size_t. I suggest making the\n> filesize argument use int64 as we do in some other places. size_t is,\n> I believe, defined to be the right width to hold the size of an object\n> in memory, not the size of a file on disk, so it isn't really relevant\n> here.\n>\n\nOk, used int64.\n\n> Other than that, my only comment on this patch is that I think I would\n> find it more natural to write the check in verify_backup_file() in a\n> different order: I'd put context->manifest->version != 1 && m != NULL\n> && m->matched && !m->bad && strcmp() because (a) that way the most\n> expensive test is last and (b) it feels weird to think about whether\n> we have the right pathname if we don't even have a valid manifest\n> entry. But this is minor and just a stylistic preference, so it's also\n> OK as you have it if you prefer.\n>\n\nI used to do it that way (a) -- keeping the expensive check for last.\nI did the same thing while adding should_verify_control_data() in the\nlater patch. Somehow, I missed it here, maybe I didn't pay enough\nattention to this patch :(\n\n> > I agree, changing the order of errors could create confusion.\n> > Previously, a file size mismatch was a clear and appropriate error\n> > that was reported before the checksum failure error.\n>\n> In my opinion, this patch (currently 0007) creates a rather confusing\n> situation that I can't easily reason about. 
Post-patch,\n> verify_content_checksum() is a mix of two different things: it ends up\n> containing all of the logic that needs to be performed on every chunk\n> of bytes read from the file plus some but not all of the end-of-file\n> error-checks from verify_file_checksum(). That's really weird. I'm not\n> very convinced that the test for whether we've reached the end of the\n> file is 100% correct, but even if it is, the stuff before that point\n> is stuff that is supposed to happen many times and the stuff after\n> that is only supposed to happen once, and I don't see any good reason\n> to smush those two categories of things into a single function. Plus,\n> changing the order in which those end-of-file checks happen doesn't\n> seem like the right idea either: the current ordering is good the way\n> it is. Maybe you want to think of refactoring to create TWO new\n> functions, one to do the per-hunk work and a second to do the\n> end-of-file \"is the checksum OK?\" stuff, or maybe you can just open\n> code it, but I'm not willing to commit this the way it is.\n>\n\nUnderstood. At the start of working on the v3 review, I thought of\ncompletely discarding the 0007 patch and copying most of\nverify_file_checksum() to a new function in astreamer_verify.c.\nHowever, I later realized we could deduplicate some parts, so I split\nverify_file_checksum() and moved the reusable part to a separate\nfunction. Please have a look at v4-0007.\n\n> Regarding 0008, I don't really see a reason why the m != NULL\n> shouldn't also move inside should_verify_control_data(). Yeah, the\n> caller added in 0010 might not need the check, but it won't really\n> cost anything. Also, it seems to me that the logic in 0010 is actually\n> wrong. If m == NULL, we'll keep the values of verifyChecksums and\n> verifyControlData from the previous iteration, whereas what we should\n> do is make them both false. How about removing the if m == NULL guard\n> here and making both should_verify_checksum() and\n> should_verify_control_data() test m != NULL internally? Then it all\n> works out nicely, I think. Or alternatively you need an else clause\n> that resets both values to false when m == NULL.\n>\n\nI had the same thought about checking for NULL inside\nshould_verify_control_data(), but I wanted to maintain the structure\nsimilar to should_verify_checksum(). Making this change would have\nalso required altering should_verify_checksum(), I wasn’t sure if I\nshould make that change before. Now, I did that in the attached\nversion -- 0008 patch.\n\n> > Okay, I added the verify_checksums() and verify_controldata()\n> > functions to the astreamer_verify.c file. I also updated related\n> > variables that were clashing with these function names:\n> > verify_checksums has been renamed to verifyChecksums, and verify_sysid\n> > has been renamed to verifyControlData.\n>\n> Maybe think of doing something with the ASTREAMER_MEMBER_HEADER case also.\n>\n\nDone.\n\n> Out of time for today, will look again soon. 
I think the first few of\n> these are probably pretty much ready for commit already, and with a\n> little more adjustment they'll probably be ready up through about\n> 0006.\n>\n\nSure, thank you.\n\nRegards,\nAmul", "msg_date": "Thu, 1 Aug 2024 18:48:42 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Thu, Aug 1, 2024 at 6:48 PM Amul Sul <[email protected]> wrote:\n>\n> On Thu, Aug 1, 2024 at 1:37 AM Robert Haas <[email protected]> wrote:\n> >\n> > On Wed, Jul 31, 2024 at 9:28 AM Amul Sul <[email protected]> wrote:\n> > > Fixed -- I did that because it was part of a separate group in pg_basebackup.\n> >\n[...]\n> > Out of time for today, will look again soon. I think the first few of\n> > these are probably pretty much ready for commit already, and with a\n> > little more adjustment they'll probably be ready up through about\n> > 0006.\n> >\n>\n> Sure, thank you.\n>\n\nThe v4 version isn't handling the progress report correctly because\nthe total_size calculation was done in verify_manifest_entry(), and\ndone_size was updated during the checksum verification. This worked\nwell for the plain backup but failed for the tar backup, where\nchecksum verification occurs right after verify_manifest_entry(),\nleading to incorrect total_size in the progress report output.\n\nAdditionally, the patch missed the final progress_report(true) call\nfor persistent output, which is called from verify_backup_checksums()\nfor the plain backup but never for tar backup verification. To address\nthis, I moved the first and last progress_report() calls to the main\nfunction. Although this is a small change, I placed it in a separate\npatch, 0009, in the attached version.\n\nIn addition to these changes, the attached version includes\nimprovements in code comments, function names, and their arrangements\nin astreamer_verify.c.\n\nPlease consider the attached version for the review.\n\nRegards,\nAmul", "msg_date": "Fri, 2 Aug 2024 17:13:02 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Fri, Aug 2, 2024 at 7:43 AM Amul Sul <[email protected]> wrote:\n> Please consider the attached version for the review.\n\nThanks. I committed 0001-0003. The only thing that I changed was that\nin 0001, you forgot to pgindent, which actually mattered quite a bit,\nbecause astreamer is one character shorter than bbstreamer.\n\nBefore we proceed with the rest of this patch series, I think we\nshould fix up the comments for some of the astreamer files. 
Proposed\npatch for that attached; please review.\n\nI also noticed that cfbot was unhappy about this patch set:\n\n[10:37:55.075] pg_verifybackup.c:100:7: error: no previous extern\ndeclaration for non-static variable 'format'\n[-Werror,-Wmissing-variable-declarations]\n[10:37:55.075] char format = '\\0'; /* p(lain)/t(ar) */\n[10:37:55.075] ^\n[10:37:55.075] pg_verifybackup.c:100:1: note: declare 'static' if the\nvariable is not intended to be used outside of this translation unit\n[10:37:55.075] char format = '\\0'; /* p(lain)/t(ar) */\n[10:37:55.075] ^\n[10:37:55.075] pg_verifybackup.c:101:23: error: no previous extern\ndeclaration for non-static variable 'compress_algorithm'\n[-Werror,-Wmissing-variable-declarations]\n[10:37:55.075] pg_compress_algorithm compress_algorithm = PG_COMPRESSION_NONE;\n[10:37:55.075] ^\n[10:37:55.075] pg_verifybackup.c:101:1: note: declare 'static' if the\nvariable is not intended to be used outside of this translation unit\n[10:37:55.075] pg_compress_algorithm compress_algorithm = PG_COMPRESSION_NONE;\n[10:37:55.075] ^\n[10:37:55.075] 2 errors generated.\n\nPlease fix and, after posting future versions of the patch set, try to\nremember to check http://cfbot.cputube.org/amul-sul.html\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 5 Aug 2024 12:58:58 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Mon, Aug 5, 2024 at 10:29 PM Robert Haas <[email protected]> wrote:\n>\n> On Fri, Aug 2, 2024 at 7:43 AM Amul Sul <[email protected]> wrote:\n> > Please consider the attached version for the review.\n>\n> Thanks. I committed 0001-0003. The only thing that I changed was that\n> in 0001, you forgot to pgindent, which actually mattered quite a bit,\n> because astreamer is one character shorter than bbstreamer.\n>\n\nUnderstood. Thanks for tidying up and committing the patches.\n\n> Before we proceed with the rest of this patch series, I think we\n> should fix up the comments for some of the astreamer files. 
Proposed\n> patch for that attached; please review.\n>\n\nLooks good to me, except for the following typo that I have fixed in\nthe attached version:\n\ns/astreamer_plan_writer/astreamer_plain_writer/\n\n> I also noticed that cfbot was unhappy about this patch set:\n>\n> [10:37:55.075] pg_verifybackup.c:100:7: error: no previous extern\n> declaration for non-static variable 'format'\n> [-Werror,-Wmissing-variable-declarations]\n> [10:37:55.075] char format = '\\0'; /* p(lain)/t(ar) */\n> [10:37:55.075] ^\n> [10:37:55.075] pg_verifybackup.c:100:1: note: declare 'static' if the\n> variable is not intended to be used outside of this translation unit\n> [10:37:55.075] char format = '\\0'; /* p(lain)/t(ar) */\n> [10:37:55.075] ^\n> [10:37:55.075] pg_verifybackup.c:101:23: error: no previous extern\n> declaration for non-static variable 'compress_algorithm'\n> [-Werror,-Wmissing-variable-declarations]\n> [10:37:55.075] pg_compress_algorithm compress_algorithm = PG_COMPRESSION_NONE;\n> [10:37:55.075] ^\n> [10:37:55.075] pg_verifybackup.c:101:1: note: declare 'static' if the\n> variable is not intended to be used outside of this translation unit\n> [10:37:55.075] pg_compress_algorithm compress_algorithm = PG_COMPRESSION_NONE;\n> [10:37:55.075] ^\n> [10:37:55.075] 2 errors generated.\n>\n\nFixed in the attached version.\n\n> Please fix and, after posting future versions of the patch set, try to\n> remember to check http://cfbot.cputube.org/amul-sul.html\n\nSure. I used to rely on that earlier, but after Cirrus CI in the\nGitHub repo, I assumed the workflow would be the same as cfbot and\nstarted overlooking it. However, cfbot reported a warning that didn't\nappear in my GitHub run. From now on, I'll make sure to check cfbot as\nwell.\n\nRegards,\nAmul", "msg_date": "Tue, 6 Aug 2024 11:27:08 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Thu, Aug 1, 2024 at 9:19 AM Amul Sul <[email protected]> wrote:\n> > I think I would have made this pass context->show_progress to\n> > progress_report() instead of the whole verifier_context, but that's an\n> > arguable stylistic choice, so I'll defer to you if you prefer it the\n> > way you have it. Other than that, this LGTM.\n>\n> Additionally, I moved total_size and done_size to verifier_context\n> because done_size needs to be accessed in astreamer_verify.c.\n> With this change, verifier_context is now more suitable.\n\nBut it seems like 0006 now changes the logic for computing total_size.\nPrepatch, the condition is:\n\n- if (context->show_progress && !context->skip_checksums &&\n- should_verify_checksum(m))\n- context->total_size += m->size;\n\nwhere should_verify_checksum(m) checks (((m)->matched) && !((m)->bad)\n&& (((m)->checksum_type) != CHECKSUM_TYPE_NONE)). But post-patch the\ncondition is:\n\n+ if (!context.skip_checksums)\n...\n+ if (!should_ignore_relpath(context, m->pathname))\n+ total_size += m->size;\n\nThe old logic was reached from verify_backup_directory() which does\ncheck should_ignore_relpath(), so the new condition hasn't added\nanything. But it seems to have lost the show_progress condition, and\nthe m->checksum_type != CHECKSUM_TYPE_NONE condition. I think this\nmeans that we'll sum the sizes even when not displaying progress, and\nthat if some of the files in the manifest had no checksums, our\nprogress reporting would compute wrong percentages after the patch.\n\n> Understood. 
At the start of working on the v3 review, I thought of\n> completely discarding the 0007 patch and copying most of\n> verify_file_checksum() to a new function in astreamer_verify.c.\n> However, I later realized we could deduplicate some parts, so I split\n> verify_file_checksum() and moved the reusable part to a separate\n> function. Please have a look at v4-0007.\n\nYeah, that seems OK.\n\nThe fact that these patches don't have commit messages is making life\nmore difficult for me than it needs to be. In particular, I'm looking\nat 0009 and there's no hint about why you want to do this. In fact\nthat's the case for all of these refactoring patches. Instead of\nsaying something like \"tar format verification will want to verify the\ncontrol file, but will not be able to read the file directly from\ndisk, so separate the logic that reads the control file from the logic\nthat verifies it\" you just say what code you moved. Then I have to\nguess why you moved it, or flip back and forth between the refactoring\npatch and 0011 to try to figure it out. It would be nice if each of\nthese refactoring patches contained a clear indication about the\npurpose of the refactoring in the commit message.\n\n> I had the same thought about checking for NULL inside\n> should_verify_control_data(), but I wanted to maintain the structure\n> similar to should_verify_checksum(). Making this change would have\n> also required altering should_verify_checksum(), I wasn’t sure if I\n> should make that change before. Now, I did that in the attached\n> version -- 0008 patch.\n\nI believe there is no reason for this change to be part of 0008 at\nall, and that this should be part of whatever later patch needs it.\n\n> > Maybe think of doing something with the ASTREAMER_MEMBER_HEADER case also.\n>\n> Done.\n\nOK, the formatting of 0011 looks much better now.\n\nIt seems to me that 0011 is arranging to palloc the checksum context\nfor every file and then pfree it at the end. It seems like it would be\nconsiderably more efficient if astreamer_verify contained a\npg_checksum_context instead of a pointer to a pg_checksum_context. If\nyou need a flag to indicate whether we've reinitialized the checksum\nfor the current file, it's better to add that than to have all of\nthese unnecessary allocate/free cycles.\n\nExisting astreamer code uses struct member names_like_this. For the\nnew one, you mostly used namesLikeThis except when you used\nnames_like_this or namesLkThs.\n\nIt seems to me that instead of adding a global variable\nverify_backup_file_cb, it would be better to move the 'format'\nvariable into verifier_context. Then you can do something like if\n(context->format == 'p') verify_plain_backup_file() else\nverify_tar_backup_file().\n\nIt's pretty common for .tar.gz to be abbreviated to .tgz. I think we\nshould support that.\n\nLet's suppose that I have a backup which, for some reason, does not\nuse the same compression for all files (base.tar, 16384.tgz,\n16385.tar.gz, 16366.tar.lz4). With this patch, that will fail. Now,\nthat's not really a problem, because having a backup with mixed\ncompression algorithms like that is strange and you probably wouldn't\ntry to do it. But on the other hand, it looks to me like making the\ncode support that would be more elegant than what you have now.\nBecause, right now, you have code to detect what type of backup you've\ngot by looking at base.WHATEVER_EXTENSION ... but then you have to\nalso have code that complains if some later file doesn't have the same\nextension. 
But you could just detect the type of every file\nindividually.\n\nIn fact, I wonder if we even need -Z. What value is that actually\nproviding? Why not just always auto-detect?\n\nfind_backup_format() ignores the possibility of stat() throwing an\nerror. That's not good.\n\nSuppose that the backup directory contains main.tar, 16385.tar, and\nsnuffleupagus.tar. It looks to me like what will happen here is that\nwe'll verify main.tar with tblspc_oid = InvalidOid, 16385.tar with\ntblspc_oid = 16385, and snuffleupagus.tar with tblspc_oid =\nInvalidOid. That doesn't sound right. I think we should either\ncompletely ignore snuffleupagus.tar just as it were completely\nimaginary, or perhaps there's an argument for emitting a warning\nsaying that we weren't expecting a snuffleupagus to exist.\n\nIn general, I think all unexpected files in a tar-format backup\ndirectory should get the same treatment, regardless of whether the\nproblem is with the extension or the file itself. We should either\nsilently ignore everything that isn't expected to be present, or we\nshould emit a complaint saying that the file isn't expected to be\npresent. Right now, you say that it's \"not a valid file\" if the\nextension isn't what you expect (which doesn't seem like a good error\nmessage, because the file may be perfectly valid for what it is, it's\njust not a file we're expecting to see) and say nothing if the\nextension is right but the part of the filename preceding the\nextension is unexpected.\n\nA related issue is that it's a little unclear what --ignore is\nsupposed to do for tar-format backups. Does that ignore files in the\nbackup directory, or files instead of the tar files inside of the\nbackup directory? If we decide that --ignore ignores files in the\nbackup directory, then we should complain about any unexpected files\nthat are present there unless they've been ignored. If we decide that\n--ignore ignores files inside of the tar files, then I suggest we just\nsilently skip any files in the backup directory that don't seem to\nhave file names in the correct format. I think I prefer the latter\napproach, but I'm not 100% sure what's best.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 6 Aug 2024 13:08:57 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Tue, Aug 6, 2024 at 10:39 PM Robert Haas <[email protected]> wrote:\n>\n> On Thu, Aug 1, 2024 at 9:19 AM Amul Sul <[email protected]> wrote:\n> > > I think I would have made this pass context->show_progress to\n> > > progress_report() instead of the whole verifier_context, but that's an\n> > > arguable stylistic choice, so I'll defer to you if you prefer it the\n> > > way you have it. Other than that, this LGTM.\n> >\n> > Additionally, I moved total_size and done_size to verifier_context\n> > because done_size needs to be accessed in astreamer_verify.c.\n> > With this change, verifier_context is now more suitable.\n>\n> But it seems like 0006 now changes the logic for computing total_size.\n> Prepatch, the condition is:\n>\n> - if (context->show_progress && !context->skip_checksums &&\n> - should_verify_checksum(m))\n> - context->total_size += m->size;\n>\n> where should_verify_checksum(m) checks (((m)->matched) && !((m)->bad)\n> && (((m)->checksum_type) != CHECKSUM_TYPE_NONE)). 
But post-patch the\n> condition is:\n>\n> + if (!context.skip_checksums)\n> ...\n> + if (!should_ignore_relpath(context, m->pathname))\n> + total_size += m->size;\n>\n> The old logic was reached from verify_backup_directory() which does\n> check should_ignore_relpath(), so the new condition hasn't added\n> anything. But it seems to have lost the show_progress condition, and\n> the m->checksum_type != CHECKSUM_TYPE_NONE condition. I think this\n> means that we'll sum the sizes even when not displaying progress, and\n> that if some of the files in the manifest had no checksums, our\n> progress reporting would compute wrong percentages after the patch.\n>\n\nThat is not true. The compute_total_size() function doesn't do\nanything when not displaying progress, the first if condition, which\nreturns the same way as progress_report(). I omitted\nshould_verify_checksum() since we don't have match and bad flag\ninformation at the start, and we won't have that for TAR files at all.\nHowever, I missed the checksum_type check, which is necessary, and\nhave added it now.\n\nWith the patch, I am concerned that we won't be able to give an\naccurate progress report as before. We add all the file sizes in the\nbackup manifest to the total_size without checking if they exist on\ndisk. Therefore, sometimes the reported progress completion might not\nshow 100% when we encounter files where m->bad or m->match == false at\na later stage. However, I think this should be acceptable since there\nwill be an error for the respective missing or bad file, and it can be\nunderstood that verification is complete even if the progress isn't\n100% in that case. Thoughts/Comments?\n\n\n> > Understood. At the start of working on the v3 review, I thought of\n> > completely discarding the 0007 patch and copying most of\n> > verify_file_checksum() to a new function in astreamer_verify.c.\n> > However, I later realized we could deduplicate some parts, so I split\n> > verify_file_checksum() and moved the reusable part to a separate\n> > function. Please have a look at v4-0007.\n>\n> Yeah, that seems OK.\n>\n> The fact that these patches don't have commit messages is making life\n> more difficult for me than it needs to be. In particular, I'm looking\n> at 0009 and there's no hint about why you want to do this. In fact\n> that's the case for all of these refactoring patches. Instead of\n> saying something like \"tar format verification will want to verify the\n> control file, but will not be able to read the file directly from\n> disk, so separate the logic that reads the control file from the logic\n> that verifies it\" you just say what code you moved. Then I have to\n> guess why you moved it, or flip back and forth between the refactoring\n> patch and 0011 to try to figure it out. It would be nice if each of\n> these refactoring patches contained a clear indication about the\n> purpose of the refactoring in the commit message.\n>\n\nSorry, I was a bit lazy there, assuming you'd handle the review :).\nI can understand the frustration -- added some description.\n\n> > I had the same thought about checking for NULL inside\n> > should_verify_control_data(), but I wanted to maintain the structure\n> > similar to should_verify_checksum(). Making this change would have\n> > also required altering should_verify_checksum(), I wasn’t sure if I\n> > should make that change before. 
Now, I did that in the attached\n> > version -- 0008 patch.\n>\n> I believe there is no reason for this change to be part of 0008 at\n> all, and that this should be part of whatever later patch needs it.\n>\n\nOk\n\n> > > Maybe think of doing something with the ASTREAMER_MEMBER_HEADER case also.\n> >\n> > Done.\n>\n> OK, the formatting of 0011 looks much better now.\n>\n> It seems to me that 0011 is arranging to palloc the checksum context\n> for every file and then pfree it at the end. It seems like it would be\n> considerably more efficient if astreamer_verify contained a\n> pg_checksum_context instead of a pointer to a pg_checksum_context. If\n> you need a flag to indicate whether we've reinitialized the checksum\n> for the current file, it's better to add that than to have all of\n> these unnecessary allocate/free cycles.\n>\n\nI tried in the attached version, and it’s a good improvement. We don’t\nneed any flags; we can allocate that during astreamer creation. Later,\nin the ASTREAMER_MEMBER_HEADER case while reading, we can\n(re)initialize the context for each file as needed.\n\n> Existing astreamer code uses struct member names_like_this. For the\n> new one, you mostly used namesLikeThis except when you used\n> names_like_this or namesLkThs.\n>\n\nYeah, in my patch, I ended up using the same name for both the\nvariable and the function. To avoid that, I made this change. This\ncould be a minor inconvenience for someone using ctags/cscope to find\nthe definition of the function or variable, as they might be directed\nto the wrong place. However, I think it’s still okay since there are\nways to find the correct definition. I reverted those changes in the\nattached version.\n\n> It seems to me that instead of adding a global variable\n> verify_backup_file_cb, it would be better to move the 'format'\n> variable into verifier_context. Then you can do something like if\n> (context->format == 'p') verify_plain_backup_file() else\n> verify_tar_backup_file().\n>\n\nDone.\n\n> It's pretty common for .tar.gz to be abbreviated to .tgz. I think we\n> should support that.\n>\n\nDone.\n\n> Let's suppose that I have a backup which, for some reason, does not\n> use the same compression for all files (base.tar, 16384.tgz,\n> 16385.tar.gz, 16366.tar.lz4). With this patch, that will fail. Now,\n> that's not really a problem, because having a backup with mixed\n> compression algorithms like that is strange and you probably wouldn't\n> try to do it. But on the other hand, it looks to me like making the\n> code support that would be more elegant than what you have now.\n> Because, right now, you have code to detect what type of backup you've\n> got by looking at base.WHATEVER_EXTENSION ... but then you have to\n> also have code that complains if some later file doesn't have the same\n> extension. But you could just detect the type of every file\n> individually.\n>\n> In fact, I wonder if we even need -Z. What value is that actually\n> providing? Why not just always auto-detect?\n>\n\n+1, removed -Z option.\n\n> find_backup_format() ignores the possibility of stat() throwing an\n> error. That's not good.\n>\n\nI wasn't sure about that before -- I tried it in the attached version.\nSee if it looks good to you.\n\n> Suppose that the backup directory contains main.tar, 16385.tar, and\n> snuffleupagus.tar. It looks to me like what will happen here is that\n> we'll verify main.tar with tblspc_oid = InvalidOid, 16385.tar with\n> tblspc_oid = 16385, and snuffleupagus.tar with tblspc_oid =\n> InvalidOid. 
That doesn't sound right. I think we should either\n> completely ignore snuffleupagus.tar just as it were completely\n> imaginary, or perhaps there's an argument for emitting a warning\n> saying that we weren't expecting a snuffleupagus to exist.\n>\n> In general, I think all unexpected files in a tar-format backup\n> directory should get the same treatment, regardless of whether the\n> problem is with the extension or the file itself. We should either\n> silently ignore everything that isn't expected to be present, or we\n> should emit a complaint saying that the file isn't expected to be\n> present. Right now, you say that it's \"not a valid file\" if the\n> extension isn't what you expect (which doesn't seem like a good error\n> message, because the file may be perfectly valid for what it is, it's\n> just not a file we're expecting to see) and say nothing if the\n> extension is right but the part of the filename preceding the\n> extension is unexpected.\n>\n\nI added an error for files other than base.tar and\n<tablespacesoid>.tar. I think the error message could be improved.\n\n> A related issue is that it's a little unclear what --ignore is\n> supposed to do for tar-format backups. Does that ignore files in the\n> backup directory, or files instead of the tar files inside of the\n> backup directory? If we decide that --ignore ignores files in the\n> backup directory, then we should complain about any unexpected files\n> that are present there unless they've been ignored. If we decide that\n> --ignore ignores files inside of the tar files, then I suggest we just\n> silently skip any files in the backup directory that don't seem to\n> have file names in the correct format. I think I prefer the latter\n> approach, but I'm not 100% sure what's best.\n>\n\nI am interested in having that feature to be as useful as possible --\nI mean, allowing the option to ignore files from the backup directory\nand from the archive file as well. I don't see any major drawbacks,\napart from spending extra CPU cycles to browse the ignore list.\n\nRegards,\nAmul", "msg_date": "Wed, 7 Aug 2024 19:10:31 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "[ I committed 0001, then noticed I had a type in the subject line of\nthe commit message. Argh. ]\n\nOn Wed, Aug 7, 2024 at 9:41 AM Amul Sul <[email protected]> wrote:\n> With the patch, I am concerned that we won't be able to give an\n> accurate progress report as before. We add all the file sizes in the\n> backup manifest to the total_size without checking if they exist on\n> disk. Therefore, sometimes the reported progress completion might not\n> show 100% when we encounter files where m->bad or m->match == false at\n> a later stage. However, I think this should be acceptable since there\n> will be an error for the respective missing or bad file, and it can be\n> understood that verification is complete even if the progress isn't\n> 100% in that case. Thoughts/Comments?\n\nWhen somebody says that something is a refactoring commit, my\nassumption is that there should be no behavior change. 
If the behavior\nis changing, it's not purely a refactoring, and it shouldn't be\nlabelled as a refactoring (or at least there should be a prominent\ndisclaimer identifying whatever behavior has changed, if a small\nchange was deemed acceptable and unavoidable).\n\nI am very reluctant to accept a functional regression of the type that\nyou describe here (or the type that I postulated might occur, even if\nI was wrong and it doesn't). The point here is that we're trying to\nreuse the code, and I support that goal, because code reuse is good.\nBut it's not such a good thing that we should do it if it has negative\nconsequences. We should either figure out some other way of\nrefactoring it that doesn't have those negative side-effects, or we\nshould leave the existing code alone and have separate code for the\nnew stuff we want to do.\n\nI do realize that the type of side effect you describe here is quite\nminor. I could live with it if it were unavoidable. But I really don't\nsee why we can't avoid it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 7 Aug 2024 11:42:02 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Wed, Aug 7, 2024 at 9:12 PM Robert Haas <[email protected]> wrote:\n>\n> [ I committed 0001, then noticed I had a type in the subject line of\n> the commit message. Argh. ]\n>\n> On Wed, Aug 7, 2024 at 9:41 AM Amul Sul <[email protected]> wrote:\n> > With the patch, I am concerned that we won't be able to give an\n> > accurate progress report as before. We add all the file sizes in the\n> > backup manifest to the total_size without checking if they exist on\n> > disk. Therefore, sometimes the reported progress completion might not\n> > show 100% when we encounter files where m->bad or m->match == false at\n> > a later stage. However, I think this should be acceptable since there\n> > will be an error for the respective missing or bad file, and it can be\n> > understood that verification is complete even if the progress isn't\n> > 100% in that case. Thoughts/Comments?\n>\n> When somebody says that something is a refactoring commit, my\n> assumption is that there should be no behavior change. If the behavior\n> is changing, it's not purely a refactoring, and it shouldn't be\n> labelled as a refactoring (or at least there should be a prominent\n> disclaimer identifying whatever behavior has changed, if a small\n> change was deemed acceptable and unavoidable).\n>\n\nI agree; I'll be more careful next time.\n\n> I am very reluctant to accept a functional regression of the type that\n> you describe here (or the type that I postulated might occur, even if\n> I was wrong and it doesn't). The point here is that we're trying to\n> reuse the code, and I support that goal, because code reuse is good.\n> But it's not such a good thing that we should do it if it has negative\n> consequences. We should either figure out some other way of\n> refactoring it that doesn't have those negative side-effects, or we\n> should leave the existing code alone and have separate code for the\n> new stuff we want to do.\n>\n> I do realize that the type of side effect you describe here is quite\n> minor. I could live with it if it were unavoidable. 
But I really don't\n> see why we can't avoid it.\n>\n\nThe main issue I have is computing the total_size of valid files that\nwill be checksummed and that exist in both the manifests and the\nbackup, in the case of a tar backup. This cannot be done in the same\nway as with a plain backup.\n\nAnother consideration is that, in the case of a tar backup, since\nwe're reading all tar files from the backup rather than individual\nfiles to be checksummed, should we consider total_size as the sum of\nall valid tar files in the backup, regardless of checksum\nverification? If so, we would need to perform an initial pass to\ncalculate the total_size in the directory, similar to what\nverify_backup_directory() does., but doing so I am a bit uncomfortable\nand unsure if we should do that pass.\n\nAn alternative idea is to provide progress reports per file instead of\nfor the entire backup directory. We could report the size of each file\nand keep track of done_size as we read each tar header and content.\nFor example, the report could be:\n\n109032/109032 kB (100%) base.tar file verified\n123444/123444 kB (100%) 16245.tar file verified\n23478/23478 kB (100%) 16246.tar file verified\n.....\n<total_done_size>/<total_size> (NNN%) verified.\n\nI think this type of reporting can be implemented with minimal\nchanges, abd for plain backups, we can keep the reporting as it is.\nThoughts?\n\nRegards,\nAmul\n\n\n", "msg_date": "Wed, 7 Aug 2024 22:35:00 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Wed, Aug 7, 2024 at 1:05 PM Amul Sul <[email protected]> wrote:\n> The main issue I have is computing the total_size of valid files that\n> will be checksummed and that exist in both the manifests and the\n> backup, in the case of a tar backup. This cannot be done in the same\n> way as with a plain backup.\n\nI think you should compute and sum the sizes of the tar files\nthemselves. Suppose you readdir(), make a list of files that look\nrelevant, and stat() each one. total_size is the sum of the file\nsizes. Then you work your way through the list of files and read each\none. done_size is the total size of all files you've read completely\nplus the number of bytes you've read from the current file so far.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 7 Aug 2024 13:58:23 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Wed, Aug 7, 2024 at 11:28 PM Robert Haas <[email protected]> wrote:\n>\n> On Wed, Aug 7, 2024 at 1:05 PM Amul Sul <[email protected]> wrote:\n> > The main issue I have is computing the total_size of valid files that\n> > will be checksummed and that exist in both the manifests and the\n> > backup, in the case of a tar backup. This cannot be done in the same\n> > way as with a plain backup.\n>\n> I think you should compute and sum the sizes of the tar files\n> themselves. Suppose you readdir(), make a list of files that look\n> relevant, and stat() each one. total_size is the sum of the file\n> sizes. Then you work your way through the list of files and read each\n> one. 
done_size is the total size of all files you've read completely\n> plus the number of bytes you've read from the current file so far.\n>\n\nI tried this in the attached version and made a few additional changes\nbased on Sravan's off-list comments regarding function names and\ndescriptions.\n\nNow, verification happens in two passes. The first pass simply\nverifies the file names, determines their compression types, and\nreturns a list of valid tar files whose contents need to be verified\nin the second pass. The second pass is called at the end of\nverify_backup_directory() after all files in that directory have been\nscanned. I named the functions for pass 1 and pass 2 as\nverify_tar_file_name() and verify_tar_file_contents(), respectively.\nThe rest of the code flow is similar as in the previous version.\n\nIn the attached patch set, I abandoned the changes, touching the\nprogress reporting code of plain backups by dropping the previous 0009\npatch. The new 0009 patch adds missing APIs to simple_list.c to\ndestroy SimplePtrList. The rest of the patch numbers remain unchanged.\n\nRegards,\nAmul", "msg_date": "Mon, 12 Aug 2024 14:42:24 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Mon, Aug 12, 2024 at 5:13 AM Amul Sul <[email protected]> wrote:\n> I tried this in the attached version and made a few additional changes\n> based on Sravan's off-list comments regarding function names and\n> descriptions.\n>\n> Now, verification happens in two passes. The first pass simply\n> verifies the file names, determines their compression types, and\n> returns a list of valid tar files whose contents need to be verified\n> in the second pass. The second pass is called at the end of\n> verify_backup_directory() after all files in that directory have been\n> scanned. I named the functions for pass 1 and pass 2 as\n> verify_tar_file_name() and verify_tar_file_contents(), respectively.\n> The rest of the code flow is similar as in the previous version.\n>\n> In the attached patch set, I abandoned the changes, touching the\n> progress reporting code of plain backups by dropping the previous 0009\n> patch. The new 0009 patch adds missing APIs to simple_list.c to\n> destroy SimplePtrList. The rest of the patch numbers remain unchanged.\n\nI think you've entangled the code paths here for plain-format backup\nand tar-format backup in a way that is not very nice. I suggest\nrefactoring things so that verify_backup_directory() is only called\nfor plain-format backups, and you have some completely separate\nfunction (perhaps verify_tar_backup) that is called for tar-format\nbackups. I don't think verify_backup_file() should be shared between\ntar-format and plain-format backups either. Let that be just for\nplain-format backups, and have separate logic for tar-format backups.\nRight now you've got \"if\" statements in various places trying to get\nall the cases correct, but I think you've missed some (and there's\nalso the issue of updating all the comments).\n\nFor instance, verify_backup_file() recurses into subdirectories, but\nthat behavior is inappropriate for a tar format backup, where\nsubdirectories should instead be treated like stray files: complain\nthat they exist. pg_verify_backup() does this:\n\n /* If it's a directory, just recurse. 
*/\n if (S_ISDIR(sb.st_mode))\n {\n verify_backup_directory(context, relpath, fullpath);\n return;\n }\n\n /* If it's not a directory, it should be a plain file. */\n if (!S_ISREG(sb.st_mode))\n {\n report_backup_error(context,\n \"\\\"%s\\\" is not a file or directory\",\n relpath);\n return;\n }\n\nFor a plain format backup, this whole thing should be just:\n\n /* In a tar format backup, we expect only plain files. */\n if (!S_ISREG(sb.st_mode))\n {\n report_backup_error(context,\n \"\\\"%s\\\" is not a plain file\",\n relpath);\n return;\n }\n\nAlso, immediately above, you do\nsimple_string_list_append(&context->ignore_list, relpath), but that is\npointless in the tar-file case, and arguably wrong, if -i is going to\nignore both pathnames in the base directory and also pathnames inside\nthe tar files, because we could add something to the ignore list here\n-- accomplishing nothing useful -- and then that ignore-list entry\ncould cause us to disregard a stray file with the same name present\ninside one of the tar files -- which is silly. Note that the actual\npoint of this logic is to make sure that if we can't stat() a certain\ndirectory, we don't go off and issue a million complaints about all\nthe files in that directory being missing. But this doesn't accomplish\nthat goal for a tar-format backup. For a tar-format backup, you'd want\nto figure out which files in the manifest we don't expect to see based\non this file being inaccessible, and then arrange to suppress future\ncomplaints about all of those files. But you can't implement that\nhere, because you haven't parsed the file name yet. That happens\nlater, in verify_tar_file_name().\n\nYou could add a whole bunch more if statements here and try to work\naround these issues, but it seems pretty obviously a dead end to me.\nAlmost the entire function is going to end up being inside of an\nif-statement. Essentially the only thing in verify_backup_file() that\nshould actually be the same in the plain and tar-format cases is that\nyou should call stat() either way and check whether it throws an\nerror. But that is not enough justification for trying to share the\nwhole function.\n\nI find the logic in verify_tar_file_name() to be somewhat tortured as\nwell. The strstr() calls will match those strings anywhere in the\nfilename, not just at the end. But also, even if you fixed that, why\nwork backward from the end of the filename toward the beginning? It\nwould seem like it would more sense to (1) first check if the string\nstarts with \"base\" and set suffix equal to pathname+4, (2) if not,\nstrtol(pathname, &suffix, 10) and complain if we didn't eat at least\none character or got 0 or something too big to be an OID, (3) check\nwhether suffix is .tar, .tar.gz, etc.\n\nIn verify_member_checksum(), you set mystreamer->verify_checksum =\nfalse. That would be correct if there could only ever be one call to\nverify_member_checksum() per member file, but there is no such rule.\nThere can be (and, I think, normally will be) more than one\nASTREAMER_MEMBER_CONTENTS chunk. I'm a little confused as to how this\ncode passes any kind of testing.\n\nAlso in verify_member_checksum(), the mystreamer->received_bytes <\nm->size seems strange. I don't think this is the right way to do\nsomething when you reach the end of an archive member. The right way\nto do that is to do it when the ASTREAMER_MEMBER_TRAILER chunk shows\nup.\n\nIn verify_member_control_data(), you use astreamer_buffer_untIl(). 
But\nthat's the same buffer that is being used by verify_member_checksum(),\nso I don't really understand how you expect this to work. If this code\npath were ever taken, verify_member_checksum() would see the same data\nmore than once.\n\nThe call to pg_log_debug() in this function also seems quite random.\nIn a plain-format backup, we'd actually be doing something different\nfor pg_controldata vs. other files, namely reading it during the\ninitial directory scan. But here we're reading the file in exactly the\nsame sense as we're reading every other file, neither more nor less,\nso why mention this file and not all of the others? And why at this\nexact spot in the code?\n\nI suspect that the report_fatal_error(\"%s: could not read control\nfile: read %d of %zu\", ...) call is unreachable. I agree that you need\nto handle the case where the control file inside the tar file is not\nthe expected length, and in fact I think you should probably write a\nTAP test for that exact scenario to make sure it works. I bet this\ndoesn't. Even if it did, the error message makes no sense in context.\nIn the plain-format backup, this error would come from code reading\nthe actual bytes off the disk -- i.e. the complaint about not being\nable to read the control file would come from the read() system call.\nHere it doesn't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 13 Aug 2024 13:19:34 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Tue, Aug 13, 2024 at 10:49 PM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Aug 12, 2024 at 5:13 AM Amul Sul <[email protected]> wrote:\n> > I tried this in the attached version and made a few additional changes\n> > based on Sravan's off-list comments regarding function names and\n> > descriptions.\n> >\n> > Now, verification happens in two passes. The first pass simply\n> > verifies the file names, determines their compression types, and\n> > returns a list of valid tar files whose contents need to be verified\n> > in the second pass. The second pass is called at the end of\n> > verify_backup_directory() after all files in that directory have been\n> > scanned. I named the functions for pass 1 and pass 2 as\n> > verify_tar_file_name() and verify_tar_file_contents(), respectively.\n> > The rest of the code flow is similar as in the previous version.\n> >\n> > In the attached patch set, I abandoned the changes, touching the\n> > progress reporting code of plain backups by dropping the previous 0009\n> > patch. The new 0009 patch adds missing APIs to simple_list.c to\n> > destroy SimplePtrList. The rest of the patch numbers remain unchanged.\n>\n> I think you've entangled the code paths here for plain-format backup\n> and tar-format backup in a way that is not very nice. I suggest\n> refactoring things so that verify_backup_directory() is only called\n> for plain-format backups, and you have some completely separate\n> function (perhaps verify_tar_backup) that is called for tar-format\n> backups. I don't think verify_backup_file() should be shared between\n> tar-format and plain-format backups either. 
Let that be just for\n> plain-format backups, and have separate logic for tar-format backups.\n> Right now you've got \"if\" statements in various places trying to get\n> all the cases correct, but I think you've missed some (and there's\n> also the issue of updating all the comments).\n>\n> For instance, verify_backup_file() recurses into subdirectories, but\n> that behavior is inappropriate for a tar format backup, where\n> subdirectories should instead be treated like stray files: complain\n> that they exist. pg_verify_backup() does this:\n>\n> /* If it's a directory, just recurse. */\n> if (S_ISDIR(sb.st_mode))\n> {\n> verify_backup_directory(context, relpath, fullpath);\n> return;\n> }\n>\n> /* If it's not a directory, it should be a plain file. */\n> if (!S_ISREG(sb.st_mode))\n> {\n> report_backup_error(context,\n> \"\\\"%s\\\" is not a file or directory\",\n> relpath);\n> return;\n> }\n>\n> For a plain format backup, this whole thing should be just:\n>\n> /* In a tar format backup, we expect only plain files. */\n> if (!S_ISREG(sb.st_mode))\n> {\n> report_backup_error(context,\n> \"\\\"%s\\\" is not a plain file\",\n> relpath);\n> return;\n> }\n>\n> Also, immediately above, you do\n> simple_string_list_append(&context->ignore_list, relpath), but that is\n> pointless in the tar-file case, and arguably wrong, if -i is going to\n> ignore both pathnames in the base directory and also pathnames inside\n> the tar files, because we could add something to the ignore list here\n> -- accomplishing nothing useful -- and then that ignore-list entry\n> could cause us to disregard a stray file with the same name present\n> inside one of the tar files -- which is silly. Note that the actual\n> point of this logic is to make sure that if we can't stat() a certain\n> directory, we don't go off and issue a million complaints about all\n> the files in that directory being missing. But this doesn't accomplish\n> that goal for a tar-format backup. For a tar-format backup, you'd want\n> to figure out which files in the manifest we don't expect to see based\n> on this file being inaccessible, and then arrange to suppress future\n> complaints about all of those files. But you can't implement that\n> here, because you haven't parsed the file name yet. That happens\n> later, in verify_tar_file_name().\n>\n> You could add a whole bunch more if statements here and try to work\n> around these issues, but it seems pretty obviously a dead end to me.\n> Almost the entire function is going to end up being inside of an\n> if-statement. Essentially the only thing in verify_backup_file() that\n> should actually be the same in the plain and tar-format cases is that\n> you should call stat() either way and check whether it throws an\n> error. But that is not enough justification for trying to share the\n> whole function.\n>\n\nI agree with keeping verify_backup_file() separate, but I'm hesitant\nabout doing the same for verify_backup_directory(). Otherwise, we\nmight end up with nearly duplicate functions that are very similar.\nSince the changes in verify_backup_directory() are minimal, I've left\nit as is, let me know your thoughts on that. I’ve kept\nverify_backup_file() separated for plain backup files and added a new\nfunction, verify_tar_backup_file(), in patch 0011. To maintain\nconsistency, I also renamed verify_backup_file() to\nverify_plain_backup_file() in patch 0006.\n\n> I find the logic in verify_tar_file_name() to be somewhat tortured as\n> well. 
The strstr() calls will match those strings anywhere in the\n> filename, not just at the end. But also, even if you fixed that, why\n> work backward from the end of the filename toward the beginning? It\n> would seem like it would more sense to (1) first check if the string\n> starts with \"base\" and set suffix equal to pathname+4, (2) if not,\n> strtol(pathname, &suffix, 10) and complain if we didn't eat at least\n> one character or got 0 or something too big to be an OID, (3) check\n> whether suffix is .tar, .tar.gz, etc.\n>\n\nOk, did it this way.\n\n> In verify_member_checksum(), you set mystreamer->verify_checksum =\n> false. That would be correct if there could only ever be one call to\n> verify_member_checksum() per member file, but there is no such rule.\n> There can be (and, I think, normally will be) more than one\n> ASTREAMER_MEMBER_CONTENTS chunk. I'm a little confused as to how this\n> code passes any kind of testing.\n>\n\nI did that to avoid adding the line for every error case where we\nreturn. The flag is re-enabled when the file contents are yet to be\nreceived in mystreamer->received_bytes < m->size.\n\n> Also in verify_member_checksum(), the mystreamer->received_bytes <\n> m->size seems strange. I don't think this is the right way to do\n> something when you reach the end of an archive member. The right way\n> to do that is to do it when the ASTREAMER_MEMBER_TRAILER chunk shows\n> up.\n>\n\nOk, I've split this into two parts: the first part handles incremental\ncomputation at ASTREAMER_MEMBER_CONTENTS, and the second part performs\nthe final verification at the ASTREAMER_MEMBER_TRAILER stage.\n\n> In verify_member_control_data(), you use astreamer_buffer_untIl(). But\n> that's the same buffer that is being used by verify_member_checksum(),\n> so I don't really understand how you expect this to work. If this code\n> path were ever taken, verify_member_checksum() would see the same data\n> more than once.\n>\n\nNo, for checksum calculation, we directly compute the checksum on the\nreceived content (which is the caller's buffer) without copying it.\nHowever, for control file verification, we need the entire file, so we\nfirst copy it into a local buffer within myastremer. This local buffer\nis used solely for storing control file data.\n\nI've made some adjustments to align this code style with checksum\nverification: copying will occur during the ASTREAMER_MEMBER_CONTENTS\nstage, and final verification will be performed in the\nASTREAMER_MEMBER_TRAILER stage.\n\n> The call to pg_log_debug() in this function also seems quite random.\n> In a plain-format backup, we'd actually be doing something different\n> for pg_controldata vs. other files, namely reading it during the\n> initial directory scan. But here we're reading the file in exactly the\n> same sense as we're reading every other file, neither more nor less,\n> so why mention this file and not all of the others? And why at this\n> exact spot in the code?\n>\n\nAgreed, it was added without much thinking.\n\n> I suspect that the report_fatal_error(\"%s: could not read control\n> file: read %d of %zu\", ...) call is unreachable. I agree that you need\n> to handle the case where the control file inside the tar file is not\n> the expected length, and in fact I think you should probably write a\n> TAP test for that exact scenario to make sure it works. I bet this\n> doesn't. 
Even if it did, the error message makes no sense in context.\n> In the plain-format backup, this error would come from code reading\n> the actual bytes off the disk -- i.e. the complaint about not being\n> able to read the control file would come from the read() system call.\n> Here it doesn't.\n>\n\nAgreed. Replace that with a check for an expected file size.\n\nIn addition to the mentioned changes, I have renamed functions in\nastreamer_verify.c to ensure a consistent naming format.\n\nRegards,\nAmul.", "msg_date": "Wed, 14 Aug 2024 18:49:57 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Wed, Aug 14, 2024 at 9:20 AM Amul Sul <[email protected]> wrote:\n> I agree with keeping verify_backup_file() separate, but I'm hesitant\n> about doing the same for verify_backup_directory().\n\nI don't have time today to go through your whole email or re-review\nthe code, but I plan to circle back to that at a later time, However,\nI want to respond to this point in the meanwhile. There are two big\nthings that are different for a tar-format backup vs. a\ndirectory-format backup as far as verify_backup_directory() is\nconcerned. One is that, for a directory format backup, we need to be\nable to recurse down through subdirectories; for tar-format backups we\ndon't. So a version of this function that only handled tar-format\nbackups would be somewhat shorter and simpler, and would need one\nfewer argument. The second difference is that for the tar-format\nbackup, you need to make a list of the files you see and then go back\nand visit each one a second time, and for a directory-format backup\nyou don't need to do that. It seems to me that those differences are\nsignificant enough to warrant having two separate functions. If you\nunify them, I think that less than half of the resulting function is\ngoing to be common to both cases. Yeah, a few bits of logic will be\nduplicated, like the error handling for closedir(), the logic to skip\n\".\" and \"..\", and the psprintf() to construct a full pathname for the\ndirectory entry. But that isn't really very much code, and it's code\nthat is pretty straightforward and also present in various other\nplaces in the PostgreSQL source tree, perhaps not in precisely the\nsame form. The fact that two functions both call readdir() and do\nsomething with each file in the directory isn't enough to say that\nthey should be the same function, IMHO.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 14 Aug 2024 12:44:19 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Wed, Aug 14, 2024 at 12:44 PM Robert Haas <[email protected]> wrote:\n> On Wed, Aug 14, 2024 at 9:20 AM Amul Sul <[email protected]> wrote:\n> > I agree with keeping verify_backup_file() separate, but I'm hesitant\n> > about doing the same for verify_backup_directory().\n>\n> I don't have time today to go through your whole email or re-review\n> the code, but I plan to circle back to that at a later time, However,\n> I want to respond to this point in the meanwhile.\n\nI have committed 0004 (except that I changed a comment) and 0005\n(except that I didn't move READ_CHUNK_SIZE).\n\nLooking at the issue mentioned above again, I agree that the changes\nin verify_backup_directory() in this version don't look overly\ninvasive in this version. 
I'm still not 100% convinced it's the right\ncall, but it doesn't seem bad.\n\nYou have a spurious comment change to the header of verify_plain_backup_file().\n\nI feel like the naming of tarFile and tarFiles is not consistent with\nthe overall style of this file.\n\nI don't like this:\n\n[robert.haas ~]$ pg_verifybackup btar\npg_verifybackup: error: pg_waldump does not support parsing WAL files\nfrom a tar archive.\npg_verifybackup: hint: Try \"pg_verifybackup --help\" to skip parse WAL\nfiles option.\n\nThe hint seems like it isn't really correct grammar, and I also don't\nsee why we can't be more specific. How about \"You must use -n,\n--no-parse-wal when verifying a tar-format backup.\"?\n\nThe primary message seems a bit redundant, because parsing WAL files\nis the only thing pg_waldump does. How about \"pg_waldump cannot read\nfrom a tar archive\"? Note that primary error messages should not end\nin a period (while hint and detail messages should).\n\n+ int64 num = strtoi64(relpath, &suffix, 10);\n\n\n\n+ if (suffix == NULL || (num <= 0) || (num > OID_MAX))\n\nSeems like too many parentheses.\n\n\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 16 Aug 2024 15:53:17 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Fri, Aug 16, 2024 at 3:53 PM Robert Haas <[email protected]> wrote:\n> + int64 num = strtoi64(relpath, &suffix, 10);\n\nHit send too early. Here, seems like this should be strtoul(), not strtoi64().\n\nThe documentation of --format seems to be cut-and-pasted from\npg_basebackup and the language isn't really appropriate here. e.g.\n\"The main data directory's contents will be written to a file\nnamed...\" but pg_verifybackup writes nothing.\n\n+ simple_string_list_append(&context.ignore_list, \"pg_wal.tar\");\n+ simple_string_list_append(&context.ignore_list, \"pg_wal.tar.gz\");\n+ simple_string_list_append(&context.ignore_list, \"pg_wal.tar.lz4\");\n+ simple_string_list_append(&context.ignore_list, \"pg_wal.tar.zst\");\n\nWhy not make the same logic that recognizes base or an OID also\nrecognize pg_wal as a prefix, and identify that as the WAL archive?\nFor now we'll have to skip it, but if you do it that way then if we\nadd future support for more suffixes, it'll just work, whereas this\nway won't. And you'd need that code anyway if we ever can run\npg_waldump on a tarfile, because you would need to identify the\ncompression method. Note that the danger of the list of suffixes\ngetting out of sync here is not hypothetical: you added .tgz elsewhere\nbut not here.\n\nThere's probably more to look at here but I'm running out of energy for today.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 16 Aug 2024 16:04:35 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Sat, Aug 17, 2024 at 1:34 AM Robert Haas <[email protected]> wrote:\n>\n> On Fri, Aug 16, 2024 at 3:53 PM Robert Haas <[email protected]> wrote:\n> > + int64 num = strtoi64(relpath, &suffix, 10);\n>\n> Hit send too early. Here, seems like this should be strtoul(), not strtoi64().\n>\n\n Fixed in the attached version including others suggestions in that mail.\n\n> The documentation of --format seems to be cut-and-pasted from\n> pg_basebackup and the language isn't really appropriate here. 
e.g.\n> \"The main data directory's contents will be written to a file\n> named...\" but pg_verifybackup writes nothing.\n>\n\nI wrote that intentionally -- I didn’t mean to imply that\npg_verifybackup handles this; rather, I meant that the backup tool (in\nthis case, pg_basebackup) produces those files. I can see the\nconfusion and have rephrased the text accordingly.\n\n> + simple_string_list_append(&context.ignore_list, \"pg_wal.tar\");\n> + simple_string_list_append(&context.ignore_list, \"pg_wal.tar.gz\");\n> + simple_string_list_append(&context.ignore_list, \"pg_wal.tar.lz4\");\n> + simple_string_list_append(&context.ignore_list, \"pg_wal.tar.zst\");\n>\n> Why not make the same logic that recognizes base or an OID also\n> recognize pg_wal as a prefix, and identify that as the WAL archive?\n> For now we'll have to skip it, but if you do it that way then if we\n> add future support for more suffixes, it'll just work, whereas this\n> way won't. And you'd need that code anyway if we ever can run\n> pg_waldump on a tarfile, because you would need to identify the\n> compression method. Note that the danger of the list of suffixes\n> getting out of sync here is not hypothetical: you added .tgz elsewhere\n> but not here.\n>\n\nDid this way.\n\n> There's probably more to look at here but I'm running out of energy for today.\n>\n\nThank you for the review and committing 0004 and 0006 patches.\n\nRegards,\nAmul", "msg_date": "Tue, 20 Aug 2024 15:56:23 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Tue, Aug 20, 2024 at 3:56 PM Amul Sul <[email protected]> wrote:\n>\n> On Sat, Aug 17, 2024 at 1:34 AM Robert Haas <[email protected]> wrote:\n> >\n> > On Fri, Aug 16, 2024 at 3:53 PM Robert Haas <[email protected]> wrote:\n[...]\n> > There's probably more to look at here but I'm running out of energy for today.\n> >\n>\n> Thank you for the review and committing 0004 and 0006 patches.\n>\n\nI have reworked a few comments, revised error messages, and made some\nminor tweaks in the attached version.\n\nAdditionally, I would like to discuss a couple of concerns regarding\nerror placement and function names to gather your suggestions.\n\n0007 patch: Regarding error placement:\n\n1. I'm a bit unsure about the (bytes_read != m->size) check that I\nplaced in verify_checksum() and whether it's in the right place. Per\nour previous discussion, this check is applicable to plain backup\nfiles since they can change while being read, but not for files\nbelonging to tar backups. For consistency, I included the check for\ntar backups as well, as it doesn't cause any harm. Is it okay to keep\nthis check in verify_checksum(), or should I move it back to\nverify_file_checksum() and apply it only to the plain backup format?\n\n2. For the verify_checksum() function, I kept the argument name as\nbytes_read. Should we rename it to something more meaningful like\ncomputed_bytes, computed_size, or checksum_computed_size?\n\n\n0011 patch: Regarding function names:\n\n1. named the function verify_tar_backup_file() to align with\nverify_plain_backup_file(), but it does not perform the complete\nverification as verify_plain_backup_file does. Not sure if it is the\nright name.\n\n2. verify_tar_file_contents() is the second and final part of tar\nbackup verification. Should its name be aligned with\nverify_tar_backup_file()? 
I’m unsure what the best name would be.\nPerhaps verify_tar_backup_file_final(), but then\nverify_tar_backup_file() would need to be renamed to something like\nverify_tar_backup_file_initial(), which might be too lengthy.\n\n3. verify_tar_contents() is the core of verify_tar_file_contents()\nthat handles the actual verification. I’m unsure about the current\nnaming. Should we rename it to something like\nverify_tar_contents_core()? It wouldn’t be an issue if we renamed\nverify_tar_file_contents() as pointed in point #2.\n\nRegards,\nAmul", "msg_date": "Wed, 21 Aug 2024 16:37:49 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Wed, Aug 21, 2024 at 7:08 AM Amul Sul <[email protected]> wrote:\n> I have reworked a few comments, revised error messages, and made some\n> minor tweaks in the attached version.\n\nThanks.\n\n> Additionally, I would like to discuss a couple of concerns regarding\n> error placement and function names to gather your suggestions.\n>\n> 0007 patch: Regarding error placement:\n>\n> 1. I'm a bit unsure about the (bytes_read != m->size) check that I\n> placed in verify_checksum() and whether it's in the right place. Per\n> our previous discussion, this check is applicable to plain backup\n> files since they can change while being read, but not for files\n> belonging to tar backups. For consistency, I included the check for\n> tar backups as well, as it doesn't cause any harm. Is it okay to keep\n> this check in verify_checksum(), or should I move it back to\n> verify_file_checksum() and apply it only to the plain backup format?\n\nI think it's a good sanity check. For a long time I thought it was\ntriggerable until I eventually realized that you just get this message\nif the file size is wrong:\n\npg_verifybackup: error: \"pg_xact/0000\" has size 8203 on disk but size\n8192 in the manifest\n\nAfter realizing that, I agree with you that this doesn't really seem\nreachable for tar backups, but I don't think it hurts anything either.\n\nWhile I was investigating this question, I discovered this problem:\n\n$ pg_basebackup -cfast -Ft -Dx\n$ pg_verifybackup -n x\nbackup successfully verified\n$ mkdir x/tmpdir\n$ tar -C x/tmpdir -xf x/base.tar\n$ rm x/base.tar\n$ tar -C x/tmpdir -cf x/base.tar .\n$ pg_verifybackup -n x\n<lots of errors>\n\nIt appears that the reason why this fails is that the paths in the\noriginal base.tar from the server do not include \"./\" at the\nbeginning, and the ones that I get when I create my own tarfile have\nthat. But that shouldn't matter. Anyway, I was able to work around it\nlike this:\n\n$ tar -C x/tmpdir -cf x/base.tar `(cd x/tmpdir; echo *)`\n\nThen the result verifies. But I feel like we should have some test\ncases that do this kind of stuff so that there is automated\nverification. In fact, the current patch seems to have no negative\ntest cases at all. I think we should test all the cases in\n003_corruption.pl with tar format backups as well as with plain format\nbackups, which we could do by untarring one of the archives, messing\nsomething up, and then retarring it. I also think we should have some\nnegative test case specific to tar-format backup. I suggest running a\ncoverage analysis and trying to craft test cases that hit as much of\nthe code as possible. There will probably be some errors you can't\nhit, but you should try to hit the ones you can.\n\n> 2. 
For the verify_checksum() function, I kept the argument name as\n> bytes_read. Should we rename it to something more meaningful like\n> computed_bytes, computed_size, or checksum_computed_size?\n\nI think it's fine the way you have it.\n\n> 0011 patch: Regarding function names:\n>\n> 1. named the function verify_tar_backup_file() to align with\n> verify_plain_backup_file(), but it does not perform the complete\n> verification as verify_plain_backup_file does. Not sure if it is the\n> right name.\n\nI was thinking of something like precheck_tar_backup_file().\n\n> 2. verify_tar_file_contents() is the second and final part of tar\n> backup verification. Should its name be aligned with\n> verify_tar_backup_file()? I’m unsure what the best name would be.\n> Perhaps verify_tar_backup_file_final(), but then\n> verify_tar_backup_file() would need to be renamed to something like\n> verify_tar_backup_file_initial(), which might be too lengthy.\n\nverify_tar_file_contents() actually verifies the contents of all the\ntar files, not just one, so the name is a bit confusing. Maybe\nverify_all_tar_files().\n\n> 3. verify_tar_contents() is the core of verify_tar_file_contents()\n> that handles the actual verification. I’m unsure about the current\n> naming. Should we rename it to something like\n> verify_tar_contents_core()? It wouldn’t be an issue if we renamed\n> verify_tar_file_contents() as pointed in point #2.\n\nverify_one_tar_file()?\n\nBut with those renames, I think you really start to see why I'm not\nvery comfortable with verify_backup_directory(). The tar and plain\nformat cases aren't really doing the same thing. We're just gluing\nthem into a single function anyway.\n\nI am also still uncomfortable with the way you've refactored some of\nthis so that we end up with very small amounts of code far away from\nother code that they influence. Like you end up with this:\n\n /* Check the backup manifest entry for this file. */\n m = verify_manifest_entry(context, relpath, sb.st_size);\n\n /* Validate the pg_control information */\n if (should_verify_control_data(context->manifest, m))\n...\n if (show_progress && !context->skip_checksums &&\n should_verify_checksum(m))\n\nBut verify_manifest_entry can return NULL or it can set m->bad and\neither of those change the result of should_verify_control_data() and\nshould_verify_checksum(), but none of that is obvious when you just\nlook at this. Admittedly, the code in master isn't brilliant in terms\nof making it obvious what's happening either, but I think this is\nworse. I'm not really sure what I think we should do about that yet,\nbut I'm uncomfortable with it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Aug 2024 16:31:54 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Sat, Aug 24, 2024 at 2:02 AM Robert Haas <[email protected]> wrote:\n>\n> On Wed, Aug 21, 2024 at 7:08 AM Amul Sul <[email protected]> wrote:\n> [....]\n> Then the result verifies. But I feel like we should have some test\n> cases that do this kind of stuff so that there is automated\n> verification. In fact, the current patch seems to have no negative\n> test cases at all. I think we should test all the cases in\n> 003_corruption.pl with tar format backups as well as with plain format\n> backups, which we could do by untarring one of the archives, messing\n> something up, and then retarring it. 
I also think we should have some\n> negative test case specific to tar-format backup. I suggest running a\n> coverage analysis and trying to craft test cases that hit as much of\n> the code as possible. There will probably be some errors you can't\n> hit, but you should try to hit the ones you can.\n>\n\nDone. I’ve added a few tests that extract, modify, and repack the tar\nfiles, mainly base.tar and skipping tablespace.tar since it mostly\nduplicate tests. I’ve also updated 002_algorithm.pl to cover\ntests for tar backups.\n\n>\n> > 0011 patch: Regarding function names:\n> >\n> > 1. named the function verify_tar_backup_file() to align with\n> > verify_plain_backup_file(), but it does not perform the complete\n> > verification as verify_plain_backup_file does. Not sure if it is the\n> > right name.\n>\n> I was thinking of something like precheck_tar_backup_file().\n\nDone.\n\n> > 2. verify_tar_file_contents() is the second and final part of tar\n> > backup verification. Should its name be aligned with\n> > verify_tar_backup_file()? I’m unsure what the best name would be.\n> > Perhaps verify_tar_backup_file_final(), but then\n> > verify_tar_backup_file() would need to be renamed to something like\n> > verify_tar_backup_file_initial(), which might be too lengthy.\n>\n> verify_tar_file_contents() actually verifies the contents of all the\n> tar files, not just one, so the name is a bit confusing. Maybe\n> verify_all_tar_files().\n>\n\nDone.\n\n>\n> > 3. verify_tar_contents() is the core of verify_tar_file_contents()\n> > that handles the actual verification. I’m unsure about the current\n> > naming. Should we rename it to something like\n> > verify_tar_contents_core()? It wouldn’t be an issue if we renamed\n> > verify_tar_file_contents() as pointed in point #2.\n>\n> verify_one_tar_file()?\n>\n\nDone.\n\n>\n> But with those renames, I think you really start to see why I'm not\n> very comfortable with verify_backup_directory(). The tar and plain\n> format cases aren't really doing the same thing. We're just gluing\n> them into a single function anyway.\n\nAgreed. I can see the uncomfortness -- added a new function.\n\n>\n> I am also still uncomfortable with the way you've refactored some of\n> this so that we end up with very small amounts of code far away from\n> other code that they influence. Like you end up with this:\n>\n> /* Check the backup manifest entry for this file. */\n> m = verify_manifest_entry(context, relpath, sb.st_size);\n>\n> /* Validate the pg_control information */\n> if (should_verify_control_data(context->manifest, m))\n> ...\n> if (show_progress && !context->skip_checksums &&\n> should_verify_checksum(m))\n>\n> But verify_manifest_entry can return NULL or it can set m->bad and\n> either of those change the result of should_verify_control_data() and\n> should_verify_checksum(), but none of that is obvious when you just\n> look at this. Admittedly, the code in master isn't brilliant in terms\n> of making it obvious what's happening either, but I think this is\n> worse. I'm not really sure what I think we should do about that yet,\n> but I'm uncomfortable with it.\n\nI am not sure if I fully understand the concern, but I see it\ndifferently. 
The verify_manifest_entry function returns an entry, m,\nthat the caller doesn't need to worry about, as it simply passes it to\nsubsequent routines or macros that are aware of the possible inputs --\nwhether it's NULL, m->bad, or something else.\n\nRegards,\nAmul", "msg_date": "Thu, 29 Aug 2024 19:16:49 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "I would rather that you didn't add simple_ptr_list_destroy_deep()\ngiven that you don't need it for this patch series.\n\n+\n\"\\\"%s\\\" unexpected file in the tar format backup\",\n\nThis doesn't seem grammatical to me. Perhaps change this to: file\n\\\"%s\\\" is not expected in a tar format backup\n\n+ /* We are only interested in files that are not in the ignore list. */\n+ if (member->is_directory || member->is_link ||\n+ should_ignore_relpath(mystreamer->context, member->pathname))\n+ return;\n\nDoesn't this need to happen after we add pg_tblspc/$OID to the path,\nrather than before? I bet this doesn't work correctly for files in\nuser-defined tablespaces, compared to the way it work for a\ndirectory-format backup.\n\n+ char temp[MAXPGPATH];\n+\n+ /* Copy original name at temporary space */\n+ memcpy(temp, member->pathname, MAXPGPATH);\n+\n+ snprintf(member->pathname, MAXPGPATH, \"%s/%d/%s\",\n+ \"pg_tblspc\", mystreamer->tblspc_oid, temp);\n\nI don't like this at all. This function doesn't have any business\nmodifying the astreamer_member, and it doesn't need to. It can just do\nchar *pathname; char tmppathbuf[MAXPGPATH] and then set pathname to\neither member->pathname or tmppathbuf depending on\nOidIsValid(tblspcoid). Also, shouldn't this be using %u, not %d?\n\n+ mystreamer->mfile = (void *) m;\n\nEither the cast to void * isn't necessary, or it indicates that\nthere's a type mismatch that should be fixed.\n\n+ * We could have these checks while receiving contents. However, since\n+ * contents are received in multiple iterations, this would result in\n+ * these lengthy checks being performed multiple times. Instead, setting\n+ * flags here and using them before proceeding with verification will be\n+ * more efficient.\n\nSeems unnecessary to explain this.\n\n+ Assert(mystreamer->verify_checksum);\n+\n+ /* Should have came for the right file */\n+ Assert(strcmp(member->pathname, m->pathname) == 0);\n+\n+ /*\n+ * The checksum context should match the type noted in the backup\n+ * manifest.\n+ */\n+ Assert(checksum_ctx->type == m->checksum_type);\n\nWhat do you think about:\n\nAssert(m != NULL && !m->bad);\nAssert(checksum_ctx->type == m->checksum_type);\nAssert(strcmp(member->pathname, m->pathname) == 0);\n\nOr possibly change the first one to Assert(should_verify_checksum(m))?\n\n+ memcpy(&control_file, streamer->bbs_buffer.data,\nsizeof(ControlFileData));\n\nThis probably doesn't really hurt anything, but it's a bit ugly. You\nfirst use astreamer_buffer_until() to force the entire file into a\nbuffer. And then here, you copy the entire file into a second buffer\nwhich is exactly the same except that it's guaranteed to be properly\naligned. It would be possible to include a ControlFileData in\nastreamer_verify and copy the bytes directly into it (you'd need a\nsecond astreamer_verify field for the number of bytes already copied\ninto that structure). 
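Roughly the following, as an untested sketch of that incremental pg_control copy -- the field names control_file and control_file_bytes are invented here, and data/len are just the usual content-callback arguments:\n\n    /* ASTREAMER_MEMBER_CONTENTS, and this member is pg_control */\n    if (mystreamer->control_file_bytes < sizeof(ControlFileData))\n    {\n        size_t  remaining = sizeof(ControlFileData) -\n            mystreamer->control_file_bytes;\n        size_t  nbytes = Min((size_t) len, remaining);\n\n        memcpy((char *) &mystreamer->control_file +\n               mystreamer->control_file_bytes, data, nbytes);\n        mystreamer->control_file_bytes += nbytes;\n    }\n\n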
I'm not 100% sure that's worth the code, but it\nseems like it wouldn't take more than a few lines, so perhaps it is.\n\n+/*\n+ * Verify plain backup.\n+ */\n+static void\n+verify_plain_backup(verifier_context *context)\n+{\n+ Assert(context->format == 'p');\n+ verify_backup_directory(context, NULL, context->backup_directory);\n+}\n+\n\nThis seems like a waste of space.\n\n+verify_tar_backup(verifier_context *context)\n\nI like this a lot better now! I'm still not quite sure about the\ndecision to have the ignore list apply to both the backup directory\nand the tar file contents -- but given the low participation on this\nthread, I don't think we have much chance of getting feedback from\nanyone else right now, so let's just do it the way you have it and we\ncan change it later if someone shows up to complain.\n\n+verify_all_tar_files(verifier_context *context, SimplePtrList *tarfiles)\n\nI think this code could be moved into its only caller instead of\nhaving a separate function. And then if you do that, maybe\nverify_one_tar_file could be renamed to just verify_tar_file. Or\nperhaps that function could also be removed and just move the code\ninto the caller. It's not much code and not very deeply nested.\nSimilarly create_archive_verifier() could be considered for this\ntreatment. Maybe inlining all of these is too much and the result will\nlook messy, but I think we should at least try to combine some of\nthem.\n\n...Robert\n\n\n", "msg_date": "Mon, 9 Sep 2024 16:00:49 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "+ pg_log_error(\"pg_waldump cannot read from a tar\");\n\n\"tar\" isn't normally used as a noun as you do here, so I think this\nshould say \"pg_waldump cannot read tar files\".\n\nTechnically, the position of this check could lead to an unnecessary\nfailure, if -n wasn't specified but pg_wal.tar(.whatever) also doesn't\nexist on disk. But I think it's OK to ignore that case.\n\nHowever, I also notice this change to the TAP tests in a few places:\n\n- [ 'pg_verifybackup', $tempdir ],\n+ [ 'pg_verifybackup', '-Fp', $tempdir ],\n\nIt's not the end of the world to have to make a change like this, but\nit seems easy to do better. Just postpone the call to\nfind_backup_format() until right after we call parse_manifest_file().\nThat also means postponing the check mentioned above until right after\nthat, but that's fine: after parse_manifest_file() and then\nfind_backup_format(), you can do if (!no_parse_wal && context.format\n== 't') { bail out }.\n\n+ if (stat(path, &sb) == 0)\n+ result = 'p';\n+ else\n+ {\n+ if (errno != ENOENT)\n+ {\n+ pg_log_error(\"cannot determine backup format : %m\");\n+ pg_log_error_hint(\"Try \\\"%s --help\\\" for more\ninformation.\", progname);\n+ exit(1);\n+ }\n+\n+ /* Otherwise, it is assumed to be a tar format backup. */\n+ result = 't';\n+ }\n\nThis doesn't look good, for a few reasons:\n\n1. It would be clearer to structure this as if (stat(...) == 0) result\n= 'p'; else if (errno == ENOENT) result = 't'; else { report an error;\n} instead of the way you have it.\n\n2. \"cannot determine backup format\" is not an appropriate way to\nreport the failure of stat(). The appropriate message is \"could not\nstat file \\\"%s\\\"\".\n\n3. It is almost never correct to put a space before a colon in an error message.\n\n4. The hint doesn't look helpful, or necessary. 
I think you can just\ndelete that.\n\nRegarding both point #2 and point #4, I think we should ask ourselves\nhow stat() could realistically fail here. On my system (macOS), the\ndocument failed modes for stat() are EACCES (i.e. permission denied),\nEFAULT (i.e. we have a bug in pg_verifybackup), EIO (I/O Error), ELOOP\n(symlink loop), ENAMETOOLONG, ENOENT, ENOTDIR, and EOVERFLOW. In none\nof those cases does it seem likely that specifying the format manually\nwill help anything. Thus, suggesting that the user look at the help,\npresumably to find --format, is unlikely to solve anything, and\ntelling them that the error happened while trying to determine the\nbackup format isn't really helping anything, either. What the user\nneeds to know is that it was stat() that failed, and the pathname for\nwhich it failed. Then they need to sort out whatever problem is\ncausing them to get one of those really weird errors.\n\nAside from the above, 0010 looks pretty reasonable, although I'll\nprobably want to do some wordsmithing on some of the comments at some\npoint.\n\n- of the backup. The backup must be stored in the \"plain\"\n- format; a \"tar\" format backup can be checked after extracting it.\n+ of the backup. The backup must be stored in the \"plain\" or \"tar\"\n+ format. Verification is supported for <literal>gzip</literal>,\n+ <literal>lz4</literal>, and <literal>zstd</literal> compressed tar backup;\n+ any other compressed format backups can be checked after decompressing them.\n\nI don't think that we need to say that the backup must be stored in\nthe plain or tar format, because those are the only backup formats\npg_basebackup knows about. Similarly, it doesn't seem help to me to\nenumerate all the compression algorithms that pg_basebackup supports\nand say we only support those; what else would a user expect?\n\nWhat I would do is replace the original sentence (\"The backup must be\nstored...\") with: The backup may be stored either in the \"plain\" or\nthe \"tar\" format; this includes \"tar\" backups compressed with any\nalgorithm supported by pg_basebackup. However, at present, WAL\nverification is supported only for plain-format backups. Therefore, if\nthe backup is stored in \"tar\" format, the <literal>-n,\n--no-parse-wal<literal> option should be used.\n\n+ # Add tar backup format option\n+ push @backup, ('-F', 't');\n+ # Add switch to skip WAL verification.\n+ push @verify, ('-n');\n\nSay why, not what. 
The second comment should say something like \"WAL\nverification not yet supported for tar-format backups\".\n\n+ \"$format backup fails with algorithm \\\"$algorithm\\\"\");\n+ $primary->command_ok(\\@backup, \"$format backup ok with\nalgorithm \\\"$algorithm\\\"\");\n+ ok(-f \"$backup_path/backup_manifest\", \"$format backup\nmanifest exists\");\n+ \"verify $format backup with algorithm \\\"$algorithm\\\"\");\n\nPersonally I would change \"$format\" to \"$format format\" in all of\nthese places, so that we talk about a \"tar format backup\" or a \"plain\nformat backup\" instead of a \"tar backup\" or a \"plain backup\".\n\n+ 'skip_on_windows' => 1\n\nI don't understand why 4 of the 7 new tests are skipped on Windows.\nThe existing \"skip\" message for this parameter says \"unix-style\npermissions not supported on Windows\" but that doesn't seem applicable\nfor any of the new cases, and I couldn't find a comment about it,\neither.\n\n+ my @files = glob(\"*\");\n+ system_or_bail($tar, '-cf', \"$backup_path/$archive\", @files);\n\nWhy not just system_or_bail($tar, '-cf', \"$backup_path/$archive\", '.')?\n\nAlso, instead of having separate entries in the test array to do\nbasically the same thing on Windows, could we just iterate through the\ntest array twice and do everything once for plain format and then a\nsecond time for tar format, and do the tests once for each? I don't\nthink that idea QUITE works, because the open_file_fails,\nopen_directory_fails, and search_directory_fails tests are really not\napplicable to tar format. But we could rename skip_on_windows to\ntests_file_permissions and skip those both on Windows and for tar\nformat. But aside from that, I don't quite see why it makes sense to,\nfor example, test extra_file for both formats but not\nextra_tablespace_file, and indeed it seems like an important bit of\ntest coverage.\n\nI also feel like we should have tests someplace that add extra files\nto a tar-format backup in the backup directory (e.g. 1234567.tar, or\nwug.tar, or 123456.wug) or remove entire files.\n\n...Robert\n\n\n", "msg_date": "Tue, 10 Sep 2024 13:24:20 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Tue, Sep 10, 2024 at 1:31 AM Robert Haas <[email protected]> wrote:\n>\n> I would rather that you didn't add simple_ptr_list_destroy_deep()\n> given that you don't need it for this patch series.\n>\nDone.\n\n> +\n> \"\\\"%s\\\" unexpected file in the tar format backup\",\n>\n> This doesn't seem grammatical to me. Perhaps change this to: file\n> \\\"%s\\\" is not expected in a tar format backup\n>\nOk, updated in the attached version.\n\n> + /* We are only interested in files that are not in the ignore list. */\n> + if (member->is_directory || member->is_link ||\n> + should_ignore_relpath(mystreamer->context, member->pathname))\n> + return;\n>\n> Doesn't this need to happen after we add pg_tblspc/$OID to the path,\n> rather than before? I bet this doesn't work correctly for files in\n> user-defined tablespaces, compared to the way it work for a\n> directory-format backup.\n>\n> + char temp[MAXPGPATH];\n> +\n> + /* Copy original name at temporary space */\n> + memcpy(temp, member->pathname, MAXPGPATH);\n> +\n> + snprintf(member->pathname, MAXPGPATH, \"%s/%d/%s\",\n> + \"pg_tblspc\", mystreamer->tblspc_oid, temp);\n>\n> I don't like this at all. This function doesn't have any business\n> modifying the astreamer_member, and it doesn't need to. 
It can just do\n> char *pathname; char tmppathbuf[MAXPGPATH] and then set pathname to\n> either member->pathname or tmppathbuf depending on\n> OidIsValid(tblspcoid). Also, shouldn't this be using %u, not %d?\n>\nTrue, fixed in the attached version.\n\n> + mystreamer->mfile = (void *) m;\n>\n> Either the cast to void * isn't necessary, or it indicates that\n> there's a type mismatch that should be fixed.\n>\nFixed -- was added in the very first version and forgotten in later updates.\n\n> + * We could have these checks while receiving contents. However, since\n> + * contents are received in multiple iterations, this would result in\n> + * these lengthy checks being performed multiple times. Instead, setting\n> + * flags here and using them before proceeding with verification will be\n> + * more efficient.\n>\n> Seems unnecessary to explain this.\n>\nRemoved.\n\n> + Assert(mystreamer->verify_checksum);\n> +\n> + /* Should have came for the right file */\n> + Assert(strcmp(member->pathname, m->pathname) == 0);\n> +\n> + /*\n> + * The checksum context should match the type noted in the backup\n> + * manifest.\n> + */\n> + Assert(checksum_ctx->type == m->checksum_type);\n>\n> What do you think about:\n>\n> Assert(m != NULL && !m->bad);\n> Assert(checksum_ctx->type == m->checksum_type);\n> Assert(strcmp(member->pathname, m->pathname) == 0);\n>\n> Or possibly change the first one to Assert(should_verify_checksum(m))?\n>\nLGTM.\n\n> + memcpy(&control_file, streamer->bbs_buffer.data,\n> sizeof(ControlFileData));\n>\n> This probably doesn't really hurt anything, but it's a bit ugly. You\n> first use astreamer_buffer_until() to force the entire file into a\n> buffer. And then here, you copy the entire file into a second buffer\n> which is exactly the same except that it's guaranteed to be properly\n> aligned. It would be possible to include a ControlFileData in\n> astreamer_verify and copy the bytes directly into it (you'd need a\n> second astreamer_verify field for the number of bytes already copied\n> into that structure). I'm not 100% sure that's worth the code, but it\n> seems like it wouldn't take more than a few lines, so perhaps it is.\n>\nI think we could skip this memcpy() and directly cast\nstreamer->bbs_buffer.data to ControlFileData *, as we already ensure\nthat the correct length is being read just before this memcpy(). Did\nthe same in the attached version.\n\n> +/*\n> + * Verify plain backup.\n> + */\n> +static void\n> +verify_plain_backup(verifier_context *context)\n> +{\n> + Assert(context->format == 'p');\n> + verify_backup_directory(context, NULL, context->backup_directory);\n> +}\n> +\n>\n> This seems like a waste of space.\n>\nYeah, but aim to keep the function name more self-explanatory and\nconsistent with the naming style.\n\n> +verify_tar_backup(verifier_context *context)\n>\n> I like this a lot better now! I'm still not quite sure about the\n> decision to have the ignore list apply to both the backup directory\n> and the tar file contents -- but given the low participation on this\n> thread, I don't think we have much chance of getting feedback from\n> anyone else right now, so let's just do it the way you have it and we\n> can change it later if someone shows up to complain.\n>\nOk.\n\n> +verify_all_tar_files(verifier_context *context, SimplePtrList *tarfiles)\n>\n> I think this code could be moved into its only caller instead of\n> having a separate function. And then if you do that, maybe\n> verify_one_tar_file could be renamed to just verify_tar_file. 
Or\n> perhaps that function could also be removed and just move the code\n> into the caller. It's not much code and not very deeply nested.\n> Similarly create_archive_verifier() could be considered for this\n> treatment. Maybe inlining all of these is too much and the result will\n> look messy, but I think we should at least try to combine some of\n> them.\n>\nI have removed verify_all_tar_files() and renamed\nverify_one_tar_file() as suggested. However, I can't merge further\nbecause I need verify_tar_file() (formerly verify_one_tar_file()) to\nremain a separate function. This way, regardless of whether it\nsucceeds or encounters an error, I can easily perform cleanup\nafterward.\n\nOn Tue, Sep 10, 2024 at 10:54 PM Robert Haas <[email protected]> wrote:\n>\n> + pg_log_error(\"pg_waldump cannot read from a tar\");\n>\n> \"tar\" isn't normally used as a noun as you do here, so I think this\n> should say \"pg_waldump cannot read tar files\".\n>\nDone.\n\n> Technically, the position of this check could lead to an unnecessary\n> failure, if -n wasn't specified but pg_wal.tar(.whatever) also doesn't\n> exist on disk. But I think it's OK to ignore that case.\n>\n> However, I also notice this change to the TAP tests in a few places:\n>\n> - [ 'pg_verifybackup', $tempdir ],\n> + [ 'pg_verifybackup', '-Fp', $tempdir ],\n>\n> It's not the end of the world to have to make a change like this, but\n> it seems easy to do better. Just postpone the call to\n> find_backup_format() until right after we call parse_manifest_file().\n> That also means postponing the check mentioned above until right after\n> that, but that's fine: after parse_manifest_file() and then\n> find_backup_format(), you can do if (!no_parse_wal && context.format\n> == 't') { bail out }.\n>\nDone.\n\n> + if (stat(path, &sb) == 0)\n> + result = 'p';\n> + else\n> + {\n> + if (errno != ENOENT)\n> + {\n> + pg_log_error(\"cannot determine backup format : %m\");\n> + pg_log_error_hint(\"Try \\\"%s --help\\\" for more\n> information.\", progname);\n> + exit(1);\n> + }\n> +\n> + /* Otherwise, it is assumed to be a tar format backup. */\n> + result = 't';\n> + }\n>\n> This doesn't look good, for a few reasons:\n>\n> 1. It would be clearer to structure this as if (stat(...) == 0) result\n> = 'p'; else if (errno == ENOENT) result = 't'; else { report an error;\n> } instead of the way you have it.\n>\n> 2. \"cannot determine backup format\" is not an appropriate way to\n> report the failure of stat(). The appropriate message is \"could not\n> stat file \\\"%s\\\"\".\n>\n> 3. It is almost never correct to put a space before a colon in an error message.\n>\n> 4. The hint doesn't look helpful, or necessary. I think you can just\n> delete that.\n>\n> Regarding both point #2 and point #4, I think we should ask ourselves\n> how stat() could realistically fail here. On my system (macOS), the\n> document failed modes for stat() are EACCES (i.e. permission denied),\n> EFAULT (i.e. we have a bug in pg_verifybackup), EIO (I/O Error), ELOOP\n> (symlink loop), ENAMETOOLONG, ENOENT, ENOTDIR, and EOVERFLOW. In none\n> of those cases does it seem likely that specifying the format manually\n> will help anything. Thus, suggesting that the user look at the help,\n> presumably to find --format, is unlikely to solve anything, and\n> telling them that the error happened while trying to determine the\n> backup format isn't really helping anything, either. What the user\n> needs to know is that it was stat() that failed, and the pathname for\n> which it failed. 
Then they need to sort out whatever problem is\n> causing them to get one of those really weird errors.\n>\nDone.\n\n> - of the backup. The backup must be stored in the \"plain\"\n> - format; a \"tar\" format backup can be checked after extracting it.\n> + of the backup. The backup must be stored in the \"plain\" or \"tar\"\n> + format. Verification is supported for <literal>gzip</literal>,\n> + <literal>lz4</literal>, and <literal>zstd</literal> compressed tar backup;\n> + any other compressed format backups can be checked after decompressing them.\n>\n> I don't think that we need to say that the backup must be stored in\n> the plain or tar format, because those are the only backup formats\n> pg_basebackup knows about. Similarly, it doesn't seem help to me to\n> enumerate all the compression algorithms that pg_basebackup supports\n> and say we only support those; what else would a user expect?\n>\n> What I would do is replace the original sentence (\"The backup must be\n> stored...\") with: The backup may be stored either in the \"plain\" or\n> the \"tar\" format; this includes \"tar\" backups compressed with any\n> algorithm supported by pg_basebackup. However, at present, WAL\n> verification is supported only for plain-format backups. Therefore, if\n> the backup is stored in \"tar\" format, the <literal>-n,\n> --no-parse-wal<literal> option should be used.\n>\nDone\n\n> + # Add tar backup format option\n> + push @backup, ('-F', 't');\n> + # Add switch to skip WAL verification.\n> + push @verify, ('-n');\n>\n> Say why, not what. The second comment should say something like \"WAL\n> verification not yet supported for tar-format backups\".\n>\nDone.\n\n> + \"$format backup fails with algorithm \\\"$algorithm\\\"\");\n> + $primary->command_ok(\\@backup, \"$format backup ok with\n> algorithm \\\"$algorithm\\\"\");\n> + ok(-f \"$backup_path/backup_manifest\", \"$format backup\n> manifest exists\");\n> + \"verify $format backup with algorithm \\\"$algorithm\\\"\");\n>\n> Personally I would change \"$format\" to \"$format format\" in all of\n> these places, so that we talk about a \"tar format backup\" or a \"plain\n> format backup\" instead of a \"tar backup\" or a \"plain backup\".\n>\nDone.\n\n> + 'skip_on_windows' => 1\n>\n> I don't understand why 4 of the 7 new tests are skipped on Windows.\n> The existing \"skip\" message for this parameter says \"unix-style\n> permissions not supported on Windows\" but that doesn't seem applicable\n> for any of the new cases, and I couldn't find a comment about it,\n> either.\n>\nI was a bit unsure whether Windows could handle unpacking and\nrepacking tar files and the required path formats for these tests but\nthe \"Cirrus CI / Windows - Server 2019, VS 2019\" workflow doesn’t have\nany issues with them. I’ve removed the flag.\n\n> + my @files = glob(\"*\");\n> + system_or_bail($tar, '-cf', \"$backup_path/$archive\", @files);\n>\n> Why not just system_or_bail($tar, '-cf', \"$backup_path/$archive\", '.')?\n>\nDoesn't suit since re-packing includes \"./\" at the beginning of each file path.\n\n> Also, instead of having separate entries in the test array to do\n> basically the same thing on Windows, could we just iterate through the\n> test array twice and do everything once for plain format and then a\n> second time for tar format, and do the tests once for each? I don't\n> think that idea QUITE works, because the open_file_fails,\n> open_directory_fails, and search_directory_fails tests are really not\n> applicable to tar format. 
But we could rename skip_on_windows to\n> tests_file_permissions and skip those both on Windows and for tar\n> format. But aside from that, I don't quite see why it makes sense to,\n> for example, test extra_file for both formats but not\n> extra_tablespace_file, and indeed it seems like an important bit of\n> test coverage.\n>\nAdded test extra_file and missing_file test for tablespace as well.\n\n> I also feel like we should have tests someplace that add extra files\n> to a tar-format backup in the backup directory (e.g. 1234567.tar, or\n> wug.tar, or 123456.wug) or remove entire files.\n>\nIf I am not missing something, tar_backup_unexpected_file test does\nthat. I have added a test that removes the tablespace archive in the\nattached version.\n\nThe updated version attached. Thank you for the review !\n\nRegards,\nAmul", "msg_date": "Thu, 12 Sep 2024 16:34:54 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Thu, Sep 12, 2024 at 7:05 AM Amul Sul <[email protected]> wrote:\n> The updated version attached. Thank you for the review !\n\nI have spent a bunch of time on this and have made numerous revisions.\nI hope to commit the result, aftering seeing what you and the\nbuildfarm think (and anyone else who wishes to offer an opinion).\nChanges:\n\n1. I adjusted some documentation wording for clarity.\n\n2. I adjusted quite a few comments.\n\n3. I changed the code to canonicalize pathnames taken from tar files,\nso that a backup where tar file names begin with \"./\" doesn't break\nbackup verification.\n\n4. I changed the code to use a dedicated buffer of type\nControlFileData instead of buffering the control file in bbs_buffer,\nbecause there's no guarantee that bbs_buffer is sufficiently aligned,\nwhich could result in failures on non-x86 platforms.\n\n5. I changed the way that we validate the length of the control file;\nthe old code looked like it was checking that the file size was\nsizeof(ControlFileData), but in fact the control file is much bigger\nthan that and its size is given by PG_CONTROL_FILE_SIZE. The old test\npassed only because the computed file size was capped at\nsizeof(ControlFileData), even though the actual file size was larger.\n\n6. I fixed things so that we check that the target directory exists\nbefore trying to figure out the backup format, so that cases where the\ndirectory doesn't exist behave the same as before instead of failing\nwith a different error message.\n\n7. I adjusted the test cases in view of point #3 and point #6.\n\n8. I reverted various refactorings about which I earlier complained,\nbecause they put very small amounts of code into functions which in my\nopinion made the code harder to read. I also realized along the way\nthat (a) you hadn't updated the comments in those functions, or at\nleast not thoroughly, so they contained some text that was really only\napplicable to the plain-format case and (b) some of the error message\nreally deserved to be different in the plain and tar format cases. In\nparticular, when there's a problem with an archive member, it seems\ngood to mention both the name of the archive and the name of the\narchive member. Having separate code paths makes that easy and I've\ndone it in this version. 
Exception: I didn't update the messages for\nfailing to initialize the checksum context, because I don't think\nthose can happen and it doesn't really even make sense to include the\nfile name in the first place; any hypothetical failure would\npresumably be based on which algorithm was picked, not which file you\nwere planning to use it on. This area could use some cleanup but it's\nnot this patch's job to make it less weird.\n\n9. I rewrote 003_corruption.pl so that we apply the same tests for tar\nand plain format backups without nearly as much code duplication as\nyou had.\n\n10. I added a few test cases to 004_options.pl, so that we test the -F\noption systematically, including what happens with an invalid value.\n\n11. I moved the --format option to the correct place in alphabetical\norder in the usage output.\n\nI think that's everything that I changed, but I might have missed\nsomething in putting this list together. Hopefully not.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 25 Sep 2024 14:47:58 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Thu, Sep 26, 2024 at 12:18 AM Robert Haas <[email protected]> wrote:\n>\n> On Thu, Sep 12, 2024 at 7:05 AM Amul Sul <[email protected]> wrote:\n> > The updated version attached. Thank you for the review !\n>\n> I have spent a bunch of time on this and have made numerous revisions.\n> I hope to commit the result, aftering seeing what you and the\n> buildfarm think (and anyone else who wishes to offer an opinion).\n> Changes:\n>\n\nThank you, Robert. The code changes look much better now.\n\nA few minor comments:\n\n+ each tablespace, named after the tablespace's OID. If the backup\n+ is compressed, the relevant compression extension is added to the\n+ end of each file name.\n\nI am a bit unsure about the last line, especially the use of the word\n\"added.\" I feel like it's implying that we're adding something, which\nisn't true.\n--\n\nTypo: futher\n\n--\n\nThe addition of simple_ptr_list_destroy will be part of a separate\ncommit, correct?\n\nRegards,\nAmul\n\n\n", "msg_date": "Fri, 27 Sep 2024 11:36:25 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Fri, Sep 27, 2024 at 2:07 AM Amul Sul <[email protected]> wrote:\n> Thank you, Robert. The code changes look much better now.\n\nCool.\n\n> A few minor comments:\n>\n> + each tablespace, named after the tablespace's OID. If the backup\n> + is compressed, the relevant compression extension is added to the\n> + end of each file name.\n>\n> I am a bit unsure about the last line, especially the use of the word\n> \"added.\" I feel like it's implying that we're adding something, which\n> isn't true.\n\nIf you add .gz to the end of 16904.tar, you get 16904.tar.gz. 
This\nseems like correct English to me.\n\n> Typo: futher\n\nOK, thanks.\n\n> The addition of simple_ptr_list_destroy will be part of a separate\n> commit, correct?\n\nTo me, it doesn't seem worth splitting that out into a separate commit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 27 Sep 2024 08:03:44 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "The 32-bit buildfarm animals don't like this too much [1]:\n\nastreamer_verify.c: In function 'member_verify_checksum':\nastreamer_verify.c:297:68: error: format '%zu' expects argument of type 'size_t', but argument 6 has type 'uint64' {aka 'long long unsigned int'} [-Werror=format=]\n 297 | \"file \\\\\"%s\\\\\" in \\\\\"%s\\\\\" should contain %zu bytes, but read %zu bytes\",\n | ~~^\n | |\n | unsigned int\n | %llu\n 298 | m->pathname, mystreamer->archive_name,\n 299 | m->size, mystreamer->checksum_bytes);\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~ \n | |\n | uint64 {aka long long unsigned int}\n\nNow, manifest_file.size is in fact a size_t, so %zu is the correct\nformat spec for it. But astreamer_verify.checksum_bytes is declared\nuint64. This leads me to question whether size_t was the correct\nchoice for manifest_file.size. If it is, is it OK to use size_t\nfor checksum_bytes as well? If not, your best bet is likely\nto write %lld and add an explicit cast to long long, as we do in\ndozens of other places. I see that these messages are intended to be\ntranslatable, so INT64_FORMAT is not usable here.\n\nAside from that, I'm unimpressed with expending a five-line comment\nat line 376 to justify casting control_file_bytes to int, when you\ncould similarly cast it to long long, avoiding the need to justify\nsomething that's not even in tune with project style.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2024-09-28%2006%3A03%3A02\n\n\n", "msg_date": "Sat, 28 Sep 2024 18:59:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "Piling on a bit ... Coverity reported the following issues in\nthis new code. I have not analyzed them to see if they're\nreal problems.\n\n________________________________________________________________________________________________________\n*** CID 1620458: Resource leaks (RESOURCE_LEAK)\n/srv/coverity/git/pgsql-git/postgresql/src/bin/pg_verifybackup/pg_verifybackup.c: 1025 in verify_tar_file()\n1019 \t\t\t\t\t\t\trelpath);\n1020 \n1021 \t/* Close the file. 
*/\n1022 \tif (close(fd) != 0)\n1023 \t\treport_backup_error(context, \"could not close file \\\"%s\\\": %m\",\n1024 \t\t\t\t\t\t\trelpath);\n>>> CID 1620458: Resource leaks (RESOURCE_LEAK)\n>>> Variable \"buffer\" going out of scope leaks the storage it points to.\n1025 }\n1026 \n1027 /*\n1028 * Scan the hash table for entries where the 'matched' flag is not set; report\n1029 * that such files are present in the manifest but not on disk.\n1030 */\n\n________________________________________________________________________________________________________\n*** CID 1620457: Memory - illegal accesses (OVERRUN)\n/srv/coverity/git/pgsql-git/postgresql/src/bin/pg_verifybackup/astreamer_verify.c: 349 in member_copy_control_data()\n343 \t */\n344 \tif (mystreamer->control_file_bytes <= sizeof(ControlFileData))\n345 \t{\n346 \t\tint\t\t\tremaining;\n347 \n348 \t\tremaining = sizeof(ControlFileData) - mystreamer->control_file_bytes;\n>>> CID 1620457: Memory - illegal accesses (OVERRUN)\n>>> Overrunning array of 296 bytes at byte offset 296 by dereferencing pointer \"(char *)&mystreamer->control_file + mystreamer->control_file_bytes\".\n349 \t\tmemcpy(((char *) &mystreamer->control_file)\n350 \t\t\t + mystreamer->control_file_bytes,\n351 \t\t\t data, Min(len, remaining));\n352 \t}\n353 \n354 \t/* Remember how many bytes we saw, even if we didn't buffer them. */\n\n________________________________________________________________________________________________________\n*** CID 1620456: Null pointer dereferences (FORWARD_NULL)\n/srv/coverity/git/pgsql-git/postgresql/src/bin/pg_verifybackup/pg_verifybackup.c: 939 in precheck_tar_backup_file()\n933 \t\t\t\t\t\t\t\t\"file \\\"%s\\\" is not expected in a tar format backup\",\n934 \t\t\t\t\t\t\t\trelpath);\n935 \t\ttblspc_oid = (Oid) num;\n936 \t}\n937 \n938 \t/* Now, check the compression type of the tar */\n>>> CID 1620456: Null pointer dereferences (FORWARD_NULL)\n>>> Passing null pointer \"suffix\" to \"strcmp\", which dereferences it.\n939 \tif (strcmp(suffix, \".tar\") == 0)\n940 \t\tcompress_algorithm = PG_COMPRESSION_NONE;\n941 \telse if (strcmp(suffix, \".tgz\") == 0)\n942 \t\tcompress_algorithm = PG_COMPRESSION_GZIP;\n943 \telse if (strcmp(suffix, \".tar.gz\") == 0)\n944 \t\tcompress_algorithm = PG_COMPRESSION_GZIP;\n\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 29 Sep 2024 13:03:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Sat, Sep 28, 2024 at 6:59 PM Tom Lane <[email protected]> wrote:\n> Now, manifest_file.size is in fact a size_t, so %zu is the correct\n> format spec for it. But astreamer_verify.checksum_bytes is declared\n> uint64. This leads me to question whether size_t was the correct\n> choice for manifest_file.size. If it is, is it OK to use size_t\n> for checksum_bytes as well? If not, your best bet is likely\n> to write %lld and add an explicit cast to long long, as we do in\n> dozens of other places. I see that these messages are intended to be\n> translatable, so INT64_FORMAT is not usable here.\n\nIt seems that manifest_file.size is size_t because parse_manifest.h is\nusing size_t for json_manifest_per_file_callback. What's happening is\nthat json_manifest_finalize_file() is parsing the file size\ninformation out of the manifest. It uses strtoul to do that and\nassigns the result to a size_t. 
I don't think I had any principled\nreason for making that decision; size_t is, I think, the size of an\nobject in memory, and this is not that. This is just a string in a\nJSON file that represents an integer which will hopefully turn out to\nbe the size of the file on disk. I guess I don't know what type I\nshould be using here. Most things in PostgreSQL use a type like uint32\nor uint64, but technically what we're going to be comparing against in\nthe end is probably an off_t produced by stat(), but the return value\nof strtoul() or strtoull() is unsigned long or unsigned long long,\nwhich is not any of those things. If you have a suggestion here, I'm\nall ears.\n\n> Aside from that, I'm unimpressed with expending a five-line comment\n> at line 376 to justify casting control_file_bytes to int, when you\n> could similarly cast it to long long, avoiding the need to justify\n> something that's not even in tune with project style.\n\nI don't know what you mean by this. The comment explains that I used\n%d here because that's what pg_rewind does, and %d corresponds to int,\nnot long long. If you think it should be some other way, you can\nchange it, and perhaps you'd like to change pg_rewind to match while\nyou're at it. But the fact that there's a comment here explaining the\nreasoning is a feature, not a bug. It's weird to me to get criticized\nfor failing to follow project style when I literally copied something\nthat already exists.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 30 Sep 2024 11:06:57 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Sun, Sep 29, 2024 at 1:03 PM Tom Lane <[email protected]> wrote:\n> *** CID 1620458: Resource leaks (RESOURCE_LEAK)\n> /srv/coverity/git/pgsql-git/postgresql/src/bin/pg_verifybackup/pg_verifybackup.c: 1025 in verify_tar_file()\n> 1019 relpath);\n> 1020\n> 1021 /* Close the file. */\n> 1022 if (close(fd) != 0)\n> 1023 report_backup_error(context, \"could not close file \\\"%s\\\": %m\",\n> 1024 relpath);\n> >>> CID 1620458: Resource leaks (RESOURCE_LEAK)\n> >>> Variable \"buffer\" going out of scope leaks the storage it points to.\n> 1025 }\n> 1026\n> 1027 /*\n> 1028 * Scan the hash table for entries where the 'matched' flag is not set; report\n> 1029 * that such files are present in the manifest but not on disk.\n> 1030 */\n\nThis looks like a real leak. It can only happen once per tarfile when\nverifying a tar backup so it can never add up to much, but it makes\nsense to fix it.\n\n> *** CID 1620457: Memory - illegal accesses (OVERRUN)\n> /srv/coverity/git/pgsql-git/postgresql/src/bin/pg_verifybackup/astreamer_verify.c: 349 in member_copy_control_data()\n> 343 */\n> 344 if (mystreamer->control_file_bytes <= sizeof(ControlFileData))\n> 345 {\n> 346 int remaining;\n> 347\n> 348 remaining = sizeof(ControlFileData) - mystreamer->control_file_bytes;\n> >>> CID 1620457: Memory - illegal accesses (OVERRUN)\n> >>> Overrunning array of 296 bytes at byte offset 296 by dereferencing pointer \"(char *)&mystreamer->control_file + mystreamer->control_file_bytes\".\n> 349 memcpy(((char *) &mystreamer->control_file)\n> 350 + mystreamer->control_file_bytes,\n> 351 data, Min(len, remaining));\n> 352 }\n> 353\n> 354 /* Remember how many bytes we saw, even if we didn't buffer them. 
*/\n\nI think this might be complaining about a potential zero-length copy.\nSeems like perhaps the <= sizeof(ControlFileData) test should actually\nbe < sizeof(ControlFileData).\n\n> *** CID 1620456: Null pointer dereferences (FORWARD_NULL)\n> /srv/coverity/git/pgsql-git/postgresql/src/bin/pg_verifybackup/pg_verifybackup.c: 939 in precheck_tar_backup_file()\n> 933 \"file \\\"%s\\\" is not expected in a tar format backup\",\n> 934 relpath);\n> 935 tblspc_oid = (Oid) num;\n> 936 }\n> 937\n> 938 /* Now, check the compression type of the tar */\n> >>> CID 1620456: Null pointer dereferences (FORWARD_NULL)\n> >>> Passing null pointer \"suffix\" to \"strcmp\", which dereferences it.\n> 939 if (strcmp(suffix, \".tar\") == 0)\n> 940 compress_algorithm = PG_COMPRESSION_NONE;\n> 941 else if (strcmp(suffix, \".tgz\") == 0)\n> 942 compress_algorithm = PG_COMPRESSION_GZIP;\n> 943 else if (strcmp(suffix, \".tar.gz\") == 0)\n> 944 compress_algorithm = PG_COMPRESSION_GZIP;\n\nThis one is happening, I believe, because report_backup_error()\ndoesn't perform a non-local exit, but we have a bit of code here that\nacts like it does.\n\nPatch attached.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 30 Sep 2024 11:21:45 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Sat, Sep 28, 2024 at 6:59 PM Tom Lane <[email protected]> wrote:\n>> Now, manifest_file.size is in fact a size_t, so %zu is the correct\n>> format spec for it. But astreamer_verify.checksum_bytes is declared\n>> uint64. This leads me to question whether size_t was the correct\n>> choice for manifest_file.size.\n\n> It seems that manifest_file.size is size_t because parse_manifest.h is\n> using size_t for json_manifest_per_file_callback. What's happening is\n> that json_manifest_finalize_file() is parsing the file size\n> information out of the manifest. It uses strtoul to do that and\n> assigns the result to a size_t. I don't think I had any principled\n> reason for making that decision; size_t is, I think, the size of an\n> object in memory, and this is not that.\n\nCorrect, size_t is defined to measure in-memory object sizes. It's\nthe argument type of malloc(), the result type of sizeof(), etc.\nIt does not restrict the size of disk files.\n\n> This is just a string in a\n> JSON file that represents an integer which will hopefully turn out to\n> be the size of the file on disk. I guess I don't know what type I\n> should be using here. Most things in PostgreSQL use a type like uint32\n> or uint64, but technically what we're going to be comparing against in\n> the end is probably an off_t produced by stat(), but the return value\n> of strtoul() or strtoull() is unsigned long or unsigned long long,\n> which is not any of those things. If you have a suggestion here, I'm\n> all ears.\n\nI don't know if it's realistic to expect that this code might be used\nto process JSON blobs exceeding 4GB. But if it is, I'd be inclined to\nuse uint64 and strtoull for these purposes, if only to avoid\ncross-platform hazards with varying sizeof(long) and sizeof(size_t).\n\nUm, wait ... we do have strtou64(), so you should use that.\n\n>> Aside from that, I'm unimpressed with expending a five-line comment\n>> at line 376 to justify casting control_file_bytes to int,\n\n> I don't know what you mean by this.\n\nI mean that we have a widely-used, better solution. 
If you copied\nthis from someplace else, the someplace else could stand to be\nimproved too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Sep 2024 11:24:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Sun, Sep 29, 2024 at 1:03 PM Tom Lane <[email protected]> wrote:\n>>> CID 1620458: Resource leaks (RESOURCE_LEAK)\n>>> Variable \"buffer\" going out of scope leaks the storage it points to.\n\n> This looks like a real leak. It can only happen once per tarfile when\n> verifying a tar backup so it can never add up to much, but it makes\n> sense to fix it.\n\n+1\n\n>>> CID 1620457: Memory - illegal accesses (OVERRUN)\n>>> Overrunning array of 296 bytes at byte offset 296 by dereferencing pointer \"(char *)&mystreamer->control_file + mystreamer->control_file_bytes\".\n\n> I think this might be complaining about a potential zero-length copy.\n> Seems like perhaps the <= sizeof(ControlFileData) test should actually\n> be < sizeof(ControlFileData).\n\nThat's clearly an improvement, but I was wondering if we should also\nchange \"len\" and \"remaining\" to be unsigned (probably size_t).\nCoverity might be unhappy about the off-the-end array reference,\nbut perhaps it's also concerned about what happens if len is negative.\n\n>>> CID 1620456: Null pointer dereferences (FORWARD_NULL)\n>>> Passing null pointer \"suffix\" to \"strcmp\", which dereferences it.\n\n> This one is happening, I believe, because report_backup_error()\n> doesn't perform a non-local exit, but we have a bit of code here that\n> acts like it does.\n\nCheck.\n\n> Patch attached.\n\nWFM, modulo the suggestion about changing data types.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Sep 2024 11:31:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Mon, Sep 30, 2024 at 11:24 AM Tom Lane <[email protected]> wrote:\n> > This is just a string in a\n> > JSON file that represents an integer which will hopefully turn out to\n> > be the size of the file on disk. I guess I don't know what type I\n> > should be using here. Most things in PostgreSQL use a type like uint32\n> > or uint64, but technically what we're going to be comparing against in\n> > the end is probably an off_t produced by stat(), but the return value\n> > of strtoul() or strtoull() is unsigned long or unsigned long long,\n> > which is not any of those things. If you have a suggestion here, I'm\n> > all ears.\n>\n> I don't know if it's realistic to expect that this code might be used\n> to process JSON blobs exceeding 4GB. But if it is, I'd be inclined to\n> use uint64 and strtoull for these purposes, if only to avoid\n> cross-platform hazards with varying sizeof(long) and sizeof(size_t).\n>\n> Um, wait ... we do have strtou64(), so you should use that.\n\nThe thing we should be worried about is not how large a JSON blob\nmight be, but rather how large any file that appears in the data\ndirectory might be. So uint32 is out; and I think I hear you voting\nfor uint64 over size_t. But then how do you think we should print\nthat? 
Cast to unsigned long long and use %llu?\n\n> >> Aside from that, I'm unimpressed with expending a five-line comment\n> >> at line 376 to justify casting control_file_bytes to int,\n>\n> > I don't know what you mean by this.\n>\n> I mean that we have a widely-used, better solution. If you copied\n> this from someplace else, the someplace else could stand to be\n> improved too.\n\nI don't understand what you think the widely-used, better solution is\nhere. As far as I can see, there are two goods here, between which one\nmust choose. One can decide to use the same error message string, and\nI hope we can agree that's good, because I've been criticized in the\npast when I have done otherwise, as have many others. The other good\nis to use the most appropriate data type. One cannot have both of\nthose things in this instance, unless one goes and fixes the other\ncode also, but such a change had no business being part of this patch.\nIf the issue had been serious and likely to occur in real life, I\nwould have probably fixed it in a preparatory patch, but it isn't, so\nI settled for adding a comment.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 30 Sep 2024 16:05:01 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "On Mon, Sep 30, 2024 at 11:31 AM Tom Lane <[email protected]> wrote:\n> WFM, modulo the suggestion about changing data types.\n\nI would prefer not to make the data type change here because it has\nquite a few tentacles. If I change member_copy_control_data() then I\nhave to change astreamer_verify_content() which means changing the\nastreamer interface which means adjusting all of the other astreamers.\nThat can certainly be done, but it's quite possible it might provoke\nsome other Coverity warning. Since this is a length, it might've been\nbetter to use an unsigned data type, but there's no reason that I can\nsee why it should be size_t specifically: the origin of the value\ncould be either the return value of read(), which is ssize_t not\nsize_t, or the number of bytes returned by a decompression library or\nthe number of bytes present in a protocol message. Trying to make\nthings fit better here is just likely to make them fit worse someplace\nelse.\n\n\"You are in a maze of twisty little data types, all alike.\"\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 30 Sep 2024 16:11:58 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Mon, Sep 30, 2024 at 11:24 AM Tom Lane <[email protected]> wrote:\n>> Um, wait ... we do have strtou64(), so you should use that.\n\n> The thing we should be worried about is not how large a JSON blob\n> might be, but rather how large any file that appears in the data\n> directory might be. So uint32 is out; and I think I hear you voting\n> for uint64 over size_t.\n\nYes. size_t might only be 32 bits.\n\n> But then how do you think we should print\n> that? Cast to unsigned long long and use %llu?\n\nOur two standard solutions are to do that or to use UINT64_FORMAT.\nBut UINT64_FORMAT is problematic in translatable strings because\nthen the .po files would become platform-specific, so long long\nis what to use in that case. 
For a non-translated format string\nyou can do either.\n\n> I don't understand what you think the widely-used, better solution is\n> here.\n\nWhat we just said above.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Sep 2024 18:01:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Mon, Sep 30, 2024 at 11:31 AM Tom Lane <[email protected]> wrote:\n>> WFM, modulo the suggestion about changing data types.\n\n> I would prefer not to make the data type change here because it has\n> quite a few tentacles.\n\nI see your point for the function's \"len\" argument, but perhaps it's\nworth doing\n\n- int remaining;\n+ size_t remaining;\n\n remaining = sizeof(ControlFileData) - mystreamer->control_file_bytes;\n memcpy(((char *) &mystreamer->control_file)\n + mystreamer->control_file_bytes,\n- data, Min(len, remaining));\n+ data, Min((size_t) len, remaining));\n\nThis is enough to ensure the Min() remains safe.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Sep 2024 18:05:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_verifybackup: TAR format backup verification" } ]
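A small standalone sketch to make the message-formatting convention above concrete: a 64-bit byte count is reported with %llu plus an explicit cast to unsigned long long, rather than UINT64_FORMAT, so the format string stays portable and translatable. The helper name, the uint64 stand-in typedef, the use of fprintf(), and the sample values are illustrative assumptions, not code from the patch.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t uint64;        /* stand-in for the PostgreSQL typedef */

/* Hypothetical reporting helper; mirrors the message from the buildfarm failure. */
static void
report_size_mismatch(const char *pathname, const char *archive_name,
                     uint64 expected, uint64 actual)
{
    /* The explicit casts keep %llu correct on both 32- and 64-bit builds. */
    fprintf(stderr,
            "file \"%s\" in \"%s\" should contain %llu bytes, but read %llu bytes\n",
            pathname, archive_name,
            (unsigned long long) expected,
            (unsigned long long) actual);
}

int
main(void)
{
    report_size_mismatch("base/1/16384", "base.tar", 8192, 4096);
    return 0;
}

The same cast-plus-%llu (or %lld with long long) pattern is what the thread recommends wherever a uint64 length has to appear in a translatable message.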
[ { "msg_contents": "Greetings,\n\nThere are suggestions that you can use extended query to fetch from a\ncursor, however I don't see how you can bind values to the cursor ?\n\nPostgreSQL: Documentation: 16: FETCH\n<https://www.postgresql.org/docs/16/sql-fetch.html>\n\nIs this possible?\n\nDave Cramer\n\nGreetings,There are suggestions that you can use extended query to fetch from a cursor, however I don't see how you can bind values to the cursor ?PostgreSQL: Documentation: 16: FETCHIs this possible?Dave Cramer", "msg_date": "Wed, 10 Jul 2024 10:50:45 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Is it possible to create a cursor with hold using extended query\n protocol" }, { "msg_contents": "On Wednesday, July 10, 2024, Dave Cramer <[email protected]> wrote:\n\n> Greetings,\n>\n> There are suggestions that you can use extended query to fetch from a\n> cursor, however I don't see how you can bind values to the cursor ?\n>\n> PostgreSQL: Documentation: 16: FETCH\n> <https://www.postgresql.org/docs/16/sql-fetch.html>\n>\n> Is this possible?\n>\n\nNot that i can see. The declare’d query isn’t shown to accept $n bindings\nrather it must be executable (select or values). Per the note on declare,\nthe bind phase of the fetch command under the extended protocol is used to\ndetermine whether values retrieved are text or binary. Beyond that, the\nbind is really just a formality of the protocol, the same as for executing\nany other non-parameterized query that way.\n\nDavid J.\n\nOn Wednesday, July 10, 2024, Dave Cramer <[email protected]> wrote:Greetings,There are suggestions that you can use extended query to fetch from a cursor, however I don't see how you can bind values to the cursor ?PostgreSQL: Documentation: 16: FETCHIs this possible?Not that i can see.  The declare’d query isn’t shown to accept $n bindings rather it must be executable (select or values).  Per the note on declare, the bind phase of the fetch command under the extended protocol is used to determine whether values retrieved are text or binary.  Beyond that, the bind is really just a formality of the protocol, the same as for executing any other non-parameterized query that way.David J.", "msg_date": "Wed, 10 Jul 2024 08:04:15 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it possible to create a cursor with hold using extended query\n protocol" }, { "msg_contents": "On Wed, 10 Jul 2024 at 11:04, David G. Johnston <[email protected]>\nwrote:\n\n> On Wednesday, July 10, 2024, Dave Cramer <[email protected]> wrote:\n>\n>> Greetings,\n>>\n>> There are suggestions that you can use extended query to fetch from a\n>> cursor, however I don't see how you can bind values to the cursor ?\n>>\n>> PostgreSQL: Documentation: 16: FETCH\n>> <https://www.postgresql.org/docs/16/sql-fetch.html>\n>>\n>> Is this possible?\n>>\n>\n> Not that i can see. The declare’d query isn’t shown to accept $n bindings\n> rather it must be executable (select or values). Per the note on declare,\n> the bind phase of the fetch command under the extended protocol is used to\n> determine whether values retrieved are text or binary. 
Beyond that, the\n> bind is really just a formality of the protocol, the same as for executing\n> any other non-parameterized query that way.\n>\n\nSeems you can bind to the Declare though.\n\nexecute <unnamed>: BEGIN\n2024-07-10 11:18:57.247 EDT [98519] LOG: duration: 0.239 ms parse\n<unnamed>: DECLARE c1 CURSOR WITH HOLD FOR select * from vactbl where id <\n$1\n2024-07-10 11:18:57.247 EDT [98519] LOG: duration: 0.014 ms bind\n<unnamed>: DECLARE c1 CURSOR WITH HOLD FOR select * from vactbl where id <\n$1\n2024-07-10 11:18:57.247 EDT [98519] DETAIL: Parameters: $1 = '400'\n2024-07-10 11:18:57.248 EDT [98519] LOG: duration: 1.080 ms execute\n<unnamed>: DECLARE c1 CURSOR WITH HOLD FOR select * from vactbl where id <\n$1\n2024-07-10 11:18:57.248 EDT [98519] DETAIL: Parameters: $1 = '400'\n\nThanks,\n\nDave\n\n>\n>\n\nOn Wed, 10 Jul 2024 at 11:04, David G. Johnston <[email protected]> wrote:On Wednesday, July 10, 2024, Dave Cramer <[email protected]> wrote:Greetings,There are suggestions that you can use extended query to fetch from a cursor, however I don't see how you can bind values to the cursor ?PostgreSQL: Documentation: 16: FETCHIs this possible?Not that i can see.  The declare’d query isn’t shown to accept $n bindings rather it must be executable (select or values).  Per the note on declare, the bind phase of the fetch command under the extended protocol is used to determine whether values retrieved are text or binary.  Beyond that, the bind is really just a formality of the protocol, the same as for executing any other non-parameterized query that way.Seems you can bind to the Declare though.execute <unnamed>: BEGIN2024-07-10 11:18:57.247 EDT [98519] LOG:  duration: 0.239 ms  parse <unnamed>: DECLARE c1 CURSOR WITH HOLD FOR select * from vactbl where id < $12024-07-10 11:18:57.247 EDT [98519] LOG:  duration: 0.014 ms  bind <unnamed>: DECLARE c1 CURSOR WITH HOLD FOR select * from vactbl where id < $12024-07-10 11:18:57.247 EDT [98519] DETAIL:  Parameters: $1 = '400'2024-07-10 11:18:57.248 EDT [98519] LOG:  duration: 1.080 ms  execute <unnamed>: DECLARE c1 CURSOR WITH HOLD FOR select * from vactbl where id < $12024-07-10 11:18:57.248 EDT [98519] DETAIL:  Parameters: $1 = '400'Thanks,Dave", "msg_date": "Wed, 10 Jul 2024 11:28:49 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is it possible to create a cursor with hold using extended query\n protocol" }, { "msg_contents": "On Wed, Jul 10, 2024 at 8:29 AM Dave Cramer <[email protected]> wrote:\n\n>\n> On Wed, 10 Jul 2024 at 11:04, David G. Johnston <\n> [email protected]> wrote:\n>\n>> On Wednesday, July 10, 2024, Dave Cramer <[email protected]> wrote:\n>>\n>>> Greetings,\n>>>\n>>> There are suggestions that you can use extended query to fetch from a\n>>> cursor, however I don't see how you can bind values to the cursor ?\n>>>\n>>> PostgreSQL: Documentation: 16: FETCH\n>>> <https://www.postgresql.org/docs/16/sql-fetch.html>\n>>>\n>>> Is this possible?\n>>>\n>>\n>> Not that i can see. The declare’d query isn’t shown to accept $n\n>> bindings rather it must be executable (select or values). Per the note on\n>> declare, the bind phase of the fetch command under the extended protocol is\n>> used to determine whether values retrieved are text or binary. Beyond\n>> that, the bind is really just a formality of the protocol, the same as for\n>> executing any other non-parameterized query that way.\n>>\n>\n> Seems you can bind to the Declare though.\n>\n>\nRight. 
You got me trying to equate cursors and prepared statements and\nthey differ in exactly this way. The prepared binds are held until execute\nwhile cursor binds, and query execution for that matter, are immediate,\nwith fetch just being a way to obtain the rows already computed (at least\nconceptually, optimizations might exist). They both end up creating a\nnamed portal. You cannot declare an execute command which simplifies\nthings a bit.\n\nDavid J.\n\nOn Wed, Jul 10, 2024 at 8:29 AM Dave Cramer <[email protected]> wrote:On Wed, 10 Jul 2024 at 11:04, David G. Johnston <[email protected]> wrote:On Wednesday, July 10, 2024, Dave Cramer <[email protected]> wrote:Greetings,There are suggestions that you can use extended query to fetch from a cursor, however I don't see how you can bind values to the cursor ?PostgreSQL: Documentation: 16: FETCHIs this possible?Not that i can see.  The declare’d query isn’t shown to accept $n bindings rather it must be executable (select or values).  Per the note on declare, the bind phase of the fetch command under the extended protocol is used to determine whether values retrieved are text or binary.  Beyond that, the bind is really just a formality of the protocol, the same as for executing any other non-parameterized query that way.Seems you can bind to the Declare though. Right.  You got me trying to equate cursors and prepared statements and they differ in exactly this way.  The prepared binds are held until execute while cursor binds, and query execution for that matter, are immediate, with fetch just being a way to obtain the rows already computed (at least conceptually, optimizations might exist).  They both end up creating a named portal.  You cannot declare an execute command which simplifies things a bit.David J.", "msg_date": "Wed, 10 Jul 2024 08:56:04 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it possible to create a cursor with hold using extended query\n protocol" } ]
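A minimal libpq sketch of the client-side flow behind the server log quoted above: the parameter is bound at DECLARE time, and the held cursor is then read with ordinary FETCH statements. The cursor name, table name, and the value 400 are taken from the log excerpt; the connection string and error handling are illustrative assumptions only.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=postgres");  /* assumed connection string */
    const char *params[1] = {"400"};                    /* value bound to $1, as in the log */
    PGresult   *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    PQclear(PQexec(conn, "BEGIN"));

    /* The bind happens here, on the DECLARE, not on the later FETCH. */
    res = PQexecParams(conn,
                       "DECLARE c1 CURSOR WITH HOLD FOR "
                       "SELECT * FROM vactbl WHERE id < $1",
                       1, NULL, params, NULL, NULL, 0);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "DECLARE failed: %s", PQerrorMessage(conn));
    PQclear(res);

    /* WITH HOLD keeps c1 usable after the transaction ends. */
    PQclear(PQexec(conn, "COMMIT"));

    res = PQexec(conn, "FETCH 10 FROM c1");
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("fetched %d rows\n", PQntuples(res));
    PQclear(res);

    PQclear(PQexec(conn, "CLOSE c1"));
    PQfinish(conn);
    return 0;
}

The FETCH itself carries no parameters; as noted in the replies, its Bind step under the extended protocol only selects text or binary output for the returned columns.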
[ { "msg_contents": "I don't quite understand why TransactionIdIsCurrentTransactionId() implements\nbinary search in ParallelCurrentXids \"from scratch\" instead of using\nbsearch().\n\nIf I read the code correctly, the contents of the ParallelCurrentXids is\ncomposed in SerializeTransactionState(), which uses xidComparator:\n\n\tqsort(workspace, nxids, sizeof(TransactionId), xidComparator);\n\nso it should be o.k. to use bsearch(..., xidComparator).\n\nFor example, ReorderBufferCopySnap() also uses xidComparator to sort the\n'subxip' array, and HeapTupleSatisfiesHistoricMVCC() then uses\nTransactionIdInArray() (which is effectively bsearch(..., xidComparator)) to\nsearch for particular XID in the array.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Wed, 10 Jul 2024 17:00:13 +0200", "msg_from": "Antonin Houska <[email protected]>", "msg_from_op": true, "msg_subject": "Missed opportunity for bsearch() in\n TransactionIdIsCurrentTransactionId()?" }, { "msg_contents": "On Wed, Jul 10, 2024 at 05:00:13PM +0200, Antonin Houska wrote:\n> I don't quite understand why TransactionIdIsCurrentTransactionId() implements\n> binary search in ParallelCurrentXids \"from scratch\" instead of using\n> bsearch().\n\nI believe there are a number of open-coded binary searches in the tree. My\nconcern with switching them to bsearch() would be the performance impact of\nusing a function pointer for the comparisons. Perhaps we could add a\ncouple of inlined binary search implementations for common types to replace\nmany of the open-coded ones.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 10 Jul 2024 13:59:15 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missed opportunity for bsearch() in\n TransactionIdIsCurrentTransactionId()?" }, { "msg_contents": "Nathan Bossart <[email protected]> wrote:\n\n> On Wed, Jul 10, 2024 at 05:00:13PM +0200, Antonin Houska wrote:\n> > I don't quite understand why TransactionIdIsCurrentTransactionId() implements\n> > binary search in ParallelCurrentXids \"from scratch\" instead of using\n> > bsearch().\n> \n> I believe there are a number of open-coded binary searches in the tree.\n\nNot sure if there are many, but I could find some:\n\n* TransactionIdIsCurrentTransactionId()\n\n* RelationHasSysCache()\n\n* pg_dump.c:getRoleName()\n\n> My concern with switching them to bsearch() would be the performance impact\n> of using a function pointer for the comparisons. Perhaps we could add a\n> couple of inlined binary search implementations for common types to replace\n> many of the open-coded ones.\n\nbsearch() appears to be used widely, but o.k., I don't insist on using it to\nreplace the existing open-coded searches.\n\nWhat actually bothers me more than the absence of bsearch() is that\nTransactionIdIsCurrentTransactionId() implements the binary search from\nscratch. Even w/o bsearch(), it can still call TransactionIdInArray(). I ran\ninto the problem when working on [1], which adds one more XID array.\n\nDoes the attached patch seem worth being applied separately, or at all?\n\n[1] https://www.postgresql.org/message-id/82651.1720540558%40antos\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Fri, 12 Jul 2024 12:01:11 +0200", "msg_from": "Antonin Houska <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Missed opportunity for bsearch() in\n TransactionIdIsCurrentTransactionId()?" 
}, { "msg_contents": "On Fri, Jul 12, 2024 at 12:01:11PM +0200, Antonin Houska wrote:\n> Nathan Bossart <[email protected]> wrote:\n>> My concern with switching them to bsearch() would be the performance impact\n>> of using a function pointer for the comparisons. Perhaps we could add a\n>> couple of inlined binary search implementations for common types to replace\n>> many of the open-coded ones.\n> \n> bsearch() appears to be used widely, but o.k., I don't insist on using it to\n> replace the existing open-coded searches.\n\nSorry, I didn't mean to say that I was totally against switching to\nbsearch(), but I do think we need to understand whether there is any\nperformance impact before doing so, especially in hot paths. It would be\nnice if we could reduce the number of open-coded binary searches in some\nfashion, and if we can maintain or improve performance by creating a\nhandful of static inline functions, that would be even better. If there's\nno apparent performance difference, bsearch() is probably fine.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 12 Jul 2024 10:58:30 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Missed opportunity for bsearch() in\n TransactionIdIsCurrentTransactionId()?" } ]
[ { "msg_contents": "Please find attached a patch to allow for durations to optionally be sent\nto separate log files. In other words, rather than cluttering up our\npostgres202007.log file with tons of output from\nlog_min_duration_statement, duration lines are sent instead to the file\npostgres202007.duration.\n\nOver the years, durations have been the number one cause of very large log\nfiles, in which the more \"important\" items get buried in the noise. Also,\nprograms that are scanning for durations typically do not care about the\nnormal, non-duration output. Some people have a policy of logging\neverything, which in effect means setting log_min_duration_statement to 0,\nwhich in turn makes your log files nearly worthless for spotting\nnon-duration items. This feature will also be very useful for those who\nneed to temporarily turn on log_min_duration_statement, for some quick\nauditing of exactly what is being run on their database. When done, you can\nmove or remove the duration file without messing up your existing log\nstream.\n\nThis only covers the case when both the duration and statement are set on\nthe same line. In other words, log_min_duration_statement output, but not\nlog_duration (which is best avoided anyway). It also requires\nlogging_collector to be on, obviously.\n\nDetails:\n\nThe edata structure is expanded to have a new message_type, with a matching\nfunction errmessagetype() created.\n[include/utils/elog.h]\n[backend/utils/elog.c]\n\nAny errors that have both a duration and a statement are marked via\nerrmessagetype()\n[backend/tcop/postgres.c]\n\nA new GUC named \"log_duration_destination\" is created, which supports any\ncombination of stderr, csvlog, and jsonlog. It does not need to match\nlog_destination, in order to support different use cases. For example, the\nuser wants durations sent to a CSV file for processing by some other tool,\nbut still wants everything else going to a normal text log file.\n\nCode: [include/utils/guc_hooks.h] [backend/utils/misc/guc_tables.c]\nDocs: [sgml/config.sgml] [backend/utils/misc/postgresql.conf.sample]\n\nCreate a new flag called PIPE_PROTO_DEST_DURATION\n[include/postmaster/syslogger.h]\n\nCreate new flags:\n LOG_DESTINATION_DURATION,\n LOG_DESTINATION_DURATION_CSV,\n LOG_DESTINATION_DURATION_JSON\n[include/utils/elog.h]\n\nRouting and mapping LOG_DESTINATION to PIPE_PROTO\n[backend/utils/error/elog.c]\n\nMinor rerouting when using alternate forms\n[backend/utils/error/csvlog.c]\n[backend/utils/error/jsonlog.c]\n\nCreate new filehandles, do log rotation, map PIPE_PROTO to LOG_DESTINATION.\nRotation and entry into the \"current_logfiles\" file are the same as\nexisting log files. The new names/suffixes are duration, duration.csv, and\nduration.json.\n[backend/postmaster/syslogger.c]\n\nTesting to ensure combinations of log_destination and\nlog_duration_destination work as intended\n[bin/pg_ctl/meson.build]\n[bin/pg_ctl/t/005_log_duration_destination.pl]\n\nQuestions I've asked along the way, and perhaps other might as well:\n\nWhat about logging other things? Why just duration?\n\nDuration logging is a very unique event in our logs. There is nothing quite\nlike it - it's always client-driven, yet automatically generated. And it\ncan be extraordinarily verbose. Removing it from the existing logging\nstream has no particular downsides. 
Almost any other class of log message\nwould likely meet resistance as far as moving it to a separate log file,\nwith good reason.\n\nWhy not build a more generic log filtering case?\n\nI looked into this, but it would be a large undertaking, given the way our\nlogging system works. And as per above, I don't think the pain would be\nworth it, as duration covers 99% of the use cases for separate logs.\nCertainly, nothing else other than a recurring ERROR from the client can\ncause massive bloat in the size of the files. (There is a nearby patch to\nexclude certain errors from the log file as a way to mitigate the error\nspam - I don't like that idea, but should mention it here as another effort\nto keep the log files a manageable size)\n\nWhy not use an extension for this?\n\nI did start this as an extension, but it only goes so far. We can use\nemit_log_hook, but it requires careful coordination of open filehandles,\nhas to do inefficient regex of every log message, and cannot do things like\nlog rotation.\n\nWhy not bitmap PIPE_PROTO *and* LOG_DESTINATION?\n\nI tried to do both as simple bitmaps (i.e. duration+csv = duration.csv),\nand not have to use e.g. LOG_DESTIATION_DURATION_CSV, but size_rotation_for\nruined it for me. Since our PIPE always sends one thing at a time, a single\nnew flag enables it to stay as a clean bits8 type.\n\nWhat about Windows?\n\nUntested. I don't have access to a Windows build, but I think in theory it\nshould work fine.\n\nCheers,\nGreg", "msg_date": "Wed, 10 Jul 2024 11:58:01 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": true, "msg_subject": "Send duration output to separate log files" }, { "msg_contents": "On Wed, 10 Jul 2024 at 16:58, Greg Sabino Mullane <[email protected]>\nwrote:\n--snip--\n\n> Why not build a more generic log filtering case?\n>\n> I looked into this, but it would be a large undertaking, given the way our\n> logging system works. And as per above, I don't think the pain would be\n> worth it, as duration covers 99% of the use cases for separate logs.\n>\n\n The other category of logging which would benefit from a separate file is\naudit. It also can create massive volumes of log content. Splitting audit\ninformation off into a separate file for use by a separate team or function\nis also a request I have heard from some financial institutions adopting\nPostgres. With audit being provided by an extension, this would become\nquite an intrusive change.\n\nWhy not use an extension for this?\n>\n> I did start this as an extension, but it only goes so far. We can use\n> emit_log_hook, but it requires careful coordination of open filehandles,\n> has to do inefficient regex of every log message, and cannot do things like\n> log rotation.\n>\n\nWould an extension be able to safely modify the message_type field you have\nadded using emit_log_hook? If so, the field becomes more of a log\ndestination label than a type marker. If an extension could hook into the\nlog file creation/rotation process, that would be a nice basis for enabling\nextensions (I'm particularly thinking of pgAudit) to manage separate\nlogging destinations.\n\nOn Wed, 10 Jul 2024 at 16:58, Greg Sabino Mullane <[email protected]> wrote:--snip--Why not build a more generic log filtering case?I looked into this, but it would be a large undertaking, given the way our logging system works. And as per above, I don't think the pain would be worth it, as duration covers 99% of the use cases for separate logs.  
The other category of logging which would benefit from a separate file is audit. It also can create massive volumes of log content. Splitting audit information off into a separate file for use by a separate team or function is also a request I have heard from some financial institutions adopting Postgres. With audit being provided by an extension, this would become quite an intrusive change. Why not use an extension for this?I did start this as an extension, but it only goes so far. We can use emit_log_hook, but it requires careful coordination of open filehandles, has to do inefficient regex of every log message, and cannot do things like log rotation.Would an extension be able to safely modify the message_type field you have added using emit_log_hook? If so, the field becomes more of a log destination label than a type marker. If an extension could hook into the log file creation/rotation process, that would be a nice basis for enabling extensions (I'm particularly thinking of pgAudit) to manage separate logging destinations.", "msg_date": "Thu, 11 Jul 2024 11:47:45 +0100", "msg_from": "Alastair Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Send duration output to separate log files" }, { "msg_contents": "On Thu, Jul 11, 2024 at 6:47 AM Alastair Turner <[email protected]> wrote:\n\n> The other category of logging which would benefit from a separate file is\n> audit. It also can create massive volumes of log content. Splitting audit\n> information off into a separate file for use by a separate team or function\n> is also a request I have heard from some financial institutions adopting\n> Postgres. With audit being provided by an extension, this would become\n> quite an intrusive change.\n>\n\nThanks for the feedback. I agree pgaudit is another thing that can create\nmassive log files, and should be solved at some point. However, I wanted\nto keep this patch self-contained to in-core stuff. And pgaudit is already\nan odd duck, in that it puts CSV into your stderr stream (and into your\njson!). Ideally it would put a single CSV stream into a separate csv file.\nPerhaps even something that did not necessarily live in log_directory.\n\nWould an extension be able to safely modify the message_type field you have\n>> added using emit_log_hook? If so, the field becomes more of a log\n>> destination label than a type marker. If an extension could hook into the\n>> log file creation/rotation process, that would be a nice basis for enabling\n>> extensions (I'm particularly thinking of pgAudit) to manage separate\n>> logging destinations.\n>>\n>\nYes, I had more than duration in mind when I created errmessagetype. A hook\nto set it would be the obvious next step, and then some sort of way of\nmapping that to arbitrary log files. But I see that as mostly orthagonal to\nthis patch (and certainly a much larger endeavor).\n\n(wades in anyways). I'm not sure about hooking into the log rotation\nprocess so much as registering something on startup, then letting Postgres\nhandle all the log files in the queue. Although as I alluded to above,\nsometimes having large log files NOT live in the data directory (or more\nspecifically, not hang out with the log_directory crowd), could be a plus\nfor space, efficiency, and security reasons. That makes log rotation\nharder, however. And do we / should we put extension-driven logfiles into\ncurrent_logfiles? Do we still fall back to stderr even for extension logs?\nLots to ponder. 
:)\n\nCheers,\nGreg\n\nOn Thu, Jul 11, 2024 at 6:47 AM Alastair Turner <[email protected]> wrote: The other category of logging which would benefit from a separate file is audit. It also can create massive volumes of log content. Splitting audit information off into a separate file for use by a separate team or function is also a request I have heard from some financial institutions adopting Postgres. With audit being provided by an extension, this would become quite an intrusive change.Thanks for the feedback. I agree pgaudit is another thing that can create massive log files, and should be solved at some point.  However, I wanted to keep this patch self-contained to in-core stuff. And pgaudit is already an odd duck, in that it puts CSV into your stderr stream (and into your json!). Ideally it would put a single CSV stream into a separate csv file. Perhaps even something that did not necessarily live in log_directory.Would an extension be able to safely modify the message_type field you have added using emit_log_hook? If so, the field becomes more of a log destination label than a type marker. If an extension could hook into the log file creation/rotation process, that would be a nice basis for enabling extensions (I'm particularly thinking of pgAudit) to manage separate logging destinations.Yes, I had more than duration in mind when I created errmessagetype. A hook to set it would be the obvious next step, and then some sort of way of mapping that to arbitrary log files. But I see that as mostly orthagonal to this patch (and certainly a much larger endeavor).(wades in anyways). I'm not sure about hooking into the log rotation process so much as registering something on startup, then letting Postgres handle all the log files in the queue. Although as I alluded to above, sometimes having large log files NOT live in the data directory (or more specifically, not hang out with the log_directory crowd), could be a plus for space, efficiency, and security reasons. That makes log rotation harder, however. And do we / should we put extension-driven logfiles into current_logfiles? Do we still fall back to stderr even for extension logs? Lots to ponder. :)Cheers,Greg", "msg_date": "Fri, 12 Jul 2024 10:57:40 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Send duration output to separate log files" }, { "msg_contents": "On Fri, 12 Jul 2024 at 15:58, Greg Sabino Mullane <[email protected]>\nwrote:\n\n> On Thu, Jul 11, 2024 at 6:47 AM Alastair Turner <[email protected]>\n> wrote:\n>\n>> The other category of logging which would benefit from a separate file\n>> is audit. It also can create massive volumes of log content. Splitting\n>> audit information off into a separate file for use by a separate team or\n>> function is also a request I have heard from some financial institutions\n>> adopting Postgres. With audit being provided by an extension, this would\n>> become quite an intrusive change.\n>>\n>\n> Thanks for the feedback. I agree pgaudit is another thing that can create\n> massive log files, and should be solved at some point. However, I wanted\n> to keep this patch self-contained to in-core stuff. And pgaudit is already\n> an odd duck, in that it puts CSV into your stderr stream (and into your\n> json!). 
Ideally it would put a single CSV stream into a separate csv file.\n> Perhaps even something that did not necessarily live in log_directory.\n>\n> Would an extension be able to safely modify the message_type field you\n>>> have added using emit_log_hook? If so, the field becomes more of a log\n>>> destination label than a type marker. If an extension could hook into the\n>>> log file creation/rotation process, that would be a nice basis for enabling\n>>> extensions (I'm particularly thinking of pgAudit) to manage separate\n>>> logging destinations.\n>>>\n>>\n> Yes, I had more than duration in mind when I created errmessagetype. A\n> hook to set it would be the obvious next step, and then some sort of way of\n> mapping that to arbitrary log files. But I see that as mostly orthagonal to\n> this patch (and certainly a much larger endeavor).\n>\n\nOk. This facility to separate out the logging of the duration messages is\nuseful on its own and I can see the reasoning for using the core\nfunctionality for log rotation to manage these separate logs rather than\nredoing all the file handling work in an extension. A broader framework for\nmapping messages to arbitrary log files is a far larger set of changes\nwhich can be tackled later if desired.\n\nI've had a look at the patch. The test cases look comprehensive. The patch\napplies cleanly. The newly supplied tests (13 of the 40) and the\ntest_misc/003_check_guc (1 - no parameters missing from\npostgresql.conf.sample) fail.\n\nTo leave some runway for this idea to be extended on without disrupting the\nuser experience could the GUC name be feature qualified as\nduration_log.log_destination? This would provide a clear naming structure\nfor the most obvious follow-on patch to this one - allowing users to set\nlog_directory separately for these duration logs - as well as any further\nseparate logging efforts. I know that these dot-separated GUC names are\ngenerally associated with extensions, but I can't find a hard rule on the\nissue anywhere, and it feels like a reasonable way to group up the purpose\n(in this case logging duration messages) for which there are specific\nvalues of a number of GUCs (log_destination, log_directory, even\nlog_filename, ...).\n\nCheers\nAlastair\n\n>\n\nOn Fri, 12 Jul 2024 at 15:58, Greg Sabino Mullane <[email protected]> wrote:On Thu, Jul 11, 2024 at 6:47 AM Alastair Turner <[email protected]> wrote: The other category of logging which would benefit from a separate file is audit. It also can create massive volumes of log content. Splitting audit information off into a separate file for use by a separate team or function is also a request I have heard from some financial institutions adopting Postgres. With audit being provided by an extension, this would become quite an intrusive change.Thanks for the feedback. I agree pgaudit is another thing that can create massive log files, and should be solved at some point.  However, I wanted to keep this patch self-contained to in-core stuff. And pgaudit is already an odd duck, in that it puts CSV into your stderr stream (and into your json!). Ideally it would put a single CSV stream into a separate csv file. Perhaps even something that did not necessarily live in log_directory.Would an extension be able to safely modify the message_type field you have added using emit_log_hook? If so, the field becomes more of a log destination label than a type marker. 
If an extension could hook into the log file creation/rotation process, that would be a nice basis for enabling extensions (I'm particularly thinking of pgAudit) to manage separate logging destinations.Yes, I had more than duration in mind when I created errmessagetype. A hook to set it would be the obvious next step, and then some sort of way of mapping that to arbitrary log files. But I see that as mostly orthagonal to this patch (and certainly a much larger endeavor).Ok. This facility to separate out the logging of the duration messages is useful on its own and I can see the reasoning for using the core functionality for log rotation to manage these separate logs rather than redoing all the file handling work in an extension. A broader framework for mapping messages to arbitrary log files is a far larger set of changes which can be tackled later if desired.I've had a look at the patch. The test cases look comprehensive. The patch applies cleanly. The newly supplied tests (13 of the 40) and the test_misc/003_check_guc (1 - no parameters missing from postgresql.conf.sample) fail.To leave some runway for this idea to be extended on without disrupting the user experience could the GUC name be feature qualified as duration_log.log_destination? This would provide a clear naming structure for the most obvious follow-on patch to this one - allowing users to set log_directory separately for these duration logs - as well as any further separate logging efforts. I know that these dot-separated GUC names are generally associated with extensions, but I can't find a hard rule on the issue anywhere, and it feels like a reasonable way to group up the purpose (in this case logging duration messages) for which there are specific values of a number of GUCs (log_destination, log_directory, even log_filename, ...).CheersAlastair", "msg_date": "Mon, 22 Jul 2024 15:31:50 +0100", "msg_from": "Alastair Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Send duration output to separate log files" }, { "msg_contents": "Hi\n\nst 10. 7. 2024 v 17:58 odesílatel Greg Sabino Mullane <[email protected]>\nnapsal:\n\n> Please find attached a patch to allow for durations to optionally be sent\n> to separate log files. In other words, rather than cluttering up our\n> postgres202007.log file with tons of output from\n> log_min_duration_statement, duration lines are sent instead to the file\n> postgres202007.duration.\n>\n> Over the years, durations have been the number one cause of very large log\n> files, in which the more \"important\" items get buried in the noise. Also,\n> programs that are scanning for durations typically do not care about the\n> normal, non-duration output. Some people have a policy of logging\n> everything, which in effect means setting log_min_duration_statement to 0,\n> which in turn makes your log files nearly worthless for spotting\n> non-duration items. This feature will also be very useful for those who\n> need to temporarily turn on log_min_duration_statement, for some quick\n> auditing of exactly what is being run on their database. When done, you can\n> move or remove the duration file without messing up your existing log\n> stream.\n>\n> This only covers the case when both the duration and statement are set on\n> the same line. In other words, log_min_duration_statement output, but not\n> log_duration (which is best avoided anyway). 
It also requires\n> logging_collector to be on, obviously.\n>\n> Details:\n>\n> The edata structure is expanded to have a new message_type, with a\n> matching function errmessagetype() created.\n> [include/utils/elog.h]\n> [backend/utils/elog.c]\n>\n> Any errors that have both a duration and a statement are marked via\n> errmessagetype()\n> [backend/tcop/postgres.c]\n>\n> A new GUC named \"log_duration_destination\" is created, which supports any\n> combination of stderr, csvlog, and jsonlog. It does not need to match\n> log_destination, in order to support different use cases. For example, the\n> user wants durations sent to a CSV file for processing by some other tool,\n> but still wants everything else going to a normal text log file.\n>\n> Code: [include/utils/guc_hooks.h] [backend/utils/misc/guc_tables.c]\n> Docs: [sgml/config.sgml] [backend/utils/misc/postgresql.conf.sample]\n>\n> Create a new flag called PIPE_PROTO_DEST_DURATION\n> [include/postmaster/syslogger.h]\n>\n> Create new flags:\n> LOG_DESTINATION_DURATION,\n> LOG_DESTINATION_DURATION_CSV,\n> LOG_DESTINATION_DURATION_JSON\n> [include/utils/elog.h]\n>\n> Routing and mapping LOG_DESTINATION to PIPE_PROTO\n> [backend/utils/error/elog.c]\n>\n> Minor rerouting when using alternate forms\n> [backend/utils/error/csvlog.c]\n> [backend/utils/error/jsonlog.c]\n>\n> Create new filehandles, do log rotation, map PIPE_PROTO to\n> LOG_DESTINATION. Rotation and entry into the \"current_logfiles\" file are\n> the same as existing log files. The new names/suffixes are duration,\n> duration.csv, and duration.json.\n> [backend/postmaster/syslogger.c]\n>\n> Testing to ensure combinations of log_destination and\n> log_duration_destination work as intended\n> [bin/pg_ctl/meson.build]\n> [bin/pg_ctl/t/005_log_duration_destination.pl]\n>\n> Questions I've asked along the way, and perhaps other might as well:\n>\n> What about logging other things? Why just duration?\n>\n> Duration logging is a very unique event in our logs. There is nothing\n> quite like it - it's always client-driven, yet automatically generated. And\n> it can be extraordinarily verbose. Removing it from the existing logging\n> stream has no particular downsides. Almost any other class of log message\n> would likely meet resistance as far as moving it to a separate log file,\n> with good reason.\n>\n> Why not build a more generic log filtering case?\n>\n> I looked into this, but it would be a large undertaking, given the way our\n> logging system works. And as per above, I don't think the pain would be\n> worth it, as duration covers 99% of the use cases for separate logs.\n> Certainly, nothing else other than a recurring ERROR from the client can\n> cause massive bloat in the size of the files. (There is a nearby patch to\n> exclude certain errors from the log file as a way to mitigate the error\n> spam - I don't like that idea, but should mention it here as another effort\n> to keep the log files a manageable size)\n>\n> Why not use an extension for this?\n>\n> I did start this as an extension, but it only goes so far. We can use\n> emit_log_hook, but it requires careful coordination of open filehandles,\n> has to do inefficient regex of every log message, and cannot do things like\n> log rotation.\n>\n> Why not bitmap PIPE_PROTO *and* LOG_DESTINATION?\n>\n> I tried to do both as simple bitmaps (i.e. duration+csv = duration.csv),\n> and not have to use e.g. LOG_DESTIATION_DURATION_CSV, but size_rotation_for\n> ruined it for me. 
Since our PIPE always sends one thing at a time, a single\n> new flag enables it to stay as a clean bits8 type.\n>\n> What about Windows?\n>\n> Untested. I don't have access to a Windows build, but I think in theory it\n> should work fine.\n>\n\nI like the proposed feature, but I miss two points.\n\n1. possibility to support auto_explain\n\n2. possibility to support rsyslog by setting different or some syslog\nrelated redirection by setting different facility.\n\nRegards\n\nPavel\n\n\n> Cheers,\n> Greg\n>\n>\n\nHist 10. 7. 2024 v 17:58 odesílatel Greg Sabino Mullane <[email protected]> napsal:Please find attached a patch to allow for durations to optionally be sent to separate log files. In other words, rather than cluttering up our postgres202007.log file with tons of output from log_min_duration_statement, duration lines are sent instead to the file postgres202007.duration.Over the years, durations have been the number one cause of very large log files, in which the more \"important\" items get buried in the noise. Also, programs that are scanning for durations typically do not care about the normal, non-duration output. Some people have a policy of logging everything, which in effect means setting log_min_duration_statement to 0, which in turn makes your log files nearly worthless for spotting non-duration items. This feature will also be very useful for those who need to temporarily turn on log_min_duration_statement, for some quick auditing of exactly what is being run on their database. When done, you can move or remove the duration file without messing up your existing log stream.This only covers the case when both the duration and statement are set on the same line. In other words, log_min_duration_statement output, but not log_duration (which is best avoided anyway). It also requires logging_collector to be on, obviously.Details:The edata structure is expanded to have a new message_type, with a matching function errmessagetype() created.[include/utils/elog.h][backend/utils/elog.c]Any errors that have both a duration and a statement are marked via errmessagetype()[backend/tcop/postgres.c]A new GUC named \"log_duration_destination\" is created, which supports any combination of stderr, csvlog, and jsonlog. It does not need to match log_destination, in order to support different use cases. For example, the user wants durations sent to a CSV file for processing by some other tool, but still wants everything else going to a normal text log file.Code: [include/utils/guc_hooks.h] [backend/utils/misc/guc_tables.c]Docs: [sgml/config.sgml]  [backend/utils/misc/postgresql.conf.sample]Create a new flag called PIPE_PROTO_DEST_DURATION[include/postmaster/syslogger.h]Create new flags:  LOG_DESTINATION_DURATION,  LOG_DESTINATION_DURATION_CSV,  LOG_DESTINATION_DURATION_JSON[include/utils/elog.h]Routing and mapping LOG_DESTINATION to PIPE_PROTO[backend/utils/error/elog.c]Minor rerouting when using alternate forms[backend/utils/error/csvlog.c][backend/utils/error/jsonlog.c]Create new filehandles, do log rotation, map PIPE_PROTO to LOG_DESTINATION. Rotation and entry into the \"current_logfiles\" file are the same as existing log files. The new names/suffixes are duration, duration.csv, and duration.json.[backend/postmaster/syslogger.c]Testing to ensure combinations of log_destination and log_duration_destination work as intended[bin/pg_ctl/meson.build][bin/pg_ctl/t/005_log_duration_destination.pl]Questions I've asked along the way, and perhaps other might as well:What about logging other things? 
Why just duration?Duration logging is a very unique event in our logs. There is nothing quite like it - it's always client-driven, yet automatically generated. And it can be extraordinarily verbose. Removing it from the existing logging stream has no particular downsides. Almost any other class of log message would likely meet resistance as far as moving it to a separate log file, with good reason.Why not build a more generic log filtering case?I looked into this, but it would be a large undertaking, given the way our logging system works. And as per above, I don't think the pain would be worth it, as duration covers 99% of the use cases for separate logs. Certainly, nothing else other than a recurring ERROR from the client can cause massive bloat in the size of the files. (There is a nearby patch to exclude certain errors from the log file as a way to mitigate the error spam - I don't like that idea, but should mention it here as another effort to keep the log files a manageable size)Why not use an extension for this?I did start this as an extension, but it only goes so far. We can use emit_log_hook, but it requires careful coordination of open filehandles, has to do inefficient regex of every log message, and cannot do things like log rotation.Why not bitmap PIPE_PROTO *and* LOG_DESTINATION?I tried to do both as simple bitmaps (i.e. duration+csv = duration.csv), and not have to use e.g. LOG_DESTIATION_DURATION_CSV, but size_rotation_for ruined it for me. Since our PIPE always sends one thing at a time, a single new flag enables it to stay as a clean bits8 type.What about Windows?Untested. I don't have access to a Windows build, but I think in theory it should work fine.I like the proposed feature, but I miss two points. 1. possibility to support auto_explain2. possibility to support rsyslog by setting different or some syslog related redirection by setting different facility.RegardsPavel Cheers,Greg", "msg_date": "Wed, 24 Jul 2024 15:45:19 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Send duration output to separate log files" } ]
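As an aside on the extension route this thread sets aside: a minimal sketch of what the emit_log_hook approach would look like, and where it runs into the problems described above (matching every single log message, hand-managing file handles, no rotation). The module, function, and file names here are hypothetical; this illustrates the approach the thread rejects as insufficient, not the proposed patch itself.

#include "postgres.h"
#include "fmgr.h"
#include "utils/elog.h"

PG_MODULE_MAGIC;

static emit_log_hook_type prev_emit_log_hook = NULL;
static FILE *duration_fh = NULL;    /* the hook has to manage this handle itself */

static void
duration_emit_log_hook(ErrorData *edata)
{
    /* crude prefix match applied to every single log message */
    if (edata->message && strncmp(edata->message, "duration: ", 10) == 0)
    {
        if (duration_fh == NULL)
            duration_fh = fopen("duration.log", "a");   /* no rotation, no current_logfiles entry */
        if (duration_fh != NULL)
        {
            fprintf(duration_fh, "%s\n", edata->message);
            fflush(duration_fh);
            edata->output_to_server = false;    /* keep it out of the regular log */
        }
    }
    if (prev_emit_log_hook)
        prev_emit_log_hook(edata);
}

void
_PG_init(void)
{
    prev_emit_log_hook = emit_log_hook;
    emit_log_hook = duration_emit_log_hook;
}

Everything past the prefix match is what the in-core patch gets for free from syslogger.c: rotation, current_logfiles bookkeeping, and filehandle management.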
[ { "msg_contents": "Hi,\r\nin func&nbsp;get_rel_sync_entry() we access the same tuple in pg_class three times:\r\n&nbsp; &nbsp; Oid\t&nbsp; &nbsp; &nbsp; &nbsp; schemaId = get_rel_namespace(relid);\r\n&nbsp; &nbsp; bool\t\tam_partition = get_rel_relispartition(relid);\r\n&nbsp; &nbsp; char\t\trelkind = get_rel_relkind(relid);\r\nWhy not just merge into one?\r\n\r\n\r\n--\r\nRegards,\r\nChangAo Chen\nHi,in func get_rel_sync_entry() we access the same tuple in pg_class three times:    Oid         schemaId = get_rel_namespace(relid);    bool am_partition = get_rel_relispartition(relid);    char relkind = get_rel_relkind(relid);Why not just merge into one?--Regards,ChangAo Chen", "msg_date": "Thu, 11 Jul 2024 14:08:32 +0800", "msg_from": "\"=?ISO-8859-1?B?Y2NhNTUwNw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Redundant syscache access in get_rel_sync_entry()" }, { "msg_contents": "On Thu, Jul 11, 2024 at 11:38 AM cca5507 <[email protected]> wrote:\n\n> Hi,\n> in func get_rel_sync_entry() we access the same tuple in pg_class three\n> times:\n> Oid schemaId = get_rel_namespace(relid);\n> bool am_partition = get_rel_relispartition(relid);\n> char relkind = get_rel_relkind(relid);\n> Why not just merge into one?\n>\n\nI think it's just convenient. We do that at multiple places; not exactly\nthese functions but functions which fetch relation attributes from cached\ntuples. Given that the tuple is cached and local to the backend, it's not\ntoo expensive. But if there are multiple places which do something like\nthis, we may want to create more function get_rel_* function which return\nmultiple properties in one function call. I see get_rel_namspace() and\nget_rel_name() called together at many places.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Thu, Jul 11, 2024 at 11:38 AM cca5507 <[email protected]> wrote:Hi,in func get_rel_sync_entry() we access the same tuple in pg_class three times:    Oid         schemaId = get_rel_namespace(relid);    bool am_partition = get_rel_relispartition(relid);    char relkind = get_rel_relkind(relid);Why not just merge into one?I think it's just convenient. We do that at multiple places; not exactly these functions but functions which fetch relation attributes from cached tuples. Given that the tuple is cached and local to the backend, it's not too expensive.  But if there are multiple places which do something like this, we may want to create more function get_rel_* function which return multiple properties in one function call. I see get_rel_namspace() and get_rel_name() called together at many places.-- Best Wishes,Ashutosh Bapat", "msg_date": "Thu, 11 Jul 2024 19:10:58 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redundant syscache access in get_rel_sync_entry()" }, { "msg_contents": "------------------&nbsp;Original&nbsp;------------------\r\nFrom: \"Ashutosh Bapat\" <[email protected]&gt;;\r\nDate:&nbsp;Thu, Jul 11, 2024 09:40 PM\r\nTo:&nbsp;\"cca5507\"<[email protected]&gt;;\r\nCc:&nbsp;\"pgsql-hackers\"<[email protected]&gt;;\r\nSubject:&nbsp;Re: Redundant syscache access in get_rel_sync_entry()\r\n\r\nI think it's just convenient. We do that at multiple places; not exactly these functions but functions which fetch relation attributes from cached tuples. 
Given that the tuple is cached and local to the backend, it's not too expensive.&nbsp; But if there are multiple places which do something like this, we may want to create more function&nbsp;get_rel_* function which&nbsp;return multiple properties in one function call. I see get_rel_namspace() and get_rel_name() called together at many places.\r\n\r\n\r\nAgreed\r\n\r\n\r\nThank you for reply\r\n\r\n\r\n--\r\nRegards,\r\nChangAo Chen\n------------------ Original ------------------From: \"Ashutosh Bapat\" <[email protected]>;Date: Thu, Jul 11, 2024 09:40 PMTo: \"cca5507\"<[email protected]>;Cc: \"pgsql-hackers\"<[email protected]>;Subject: Re: Redundant syscache access in get_rel_sync_entry()I think it's just convenient. We do that at multiple places; not exactly these functions but functions which fetch relation attributes from cached tuples. Given that the tuple is cached and local to the backend, it's not too expensive.  But if there are multiple places which do something like this, we may want to create more function get_rel_* function which return multiple properties in one function call. I see get_rel_namspace() and get_rel_name() called together at many places.AgreedThank you for reply--Regards,ChangAo Chen", "msg_date": "Thu, 11 Jul 2024 22:09:13 +0800", "msg_from": "\"=?ISO-8859-1?B?Y2NhNTUwNw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Redundant syscache access in get_rel_sync_entry()" }, { "msg_contents": "On Thu, Jul 11, 2024 at 07:10:58PM +0530, Ashutosh Bapat wrote:\n> I think it's just convenient. We do that at multiple places; not exactly\n> these functions but functions which fetch relation attributes from cached\n> tuples. Given that the tuple is cached and local to the backend, it's not\n> too expensive. But if there are multiple places which do something like\n> this, we may want to create more function get_rel_* function which return\n> multiple properties in one function call. I see get_rel_namspace() and\n> get_rel_name() called together at many places.\n\nThat's not worth the complications based on the level of caching.\nThis code is fine as-is.\n--\nMichael", "msg_date": "Fri, 12 Jul 2024 10:25:32 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redundant syscache access in get_rel_sync_entry()" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Thu, Jul 11, 2024 at 07:10:58PM +0530, Ashutosh Bapat wrote:\n>> I think it's just convenient. We do that at multiple places; not exactly\n>> these functions but functions which fetch relation attributes from cached\n>> tuples. Given that the tuple is cached and local to the backend, it's not\n>> too expensive. But if there are multiple places which do something like\n>> this, we may want to create more function get_rel_* function which return\n>> multiple properties in one function call. I see get_rel_namspace() and\n>> get_rel_name() called together at many places.\n\n> That's not worth the complications based on the level of caching.\n> This code is fine as-is.\n\nI could get behind creating such functions if there were a\ndemonstrable performance win, but in places that are not hot-spots\nthat's unlikely to be demonstrable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Jul 2024 22:28:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Redundant syscache access in get_rel_sync_entry()" } ]
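For concreteness, the combined accessor floated above — one syscache lookup returning the three pg_class attributes that get_rel_sync_entry() reads — could look roughly like the sketch below. The function name is invented for illustration; as Michael and Tom note, the thread's conclusion is that the existing code is fine as-is and such a helper would only be worth adding with a demonstrable win.

#include "postgres.h"
#include "access/htup_details.h"
#include "catalog/pg_class.h"
#include "utils/syscache.h"

/* Fetch relnamespace, relispartition and relkind with a single syscache access. */
static bool
get_rel_nsp_part_kind(Oid relid, Oid *schemaId, bool *am_partition, char *relkind)
{
    HeapTuple   tp;
    Form_pg_class reltup;

    tp = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));
    if (!HeapTupleIsValid(tp))
        return false;           /* relation went away; caller decides what to do */

    reltup = (Form_pg_class) GETSTRUCT(tp);
    *schemaId = reltup->relnamespace;
    *am_partition = reltup->relispartition;
    *relkind = reltup->relkind;

    ReleaseSysCache(tp);
    return true;
}

The three get_rel_*() calls in get_rel_sync_entry() each repeat the SearchSysCache1()/ReleaseSysCache() round trip on the same cached tuple, which is the redundancy the thread is about.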
[ { "msg_contents": "Hi All,\nUsing PG_TEST_EXTRA with make is simple, one just sets that environment variable\n$ make check\n... snip ...\n PG_REGRESS='/home/ashutosh/work/units/pghead_make/coderoot/pg/src/test/modules/xid_wraparound/../../../../src/test/regress/pg_regress'\n/usr/bin/prove -I ../../../../src/test/perl/ -I . t/*.pl\n# +++ tap check in src/test/modules/xid_wraparound +++\nt/001_emergency_vacuum.pl .. skipped: test xid_wraparound not enabled\nin PG_TEST_EXTRA\nt/002_limits.pl ............ skipped: test xid_wraparound not enabled\nin PG_TEST_EXTRA\nt/003_wraparounds.pl ....... skipped: test xid_wraparound not enabled\nin PG_TEST_EXTRA\nFiles=3, Tests=0, 1 wallclock secs ( 0.02 usr 0.00 sys + 0.20 cusr\n0.03 csys = 0.25 CPU)\nResult: NOTESTS\n\nSet PG_TEST_EXTRA\n$ PG_TEST_EXTRA=xid_wraparound make check\n PG_REGRESS='/home/ashutosh/work/units/pghead_make/coderoot/pg/src/test/modules/xid_wraparound/../../../../src/test/regress/pg_regress'\n/usr/bin/prove -I ../../../../src/test/perl/ -I . t/*.pl\n# +++ tap check in src/test/modules/xid_wraparound +++\nt/001_emergency_vacuum.pl .. ok\nt/002_limits.pl ............ ok\nt/003_wraparounds.pl ....... ok\nAll tests successful.\nFiles=3, Tests=11, 181 wallclock secs ( 0.03 usr 0.00 sys + 2.87\ncusr 3.10 csys = 6.00 CPU)\nResult: PASS\n\nBut this simple trick does not work with meson\n$ meson test -C $BuildDir --suite setup --suite xid_wraparound\nninja: Entering directory `/home/ashutosh/work/units/pg_review/build_dev'\nninja: no work to do.\n1/6 postgresql:setup / tmp_install\n OK 0.85s\n2/6 postgresql:setup / install_test_files\n OK 0.06s\n3/6 postgresql:setup / initdb_cache\n OK 1.57s\n4/6 postgresql:xid_wraparound / xid_wraparound/001_emergency_vacuum\n SKIP 0.24s\n5/6 postgresql:xid_wraparound / xid_wraparound/003_wraparounds\n SKIP 0.26s\n6/6 postgresql:xid_wraparound / xid_wraparound/002_limits\n SKIP 0.27s\n\n$ PG_TEST_EXTRA=xid_wraparound meson test -C $BuildDir --suite setup\n--suite xid_wraparound\nninja: Entering directory `/home/ashutosh/work/units/pg_review/build_dev'\nninja: no work to do.\n1/6 postgresql:setup / tmp_install\n OK 0.41s\n2/6 postgresql:setup / install_test_files\n OK 0.06s\n3/6 postgresql:setup / initdb_cache\n OK 1.57s\n4/6 postgresql:xid_wraparound / xid_wraparound/003_wraparounds\n SKIP 0.20s\n5/6 postgresql:xid_wraparound / xid_wraparound/001_emergency_vacuum\n SKIP 0.24s\n6/6 postgresql:xid_wraparound / xid_wraparound/002_limits\n SKIP 0.24s\n\nthe tests are still skipped.\n\nIn order to run these tests, we have to run meson setup again. There\nare couple of problems with this\n1. It's not clear why the tests were skipped. Also not clear that we\nhave to run meson setup again - from the output alone\n2. Running meson setup again is costly, every time we have to run a\nnew test from PG_TEST_EXTRA.\n3. Always configuring meson with PG_TEST_EXTRA means we will run heavy\ntests every time meson test is run. I didn't find a way to not run\nthese tests as part of meson test once configured this way.\n\nWe should either allow make like behaviour or provide a way to not run\nthese tests even if configured that way.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 11 Jul 2024 12:00:15 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "PG_TEST_EXTRA and meson" }, { "msg_contents": "Hi,\n\nOn Thu, 11 Jul 2024 at 09:30, Ashutosh Bapat\n<[email protected]> wrote:\n>\n> In order to run these tests, we have to run meson setup again. 
There\n> are couple of problems with this\n> 1. It's not clear why the tests were skipped. Also not clear that we\n> have to run meson setup again - from the output alone\n> 2. Running meson setup again is costly, every time we have to run a\n> new test from PG_TEST_EXTRA.\n> 3. Always configuring meson with PG_TEST_EXTRA means we will run heavy\n> tests every time meson test is run. I didn't find a way to not run\n> these tests as part of meson test once configured this way.\n>\n> We should either allow make like behaviour or provide a way to not run\n> these tests even if configured that way.\n\nI have a two quick solutions to this:\n\n1- More like make behaviour. PG_TEST_EXTRA is able to be set from the\nsetup command, delete this feature so it could be set only from the\nenvironment. Then use it from the environment.\n\n2- If PG_TEST_EXTRA is set from the setup command, use it from the\nsetup command and discard the environment variable. If PG_TEST_EXTRA\nis not set from the setup command, then use it from the environment.\n\nI hope these patches help.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Wed, 17 Jul 2024 00:11:47 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "On Tue, Jul 16, 2024 at 2:12 PM Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> 2- If PG_TEST_EXTRA is set from the setup command, use it from the\n> setup command and discard the environment variable. If PG_TEST_EXTRA\n> is not set from the setup command, then use it from the environment.\n\nIs there a way for the environment to override the Meson setting\nrather than vice-versa? My vote would be to have both available, but\nwith the implementation in patch 2 I'd still have to reconfigure if I\nwanted to change my test setup.\n\nThanks!\n--Jacob\n\n\n", "msg_date": "Tue, 16 Jul 2024 14:27:47 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "Hi,\n\nOn Wed, 17 Jul 2024 at 00:27, Jacob Champion\n<[email protected]> wrote:\n>\n> On Tue, Jul 16, 2024 at 2:12 PM Nazir Bilal Yavuz <[email protected]> wrote:\n> >\n> > 2- If PG_TEST_EXTRA is set from the setup command, use it from the\n> > setup command and discard the environment variable. If PG_TEST_EXTRA\n> > is not set from the setup command, then use it from the environment.\n>\n> Is there a way for the environment to override the Meson setting\n> rather than vice-versa? My vote would be to have both available, but\n> with the implementation in patch 2 I'd still have to reconfigure if I\n> wanted to change my test setup.\n\nI think something like attached does the trick. I did not test it\nextensively but it passed the couple of tests I tried.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Wed, 17 Jul 2024 01:01:10 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "On Wed, Jul 17, 2024 at 3:31 AM Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> Hi,\n>\n> On Wed, 17 Jul 2024 at 00:27, Jacob Champion\n> <[email protected]> wrote:\n> >\n> > On Tue, Jul 16, 2024 at 2:12 PM Nazir Bilal Yavuz <[email protected]> wrote:\n> > >\n> > > 2- If PG_TEST_EXTRA is set from the setup command, use it from the\n> > > setup command and discard the environment variable. 
If PG_TEST_EXTRA\n> > > is not set from the setup command, then use it from the environment.\n> >\n> > Is there a way for the environment to override the Meson setting\n> > rather than vice-versa? My vote would be to have both available, but\n> > with the implementation in patch 2 I'd still have to reconfigure if I\n> > wanted to change my test setup.\n>\n> I think something like attached does the trick. I did not test it\n> extensively but it passed the couple of tests I tried.\n\nThanks a lot for working on this.\n\nI tested this patch with xid_wraparound. It seems to be working.\n$ mts xid_wraparound\n... snip\n1/6 postgresql:setup / tmp_install\n OK 0.36s\n2/6 postgresql:setup / install_test_files\n OK 0.10s\n3/6 postgresql:setup / initdb_cache\n OK 1.14s\n4/6 postgresql:xid_wraparound / xid_wraparound/001_emergency_vacuum\n SKIP 0.14s\n5/6 postgresql:xid_wraparound / xid_wraparound/003_wraparounds\n SKIP 0.14s\n6/6 postgresql:xid_wraparound / xid_wraparound/002_limits\n SKIP 0.14s\n\n... snip\n\n$ PG_TEST_EXTRA=xid_wraparound mts xid_wraparound\n... snip\n1/6 postgresql:setup / tmp_install\n OK 0.38s\n2/6 postgresql:setup / install_test_files\n OK 0.07s\n3/6 postgresql:setup / initdb_cache\n OK 1.13s\n4/6 postgresql:xid_wraparound / xid_wraparound/001_emergency_vacuum\n OK 67.33s 7 subtests passed\n5/6 postgresql:xid_wraparound / xid_wraparound/002_limits\n OK 70.14s 3 subtests passed\n6/6 postgresql:xid_wraparound / xid_wraparound/003_wraparounds\n OK 178.01s 1 subtests passed\n\n... snip\n\n$ mts xid_wraparound\n... snip\n1/6 postgresql:setup / tmp_install\n OK 0.36s\n2/6 postgresql:setup / install_test_files\n OK 0.06s\n3/6 postgresql:setup / initdb_cache\n OK 1.14s\n4/6 postgresql:xid_wraparound / xid_wraparound/001_emergency_vacuum\n SKIP 0.18s\n5/6 postgresql:xid_wraparound / xid_wraparound/002_limits\n SKIP 0.19s\n6/6 postgresql:xid_wraparound / xid_wraparound/003_wraparounds\n SKIP 0.19s\n\n... snip\n\nProviding PG_TEST_EXTRA as a configuration option works as well. But I\nfind it confusing\n$ mts xid_wraparound\n... snip\n1/6 postgresql:setup / tmp_install\n OK 0.71s\n2/6 postgresql:setup / install_test_files\n OK 0.06s\n3/6 postgresql:setup / initdb_cache\n OK 1.08s\n4/6 postgresql:xid_wraparound / xid_wraparound/001_emergency_vacuum\n OK 52.73s 7 subtests passed\n5/6 postgresql:xid_wraparound / xid_wraparound/002_limits\n OK 56.36s 3 subtests passed\n6/6 postgresql:xid_wraparound / xid_wraparound/003_wraparounds\n OK 160.46s 1 subtests passed\n... snip\n\n$ PG_TEST_EXTRA=ldap mts xid_wraparound\n... snip\n1/6 postgresql:setup / tmp_install\n OK 0.37s\n2/6 postgresql:setup / install_test_files\n OK 0.08s\n3/6 postgresql:setup / initdb_cache\n OK 1.16s\n4/6 postgresql:xid_wraparound / xid_wraparound/001_emergency_vacuum\n SKIP 0.14s\n5/6 postgresql:xid_wraparound / xid_wraparound/003_wraparounds\n SKIP 0.15s\n6/6 postgresql:xid_wraparound / xid_wraparound/002_limits\n SKIP 0.15s\n... snip\n$ PG_TEST_EXTRA=xid_wraparound mts xid_wraparound\n... snip\n1/6 postgresql:setup / tmp_install\n OK 0.36s\n2/6 postgresql:setup / install_test_files\n OK 0.06s\n3/6 postgresql:setup / initdb_cache\n OK 1.12s\n4/6 postgresql:xid_wraparound / xid_wraparound/001_emergency_vacuum\n OK 62.53s 7 subtests passed\n5/6 postgresql:xid_wraparound / xid_wraparound/002_limits\n OK 69.78s 3 subtests passed\n6/6 postgresql:xid_wraparound / xid_wraparound/003_wraparounds\n OK 186.78s 1 subtests passed\n\nxid_wraparound tests are run if PG_TEST_EXTRA contains xid_wraparound\nor it is not set. 
Any other setting will not run xid_wraparound test.\nThat's how the patch is coded but it isn't intuitive that changing\nwhether a test is run by default would require configuring the build\nagain. Probably we should just get rid of config time PG_TEST_EXTRA\naltogether.\n\nI am including +Tristan Partin who knows meson better.\n\nIf you are willing to work on this further, please add it to the commitfest.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 17 Jul 2024 15:43:04 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "Hi,\n\nOn Wed, 17 Jul 2024 at 13:13, Ashutosh Bapat\n<[email protected]> wrote:\n> xid_wraparound tests are run if PG_TEST_EXTRA contains xid_wraparound\n> or it is not set. Any other setting will not run xid_wraparound test.\n> That's how the patch is coded but it isn't intuitive that changing\n> whether a test is run by default would require configuring the build\n> again. Probably we should just get rid of config time PG_TEST_EXTRA\n> altogether.\n>\n> I am including +Tristan Partin who knows meson better.\n>\n> If you are willing to work on this further, please add it to the commitfest.\n\nI think I know why there is confusion. Could you try to set\nPG_TEST_EXTRA with quotes? Like PG_TEST_EXTRA=\"ldap mts\nxid_wraparound\".\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Wed, 17 Jul 2024 13:23:19 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "Hi,\n\nOn Wed, 17 Jul 2024 at 13:23, Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> Hi,\n>\n> On Wed, 17 Jul 2024 at 13:13, Ashutosh Bapat\n> <[email protected]> wrote:\n> > xid_wraparound tests are run if PG_TEST_EXTRA contains xid_wraparound\n> > or it is not set. Any other setting will not run xid_wraparound test.\n> > That's how the patch is coded but it isn't intuitive that changing\n> > whether a test is run by default would require configuring the build\n> > again. Probably we should just get rid of config time PG_TEST_EXTRA\n> > altogether.\n> >\n> > I am including +Tristan Partin who knows meson better.\n> >\n> > If you are willing to work on this further, please add it to the commitfest.\n>\n> I think I know why there is confusion. Could you try to set\n> PG_TEST_EXTRA with quotes? Like PG_TEST_EXTRA=\"ldap mts\n> xid_wraparound\".\n\nSorry, the previous reply was wrong; I misunderstood what you said.\nYes, that is how the patch was coded and I agree that getting rid of\nconfig time PG_TEST_EXTRA could be a better alternative.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Wed, 17 Jul 2024 13:34:26 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "On Wed, Jul 17, 2024 at 3:34 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> Sorry, the previous reply was wrong; I misunderstood what you said.\n> Yes, that is how the patch was coded and I agree that getting rid of\n> config time PG_TEST_EXTRA could be a better alternative.\n\nPersonally I use the config-time PG_TEST_EXTRA extensively. I'd be sad\nto see it go, especially if developers are no longer forced to use it.\n(In practice, I don't change that setting much after initial\nconfigure, because I use separate worktrees/build directories for\ndifferent patchsets. 
And the reconfiguration is fast when I do need to\nmodify it.)\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Wed, 17 Jul 2024 07:09:44 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "Jacob Champion <[email protected]> writes:\n> On Wed, Jul 17, 2024 at 3:34 AM Nazir Bilal Yavuz <[email protected]> wrote:\n>> Sorry, the previous reply was wrong; I misunderstood what you said.\n>> Yes, that is how the patch was coded and I agree that getting rid of\n>> config time PG_TEST_EXTRA could be a better alternative.\n\n> Personally I use the config-time PG_TEST_EXTRA extensively. I'd be sad\n> to see it go, especially if developers are no longer forced to use it.\n\nThe existing and documented expectation is that PG_TEST_EXTRA is an\nenvironment variable, ie it's a runtime option not a configure option.\nMaking it be the latter seems like a significant loss of flexibility\nto me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Jul 2024 11:01:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "On 2024-07-17 We 11:01 AM, Tom Lane wrote:\n> Jacob Champion<[email protected]> writes:\n>> On Wed, Jul 17, 2024 at 3:34 AM Nazir Bilal Yavuz<[email protected]> wrote:\n>>> Sorry, the previous reply was wrong; I misunderstood what you said.\n>>> Yes, that is how the patch was coded and I agree that getting rid of\n>>> config time PG_TEST_EXTRA could be a better alternative.\n>> Personally I use the config-time PG_TEST_EXTRA extensively. I'd be sad\n>> to see it go, especially if developers are no longer forced to use it.\n> The existing and documented expectation is that PG_TEST_EXTRA is an\n> environment variable, ie it's a runtime option not a configure option.\n> Making it be the latter seems like a significant loss of flexibility\n> to me.\n>\n> \t\n\n\nAIUI the only reason we have it as a configure option at all is that \nmeson is *very* dogmatic about not using environment variables. I get \ntheir POV when it comes to building, but that should not extend to \ntesting. That said, I don't mind if this is a configure option as long \nas it can be overridden at run time without having to run \"meson \nconfigure\".\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-07-17 We 11:01 AM, Tom Lane\n wrote:\n\n\nJacob Champion <[email protected]> writes:\n\n\nOn Wed, Jul 17, 2024 at 3:34 AM Nazir Bilal Yavuz <[email protected]> wrote:\n\n\nSorry, the previous reply was wrong; I misunderstood what you said.\nYes, that is how the patch was coded and I agree that getting rid of\nconfig time PG_TEST_EXTRA could be a better alternative.\n\n\n\n\n\n\nPersonally I use the config-time PG_TEST_EXTRA extensively. I'd be sad\nto see it go, especially if developers are no longer forced to use it.\n\n\n\nThe existing and documented expectation is that PG_TEST_EXTRA is an\nenvironment variable, ie it's a runtime option not a configure option.\nMaking it be the latter seems like a significant loss of flexibility\nto me.\n\n\t\n\n\n\nAIUI the only reason we have it as a configure option at all is\n that meson is *very* dogmatic about not using environment\n variables. I get their POV when it comes to building, but that\n should not extend to testing. 
That said, I don't mind if this is a\n configure option as long as it can be overridden at run time\n without having to run \"meson configure\". \n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Wed, 17 Jul 2024 11:11:36 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "On Wed, Jul 17, 2024 at 8:01 AM Tom Lane <[email protected]> wrote:\n> Jacob Champion <[email protected]> writes:\n> > Personally I use the config-time PG_TEST_EXTRA extensively. I'd be sad\n> > to see it go, especially if developers are no longer forced to use it.\n>\n> The existing and documented expectation is that PG_TEST_EXTRA is an\n> environment variable, ie it's a runtime option not a configure option.\n> Making it be the latter seems like a significant loss of flexibility\n> to me.\n\nI think/hope we're saying the same thing -- developers should not be\nforced to lock PG_TEST_EXTRA into their configurations; that's\ninflexible and unhelpful.\n\nWhat I'm saying in addition to that is, I really like that I can\ncurrently put a default PG_TEST_EXTRA into my meson config so that I\ndon't have to keep track of it, and I do that all the time. So I'm in\nfavor of the \"option 3\" approach.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Wed, 17 Jul 2024 08:44:47 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "Jacob Champion <[email protected]> writes:\n> On Wed, Jul 17, 2024 at 8:01 AM Tom Lane <[email protected]> wrote:\n>> The existing and documented expectation is that PG_TEST_EXTRA is an\n>> environment variable, ie it's a runtime option not a configure option.\n>> Making it be the latter seems like a significant loss of flexibility\n>> to me.\n\n> I think/hope we're saying the same thing -- developers should not be\n> forced to lock PG_TEST_EXTRA into their configurations; that's\n> inflexible and unhelpful.\n\nIndeed.\n\n> What I'm saying in addition to that is, I really like that I can\n> currently put a default PG_TEST_EXTRA into my meson config so that I\n> don't have to keep track of it, and I do that all the time. So I'm in\n> favor of the \"option 3\" approach.\n\nAh. I have no particular objection to that, but I wonder whether\nwe should make the autoconf/makefile infrastructure do it too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Jul 2024 14:11:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "On Wed, Jul 17, 2024 at 11:11 AM Tom Lane <[email protected]> wrote:\n> Ah. I have no particular objection to that, but I wonder whether\n> we should make the autoconf/makefile infrastructure do it too.\n\nI don't need it personally, having moved almost entirely to Meson. But\nif the asymmetry is a sticking point, I can work on a patch.\n\nThanks!\n--Jacob\n\n\n", "msg_date": "Thu, 18 Jul 2024 10:51:20 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "Hi,\n\nOn Wed, 17 Jul 2024 at 13:13, Ashutosh Bapat\n<[email protected]> wrote:\n>\n> If you are willing to work on this further, please add it to the commitfest.\n\nSince general consensus is more towards having an environment variable\nto override Meson configure option, I converted solution-3 to\nsomething more like a patch. 
I updated the docs, added more comments\nand added this to the commitfest [1].\n\nThe one downside of this approach is that PG_TEXT_EXTRA in user\ndefined options in meson setup output could be misleading now.\n\nAlso, with this change; PG_TEST_EXTRA from configure_scripts in the\n.cirrus.tasks.yml file should be removed as they are useless now. I\nadded that as a second patch.\n\n[1] https://commitfest.postgresql.org/49/5134/\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Fri, 19 Jul 2024 11:07:26 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "Hi Nazir,\n\nOn Fri, Jul 19, 2024 at 1:37 PM Nazir Bilal Yavuz <[email protected]> wrote:\n\n> >\n> > If you are willing to work on this further, please add it to the commitfest.\n>\n> Since general consensus is more towards having an environment variable\n> to override Meson configure option, I converted solution-3 to\n> something more like a patch. I updated the docs, added more comments\n> and added this to the commitfest [1].\n\nThanks.\n\n>\n> The one downside of this approach is that PG_TEXT_EXTRA in user\n> defined options in meson setup output could be misleading now.\n>\n\nUpthread Tom asked whether we should do a symmetric change to \"make\".\nThis set of patches does not do that. Here are my thoughts:\n1. Those who use make, are not using configure time PG_TEST_EXTRA\nanyway, so they don't need it.\n2. Those meson users who use setup time PG_TEST_EXTRA and also want to\nuse make may find the need for the feature in make.\n3. https://www.postgresql.org/docs/devel/install-requirements.html\nsays that the meson support is currently experimental and only works\nwhen building from a Git checkout. So there's a possibility (even if\ntheoretical) that make and meson will co-exist. Also that we may\nabandon meson?\n\nConsidering those, it seems to me that symmetry is required. I don't\nknow how hard it is to introduce PG_TEST_EXTRA as a configure time\noption in \"make\". If it's simple, we should do that. Otherwise it will\nbe better to just remove PG_EXTRA_TEST option support from meson\nsupport to keep make and meson symmetrical.\n\nAs far as the implementation is concerned the patch seems to be doing\nwhat's expected. If PG_TEST_EXTRA is specified at setup time, it is\nnot needed to be specified as an environment variable at run time. But\nit can also be overridden at runtime. If PG_TEST_EXTRA is not\nspecified at the time of setup, but specified at run time, it works. I\nhave tested xid_wraparound and wal_consistency_check.\n\nI wonder whether we really require pg_test_extra argument to testwrap.\nWhy can't we use the logic in testwrap, to set run time PG_TEST_EXTRA,\nin meson.build directly? I.e. set test_env['PG_TEST_EXTRA'] to\nos.environ[;PG_TEST_EXTRA'] if the latter is set, otherwise set the\nfirst to get_option('PG_TEST_EXTRA').\n\n> Also, with this change; PG_TEST_EXTRA from configure_scripts in the\n> .cirrus.tasks.yml file should be removed as they are useless now. 
I\n> added that as a second patch.\n\nI think this is useful and allows both make and meson to use the same\nlogic in cirrus.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 23 Jul 2024 14:56:07 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "Hi,\n\nOn Tue, 23 Jul 2024 at 12:26, Ashutosh Bapat\n<[email protected]> wrote:\n>\n> Upthread Tom asked whether we should do a symmetric change to \"make\".\n> This set of patches does not do that. Here are my thoughts:\n> 1. Those who use make, are not using configure time PG_TEST_EXTRA\n> anyway, so they don't need it.\n> 2. Those meson users who use setup time PG_TEST_EXTRA and also want to\n> use make may find the need for the feature in make.\n> 3. https://www.postgresql.org/docs/devel/install-requirements.html\n> says that the meson support is currently experimental and only works\n> when building from a Git checkout. So there's a possibility (even if\n> theoretical) that make and meson will co-exist. Also that we may\n> abandon meson?\n>\n> Considering those, it seems to me that symmetry is required. I don't\n> know how hard it is to introduce PG_TEST_EXTRA as a configure time\n> option in \"make\". If it's simple, we should do that. Otherwise it will\n> be better to just remove PG_EXTRA_TEST option support from meson\n> support to keep make and meson symmetrical.\n\nI agree that symmetry should be the ultimate goal.\n\nUpthread Jacob said he could work on a patch about introducing the\nPG_TEST_EXTRA configure option to make builds. Would you still be\ninterested in working on this? If not, I would gladly work on it.\n\n> As far as the implementation is concerned the patch seems to be doing\n> what's expected. If PG_TEST_EXTRA is specified at setup time, it is\n> not needed to be specified as an environment variable at run time. But\n> it can also be overridden at runtime. If PG_TEST_EXTRA is not\n> specified at the time of setup, but specified at run time, it works. I\n> have tested xid_wraparound and wal_consistency_check.\n\nThanks for testing it!\n\n> I wonder whether we really require pg_test_extra argument to testwrap.\n> Why can't we use the logic in testwrap, to set run time PG_TEST_EXTRA,\n> in meson.build directly? I.e. set test_env['PG_TEST_EXTRA'] to\n> os.environ[;PG_TEST_EXTRA'] if the latter is set, otherwise set the\n> first to get_option('PG_TEST_EXTRA').\n\nWhen test_env('PG_TEST_EXTRA') is set, it could not be overridden\nafterwards. Perhaps there is a way to override test_env() but I do not\nknow how.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Tue, 23 Jul 2024 13:32:17 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "On Tue, Jul 23, 2024 at 4:02 PM Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> > I wonder whether we really require pg_test_extra argument to testwrap.\n> > Why can't we use the logic in testwrap, to set run time PG_TEST_EXTRA,\n> > in meson.build directly? I.e. set test_env['PG_TEST_EXTRA'] to\n> > os.environ[;PG_TEST_EXTRA'] if the latter is set, otherwise set the\n> > first to get_option('PG_TEST_EXTRA').\n>\n> When test_env('PG_TEST_EXTRA') is set, it could not be overridden\n> afterwards. 
Perhaps there is a way to override test_env() but I do not\n> know how.\n>\n\nI am not suggesting to override test_env['PG_TEST_EXTRA'] but set it\nto the value after overriding if required. meson.build file seems to\nallow some conditional variable setting. So I thought this would be\npossible, haven't tried myself though.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 23 Jul 2024 16:10:31 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "Hi,\n\nOn Tue, 23 Jul 2024 at 13:40, Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Tue, Jul 23, 2024 at 4:02 PM Nazir Bilal Yavuz <[email protected]> wrote:\n> >\n> > > I wonder whether we really require pg_test_extra argument to testwrap.\n> > > Why can't we use the logic in testwrap, to set run time PG_TEST_EXTRA,\n> > > in meson.build directly? I.e. set test_env['PG_TEST_EXTRA'] to\n> > > os.environ[;PG_TEST_EXTRA'] if the latter is set, otherwise set the\n> > > first to get_option('PG_TEST_EXTRA').\n> >\n> > When test_env('PG_TEST_EXTRA') is set, it could not be overridden\n> > afterwards. Perhaps there is a way to override test_env() but I do not\n> > know how.\n> >\n>\n> I am not suggesting to override test_env['PG_TEST_EXTRA'] but set it\n> to the value after overriding if required. meson.build file seems to\n> allow some conditional variable setting. So I thought this would be\n> possible, haven't tried myself though.\n\nSorry if I caused a misunderstanding. What I meant was, when the\ntest_env('PG_TEST_EXTRA') is set, Meson will always use PG_TEST_EXTRA\nfrom the setup. So, Meson needs to be built again to change\nPG_TEST_EXTRA.\n\nAFAIK Meson does not support accessing environment variables but\nrun_command() could be used to test this:\n\n-test_env.set('PG_TEST_EXTRA', get_option('PG_TEST_EXTRA'))\n+pg_test_extra_env = run_command(\n+ [python,\n+ '-c',\n+ 'import os; print(os.getenv(\"PG_TEST_EXTRA\", \"\"))'],\n+ check: true).stdout()\n+\n+test_env.set('PG_TEST_EXTRA', pg_test_extra_env != '' ?\n+ pg_test_extra_env :\n+ get_option('PG_TEST_EXTRA'))\n+\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Tue, 23 Jul 2024 14:09:14 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "On Tue, Jul 23, 2024 at 3:32 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> Upthread Jacob said he could work on a patch about introducing the\n> PG_TEST_EXTRA configure option to make builds. Would you still be\n> interested in working on this? If not, I would gladly work on it.\n\nSure! Attached is a minimalist approach using AC_ARG_VAR.\n\nIt works for top-level `make check-world`, or `make check -C\nsrc/test`. 
If you run `make check` directly from a test subdirectory,\nthe variable doesn't get picked up, because it's only exported from\nthe src/test level as of your patch c3382a3c3cc -- but if that turns\nout to be a problem, we can plumb it all the way down or expand the\nscope of the export.\n\nThanks,\n--Jacob", "msg_date": "Tue, 23 Jul 2024 07:23:53 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "Hi Jacob,\n\nOn Tue, Jul 23, 2024 at 7:54 PM Jacob Champion\n<[email protected]> wrote:\n>\n> On Tue, Jul 23, 2024 at 3:32 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> > Upthread Jacob said he could work on a patch about introducing the\n> > PG_TEST_EXTRA configure option to make builds. Would you still be\n> > interested in working on this? If not, I would gladly work on it.\n>\n> Sure! Attached is a minimalist approach using AC_ARG_VAR.\n>\n> It works for top-level `make check-world`, or `make check -C\n> src/test`. If you run `make check` directly from a test subdirectory,\n> the variable doesn't get picked up, because it's only exported from\n> the src/test level as of your patch c3382a3c3cc -- but if that turns\n> out to be a problem, we can plumb it all the way down or expand the\n> scope of the export.\n\nSorry for the delay in reply.\n\nHere are my observations with the patch applied\n1. If I run configure without setting PG_TEST_EXTRA, it won't run the\ntests that require PG_TEST_EXTRA to be set. This is expected.\n2. But it wont' run tests even if PG_TEST_EXTRA is set while running\nmake check.- that's unexpected\n3. If I run configure with PG_TEST_EXTRA set and run 'make check' in\nthe test directory, it runs those tests. That's expected from the\nfinal patch but that doesn't seem to be what you described above.\n3. After 3, if I run `PG_TEST_EXTRA=\"something\" make check`, it still\nruns those tests. So it looks like PG_TEST_EXTRA doesn't get\noverridden. If PG_TEST_EXTRA is set to something other than what was\nconfigured, it doesn't take effect when running the corresponding\ntests. E.g. PG_TEST_EXTRA is set to xid_wraparound at configure time,\nbut `PG_TEST_EXTRA=wal_consistency_check make check ` is run, the\ntests won't use wal_consistency_check=all. - that's not expected.\n\nI this the patch lacks overriding PG_TEST_EXTRA at run time.\n\nAFAIU, following was expected behaviour from both meson and make.\nPlease correct if I am wrong.\n1. If PG_TEST_EXTRA is set at the setup/configuration time, it is not\nrequired to be set at run time.\n2. Runtime PG_TEST_EXTRA always overrides configure time PG_TEST_EXTRA.\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 9 Aug 2024 14:56:09 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "On 2024-08-09 Fr 5:26 AM, Ashutosh Bapat wrote:\n> I this the patch lacks overriding PG_TEST_EXTRA at run time.\n>\n> AFAIU, following was expected behaviour from both meson and make.\n> Please correct if I am wrong.\n> 1. If PG_TEST_EXTRA is set at the setup/configuration time, it is not\n> required to be set at run time.\n> 2. 
Runtime PG_TEST_EXTRA always overrides configure time PG_TEST_EXTRA.\n\n\nYes, that's my understanding of the requirement.\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-08-09 Fr 5:26 AM, Ashutosh\n Bapat wrote:\n\n\n\nI this the patch lacks overriding PG_TEST_EXTRA at run time.\n\nAFAIU, following was expected behaviour from both meson and make.\nPlease correct if I am wrong.\n1. If PG_TEST_EXTRA is set at the setup/configuration time, it is not\nrequired to be set at run time.\n2. Runtime PG_TEST_EXTRA always overrides configure time PG_TEST_EXTRA.\n\n\n\nYes, that's my understanding of the requirement.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sun, 11 Aug 2024 08:53:31 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "On Fri, Aug 9, 2024 at 2:26 AM Ashutosh Bapat\n<[email protected]> wrote:\n> Here are my observations with the patch applied\n> 1. If I run configure without setting PG_TEST_EXTRA, it won't run the\n> tests that require PG_TEST_EXTRA to be set. This is expected.\n> 2. But it wont' run tests even if PG_TEST_EXTRA is set while running\n> make check.- that's unexpected\n\n(see below)\n\n> 3. If I run configure with PG_TEST_EXTRA set and run 'make check' in\n> the test directory, it runs those tests. That's expected from the\n> final patch but that doesn't seem to be what you described above.\n\nI'm not entirely sure what you mean? src/test should work fine,\nanything lower than that (say src/test/ssl) does not.\n\n> 3. After 3, if I run `PG_TEST_EXTRA=\"something\" make check`, it still\n> runs those tests. So it looks like PG_TEST_EXTRA doesn't get\n> overridden. If PG_TEST_EXTRA is set to something other than what was\n> configured, it doesn't take effect when running the corresponding\n> tests. E.g. PG_TEST_EXTRA is set to xid_wraparound at configure time,\n> but `PG_TEST_EXTRA=wal_consistency_check make check ` is run, the\n> tests won't use wal_consistency_check=all. - that's not expected.\n\nI think you're running into the GNU Make override order [1]. For\ninstance when I want to override PG_TEST_EXTRA, I write\n\n make check PG_TEST_EXTRA=whatever\n\nIf you want the environment variable to work by default instead, you can do\n\n PG_TEST_EXTRA=whatever make check -e\n\nIf you don't want devs to have to worry about the difference, I think\nwe can change the assignment operator to `?=` in Makefile.global.in.\n\nThanks,\n--Jacob\n\n[1] https://www.gnu.org/software/make/manual/html_node/Environment.html\n\n\n", "msg_date": "Tue, 13 Aug 2024 13:54:24 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "On Wed, Aug 14, 2024 at 2:24 AM Jacob Champion\n<[email protected]> wrote:\n>\n> On Fri, Aug 9, 2024 at 2:26 AM Ashutosh Bapat\n> <[email protected]> wrote:\n> > Here are my observations with the patch applied\n> > 1. If I run configure without setting PG_TEST_EXTRA, it won't run the\n> > tests that require PG_TEST_EXTRA to be set. This is expected.\n> > 2. But it wont' run tests even if PG_TEST_EXTRA is set while running\n> > make check.- that's unexpected\n>\n> (see below)\n>\n> > 3. If I run configure with PG_TEST_EXTRA set and run 'make check' in\n> > the test directory, it runs those tests. 
That's expected from the\n> > final patch but that doesn't seem to be what you described above.\n>\n> I'm not entirely sure what you mean? src/test should work fine,\n> anything lower than that (say src/test/ssl) does not.\n\nI could run them from src/test/modules/xid_wraparound/. That's desirable.\n\n>\n> > 3. After 3, if I run `PG_TEST_EXTRA=\"something\" make check`, it still\n> > runs those tests. So it looks like PG_TEST_EXTRA doesn't get\n> > overridden. If PG_TEST_EXTRA is set to something other than what was\n> > configured, it doesn't take effect when running the corresponding\n> > tests. E.g. PG_TEST_EXTRA is set to xid_wraparound at configure time,\n> > but `PG_TEST_EXTRA=wal_consistency_check make check ` is run, the\n> > tests won't use wal_consistency_check=all. - that's not expected.\n>\n> I think you're running into the GNU Make override order [1]. For\n> instance when I want to override PG_TEST_EXTRA, I write\n>\n> make check PG_TEST_EXTRA=whatever\n>\n> If you want the environment variable to work by default instead, you can do\n>\n> PG_TEST_EXTRA=whatever make check -e\n>\n> If you don't want devs to have to worry about the difference, I think\n> we can change the assignment operator to `?=` in Makefile.global.in.\n\nWhat is working now should continue to work even after this change.\nPG_TEST_EXTRA=\"xyz\" make check works right now.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 14 Aug 2024 09:37:32 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "On Tue, Aug 13, 2024 at 9:07 PM Ashutosh Bapat\n<[email protected]> wrote:\n> > I'm not entirely sure what you mean? src/test should work fine,\n> > anything lower than that (say src/test/ssl) does not.\n>\n> I could run them from src/test/modules/xid_wraparound/. That's desirable.\n\nOn my machine, storing xid_wraparound into PG_TEST_EXTRA at configure\ntime and running `make check` from the modules/xid_wraparound\ndirectory causes them to be skipped. Setting the environment variable\nwill continue to work at that directory level, though; my patch\nshouldn't change that.\n\n> What is working now should continue to work even after this change.\n> PG_TEST_EXTRA=\"xyz\" make check works right now.\n\nFair point, see attached.\n\nThanks,\n--Jacob", "msg_date": "Wed, 14 Aug 2024 05:49:19 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "Hi Jacob,\n\nOn Wed, Aug 14, 2024 at 6:19 PM Jacob Champion\n<[email protected]> wrote:\n>\n> On Tue, Aug 13, 2024 at 9:07 PM Ashutosh Bapat\n> <[email protected]> wrote:\n> > > I'm not entirely sure what you mean? src/test should work fine,\n> > > anything lower than that (say src/test/ssl) does not.\n> >\n> > I could run them from src/test/modules/xid_wraparound/. That's desirable.\n>\n> On my machine, storing xid_wraparound into PG_TEST_EXTRA at configure\n> time and running `make check` from the modules/xid_wraparound\n> directory causes them to be skipped. 
Setting the environment variable\n> will continue to work at that directory level, though; my patch\n> shouldn't change that.\n>\n> > What is working now should continue to work even after this change.\n> > PG_TEST_EXTRA=\"xyz\" make check works right now.\n>\n> Fair point, see attached.\n\nIf I run\nexport PG_TEST_EXTRA=xid_wraparound; ./configure --prefix=$BuildDir\n--enable-tap-tests && make -j4 && make -j4 install; unset\nPG_TEST_EXTRA\nfollowed by\nmake -C $XID_MODULE_DIR check where\nXID_MODULE_DIR=src/test/modules/xid_wraparound - it skips the tests.\n\nI thought this was working before.\n\nAnyway, now I have written a script to test all the scenarios. You may\nwant to test your patch using the script. It just needs PGDir to set\nto root directory of code.\n\nIf there's some other way to setting PG_TEST_EXTRA when running\nconfigure, I think it needs to be documented.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Fri, 23 Aug 2024 18:27:48 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "On Fri, Aug 23, 2024 at 5:59 AM Ashutosh Bapat\n<[email protected]> wrote:\n> If I run\n> export PG_TEST_EXTRA=xid_wraparound; ./configure --prefix=$BuildDir\n> --enable-tap-tests && make -j4 && make -j4 install; unset\n> PG_TEST_EXTRA\n> followed by\n> make -C $XID_MODULE_DIR check where\n> XID_MODULE_DIR=src/test/modules/xid_wraparound - it skips the tests.\n\nRight.\n\n> I thought this was working before.\n\nNo, that goes back to my note in [1] -- as of c3382a3c3cc, the\nvariable was exported at only the src/test level, and I wanted to get\ninput on that so we could decide on the next approach if needed.\n\n> Anyway, now I have written a script to test all the scenarios. You may\n> want to test your patch using the script. It just needs PGDir to set\n> to root directory of code.\n\nThanks! I see failures in 110, 120, and 130, as expected. Note that\nconstructions like\n\n PG_TEST_EXTRA=\"\" cd $XID_MODULE_DIR && make check && cd $PGDir\n\nwill not override the environment variable for the make invocation,\njust for the `cd`. Also, rather than\n\n export PG_TEST_EXTRA; ./configure ...; unset PG_TEST_EXTRA\n\nit's probably easier to just pass PG_TEST_EXTRA=<setting> as a command\nline option to configure.\n\n> If there's some other way to setting PG_TEST_EXTRA when running\n> configure, I think it needs to be documented.\n\n./configure --help shows the new variable, with the same wording as\nMeson. 
Or do you mean that it's significant enough to deserve a spot\nin installation.sgml?\n\n--Jacob\n\n[1] https://www.postgresql.org/message-id/CAOYmi%2B%3D8HVgxANzFT_BZrAeDPxAgA5_kbHy-4VowdbGr0chHvQ%40mail.gmail.com\n\n\n", "msg_date": "Fri, 23 Aug 2024 09:25:12 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "On Wed, Aug 28, 2024 at 5:26 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Fri, Aug 23, 2024 at 9:55 PM Jacob Champion\n> <[email protected]> wrote:\n> >\n> > On Fri, Aug 23, 2024 at 5:59 AM Ashutosh Bapat\n> > <[email protected]> wrote:\n> > > If I run\n> > > export PG_TEST_EXTRA=xid_wraparound; ./configure --prefix=$BuildDir\n> > > --enable-tap-tests && make -j4 && make -j4 install; unset\n> > > PG_TEST_EXTRA\n> > > followed by\n> > > make -C $XID_MODULE_DIR check where\n> > > XID_MODULE_DIR=src/test/modules/xid_wraparound - it skips the tests.\n> >\n> > Right.\n> >\n> > > I thought this was working before.\n> >\n> > No, that goes back to my note in [1] -- as of c3382a3c3cc, the\n> > variable was exported at only the src/test level, and I wanted to get\n> > input on that so we could decide on the next approach if needed.\n\nI had an offline discussion with Jacob. Because of c3382a3c3cc, `make\n-C src/test/modules/xid_wraparound check` will skip xid_wraparound\ntests. It should be noted that when that commit was added well before\nthe xid_wraparound tests were added. But I don't know whether the\ntests controlled by PG_TEST_EXTRA could be run using make -C that\ntime.\n\nAnyway, I think, supporting PG_TEST_EXTRA at configure time without\nbeing able to run tests using `make -C <test directory> ` is not that\nuseful. It only helps make check-world but not when running individual\ntests.\n\nNazir, since you authored c3382a3c3cc, can you please provide input\nthat Jacob needs?\n\nOtherwise, we should just go ahead with meson support and drop make\nsupport for now. We may revisit it later.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 28 Aug 2024 19:11:12 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "On Wed, Aug 28, 2024 at 6:41 AM Ashutosh Bapat\n<[email protected]> wrote:\n> Nazir, since you authored c3382a3c3cc, can you please provide input\n> that Jacob needs?\n\nSpecifically, why the PG_TEST_EXTRA variable was being exported at the\nsrc/test level only. 
If there's no longer a use case being targeted,\nwe can always change it in this patch, but I didn't want to do that\nwithout understanding why it was like that to begin with.\n\n--Jacob\n\n\n", "msg_date": "Wed, 28 Aug 2024 08:11:17 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "Hi,\n\nOn Wed, 28 Aug 2024 at 16:41, Ashutosh Bapat\n<[email protected]> wrote:\n>\n> On Wed, Aug 28, 2024 at 5:26 PM Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > On Fri, Aug 23, 2024 at 9:55 PM Jacob Champion\n> > <[email protected]> wrote:\n> > >\n> > > On Fri, Aug 23, 2024 at 5:59 AM Ashutosh Bapat\n> > > <[email protected]> wrote:\n> > > > If I run\n> > > > export PG_TEST_EXTRA=xid_wraparound; ./configure --prefix=$BuildDir\n> > > > --enable-tap-tests && make -j4 && make -j4 install; unset\n> > > > PG_TEST_EXTRA\n> > > > followed by\n> > > > make -C $XID_MODULE_DIR check where\n> > > > XID_MODULE_DIR=src/test/modules/xid_wraparound - it skips the tests.\n> > >\n> > > Right.\n> > >\n> > > > I thought this was working before.\n> > >\n> > > No, that goes back to my note in [1] -- as of c3382a3c3cc, the\n> > > variable was exported at only the src/test level, and I wanted to get\n> > > input on that so we could decide on the next approach if needed.\n>\n> I had an offline discussion with Jacob. Because of c3382a3c3cc, `make\n> -C src/test/modules/xid_wraparound check` will skip xid_wraparound\n> tests. It should be noted that when that commit was added well before\n> the xid_wraparound tests were added. But I don't know whether the\n> tests controlled by PG_TEST_EXTRA could be run using make -C that\n> time.\n\nI did not test but I think they could not be run because of the same\nreason as below.\n\n> Anyway, I think, supporting PG_TEST_EXTRA at configure time without\n> being able to run tests using `make -C <test directory> ` is not that\n> useful. It only helps make check-world but not when running individual\n> tests.\n>\n> Nazir, since you authored c3382a3c3cc, can you please provide input\n> that Jacob needs?\n\nI think the problem is that we define PG_TEST_EXTRA in the\nMakefile.global file but we do not export it there, instead we export\nin the src/test/Makefile. But when you run tests from their Makefiles,\nthey do not include src/test/Makefile (they include Makefile.global);\nso they can not get the PG_TEST_EXTRA env variable. Exporting\nPG_TEST_EXTRA in the Makefile.global should solve the problem.\n\nAlso, I think TEST 110 and 170 do not look correct to me. 
In the\ncurrent way, we do not pass PG_TEST_EXTRA to the make command.\n\n110 should be:\n'cd $XID_MODULE_DIR && PG_TEST_EXTRA=xid_wraparound make check'\ninstead of 'PG_TEST_EXTRA=xid_wraparound cd $XID_MODULE_DIR && make\ncheck'\n\n170 should be:\n'cd $XID_MODULE_DIR && PG_TEST_EXTRA=\"\" make check && cd $PGDir'\ninstead of 'PG_TEST_EXTRA=\"\" cd $XID_MODULE_DIR && make check && cd\n$PGDir'\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Wed, 28 Aug 2024 18:15:56 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "Hi,\n\nOn Wed, 28 Aug 2024 at 18:11, Jacob Champion\n<[email protected]> wrote:\n>\n> On Wed, Aug 28, 2024 at 6:41 AM Ashutosh Bapat\n> <[email protected]> wrote:\n> > Nazir, since you authored c3382a3c3cc, can you please provide input\n> > that Jacob needs?\n>\n> Specifically, why the PG_TEST_EXTRA variable was being exported at the\n> src/test level only. If there's no longer a use case being targeted,\n> we can always change it in this patch, but I didn't want to do that\n> without understanding why it was like that to begin with.\n\nI do not exactly remember the reason but I think I copied the same\nbehavior as before, PG_TEST_EXTRA variable was checked in the\nsrc/test/Makefile so I exported it there.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Wed, 28 Aug 2024 18:21:08 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "On Wed, Aug 28, 2024 at 8:46 PM Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> Also, I think TEST 110 and 170 do not look correct to me. In the\n> current way, we do not pass PG_TEST_EXTRA to the make command.\n>\n> 110 should be:\n> 'cd $XID_MODULE_DIR && PG_TEST_EXTRA=xid_wraparound make check'\n> instead of 'PG_TEST_EXTRA=xid_wraparound cd $XID_MODULE_DIR && make\n> check'\n>\n> 170 should be:\n> 'cd $XID_MODULE_DIR && PG_TEST_EXTRA=\"\" make check && cd $PGDir'\n> instead of 'PG_TEST_EXTRA=\"\" cd $XID_MODULE_DIR && make check && cd\n> $PGDir'\n>\n\nYou are right. Jacob did point that out, but I didn't fix all the\nplaces back then. Here's updated script.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Thu, 29 Aug 2024 16:03:45 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "On Wed, Aug 28, 2024 at 8:21 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> I do not exactly remember the reason but I think I copied the same\n> behavior as before, PG_TEST_EXTRA variable was checked in the\n> src/test/Makefile so I exported it there.\n\nOkay, give v3 a try then. 
This exports directly from Makefile.global.\nSince that gets pulled into a bunch of places, the scope is a lot\nwider than it used to be; I've disabled it for PGXS so it doesn't end\nup poisoning other extensions.\n\n--Jacob", "msg_date": "Fri, 30 Aug 2024 11:35:53 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "Hi,\n\nOn Fri, 30 Aug 2024 at 21:36, Jacob Champion\n<[email protected]> wrote:\n>\n> On Wed, Aug 28, 2024 at 8:21 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> > I do not exactly remember the reason but I think I copied the same\n> > behavior as before, PG_TEST_EXTRA variable was checked in the\n> > src/test/Makefile so I exported it there.\n>\n> Okay, give v3 a try then. This exports directly from Makefile.global.\n> Since that gets pulled into a bunch of places, the scope is a lot\n> wider than it used to be; I've disabled it for PGXS so it doesn't end\n> up poisoning other extensions.\n\nPatch looks good and it passes all the test cases in Ashutosh's test script.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Mon, 2 Sep 2024 18:02:27 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "On Mon, Sep 2, 2024 at 8:32 PM Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> Hi,\n>\n> On Fri, 30 Aug 2024 at 21:36, Jacob Champion\n> <[email protected]> wrote:\n> >\n> > On Wed, Aug 28, 2024 at 8:21 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> > > I do not exactly remember the reason but I think I copied the same\n> > > behavior as before, PG_TEST_EXTRA variable was checked in the\n> > > src/test/Makefile so I exported it there.\n> >\n> > Okay, give v3 a try then. This exports directly from Makefile.global.\n> > Since that gets pulled into a bunch of places, the scope is a lot\n> > wider than it used to be;\n\nThis looks similar to what meson does with PG_TEST_EXTRA, it is\navailable via get_option(). So we are closing the gap between meson\nand make. That's the intention.\n> I've disabled it for PGXS so it doesn't end\n> > up poisoning other extensions.\n>\n> Patch looks good and it passes all the test cases in Ashutosh's test script.\n>\n\nThanks Bilal for testing the patch. Can you or Jacob please create one\npatchset including both meson and make fixes? Please keep the meson\nand make changes in separate patches though. I think the meson patches\ncome from [1] (they probably need a rebase, git am failed) and make\npatch comes from [2].The one fixing make needs a good commit message.\n\nsome comments on code\n\n1. comments on changes to meson\n\n+ variable if it exists. See <xref linkend=\"regress-additional\"/> for\n\ns/exists/set/\n\n-# Test suites that are not safe by default but can be run if selected\n-# by the user via the whitespace-separated list in variable PG_TEST_EXTRA.\n-# Export PG_TEST_EXTRA so it can be checked in individual tap tests.\n-test_env.set('PG_TEST_EXTRA', get_option('PG_TEST_EXTRA'))\n\nA naive question. What happens if we add PG_TEST_EXTRA in meson.build\nitself rather than passing it via testwrap? E.g. like\nif \"PG_TEST_EXTRA\" not in os.environ\ntest_env.set('PG_TEST_EXTRA', get_option('PG_TEST_EXTRA'))\n\nI am worried that we might have to add an extra argument to testwrap\nfor every environment variable that influences the tests. 
Avoiding it\nwould be better.\n\noption('PG_TEST_EXTRA', type: 'string', value: '',\n- description: 'Enable selected extra tests')\n+ description: 'Enable selected extra tests, please note that this\nconfigure option is overridden by PG_TEST_EXTRA environment variable\nif it exists')\n\nAll the descriptions look much shorter than this one. I suggest we\nshorten this one too as\n\"Enable selected extra tests. Overridden by PG_TEST_EXTRA environment variable.\"\nnot as short as other descriptions but shorter than before and yet\nserves its intended purpose. Or just make it the same as the one in\nconfigure.ac. Either way the descriptions in configure.ac and\nmeson_options.txt should be in sync.\n\n+# If PG_TEST_EXTRA is not set in the environment, then look for its Meson\n+# configure option.\n+if \"PG_TEST_EXTRA\" not in os.environ and args.pg_test_extra:\n+ env_dict[\"PG_TEST_EXTRA\"] = args.pg_test_extra\n+\n\nIf somebody looks at just these lines, it's not clear how\nPG_TEST_EXTRA is passed to the test environment if it's available in\nos.environ. So one has to understand that env_dict is the test\nenvironment. If that's so, the code and comment rewritten as below\nmakes more sense to me. What do you think?\n\n# If PG_TEST_EXTRA is not already part of the test environment, check if it's\n# passed via program argument --pg_test_extra. Thus we also override\n# configuration time value by run time value of PG_TEST_EXTRA.\nif \"PG_TEST_EXTRA\" not in env_dict and args.pg_test_extra:\nenv_dict[\"PG_TEST_EXTRA\"] = args.pg_test_extra\n\nBut in case we decide to fix meson.build as suggested in one of the\ncommentsabove, this change will be unnecessary.\n\nNote that PG_TEST_EXTRA is used in only TAP tests right now. The way\nthe value passed to --pg_test_extra is handled in testwrap, it will\navailable to every test, not just TAP tests. But this looks fine to me\nsince the documentation of PG_TEST_EXTRA or its name itself does not\nshow any intention to limit it only to TAP tests.\n\n2. comments on make changes\nSince Makefile.global.in is included in src/test/Makefile I was\nexpecting that the PG_TEST_EXTRA picked from configuration would be\navailable in src/test/Makefile from which it would be exported. But\nthat doesn't seem to be the case. In fact, I am getting doubtful about\nthe role of the current \"export PG_TEST_EXTRA\" in /src/test/Makefile.\nEven if I remove it, it doesn't affect anything. Commands a.\nPG_TEST_EXTRA=xid_wraparound make check, b.\nPG_TEST_EXTRA=xid_wraparound make -C $XID_MODULE_DIR check run the\ntests (and don't skip them).\n\nAnyway with the proposed change PG_TEST_EXTRA passed at configuration\ntime is used if it's not defined at run time as expected. I think the\npatch looks good. Nothing to complain there.\n\n[1] https://www.postgresql.org/message-id/CAN55FZ1Tko2N=X4f6icgFhb7syJYo_APP-9EbFcT-uH6tEi_Xg@mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAOYmi+=6HNVhbFOVsV6X2_DVDYcUDL4AMnj7iM15gAfw__beKA@mail.gmail.com\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 11 Sep 2024 15:33:53 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "Hi,\n\nOn Wed, 11 Sept 2024 at 13:04, Ashutosh Bapat\n<[email protected]> wrote:\n>\n> Thanks Bilal for testing the patch. Can you or Jacob please create one\n> patchset including both meson and make fixes? Please keep the meson\n> and make changes in separate patches though. 
I think the meson patches\n> come from [1] (they probably need a rebase, git am failed) and make\n> patch comes from [2].The one fixing make needs a good commit message.\n\nI created and attached a patchset and wrote a commit message to 'make'\nfix. Please feel free to edit them.\n\n> some comments on code\n>\n> 1. comments on changes to meson\n>\n> + variable if it exists. See <xref linkend=\"regress-additional\"/> for\n>\n> s/exists/set/\n\nDone.\n\n> -# Test suites that are not safe by default but can be run if selected\n> -# by the user via the whitespace-separated list in variable PG_TEST_EXTRA.\n> -# Export PG_TEST_EXTRA so it can be checked in individual tap tests.\n> -test_env.set('PG_TEST_EXTRA', get_option('PG_TEST_EXTRA'))\n>\n> A naive question. What happens if we add PG_TEST_EXTRA in meson.build\n> itself rather than passing it via testwrap? E.g. like\n> if \"PG_TEST_EXTRA\" not in os.environ\n> test_env.set('PG_TEST_EXTRA', get_option('PG_TEST_EXTRA'))\n\nThen this configure time option will be passed to the test environment\nand there is no way to change it without reconfiguring if we don't\ntouch the testwrap file.\n\n> I am worried that we might have to add an extra argument to testwrap\n> for every environment variable that influences the tests. Avoiding it\n> would be better.\n\nIf we want to have both configure time and run time variables, I\nbelieve that this is the only way for now.\n\n> option('PG_TEST_EXTRA', type: 'string', value: '',\n> - description: 'Enable selected extra tests')\n> + description: 'Enable selected extra tests, please note that this\n> configure option is overridden by PG_TEST_EXTRA environment variable\n> if it exists')\n>\n> All the descriptions look much shorter than this one. I suggest we\n> shorten this one too as\n> \"Enable selected extra tests. Overridden by PG_TEST_EXTRA environment variable.\"\n> not as short as other descriptions but shorter than before and yet\n> serves its intended purpose. Or just make it the same as the one in\n> configure.ac. Either way the descriptions in configure.ac and\n> meson_options.txt should be in sync.\n\nI liked your suggestion, done in both meson_options.txt and configure.ac.\n\n> +# If PG_TEST_EXTRA is not set in the environment, then look for its Meson\n> +# configure option.\n> +if \"PG_TEST_EXTRA\" not in os.environ and args.pg_test_extra:\n> + env_dict[\"PG_TEST_EXTRA\"] = args.pg_test_extra\n> +\n>\n> If somebody looks at just these lines, it's not clear how\n> PG_TEST_EXTRA is passed to the test environment if it's available in\n> os.environ. So one has to understand that env_dict is the test\n> environment. If that's so, the code and comment rewritten as below\n> makes more sense to me. What do you think?\n>\n> # If PG_TEST_EXTRA is not already part of the test environment, check if it's\n> # passed via program argument --pg_test_extra. Thus we also override\n> # configuration time value by run time value of PG_TEST_EXTRA.\n> if \"PG_TEST_EXTRA\" not in env_dict and args.pg_test_extra:\n> env_dict[\"PG_TEST_EXTRA\"] = args.pg_test_extra\n\nI think your suggestion is better, done.\n\n> But in case we decide to fix meson.build as suggested in one of the\n> commentsabove, this change will be unnecessary.\n>\n> Note that PG_TEST_EXTRA is used in only TAP tests right now. The way\n> the value passed to --pg_test_extra is handled in testwrap, it will\n> available to every test, not just TAP tests. 
But this looks fine to me\n> since the documentation of PG_TEST_EXTRA or its name itself does not\n> show any intention to limit it only to TAP tests.\n\nI agree, it looks fine to me as well.\n\n> 2. comments on make changes\n> Since Makefile.global.in is included in src/test/Makefile I was\n> expecting that the PG_TEST_EXTRA picked from configuration would be\n> available in src/test/Makefile from which it would be exported. But\n> that doesn't seem to be the case. In fact, I am getting doubtful about\n> the role of the current \"export PG_TEST_EXTRA\" in /src/test/Makefile.\n> Even if I remove it, it doesn't affect anything. Commands a.\n> PG_TEST_EXTRA=xid_wraparound make check, b.\n> PG_TEST_EXTRA=xid_wraparound make -C $XID_MODULE_DIR check run the\n> tests (and don't skip them).\n\nYes, it looks like it is useless. If we export PG_TEST_EXTRA, then it\nshould be already available on the environment, right?\n\n> Anyway with the proposed change PG_TEST_EXTRA passed at configuration\n> time is used if it's not defined at run time as expected. I think the\n> patch looks good. Nothing to complain there.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Wed, 11 Sep 2024 16:26:21 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "Thanks Nazir,\n\n> > -# Test suites that are not safe by default but can be run if selected\n> > -# by the user via the whitespace-separated list in variable PG_TEST_EXTRA.\n> > -# Export PG_TEST_EXTRA so it can be checked in individual tap tests.\n> > -test_env.set('PG_TEST_EXTRA', get_option('PG_TEST_EXTRA'))\n> >\n> > A naive question. What happens if we add PG_TEST_EXTRA in meson.build\n> > itself rather than passing it via testwrap? E.g. like\n> > if \"PG_TEST_EXTRA\" not in os.environ\n> > test_env.set('PG_TEST_EXTRA', get_option('PG_TEST_EXTRA'))\n>\n> Then this configure time option will be passed to the test environment\n> and there is no way to change it without reconfiguring if we don't\n> touch the testwrap file.\n>\n> > I am worried that we might have to add an extra argument to testwrap\n> > for every environment variable that influences the tests. Avoiding it\n> > would be better.\n>\n> If we want to have both configure time and run time variables, I\n> believe that this is the only way for now.\n\nHere's what I understand, please correct me: The code in meson.build\nis only called at the time of setup; not during meson test. Hence we\ncan not check the existence of a runtime environment variable in that\nfile. The things in test_env override those set at run time. So we\nsave it as an argument to --pg_test_extra and then use it if\nPG_TEST_EXTRA is not set at run time. I tried to find if there's some\nother place to store \"environment variables that can be overriden at\nruntime\" but I can not find it. So it looks like this is the best we\ncan do for now.\n\nIf it comes to a point where there are more such environment variables\nthat need to be passed, probably we will pass a key-value string of\nthose to testwrap. 
For now, for a single variable, this looks ok.\n\n>\n> > +# If PG_TEST_EXTRA is not set in the environment, then look for its Meson\n> > +# configure option.\n> > +if \"PG_TEST_EXTRA\" not in os.environ and args.pg_test_extra:\n> > + env_dict[\"PG_TEST_EXTRA\"] = args.pg_test_extra\n> > +\n> >\n> > If somebody looks at just these lines, it's not clear how\n> > PG_TEST_EXTRA is passed to the test environment if it's available in\n> > os.environ. So one has to understand that env_dict is the test\n> > environment. If that's so, the code and comment rewritten as below\n> > makes more sense to me. What do you think?\n> >\n> > # If PG_TEST_EXTRA is not already part of the test environment, check if it's\n> > # passed via program argument --pg_test_extra. Thus we also override\n> > # configuration time value by run time value of PG_TEST_EXTRA.\n> > if \"PG_TEST_EXTRA\" not in env_dict and args.pg_test_extra:\n> > env_dict[\"PG_TEST_EXTRA\"] = args.pg_test_extra\n>\n> I think your suggestion is better, done.\n\nI didn't see the expected change. I was talking about something like attached.\n\nAlso\n1. I have also made changes to the comment,\n2. renamed the argument --pg_test_extra to --pg-test-extra using\nconvention similar to other arguments.\n3. few other cosmetic changes.\n\nPlease review and incorporate those in the respective patches and\ntests. Sorry for a single diff.\n\nOnce this is done, I think we can mark this CF entry as RFC.\n\n--\nBest Wishes,\nAshutosh Bapat", "msg_date": "Thu, 12 Sep 2024 15:05:41 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "Hi,\n\nOn Thu, 12 Sept 2024 at 12:35, Ashutosh Bapat\n<[email protected]> wrote:\n>\n> Here's what I understand, please correct me: The code in meson.build\n> is only called at the time of setup; not during meson test. Hence we\n> can not check the existence of a runtime environment variable in that\n> file. The things in test_env override those set at run time. So we\n> save it as an argument to --pg_test_extra and then use it if\n> PG_TEST_EXTRA is not set at run time. I tried to find if there's some\n> other place to store \"environment variables that can be overriden at\n> runtime\" but I can not find it. So it looks like this is the best we\n> can do for now.\n\nYes, that is exactly what is happening.\n\n> If it comes to a point where there are more such environment variables\n> that need to be passed, probably we will pass a key-value string of\n> those to testwrap. For now, for a single variable, this looks ok.\n\nYes, that would be better.\n\n> > > +# If PG_TEST_EXTRA is not set in the environment, then look for its Meson\n> > > +# configure option.\n> > > +if \"PG_TEST_EXTRA\" not in os.environ and args.pg_test_extra:\n> > > + env_dict[\"PG_TEST_EXTRA\"] = args.pg_test_extra\n> > > +\n> > >\n> > > If somebody looks at just these lines, it's not clear how\n> > > PG_TEST_EXTRA is passed to the test environment if it's available in\n> > > os.environ. So one has to understand that env_dict is the test\n> > > environment. If that's so, the code and comment rewritten as below\n> > > makes more sense to me. What do you think?\n> > >\n> > > # If PG_TEST_EXTRA is not already part of the test environment, check if it's\n> > > # passed via program argument --pg_test_extra. 
Thus we also override\n> > > # configuration time value by run time value of PG_TEST_EXTRA.\n> > > if \"PG_TEST_EXTRA\" not in env_dict and args.pg_test_extra:\n> > > env_dict[\"PG_TEST_EXTRA\"] = args.pg_test_extra\n> >\n> > I think your suggestion is better, done.\n>\n> I didn't see the expected change. I was talking about something like attached.\n>\n> Also\n> 1. I have also made changes to the comment,\n> 2. renamed the argument --pg_test_extra to --pg-test-extra using\n> convention similar to other arguments.\n> 3. few other cosmetic changes.\n>\n> Please review and incorporate those in the respective patches and\n> tests. Sorry for a single diff.\n>\n> Once this is done, I think we can mark this CF entry as RFC.\n\nThanks for the changes. I applied all of them in respective patches.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Thu, 12 Sep 2024 13:58:28 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRA and meson" }, { "msg_contents": "On Thu, Sep 12, 2024 at 4:28 PM Nazir Bilal Yavuz <[email protected]> wrote:\n> >\n> > Once this is done, I think we can mark this CF entry as RFC.\n>\n> Thanks for the changes. I applied all of them in respective patches.\n\nThanks a lot. PFA the patchset with\n\n1. Improved comment related to PG_TEST_EXTRA in meson.build. More on\nthe improvement in the commit message of patch 0002, which should be\nmerged into 0001.\n2. You have written comprehensive commit messages. I elaborated on\nthem a bit. I have left your version in the commit message for\ncommitter to pick up appropriate one.\n\nMarking the entry as RFC.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Fri, 13 Sep 2024 16:11:06 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG_TEST_EXTRA and meson" } ]
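The precedence rule this thread settles on (a PG_TEST_EXTRA value given at configure/meson setup time acts only as a default, while a value in the runtime environment or on the make/meson command line always wins) can be sketched in a few lines. This is an illustration only, not the committed patch: the helper function and the dictionaries below are invented for the example, and the real plumbing lives in Makefile.global.in and meson's testwrap.

    # Minimal sketch of the intended PG_TEST_EXTRA precedence (hypothetical helper,
    # not part of the PostgreSQL source tree).
    import os

    def resolve_pg_test_extra(configure_value, environ=None):
        """Return the list of extra test suites to enable for a test run."""
        environ = os.environ if environ is None else environ
        runtime_value = environ.get("PG_TEST_EXTRA")
        if runtime_value is not None:
            return runtime_value.split()      # runtime environment always overrides
        if configure_value:
            return configure_value.split()    # fall back to the configure-time default
        return []                             # nothing requested: extra suites are skipped

    # Configured with xid_wraparound, overridden at run time:
    assert resolve_pg_test_extra("xid_wraparound",
                                 {"PG_TEST_EXTRA": "ssl ldap"}) == ["ssl", "ldap"]
    # Configure-time default used when the environment does not set it:
    assert resolve_pg_test_extra("xid_wraparound", {}) == ["xid_wraparound"]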
[ { "msg_contents": "Hi,\r\n\r\n\r\n-&nbsp; &nbsp; &nbsp; &nbsp;/* use Oid as relation identifier */\r\n+&nbsp; &nbsp; &nbsp; &nbsp;/* use Oid as type identifier */\r\n&nbsp; &nbsp; &nbsp; &nbsp; pq_sendint32(out, typoid);\r\n\r\n\r\n\r\nI think it must be \"type identifier\"&nbsp;rather than \"relation identifier\".\r\n\r\n\r\n--\r\nRegards,\r\nChangAo Chen", "msg_date": "Thu, 11 Jul 2024 15:16:13 +0800", "msg_from": "\"=?ISO-8859-1?B?Y2NhNTUwNw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Fix a comment error in logicalrep_write_typ()" }, { "msg_contents": "On Thu, Jul 11, 2024 at 12:46 PM cca5507 <[email protected]> wrote:\n>\n> - /* use Oid as relation identifier */\n> + /* use Oid as type identifier */\n> pq_sendint32(out, typoid);\n>\n> I think it must be \"type identifier\" rather than \"relation identifier\".\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 11 Jul 2024 16:04:06 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix a comment error in logicalrep_write_typ()" }, { "msg_contents": "Thank you for review!\r\n\r\nThe commitfest link for tracking:\r\nhttps://commitfest.postgresql.org/49/5121/\r\n\r\n\r\n--\r\nRegards,\r\nChangAo Chen\nThank you for review!The commitfest link for tracking:https://commitfest.postgresql.org/49/5121/--Regards,ChangAo Chen", "msg_date": "Thu, 11 Jul 2024 19:05:48 +0800", "msg_from": "\"=?ISO-8859-1?B?Y2NhNTUwNw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix a comment error in logicalrep_write_typ()" }, { "msg_contents": "On Thu, Jul 11, 2024 at 4:35 PM cca5507 <[email protected]> wrote:\n>\n> Thank you for review!\n>\n> The commitfest link for tracking:\n> https://commitfest.postgresql.org/49/5121/\n>\n\nI've pushed and closed the CF entry.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 15 Jul 2024 10:25:39 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix a comment error in logicalrep_write_typ()" } ]
[ { "msg_contents": "Hi,\n\nA cfbot test suddenly failed with regression test error.\n\nhttp://cfbot.cputube.org/highlights/all.html#4460\n\nFollowing the link \"regress\" shows:\nhttps://api.cirrus-ci.com/v1/artifact/task/5428792720621568/testrun/build/testrun/recovery/027_stream_regress/data/regression.diffs\n\ndiff --strip-trailing-cr -U3 C:/cirrus/src/test/regress/expected/collate.windows.win1252.out C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/collate.windows.win1252.out\n--- C:/cirrus/src/test/regress/expected/collate.windows.win1252.out\t2024-07-11 02:44:08.385966100 +0000\n+++ C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/collate.windows.win1252.out\t2024-07-11 02:49:51.280212100 +0000\n@@ -21,10 +21,10 @@\n );\n \\d collate_test1\n Table \"collate_tests.collate_test1\"\n- Column | Type | Collation | Nullable | Default\n+ Column | Type | Collation | Nullable | Default \n --------+---------+-----------+----------+---------\n- a | integer | | |\n- b | text | en_US | not null |\n+ a | integer | | | \n+ b | text | en_US | not null | \n :\n :\n\nThe differences are that the result has an extra space at the end of\nline. I am not familiar with Windows and has no idea why this could\nhappen (I haven't changed the patch set since May 24. The last\nsucceeded test was on July 9 14:58:44 (I am not sure about time zone).\nAlso I see exactly the same test failures in some other tests (for\nexample http://cfbot.cputube.org/highlights/all.html#4337)\n\nAny idea?\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Thu, 11 Jul 2024 17:07:20 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "CFbot failed on Windows platform" }, { "msg_contents": "Hi,\n\nOn Thu, 11 Jul 2024 at 11:07, Tatsuo Ishii <[email protected]> wrote:\n>\n> Hi,\n>\n> A cfbot test suddenly failed with regression test error.\n>\n> http://cfbot.cputube.org/highlights/all.html#4460\n>\n> Following the link \"regress\" shows:\n> https://api.cirrus-ci.com/v1/artifact/task/5428792720621568/testrun/build/testrun/recovery/027_stream_regress/data/regression.diffs\n>\n> diff --strip-trailing-cr -U3 C:/cirrus/src/test/regress/expected/collate.windows.win1252.out C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/collate.windows.win1252.out\n> --- C:/cirrus/src/test/regress/expected/collate.windows.win1252.out 2024-07-11 02:44:08.385966100 +0000\n> +++ C:/cirrus/build/testrun/recovery/027_stream_regress/data/results/collate.windows.win1252.out 2024-07-11 02:49:51.280212100 +0000\n> @@ -21,10 +21,10 @@\n> );\n> \\d collate_test1\n> Table \"collate_tests.collate_test1\"\n> - Column | Type | Collation | Nullable | Default\n> + Column | Type | Collation | Nullable | Default\n> --------+---------+-----------+----------+---------\n> - a | integer | | |\n> - b | text | en_US | not null |\n> + a | integer | | |\n> + b | text | en_US | not null |\n> :\n> :\n>\n> The differences are that the result has an extra space at the end of\n> line. I am not familiar with Windows and has no idea why this could\n> happen (I haven't changed the patch set since May 24. The last\n> succeeded test was on July 9 14:58:44 (I am not sure about time zone).\n> Also I see exactly the same test failures in some other tests (for\n> example http://cfbot.cputube.org/highlights/all.html#4337)\n>\n> Any idea?\n\nI think It is related to the '628c1d1f2c' commit. 
This commit changed\nthe output of the regress test in Windows.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Thu, 11 Jul 2024 12:04:42 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CFbot failed on Windows platform" }, { "msg_contents": ">> The differences are that the result has an extra space at the end of\n>> line. I am not familiar with Windows and has no idea why this could\n>> happen (I haven't changed the patch set since May 24. The last\n>> succeeded test was on July 9 14:58:44 (I am not sure about time zone).\n>> Also I see exactly the same test failures in some other tests (for\n>> example http://cfbot.cputube.org/highlights/all.html#4337)\n>>\n>> Any idea?\n> \n> I think It is related to the '628c1d1f2c' commit. This commit changed\n> the output of the regress test in Windows.\n\nYeah, it seems that explains. I see few buildfarm window animals\ncomplain too.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n\n\n", "msg_date": "Thu, 11 Jul 2024 18:34:33 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CFbot failed on Windows platform" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n>> I think It is related to the '628c1d1f2c' commit. This commit changed\n>> the output of the regress test in Windows.\n\n> Yeah, it seems that explains. I see few buildfarm window animals\n> complain too.\n\nI think that the contents of\nsrc/test/regress/expected/collate.windows.win1252.out are actually\nwrong, and we'd not noticed because it was only checked with diff -w.\npsql does put an extra trailing space in some lines of table output,\nbut that space isn't there in collate.windows.win1252.out.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Jul 2024 09:50:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CFbot failed on Windows platform" }, { "msg_contents": "\nOn 2024-07-11 Th 5:34 AM, Tatsuo Ishii wrote:\n>>> The differences are that the result has an extra space at the end of\n>>> line. I am not familiar with Windows and has no idea why this could\n>>> happen (I haven't changed the patch set since May 24. The last\n>>> succeeded test was on July 9 14:58:44 (I am not sure about time zone).\n>>> Also I see exactly the same test failures in some other tests (for\n>>> example http://cfbot.cputube.org/highlights/all.html#4337)\n>>>\n>>> Any idea?\n>> I think It is related to the '628c1d1f2c' commit. This commit changed\n>> the output of the regress test in Windows.\n> Yeah, it seems that explains. I see few buildfarm window animals\n> complain too.\n>\n\nI have partially reverted that patch. Thanks for the report.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 11 Jul 2024 09:54:31 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CFbot failed on Windows platform" }, { "msg_contents": "\nOn 2024-07-11 Th 9:50 AM, Tom Lane wrote:\n> Tatsuo Ishii <[email protected]> writes:\n>>> I think It is related to the '628c1d1f2c' commit. This commit changed\n>>> the output of the regress test in Windows.\n>> Yeah, it seems that explains. 
I see few buildfarm window animals\n>> complain too.\n> I think that the contents of\n> src/test/regress/expected/collate.windows.win1252.out are actually\n> wrong, and we'd not noticed because it was only checked with diff -w.\n> psql does put an extra trailing space in some lines of table output,\n> but that space isn't there in collate.windows.win1252.out.\n>\n> \t\t\t\n\n\nYeah, makes sense I will produce an updated output file and then \nre-enable --strip-trailing-cr (after a full test run).\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 11 Jul 2024 12:28:28 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CFbot failed on Windows platform" }, { "msg_contents": "> On 2024-07-11 Th 5:34 AM, Tatsuo Ishii wrote:\n>>>> The differences are that the result has an extra space at the end of\n>>>> line. I am not familiar with Windows and has no idea why this could\n>>>> happen (I haven't changed the patch set since May 24. The last\n>>>> succeeded test was on July 9 14:58:44 (I am not sure about time zone).\n>>>> Also I see exactly the same test failures in some other tests (for\n>>>> example http://cfbot.cputube.org/highlights/all.html#4337)\n>>>>\n>>>> Any idea?\n>>> I think It is related to the '628c1d1f2c' commit. This commit changed\n>>> the output of the regress test in Windows.\n>> Yeah, it seems that explains. I see few buildfarm window animals\n>> complain too.\n>>\n> \n> I have partially reverted that patch. Thanks for the report.\n\nThank you for taking care of this!\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Fri, 12 Jul 2024 05:58:22 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CFbot failed on Windows platform" } ]
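The mechanics behind the failure are easy to reproduce outside the buildfarm: psql's aligned output pads some header and data lines to the column width, so they legitimately end in a space, and an expected file that lost that padding only matches while whitespace differences are ignored (diff -w); once the comparison merely strips trailing carriage returns, the stale expected file fails. Below is a hypothetical, self-contained illustration of the two comparison modes; the strings are made up for the example, not copied from collate.windows.win1252.out.

    # Rough models of the two diff behaviours involved; illustration only.
    def equal_ignoring_whitespace(a: str, b: str) -> bool:
        return a.split() == b.split()              # roughly what "diff -w" accepts

    def equal_after_stripping_cr(a: str, b: str) -> bool:
        return a.rstrip("\r") == b.rstrip("\r")    # roughly "--strip-trailing-cr"

    expected = " Column |  Type   | Collation | Nullable | Default"    # padding lost
    actual   = " Column |  Type   | Collation | Nullable | Default \r" # padded, CRLF line

    assert equal_ignoring_whitespace(expected, actual)     # -w hides the missing space
    assert not equal_after_stripping_cr(expected, actual)  # stricter compare exposes it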
[ { "msg_contents": "Per discussion elsewhere [1], I've been looking at $SUBJECT.\nI have a very crude Perl hack (too ugly to show yet ;-)) that\nproduces output along the lines of\n\n\nExisting code:\n\n/* CREATE STATISTICS <name> */\n else if (Matches(\"CREATE\", \"STATISTICS\", MatchAny))\n COMPLETE_WITH(\"(\", \"ON\");\n else if (Matches(\"CREATE\", \"STATISTICS\", MatchAny, \"(\"))\n COMPLETE_WITH(\"ndistinct\", \"dependencies\", \"mcv\");\n else if (Matches(\"CREATE\", \"STATISTICS\", MatchAny, \"(*)\"))\n COMPLETE_WITH(\"ON\");\n else if (HeadMatches(\"CREATE\", \"STATISTICS\", MatchAny) &&\n TailMatches(\"FROM\"))\n COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);\n\n\nGenerated tables:\n\nenum TCPatternID\n{\n ...\n M_CREATE_STATISTICS_ANY,\n M_CREATE_STATISTICS_ANY_LPAREN,\n M_CREATE_STATISTICS_ANY_PARENSTARPAREN,\n HM_CREATE_STATISTICS_ANY,\n ...\n};\n\nenum TCPatternKind\n{\n Match,\n MatchCS,\n HeadMatch,\n HeadMatchCS,\n TailMatch,\n TailMatchCS,\n};\n\ntypedef struct\n{\n enum TCPatternID id;\n enum TCPatternKind kind;\n int nwords;\n const char *const *words;\n} TCPattern;\n\n#define TCPAT(id, kind, ...) \\\n { (id), (kind), VA_ARGS_NARGS(__VA_ARGS__), \\\n (const char * const []) { __VA_ARGS__ } }\n\nstatic const TCPattern tcpatterns[] =\n{\n ...\n TCPAT(M_CREATE_STATISTICS_ANY,\n Match, \"CREATE\", \"STATISTICS\", MatchAny),\n TCPAT(M_CREATE_STATISTICS_ANY_LPAREN,\n Match, \"CREATE\", \"STATISTICS\", MatchAny, \"(\"),\n TCPAT(M_CREATE_STATISTICS_ANY_PARENSTARPAREN,\n Match, \"CREATE\", \"STATISTICS\", MatchAny, \"(*)\"),\n TCPAT(HM_CREATE_STATISTICS_ANY,\n HeadMatch, \"CREATE\", \"STATISTICS\", MatchAny),\n ...\n};\n\n\nConverted code:\n\n/* CREATE STATISTICS <name> */\n case M_CREATE_STATISTICS_ANY:\n COMPLETE_WITH(\"(\", \"ON\");\n break;\n case M_CREATE_STATISTICS_ANY_LPAREN:\n COMPLETE_WITH(\"ndistinct\", \"dependencies\", \"mcv\");\n break;\n case M_CREATE_STATISTICS_ANY_PARENSTARPAREN:\n COMPLETE_WITH(\"ON\");\n break;\n case HM_CREATE_STATISTICS_ANY:\n if (TailMatches(\"FROM\"))\n COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);\n break;\n\n\nThe idea (I've not actually written this part yet) is that an\nouter loop would iterate through the table entries and invoke\nthe appropriate switch case for any successful match. As soon\nas the switch code sets \"matches\" non-NULL (it might not, as in\nthe last case in the example), we can exit.\n\nWhile this clearly can be made to work, it doesn't seem like the\nresult will be very maintainable. You have to invent a single-use\nenum label, and the actual definition of the primary match pattern\nis far away from the code using it, and we have two fundamentally\ndifferent ways of writing the same pattern test (since we'll still\nneed some instances of direct calls to TailMatches and friends,\nas in the last example case).\n\nWhat I'm thinking about doing instead of pursuing this exact\nimplementation idea is that we should create a preprocessor\nto deal with building the table. 
I'm envisioning that the\nnew source code will look like\n\n\n/* CREATE STATISTICS <name> */\n case Matches(\"CREATE\", \"STATISTICS\", MatchAny):\n COMPLETE_WITH(\"(\", \"ON\");\n break;\n case Matches(\"CREATE\", \"STATISTICS\", MatchAny, \"(\"):\n COMPLETE_WITH(\"ndistinct\", \"dependencies\", \"mcv\");\n break;\n case Matches(\"CREATE\", \"STATISTICS\", MatchAny, \"(*)\"):\n COMPLETE_WITH(\"ON\");\n break;\n case HeadMatches(\"CREATE\", \"STATISTICS\", MatchAny):\n if (TailMatches(\"FROM\"))\n COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables);\n break;\n\n\nOf course this isn't legal C, since the case labels are not\nconstant expressions. The preprocessing script would look\nfor case labels that look like this, generate the appropriate\ntable entries, and replace the case labels with mechanically-\ngenerated ID codes that don't need to be particularly human\nreadable. On the downside, YA preprocessor and mini-language\nisn't much fun; but at least this is a lot closer to real C\nthan some of the other things we've invented.\n\n(Further down the line, we can look into improvements such as\navoiding duplicate string comparisons; but that can be done behind\nthe scenes, without messing with this source-code notation.)\n\nDoes anyone particularly hate this plan, or have a better idea?\n\nBTW, we have quite a few instances of code like the aforementioned\n\n else if (HeadMatches(\"CREATE\", \"STATISTICS\", MatchAny) &&\n TailMatches(\"FROM\"))\n\nI'm thinking we should invent a Matches() option \"MatchAnyN\",\nwhich could appear at most once and would represent an automatic\nmatch to zero or more words appearing between the head part and\nthe tail part. Then this could be transformed to\n\n else if (Matches(\"CREATE\", \"STATISTICS\", MatchAny, MatchAnyN, \"FROM\"))\n\nthe advantage being that more of the pattern can be embedded in the\ntable and we need fewer bits of ad-hoc logic. Maybe this'd be worth\ndoing even if we don't go forward with the switch conversion.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/1203570.1720470608%40sss.pgh.pa.us\n\n\n", "msg_date": "Thu, 11 Jul 2024 16:25:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Converting tab-complete.c's else-if chain to a switch" }, { "msg_contents": "I wrote:\n> Per discussion elsewhere [1], I've been looking at $SUBJECT.\n\nHere's a finished patchset to perform this change. I don't\nbelieve it has any material effect on runtime behavior or\nperformance, but it does achieve what Andres anticipated\nwould happen: the compile time for tab-complete.c drops\nconsiderably. With gcc 8.5.0 I see the time drop from\nabout 3 seconds to about 0.7. The win is less noticeable\nwith clang version 15.0 on a Mac M1: from about 0.7s to 0.45s.\nOf course the main point is the hope that it will work at all\nwith MSVC, but I'm not in a position to test that.\n\nThe most painful part of this is all the changes like\n\n COMPLETE_WITH(\"ADD\", \"DROP\", \"OWNER TO\", \"RENAME\", \"SET\",\n \"VALIDATE CONSTRAINT\");\n+ break;\n /* ALTER DOMAIN <sth> DROP */\n- else if (Matches(\"ALTER\", \"DOMAIN\", MatchAny, \"DROP\"))\n+ case Matches(\"ALTER\", \"DOMAIN\", MatchAny, \"DROP\"):\n COMPLETE_WITH(\"CONSTRAINT\", \"DEFAULT\", \"NOT NULL\");\n+ break;\n /* ALTER DOMAIN <sth> DROP|RENAME|VALIDATE CONSTRAINT */\n\nI despaired of doing that accurately by hand, so I wrote\na Perl script to do it. 
The script can cope with all but\nabout three of the existing tests; those have to be manually\nmodified before running the script, and then the actual\n\"switch()\" has to be inserted afterwards. There won't be\nany need to commit the script, since it's a one-time tool,\nbut I've included it for documentation's sake, along with\nthe \"pre-convert.diff\" and \"post-convert.diff\" manual patch\nsteps that are required to construct the complete 0004 patch.\n\nI'm going to add this to the September CF, but I'd really\nrather get it reviewed and committed pretty quickly.\nEven with the helper script, I'm not eager to have to\nrebase this a bunch of times.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 13 Jul 2024 13:16:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Converting tab-complete.c's else-if chain to a switch" }, { "msg_contents": "Hi,\n\nOn 2024-07-13 13:16:14 -0400, Tom Lane wrote:\n> I wrote:\n> > Per discussion elsewhere [1], I've been looking at $SUBJECT.\n>\n> Here's a finished patchset to perform this change.\n\nAwesome!\n\n> With gcc 8.5.0 I see the time drop from about 3 seconds to about 0.7. The\n> win is less noticeable with clang version 15.0 on a Mac M1: from about 0.7s\n> to 0.45s.\n\nNice!\n\n\nWith gcc 15 I see:\n\n before after\n-O0 1.2s 0.6s\n-O2 10.5s 2.6s\n-O3 10.8s 2.8s\n\nQuite a nice win.\n\n\n> Of course the main point is the hope that it will work at all with MSVC, but\n> I'm not in a position to test that.\n\nIt does work with just your patches applied - very nice.\n\nThe only obvious thing that doesn't is that ctrl-c without a running query\nterminates psql - interrupting a running query works without terminating psql,\nwhich is a bit weird. But that seems independent of this series.\n\n\n> Subject: [PATCH v1 3/5] Install preprocessor infrastructure for\n> tab-complete.c.\n\nAh - I was wondering how the above would actually work when I read your intro :)\n\n\n> +tab-complete.c: gen_tabcomplete.pl tab-complete.in.c\n> +\t$(PERL) $^ --outfile $@\n> +\n\nWhen I first built this with make, I got this error:\nmake: *** No rule to make target '/home/andres/src/postgresql/src/bin/psql/tab-complete.c', needed by 'tab-complete.o'. Stop.\n\nbut that's just a a consequence of the make dependency handling... Removing\nthe .deps directory fixed it.\n\n\n\n> +# The input is a C file that contains some case labels that are not\n> +# constants, but look like function calls, for example:\n> +#\tcase Matches(\"A\", \"B\"):\n> +# The function name can be any one of Matches, HeadMatches, TailMatches,\n> +# MatchesCS, HeadMatchesCS, or TailMatchesCS. The argument(s) must be\n> +# string literals or macros that expand to string literals or NULL.\n\nHm, the fact that we are continuing to use the same macros in the switch makes\nit a bit painful to edit the .in.c in an editor with compiler-warnings\nintegration - I see a lot of reported errors (\"Expression is not an integer\nconstant expression\") due to case statements not being something that the\nnormal C switch can handle.\n\nPerhaps we could use a distinct macro for use in the switch, which generates\nvalid (but nonsensical) code, so we avoid warnings?\n\n\n> +# These lines are replaced by \"case N:\" where N is a unique number\n> +# for each such case label. 
(For convenience, we use the line number\n> +# of the case label, although other values would work just as well.)\n\nHm, using the line number seems to make it a bit harder to compare the output\nof the script after making changes to the input. Not the end of the world, but\n...\n\n\n\n> +tabcomplete = custom_target('tabcomplete',\n> + input: 'tab-complete.in.c',\n> + output: 'tab-complete.c',\n> + command: [\n> + perl, files('gen_tabcomplete.pl'), files('tab-complete.in.c'),\n> + '--outfile', '@OUTPUT@',\n> + '@INPUT@',\n> + ],\n> + install: true,\n> + install_dir: dir_include_server / 'utils',\n> +)\n\nI don't think we want to install tab-complete.c?\n\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 Jul 2024 10:40:16 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Converting tab-complete.c's else-if chain to a switch" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-07-13 13:16:14 -0400, Tom Lane wrote:\n>> Of course the main point is the hope that it will work at all with MSVC, but\n>> I'm not in a position to test that.\n\n> It does work with just your patches applied - very nice.\n\nThanks for testing!\n\n> The only obvious thing that doesn't is that ctrl-c without a running query\n> terminates psql - interrupting a running query works without terminating psql,\n> which is a bit weird. But that seems independent of this series.\n\nYeah, whatever's going on there is surely orthogonal to this.\n\n>> +# The input is a C file that contains some case labels that are not\n>> +# constants, but look like function calls, for example:\n>> +#\tcase Matches(\"A\", \"B\"):\n\n> Hm, the fact that we are continuing to use the same macros in the switch makes\n> it a bit painful to edit the .in.c in an editor with compiler-warnings\n> integration - I see a lot of reported errors (\"Expression is not an integer\n> constant expression\") due to case statements not being something that the\n> normal C switch can handle.\n\nUgh, good point.\n\n> Perhaps we could use a distinct macro for use in the switch, which generates\n> valid (but nonsensical) code, so we avoid warnings?\n\nDunno. I'd explicitly wanted to not have different notations for\npatterns implemented in switch labels and those used in ad-hoc tests.\n\nThinking a bit further outside the box ... maybe the original source\ncode could be like it is now (or more likely, like it is after 0002,\nwith the switch-to-be stuff all in a separate function), and we\ncould make the preprocessor perform the else-to-switch conversion\nevery time instead of viewing that as a one-time conversion?\nThat would make it a bit more fragile, but perhaps not impossibly so.\n\n>> +# These lines are replaced by \"case N:\" where N is a unique number\n>> +# for each such case label. (For convenience, we use the line number\n>> +# of the case label, although other values would work just as well.)\n\n> Hm, using the line number seems to make it a bit harder to compare the output\n> of the script after making changes to the input.\n\nOK. I'd thought this choice might be helpful for debugging, but I'm\nnot wedded to it. 
The obvious alternative is to use sequential\nnumbers for case labels, but wouldn't that also be a bit problematic\nif \"making changes\" includes adding or removing rules?\n\n> I don't think we want to install tab-complete.c?\n\nGood point --- I borrowed that rule from somewhere else, and\nfailed to study it closely enough.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 15 Jul 2024 15:42:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Converting tab-complete.c's else-if chain to a switch" }, { "msg_contents": "I wrote:\n> Andres Freund <[email protected]> writes:\n>> Hm, the fact that we are continuing to use the same macros in the switch makes\n>> it a bit painful to edit the .in.c in an editor with compiler-warnings\n>> integration - I see a lot of reported errors (\"Expression is not an integer\n>> constant expression\") due to case statements not being something that the\n>> normal C switch can handle.\n\n> Ugh, good point.\n\n> Thinking a bit further outside the box ... maybe the original source\n> code could be like it is now (or more likely, like it is after 0002,\n> with the switch-to-be stuff all in a separate function), and we\n> could make the preprocessor perform the else-to-switch conversion\n> every time instead of viewing that as a one-time conversion?\n> That would make it a bit more fragile, but perhaps not impossibly so.\n\nI modified the preprocessor to work like that, and I like the results\nbetter than what I had. This version of the patch is set up so that\nboth the preprocessor input and output files are legal, functional C:\nwithout preprocessing it runs through the else-if chain, but after\npreprocessing it uses a loop around the switch. Maybe that's\noverkill, but to do anything less you have to make assumptions about\nhow smart the editor's compiler helper is. That doesn't sound like\nan arms race that I want to engage in.\n\n0001 attached is the same as the v1 version, but 0002 does a little\nmore than before, and then 0003 adds the preprocessor. (I fixed the\nbogus install rule, too.)\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 16 Jul 2024 16:25:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Converting tab-complete.c's else-if chain to a switch" }, { "msg_contents": "I wrote:\n> I modified the preprocessor to work like that, and I like the results\n> better than what I had.\n\nI thought this version of the patch would be less subject to merge\nconflicts than v1, but it didn't take long for somebody to break it.\nRebased v3 attached -- no nontrivial changes from v2.\n\nI'd like to get this merged soon ...\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 26 Jul 2024 16:02:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Converting tab-complete.c's else-if chain to a switch" } ]
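The preprocessing step is mechanical enough to sketch: scan for case labels of the form case Matches(...), give each label a numeric id, record the pattern in a lookup table, and rewrite the label as a plain case N:. The real generator in the patch set is the Perl script gen_tabcomplete.pl; the fragment below is only a simplified Python illustration of that rewrite, with invented names and without the script's handling of ad-hoc tests, the match-table emission details, or MatchAnyN.

    import re

    # Matches labels such as:  case Matches("CREATE", "STATISTICS", MatchAny):
    CASE_RE = re.compile(r'^(\s*)case\s+((?:Head|Tail)?Matches(?:CS)?)\((.*)\):\s*$')

    def preprocess(lines):
        """Rewrite pattern case labels into numeric cases plus a pattern table."""
        table, out = [], []
        for line in lines:
            m = CASE_RE.match(line)
            if m:
                indent, kind, args = m.groups()
                case_id = len(table) + 1             # sequential id for this label
                table.append((case_id, kind, args))  # one row of the generated match table
                out.append(f"{indent}case {case_id}:")
            else:
                out.append(line)
        return out, table

    src = ['    case Matches("CREATE", "STATISTICS", MatchAny):',
           '        COMPLETE_WITH("(", "ON");',
           '        break;']
    rewritten, patterns = preprocess(src)
    # rewritten[0] == '    case 1:'
    # patterns[0]  == (1, 'Matches', '"CREATE", "STATISTICS", MatchAny')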
[ { "msg_contents": "Hey there,\n\nI am new to contributing in PostgreSQL and I've just installed the source\ncode and built the system successfully from the source code and trying to\nfind a bug to start working on but I face some problems with the pgsql-bugs\nmailing list as I can't filter them by which are solved and which are not,\nwhich are assigned to somebody and which are not, which is first good issue\nand which are advanced, so I am totally lost in this mailing list\nwith nearly no useful information that help me get an issue and start\nworking on it to submit my first patch. So if anyone faced the same problem\nplease let me know.\n\nThanks.", "msg_date": "Fri, 12 Jul 2024 17:59:26 +0300", "msg_from": "Mohab Yaser <[email protected]>", "msg_from_op": true, "msg_subject": "Can't find bugs to work on" }, { "msg_contents": "On Fri, Jul 12, 2024 at 7:59 AM Mohab Yaser\n<[email protected]> wrote:\n>\n> So if anyone faced the same problem please let me know.\n\nI agree that it's rough at the moment. I don't pretend to have any\nsolutions, but you might check the Bug Fixes section of the current\nCommitfest, to help review:\n\n https://commitfest.postgresql.org/48/\n\nor our TODO list:\n\n https://wiki.postgresql.org/wiki/Todo\n\nor the Live Issues section of our v17 release items:\n\n https://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items#Live_issues\n\n(but that last one is unlikely to be beginner-friendly). None of these\nare ideal for your use case, to be honest, but maybe someone else here\nhas a better idea.\n\nHTH,\n--Jacob", "msg_date": "Fri, 12 Jul 2024 08:32:51 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find bugs to work on" }, { "msg_contents": "Jacob Champion <[email protected]> writes:\n> On Fri, Jul 12, 2024 at 7:59 AM Mohab Yaser\n> <[email protected]> wrote:\n>> So if anyone faced the same problem please let me know.\n\n> I agree that it's rough at the moment. I don't pretend to have any\n> solutions, but you might check the Bug Fixes section of the current\n> Commitfest, to help review:\n> https://commitfest.postgresql.org/48/\n\nCertainly, the CF app is a good place to find out about bugs that\nsomeone else is already working on. As far as checking the status\nof reports in the bugs mailing list goes, we don't have a tracker\nfor that (yes, it's been discussed). If the mailing list thread\nabout a bug doesn't make the status obvious, you could search the\ncommit log to see if the thread is referenced anywhere. 
For\nexample, this set of commits closed out bug #18467:\n\n-----\nAuthor: Etsuro Fujita <[email protected]>\nBranch: master Release: REL_17_BR [8cfbac149] 2024-06-07 17:45:00 +0900\nBranch: REL_16_STABLE [8405d5a37] 2024-06-07 17:45:02 +0900\nBranch: REL_15_STABLE [b33c141cc] 2024-06-07 17:45:04 +0900\nBranch: REL_14_STABLE [269e2c391] 2024-06-07 17:45:06 +0900\nBranch: REL_13_STABLE [2b461efc5] 2024-06-07 17:45:08 +0900\n\n postgres_fdw: Refuse to send FETCH FIRST WITH TIES to remote servers.\n \n Previously, when considering LIMIT pushdown, postgres_fdw failed to\n check whether the query has this clause, which led to pushing false\n LIMIT clauses, causing incorrect results.\n \n This clause has been supported since v13, so we need to do a\n remote-version check before deciding that it will be safe to push such a\n clause, but we do not currently have a way to do the check (without\n accessing the remote server); disable pushing such a clause for now.\n \n Oversight in commit 357889eb1. Back-patch to v13, where that commit\n added the support.\n \n Per bug #18467 from Onder Kalaci.\n \n Patch by Japin Li, per a suggestion from Tom Lane, with some changes to\n the comments by me. Review by Onder Kalaci, Alvaro Herrera, and me.\n \n Discussion: https://postgr.es/m/18467-7bb89084ff03a08d%40postgresql.org\n-----\n\n(This output is from our tool src/tools/git_changelog, which\nI like because it knows how to collapse similar commits across\nvarious branches into one entry.)\n\nAs this example shows, it's now standard for PG commit log messages\nto include a link to the relevant email thread, so all you need\nis the message-ID of the first message in the thread to search\nthe commit log with.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Jul 2024 11:44:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find bugs to work on" }, { "msg_contents": "On Fri, Jul 12, 2024 at 8:44 AM Tom Lane <[email protected]> wrote:\n\n>\n> As this example shows, it's now standard for PG commit log messages\n> to include a link to the relevant email thread, so all you need\n> is the message-ID of the first message in the thread to search\n> the commit log with.\n>\n>\nCross-posting my comment from Hackers Discord:\n\n\"\"\"\nThe specific request here seems fairly doable as something we could offer\nas a custom search mode on the mailing list website. For a given time\nperiod show me all messages that began a new message thread where there are\nzero replies to the message. Apply to the -bugs list and you have at least\nmost of what is being asked for. [...]\n\"\"\"\n\nDavid J.\n\nOn Fri, Jul 12, 2024 at 8:44 AM Tom Lane <[email protected]> wrote:\nAs this example shows, it's now standard for PG commit log messages\nto include a link to the relevant email thread, so all you need\nis the message-ID of the first message in the thread to search\nthe commit log with.Cross-posting my comment from Hackers Discord:\"\"\"The specific request here seems fairly doable as something we could offer as a custom search mode on the mailing list website.  For a given time period show me all messages that began a new message thread where there are zero replies to the message.  Apply to the -bugs list and you have at least most of what is being asked for.  [...]\"\"\"David J.", "msg_date": "Fri, 12 Jul 2024 09:08:24 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find bugs to work on" }, { "msg_contents": "On Sat, Jul 13, 2024 at 12:09 AM David G. Johnston\n<[email protected]> wrote:\n>\n> On Fri, Jul 12, 2024 at 8:44 AM Tom Lane <[email protected]> wrote:\n>>\n>>\n>> As this example shows, it's now standard for PG commit log messages\n>> to include a link to the relevant email thread, so all you need\n>> is the message-ID of the first message in the thread to search\n>> the commit log with.\n>>\n>\n> Cross-posting my comment from Hackers Discord:\n>\n> \"\"\"\n> The specific request here seems fairly doable as something we could offer as a custom search mode on the mailing list website. For a given time period show me all messages that began a new message thread where there are zero replies to the message. Apply to the -bugs list and you have at least most of what is being asked for. [...]\n> \"\"\"\n>\n\nIn an ideal world, we should be able to use postgres FDW to query all\nkinds of metadata about postgres mailing lists.\n\nit will be same as \"Search for archive\" in\nhttps://www.postgresql.org/list/pgsql-hackers/\n\n\n", "msg_date": "Wed, 24 Jul 2024 08:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't find bugs to work on" } ]
[ { "msg_contents": "Hi,\n\nWhile working on CIC for partitioned tables [1], I noticed that REINDEX \nfor partitioned tables is not tracking keeping progress of partitioned \ntables, so I'm creating a separate thread for this fix as suggested.\n\nThe way REINDEX for partitioned tables works now ReindexMultipleInternal \ntreats every partition as an independent table without keeping track of \npartition_done/total counts, because this is just generic code for \nprocessing multiple tables. The patch addresses that by passing down the \nknowledge a flag to distinguish partitions from independent tables \nin ReindexMultipleInternal and its callees. I also noticed that the \npartitioned CREATE INDEX progress tracking could also benefit from \nprogress_index_partition_done function that zeroizes params in addition \nto incrementing the counter, so I applied it there as well.\n\n[1] \nhttps://www.postgresql.org/message-id/55cfae76-2ffa-43ed-a7e7-901bffbebee4%40gmail.com", "msg_date": "Fri, 12 Jul 2024 23:07:49 +0100", "msg_from": "Ilya Gladyshev <[email protected]>", "msg_from_op": true, "msg_subject": "REINDEX not updating partition progress" }, { "msg_contents": "On Fri, Jul 12, 2024 at 11:07:49PM +0100, Ilya Gladyshev wrote:\n> While working on CIC for partitioned tables [1], I noticed that REINDEX for\n> partitioned tables is not tracking keeping progress of partitioned tables,\n> so I'm creating a separate thread for this fix as suggested.\n\nThis limitation is not a bug, because we already document that\npartitions_total and partitions_done are 0 during a REINDEX. Compared\nto CREATE INDEX, a REINDEX is a bit wilder as it would work on all the\nindexes for all the partitions, providing this information makes\nsense.\n\nAgreed that this could be better, but what's now on HEAD is not wrong\neither.\n\n> The way REINDEX for partitioned tables works now ReindexMultipleInternal\n> treats every partition as an independent table without keeping track of\n> partition_done/total counts, because this is just generic code for\n> processing multiple tables. The patch addresses that by passing down the\n> knowledge a flag to distinguish partitions from independent tables\n> in ReindexMultipleInternal and its callees. I also noticed that the\n> partitioned CREATE INDEX progress tracking could also benefit from\n> progress_index_partition_done function that zeroizes params in addition to\n> incrementing the counter, so I applied it there as well.\n\nStill it does not do it because we know that the fields are going to\nbe overwritten pretty quickly anyway, no? For now, we could just do\nwithout progress_index_partition_done(), I guess, keeping it to the\nincrementation of PROGRESS_CREATEIDX_PARTITIONS_DONE. There's an\nargument that this makes the code slightly easier to follow, with less\nwrappers around the progress reporting.\n\n+\tint\t\t\tprogress_params[3] = {\n+\t\tPROGRESS_CREATEIDX_COMMAND,\n+\t\tPROGRESS_CREATEIDX_PHASE,\n+\t\tPROGRESS_CREATEIDX_PARTITIONS_TOTAL\n+\t};\n+\tint64\t\tprogress_values[3];\n+\tOid\t\t\theapId = relid;\n\nRather than having new code blocks, let's use a style consistent with\nDefineIndex() where we have the pgstat_progress_update_multi_param(),\nwith a single {} block.\n\nAdding the counter increment at the end of the loop in\nReindexMultipleInternal() is a good idea. It considers both the\nconcurrent and non-concurrent cases.\n\n+ progress_values[2] = list_length(partitions);\n+ pgstat_progress_start_command(PROGRESS_COMMAND_CREATE_INDEX, heapId);\n\nHmm. 
Is setting the relid only once for pg_stat_progress_create_index\nthe best choice there is? Could it be better to report the partition\nOID instead? Let's remember that indexes may have been attached with\nnames inconsistent with the partitioned table's index. It is a bit\nconfusing to stick to the relid all the partitioned table all the\ntime, for all the indexes of all the partitions reindexed. Could it\nbe better to actually introduce an entirely new field to the progress\ntable? What I would look for is more information:\n1) the partitioned table OID on which the REINDEX runs\n2) the partition table being processed\n3) the index OID being processed (old and new for concurrent case).\n\nThe patch sets 1) to the OID of the partitioned table, lacks 2) and\nsets 3) each time an index is rebuilt.\n--\nMichael", "msg_date": "Fri, 19 Jul 2024 12:17:35 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX not updating partition progress" }, { "msg_contents": "> 19 июля 2024 г., в 04:17, Michael Paquier <[email protected]> \n> написал(а):\n>\n> On Fri, Jul 12, 2024 at 11:07:49PM +0100, Ilya Gladyshev wrote:\n>> While working on CIC for partitioned tables [1], I noticed that \n>> REINDEX for\n>> partitioned tables is not tracking keeping progress of partitioned \n>> tables,\n>> so I'm creating a separate thread for this fix as suggested.\n>\n> This limitation is not a bug, because we already document that\n> partitions_total and partitions_done are 0 during a REINDEX.  Compared\n> to CREATE INDEX, a REINDEX is a bit wilder as it would work on all the\n> indexes for all the partitions, providing this information makes\n> sense.\n>\n> Agreed that this could be better, but what's now on HEAD is not wrong\n> either.\n\nThanks for pointing it out, I didn’t notice it. You’re right, this not a \nbug, but an improvement. Will remove the bits from the documentation as \nwell.\n\n>\n> Still it does not do it because we know that the fields are going to\n> be overwritten pretty quickly anyway, no?  For now, we could just do\n> without progress_index_partition_done(), I guess, keeping it to the\n> incrementation of PROGRESS_CREATEIDX_PARTITIONS_DONE.  There's an\n> argument that this makes the code slightly easier to follow, with less\n> wrappers around the progress reporting.\n\nThe use-case that I thought this would improve is REINDEX CONCURRENT, \nwhere data from later stages of the previous partitions would linger for \nquite a while before it gets to the same stage of the current partition. \nI don’t think this is of big importance, so I’m ok with making code \nsimpler and leaving it out.\n\n>\n> +intprogress_params[3] = {\n> +PROGRESS_CREATEIDX_COMMAND,\n> +PROGRESS_CREATEIDX_PHASE,\n> +PROGRESS_CREATEIDX_PARTITIONS_TOTAL\n> +};\n> +int64progress_values[3];\n> +OidheapId = relid;\n>\n> Rather than having new code blocks, let's use a style consistent with\n> DefineIndex() where we have the pgstat_progress_update_multi_param(),\n> with a single {} block.\n\nFixed.\n\n> Adding the counter increment at the end of the loop in\n> ReindexMultipleInternal() is a good idea.  
It considers both the\n> concurrent and non-concurrent cases.\n\nAnother reason to do so is that reindex_relation calls itself \nrecursively, so it'd require some additional mechanism so that it \nwouldn't increment multiple times per partition.\n>\n> +   progress_values[2] = list_length(partitions);\n> +   pgstat_progress_start_command(PROGRESS_COMMAND_CREATE_INDEX, heapId);\n>\n> Hmm.  Is setting the relid only once for pg_stat_progress_create_index\n> the best choice there is?  Could it be better to report the partition\n> OID instead?\n\nTechnically, we could do this, but it looks like no other commands do it \nand currently there’s no API to do it without erasing the rest of \nprogress information.\n\n>  Let's remember that indexes may have been attached with\n> names inconsistent with the partitioned table's index.  It is a bit\n> confusing to stick to the relid all the partitioned table all the\n> time, for all the indexes of all the partitions reindexed.\n\nI agree that it’s a bit confusing especially because documentation gives \nno hints about it. Judging by documentation, I would expect relid and \nindex_relid to correspond to the same table. However, if we say that \nthis field represents the oid of the relation on which the command was \ninvoked, then I think it does make sense. Edited documentation to say \nthat in the new patch.\n\n>  Could it\n> be better to actually introduce an entirely new field to the progress\n> table?  What I would look for is more information:\n> 1) the partitioned table OID on which the REINDEX runs\n> 2) the partition table being processed\n> 3) the index OID being processed (old and new for concurrent case).\n>\n> The patch sets 1) to the OID of the partitioned table, lacks 2) and\n> sets 3) each time an index is rebuilt.\n> —\n> Michael\n\nI like this approach more, but I’m not sure whether adding another field \nfor partition oid is worth it, since we already have index_relid that we \ncan use to join with pg_index to get that. On the other hand, \nindex_relid is missing during regular CREATE INDEX, so this new field \ncould be useful to indicate which table is being indexed in this case. \nI'm on the fence about this, attached this as a separate patch, if you \nthink it's a good idea.\n\nThank you for the review!", "msg_date": "Sun, 21 Jul 2024 01:49:39 +0100", "msg_from": "Ilya Gladyshev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: REINDEX not updating partition progress" }, { "msg_contents": "Forgot to update partition_relid in reindex_index in the second patch. Fixed in attachment.\n\n\n\n> 21 июля 2024 г., в 01:49, Ilya Gladyshev <[email protected]> написал(а):\n> \n> \n> \n>> 19 июля 2024 г., в 04:17, Michael Paquier <[email protected]> <mailto:[email protected]> написал(а):\n>> \n>> On Fri, Jul 12, 2024 at 11:07:49PM +0100, Ilya Gladyshev wrote:\n>>> While working on CIC for partitioned tables [1], I noticed that REINDEX for\n>>> partitioned tables is not tracking keeping progress of partitioned tables,\n>>> so I'm creating a separate thread for this fix as suggested.\n>> \n>> This limitation is not a bug, because we already document that\n>> partitions_total and partitions_done are 0 during a REINDEX. Compared\n>> to CREATE INDEX, a REINDEX is a bit wilder as it would work on all the\n>> indexes for all the partitions, providing this information makes\n>> sense.\n>> \n>> Agreed that this could be better, but what's now on HEAD is not wrong\n>> either.\n> \n> Thanks for pointing it out, I didn’t notice it. 
You’re right, this not a bug, but an improvement. Will remove the bits from the documentation as well.\n> \n>> \n>> Still it does not do it because we know that the fields are going to\n>> be overwritten pretty quickly anyway, no? For now, we could just do\n>> without progress_index_partition_done(), I guess, keeping it to the\n>> incrementation of PROGRESS_CREATEIDX_PARTITIONS_DONE. There's an\n>> argument that this makes the code slightly easier to follow, with less\n>> wrappers around the progress reporting.\n> \n> The use-case that I thought this would improve is REINDEX CONCURRENT, where data from later stages of the previous partitions would linger for quite a while before it gets to the same stage of the current partition. I don’t think this is of big importance, so I’m ok with making code simpler and leaving it out.\n> \n>> \n>> +\tint\t\t\tprogress_params[3] = {\n>> +\t\tPROGRESS_CREATEIDX_COMMAND,\n>> +\t\tPROGRESS_CREATEIDX_PHASE,\n>> +\t\tPROGRESS_CREATEIDX_PARTITIONS_TOTAL\n>> +\t};\n>> +\tint64\t\tprogress_values[3];\n>> +\tOid\t\t\theapId = relid;\n>> \n>> Rather than having new code blocks, let's use a style consistent with\n>> DefineIndex() where we have the pgstat_progress_update_multi_param(),\n>> with a single {} block.\n> \n> Fixed.\n> \n>> Adding the counter increment at the end of the loop in\n>> ReindexMultipleInternal() is a good idea. It considers both the\n>> concurrent and non-concurrent cases.\n> \n> Another reason to do so is that reindex_relation calls itself recursively, so it'd require some additional mechanism so that it wouldn't increment multiple times per partition.\n>> \n>> + progress_values[2] = list_length(partitions);\n>> + pgstat_progress_start_command(PROGRESS_COMMAND_CREATE_INDEX, heapId);\n>> \n>> Hmm. Is setting the relid only once for pg_stat_progress_create_index\n>> the best choice there is? Could it be better to report the partition\n>> OID instead? \n> \n> Technically, we could do this, but it looks like no other commands do it and currently there’s no API to do it without erasing the rest of progress information.\n> \n>> Let's remember that indexes may have been attached with\n>> names inconsistent with the partitioned table's index. It is a bit\n>> confusing to stick to the relid all the partitioned table all the\n>> time, for all the indexes of all the partitions reindexed.\n> \n> I agree that it’s a bit confusing especially because documentation gives no hints about it. Judging by documentation, I would expect relid and index_relid to correspond to the same table. However, if we say that this field represents the oid of the relation on which the command was invoked, then I think it does make sense. Edited documentation to say that in the new patch.\n> \n>> Could it\n>> be better to actually introduce an entirely new field to the progress\n>> table? What I would look for is more information:\n>> 1) the partitioned table OID on which the REINDEX runs\n>> 2) the partition table being processed\n>> 3) the index OID being processed (old and new for concurrent case).\n>> \n>> The patch sets 1) to the OID of the partitioned table, lacks 2) and\n>> sets 3) each time an index is rebuilt.\n>> —\n>> Michael\n> \n> \n> I like this approach more, but I’m not sure whether adding another field for partition oid is worth it, since we already have index_relid that we can use to join with pg_index to get that. 
On the other hand, index_relid is missing during regular CREATE INDEX, so this new field could be useful to indicate which table is being indexed in this case. I'm on the fence about this, attached this as a separate patch, if you think it's a good idea.\n> \n> Thank you for the review!\n> \n> <v2-0002-partition_relid-column-for-create-index-progress.patch><v2-0001-make-REINDEX-track-partition-progress.patch>", "msg_date": "Sun, 21 Jul 2024 11:41:43 +0100", "msg_from": "Ilya Gladyshev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: REINDEX not updating partition progress" }, { "msg_contents": "On Sun, Jul 21, 2024 at 11:41:43AM +0100, Ilya Gladyshev wrote:\n> Forgot to update partition_relid in reindex_index in the second patch. Fixed in attachment.\n\n <structfield>relid</structfield> <type>oid</type>\n </para>\n <para>\n- OID of the table on which the index is being created.\n+ OID of the table on which the command was run.\n </para></entry>\n\nHmm. I am not sure if we really need to change the definition of this\nfield, because it can have the same meaning when using a REINDEX on a\npartitioned table, pointing to the parent table (the partition) of the\nindex currently rebuilt.\n\nHence, rather than a partition_relid, could a partitioned_relid\nreflect better the situation, set only when issuing a REINDEX on a\npartitioned relation?\n\n+ if (relkind == RELKIND_PARTITIONED_INDEX)\n+ {\n+ heapId = IndexGetRelation(relid, true);\n+ }\nShouldn't we report the partitioned index OID rather than its parent\ntable when running a REINDEX on a partitioned index?\n--\nMichael", "msg_date": "Thu, 25 Jul 2024 17:55:19 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX not updating partition progress" }, { "msg_contents": "> 25 июля 2024 г., в 09:55, Michael Paquier <[email protected]> \n> написал(а):\n>\n> On Sun, Jul 21, 2024 at 11:41:43AM +0100, Ilya Gladyshev wrote:\n>> Forgot to update partition_relid in reindex_index in the second \n>> patch. Fixed in attachment.\n>\n>        <structfield>relid</structfield> <type>oid</type>\n>       </para>\n>       <para>\n> -       OID of the table on which the index is being created.\n> +       OID of the table on which the command was run.\n>       </para></entry>\n>\n> Hmm.  I am not sure if we really need to change the definition of this\n> field, because it can have the same meaning when using a REINDEX on a\n> partitioned table, pointing to the parent table (the partition) of the\n> index currently rebuilt.\n>\n> Hence, rather than a partition_relid, could a partitioned_relid\n> reflect better the situation, set only when issuing a REINDEX on a\n> partitioned relation?\n\nI'm not quite happy with the documentation update, but I think the \napproach for partitioned tables in this patch makes sense. I checked \nwhat other commands, that deal with partitions, (CREATE INDEX and \nANALYZE) do, and they put a root partitioned table in \"relid\". ANALYZE \nhas a separate column for the id of partition named \ncurrent_child_table_relid, so I think it makes sense to have REINDEX do \nthe same.\n\nIn addition, the current API for progress tracking doesn't have a way of \nupdating \"relid\" without wiping out all other fields (that's what \npgstat_progress_start_command does). 
This can definitely be changed, but \nthat's another thing that made me not think in this direction.\n\n> +   if (relkind == RELKIND_PARTITIONED_INDEX)\n> +   {\n> +       heapId = IndexGetRelation(relid, true);\n> +   }\n> Shouldn't we report the partitioned index OID rather than its parent\n> table when running a REINDEX on a partitioned index?\n> —\n> Michael\n\nIt’s used to update the \"relid\" field of the progress report. It’s the \none that’s described in docs currently as \"OID of the table on which the \nindex is being created.\", so I think it’s correct.\n\n\n\n\n\n\n\n\n\n25 июля 2024 г., в 09:55, Michael Paquier\n <[email protected]> написал(а):\n\n\nOn Sun, Jul 21, 2024 at 11:41:43AM +0100, Ilya Gladyshev\n wrote:\nForgot to update partition_relid in\n reindex_index in the second patch. Fixed in attachment.\n\n\n        <structfield>relid</structfield>\n <type>oid</type>\n       </para>\n       <para>\n -       OID of the table on which the index is being\n created.\n +       OID of the table on which the command was run.\n       </para></entry>\n\n Hmm.  I am not sure if we really need to change the\n definition of this\n field, because it can have the same meaning when using a\n REINDEX on a\n partitioned table, pointing to the parent table (the\n partition) of the\n index currently rebuilt.\n\n Hence, rather than a partition_relid, could a\n partitioned_relid\n reflect better the situation, set only when issuing a\n REINDEX on a\n partitioned relation?\n\n\n\n\n\nI'm not\n quite happy with the documentation update, but I think the\n approach for partitioned tables in this patch makes sense. I\n checked what other commands, that deal with partitions, (CREATE\n INDEX and ANALYZE) do, and they put a root partitioned table in\n \"relid\". ANALYZE has a separate column for the id of partition\n named current_child_table_relid, so I think it makes sense to\n have REINDEX do the same.\n\n\n\nIn\n addition, the current API for progress tracking doesn't have a way\n of updating \"relid\" without wiping out all other fields (that's\n what pgstat_progress_start_command does). This can definitely be\n changed, but that's another thing that made me not think in this\n direction.\n\n\n\n\n+   if (relkind == RELKIND_PARTITIONED_INDEX)\n +   {\n +       heapId = IndexGetRelation(relid, true);\n +   }\n Shouldn't we report the partitioned index OID rather than\n its parent\n table when running a REINDEX on a partitioned index?\n —\n Michael\n\n\n\n\n\nIt’s used\n to update the \"relid\" field of the progress report. It’s the one\n that’s described in docs currently as \"OID of the table on which\n the index is being created.\", so I think it’s correct.", "msg_date": "Thu, 25 Jul 2024 22:21:49 +0100", "msg_from": "Ilya Gladyshev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: REINDEX not updating partition progress" } ]
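As a rough illustration of the reporting pattern discussed in the thread above, the sketch below starts a CREATE INDEX progress command on the partitioned table, publishes the command type and the partition total in one multi-param update, and bumps the done counter once per partition. This is only a schematic outline based on the function and constant names quoted in the thread, not the actual patch; the header list is approximate and the real reindex work is elided.

#include "postgres.h"

#include "commands/progress.h"
#include "nodes/pg_list.h"
#include "pgstat.h"

/* Sketch only: report REINDEX progress across the partitions of parentOid. */
static void
reindex_partitions_with_progress(Oid parentOid, List *partitions)
{
	ListCell   *lc;
	int64		done = 0;
	const int	progress_cols[] = {
		PROGRESS_CREATEIDX_COMMAND,
		PROGRESS_CREATEIDX_PARTITIONS_TOTAL
	};
	const int64 progress_vals[] = {
		PROGRESS_CREATEIDX_COMMAND_REINDEX,
		list_length(partitions)
	};

	pgstat_progress_start_command(PROGRESS_COMMAND_CREATE_INDEX, parentOid);
	pgstat_progress_update_multi_param(2, progress_cols, progress_vals);

	foreach(lc, partitions)
	{
		Oid			partOid = lfirst_oid(lc);

		/* ... rebuild all indexes of partOid here ... */
		(void) partOid;

		/* one more partition finished */
		pgstat_progress_update_param(PROGRESS_CREATEIDX_PARTITIONS_DONE, ++done);
	}

	pgstat_progress_end_command();
}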
[ { "msg_contents": "Hi hackers,\n\nI ran into an issue today when I was trying to insert a complex types \nwhere one of its attributes is also an array of complex types,\n\nAs an example:\n\nCREATE TYPE inventory_item AS\n(\n     name        text,\n     supplier_id integer,\n     price       numeric\n);\nCREATE TYPE item_2d AS (id int, items inventory_item[][]);\n\nCREATE TABLE item_2d_table (id int, item item_2d);\n\nINSERT INTO item_2d_table VALUES(1, '(1,{{(\"inv a\",42,1.99),(\"inv \nb\",42,1.99)},{(\"inv c\",42,1.99),(\"inv d\",42,2)}})');\n\nThe INSERT statement will fail due to how complex types are parsed, I \nhave included a patch in this email to support this scenario.\n\nThe reason why this fails is because record_in lacks support of \ndetecting an array string when one of the attributes is of type array.\nDue to this, it will stop processing the column value prematurely, which \nresults in a corrupted value for that particular column.\nAs a result array_in will receive a malformed string which is bound to \nerror.\n\nTo fix this, record_in can detect columns that are of type array and in \nsuch cases leave the input intact.\narray_in will attempt to extract the elements one by one. In case it is \ndealing with unquoted elements, the logic needs to slightly change, \nsince if the element is a record, quotes can be allowed, ex: {{(\"test \nfield\")}\n\nThere are some adjustments that can be made to the patch, for example:\nWe can detect the number of the dimensions of the array in record_in, do \nwe want to error out in case the string has more dimensions than MAXDIM \nin array.h, (to prevent number over/underflow-ing) or whether we want to \nerror out if number of dimensions is not the same with the number of \ndimensions that the attribute is supposed to have, or both?\n\nRegards,\nArjan Marku", "msg_date": "Sat, 13 Jul 2024 19:03:37 +0200", "msg_from": "Arjan Marku <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH v1] Fix parsing of a complex type that has an array of complex\n types" }, { "msg_contents": "Arjan Marku <[email protected]> writes:\n> INSERT INTO item_2d_table VALUES(1, '(1,{{(\"inv a\",42,1.99),(\"inv \n> b\",42,1.99)},{(\"inv c\",42,1.99),(\"inv d\",42,2)}})');\n\n> The INSERT statement will fail due to how complex types are parsed, I \n> have included a patch in this email to support this scenario.\n\nThe actual problem with this input is that it's inadequately quoted.\nThe record fields need to be quoted according to the rules in\n\nhttps://www.postgresql.org/docs/current/rowtypes.html#ROWTYPES-IO-SYNTAX\n\nand then in the field that is an array, the array elements need to be\nquoted according to the rules in\n\nhttps://www.postgresql.org/docs/current/arrays.html#ARRAYS-IO\n\nand then the innermost record fields need yet another level of quoting\n(well, you might be able to skip that given that there are no special\ncharacters in this particular data, but in general you'd have to).\n\nIt looks to me like your patch is trying to make array_in and\nrecord_in sufficiently aware of each other's rules that the outer\nquoting could be omitted, but I believe that that's a seriously bad\nidea. In the first place, it's far from clear that that's even\npossible without ambiguity (much less that this specific patch does\nit correctly). In the second place, this will make it even more\ndifficult for these functions to issue on-point error messages for\nincorrect input. (They're already struggling with that, see\ne.g. [1].) 
And in the third place, these functions are nearly\nunmaintainable spaghetti already. (There have also been complaints\nthat they're too slow, which this won't improve.) We don't need\nanother big dollop of complexity here.\n\nOur normal recommendation for handwritten input is to not try to deal\nwith the complications of correctly quoting nested array/record data.\nInstead use row() and array[] constructors. So you could write\nsomething like\n\nINSERT INTO item_2d_table\nVALUES(1,\n row(1, array[[row('inv a',42,1.99), row('inv b',42,1.99)],\n [row('inv c',42,1.99), row('inv d',42,2)]]::inventory_item[]));\n\nIn this particular example we need an explicit cast to cue the\nparser about the type of the array elements, but then it can\ncope with casting the outer row() construct automatically.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CACJufxExAcpvrkbLGrZGdZ%3DbFAuj7OVp1mOhk%2BfsBzeUbOGuHQ%40mail.gmail.com\n\n\n", "msg_date": "Mon, 15 Jul 2024 15:15:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH v1] Fix parsing of a complex type that has an array of\n complex types" }, { "msg_contents": "I agree with all your points as I encountered them myself, especially \nambiguity and error handling.\n\nThe introduced dependency between those functions also is for me a bad \nidea but it seemed like the only way to support that use case, but turns \nout my assumption that the array literal shouldn't be quoted was wrong,\n\nI managed to execute this query fine:\nINSERT INTO item_2d_table VALUES(1, '(1,\"{{\"\"(\\\\\"\"inv \na\\\\\"\",42,1.99)\"\",\"\"(\\\\\"inv b\\\\\",42,1.99)\"\"},{\"\"(\\\\\"\"inv \nc\\\\\"\",42,1.99)\"\",\"\"(\\\\\"\"inv d\\\\\"\",42,2)\"\"}}\")');\n\nThanks for your insights on this,\n\nKind regards,\nArjan Marku\n\nOn 7/15/24 9:15 PM, Tom Lane wrote:\n> INSERT INTO item_2d_table VALUES(1, '(1,{{(\"inv a\",42,1.99),(\"inv\n> b\",42,1.99)},{(\"inv c\",42,1.99),(\"inv d\",42,2)}})');\n\n\n\n\n\nI agree with all your points as I encountered them myself,\n especially ambiguity and error handling.\nThe introduced dependency between those functions also is for me\n a bad idea but it seemed like the only way to support that use\n case, but turns out my assumption that the array literal shouldn't\n be quoted was wrong,\nI managed to execute this query fine:\n INSERT INTO item_2d_table VALUES(1, '(1,\"{{\"\"(\\\\\"\"inv\n a\\\\\"\",42,1.99)\"\",\"\"(\\\\\"inv b\\\\\",42,1.99)\"\"},{\"\"(\\\\\"\"inv\n c\\\\\"\",42,1.99)\"\",\"\"(\\\\\"\"inv d\\\\\"\",42,2)\"\"}}\")');\n\nThanks for your insights on this,\nKind regards,\n Arjan Marku\n\nOn 7/15/24 9:15 PM, Tom Lane wrote:\n\n\nINSERT INTO item_2d_table VALUES(1, '(1,{{(\"inv a\",42,1.99),(\"inv \nb\",42,1.99)},{(\"inv c\",42,1.99),(\"inv d\",42,2)}})');", "msg_date": "Mon, 15 Jul 2024 21:59:24 +0200", "msg_from": "Arjan Marku <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH v1] Fix parsing of a complex type that has an array of\n complex types" } ]
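For completeness, the row()/array[] recommendation above also works when the INSERT is issued from client code, since the server builds the nested value and no manual quoting of the composite-in-array literal is needed. Below is a hypothetical, minimal libpq sketch along those lines; the connection string is a placeholder, the table and type names are the ones from the example, and error handling is kept to a minimum.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
	PGconn	   *conn = PQconnectdb("dbname=postgres");	/* placeholder */
	PGresult   *res;

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		PQfinish(conn);
		return 1;
	}

	/* Let the server assemble the nested value with constructors. */
	res = PQexec(conn,
				 "INSERT INTO item_2d_table VALUES (1, row(1, array["
				 "[row('inv a',42,1.99), row('inv b',42,1.99)],"
				 "[row('inv c',42,1.99), row('inv d',42,2)]"
				 "]::inventory_item[]))");

	if (PQresultStatus(res) != PGRES_COMMAND_OK)
		fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));

	PQclear(res);
	PQfinish(conn);
	return 0;
}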
[ { "msg_contents": "Hello, everyone!\n\nWhile reviewing [1], I noticed that check_exclusion_or_unique_constraint\noccasionally returns false negatives for btree unique indexes during UPSERT\noperations.\nAlthough this doesn't cause any real issues with INSERT ON CONFLICT, I\nwanted to bring it to your attention, as it might indicate an underlying\nproblem.\n\nAttached is a patch to reproduce the issue.\n\nmake -C src/test/modules/test_misc/ check PROVE_TESTS='t/006_*'\n....\n Failed test 'concurrent INSERTs status (got 2 vs expected 0)'\n# at t/006_concurrently_unique_fail.pl line 26.\n\n# Failed test 'concurrent INSERTs stderr /(?^:^$)/'\n# at t/006_concurrently_unique_fail.pl line 26.\n# 'pgbench: error: client 34 script 0 aborted in command\n0 query 0: ERROR: we know 31337 in the index!\n\nBest regards,\nMikhail,\n\n[1]:\nhttps://www.postgresql.org/message-id/flat/CANtu0ogs10w%3DDgbYzZ8MswXE3PUC3J4SGDc0YEuZZeWbL0b6HA%40mail.gmail.com#8c01dcf6051e28c47d25e9471736947e", "msg_date": "Sun, 14 Jul 2024 20:01:50 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": true, "msg_subject": "[BUG?] check_exclusion_or_unique_constraint false negative" }, { "msg_contents": "Hello, Andres.\n\nSorry to bother you, but I feel it's necessary to validate the possible\nissue regarding someone who can decide whether it is okay or not.\nThe issue is reproducible with the first UPSERT implementation (your commit\n168d5805e4c08bed7b95d351bf097cff7c07dd65 from 2015) and up to now.\n\nThe problem appears as follows:\n* A unique index contains a specific value (in the test, it is the only\nvalue for the entire index).\n* check_exclusion_or_unique_constraint returns FALSE for that value in some\nrandom cases.\n* Technically, this means index_getnext finds 0 records, even though we\nknow the value exists in the index.\n\nI was able to reproduce this only with an UNLOGGED table.\nI can't find any scenarios that are actually broken (since the issue is\nresolved by speculative insertion later), but this looks suspicious to me.\nIt could be a symptom of some tricky race condition in the btree.\n\nBest regards,\nMikhail", "msg_date": "Sun, 21 Jul 2024 17:27:01 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUG?] 
check_exclusion_or_unique_constraint false negative" }, { "msg_contents": "Hello, everyone!\n\nUpdates so far:\n* issue happens with both LOGGED and UNLOGGED relations\n* issue happens with DirtySnapshot\n* not happens with SnapshotSelf\n* not happens with SnapshotAny\n* not related to speculative inserted tuples - I have commented the code of\nits insertion - and the issue continues to occur.\n\nBest regards,\nMikhail.\n\nHello, everyone!Updates so far:* issue happens with both LOGGED and UNLOGGED relations* issue happens with DirtySnapshot* not happens with SnapshotSelf* not happens with SnapshotAny* not related to speculative inserted tuples - I have commented the code of its insertion - and the issue continues to occur.Best regards,Mikhail.", "msg_date": "Wed, 24 Jul 2024 22:01:23 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUG?] check_exclusion_or_unique_constraint false negative" }, { "msg_contents": "It seems like I've identified the cause of the issue.\n\nCurrently, any DirtySnapshot (or SnapshotSelf) scan over a B-tree index may\nskip (not find the TID for) some records in the case of parallel updates.\n\nThe following scenario is possible:\n\n* Session 1 reads a B-tree page using SnapshotDirty and copies item X to\nthe buffer.\n* Session 2 updates item X, inserting a new TID Y into the same page.\n* Session 2 commits its transaction.\n* Session 1 starts to fetch from the heap and tries to fetch X, but it was\nalready deleted by session 2. So, it goes to the B-tree for the next TID.\n* The B-tree goes to the next page, skipping Y.\n* Therefore, the search finds nothing, but tuple Y is still alive.\n\nThis situation is somewhat controversial. DirtySnapshot might seem to show\nmore (or more recent, even uncommitted) data than MVCC, but not less. So,\nDirtySnapshot scan over a B-tree does not provide any guarantees, as far as\nI understand.\nWhy does it work for MVCC? Because tuple X will be visible due to the\nsnapshot, making Y unnecessary.\nThis might be \"as designed,\" but I think it needs to be clearly documented\n(I couldn't find any documentation on this particular case, only\n_bt_drop_lock_and_maybe_pin - related).\n\nHere are the potential consequences of the issue:\n\n* check_exclusion_or_unique_constraint\n\nIt may not find a record in a UNIQUE index during INSERT ON CONFLICT\nUPDATE. However, this is just a minor performance issue.\n\n* Exclusion constraints with B-tree, like ADD CONSTRAINT exclusion_data\nEXCLUDE USING btree (data WITH =)\n\nIt should work correctly because the first inserter may \"skip\" the TID from\na concurrent inserter, but the second one should still find the TID from\nthe first.\n\n* RelationFindReplTupleByIndex\n\nAmit, this is why I've included you in this previously solo thread :)\nRelationFindReplTupleByIndex uses DirtySnapshot and may not find some\nrecords if they are updated by a parallel transaction. 
This could lead to\nlost deletes/updates, especially in the case of streaming=parallel mode.\nI'm not familiar with how parallel workers apply transactions, so maybe\nthis isn't possible.\n\nBest regards,\nMikhail", "msg_date": "Wed, 31 Jul 2024 22:57:00 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUG?] check_exclusion_or_unique_constraint false negative" }, { "msg_contents": "Dear Michail,\r\n\r\nThanks for pointing out the issue!\r\n\r\n>* RelationFindReplTupleByIndex\r\n>\r\n>Amit, this is why I've included you in this previously solo thread :)\r\n>RelationFindReplTupleByIndex uses DirtySnapshot and may not find some records\r\n>if they are updated by a parallel transaction. This could lead to lost\r\n>deletes/updates, especially in the case of streaming=parallel mode. \r\n>I'm not familiar with how parallel workers apply transactions, so maybe this\r\n>isn't possible.\r\n\r\nIIUC, the issue can happen when two concurrent transactions using DirtySnapshot access\r\nthe same tuples, which is not specific to the parallel apply. Consider that two\r\nsubscriptions exist and publishers modify the same tuple of the same table.\r\nIn this case, two workers access the tuple, so one of the changes may be missed\r\nby the scenario you said. 
I feel we do not need special treatments for parallel\r\napply.\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Thu, 1 Aug 2024 05:54:37 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [BUG?] check_exclusion_or_unique_constraint false negative" }, { "msg_contents": "Hello, Hayato!\n\n> Thanks for pointing out the issue!\n\nThanks for your attention!\n\n> IIUC, the issue can happen when two concurrent transactions using\nDirtySnapshot access\n> the same tuples, which is not specific to the parallel apply\n\nNot exactly, it happens for any DirtySnapshot scan over a B-tree index with\nsome other transaction updating the same index page (even using the MVCC\nsnapshot).\n\nSo, logical replication related scenario looks like this:\n\n* subscriber worker receives a tuple update\\delete from the publisher\n* it calls RelationFindReplTupleByIndex to find the tuple in the local table\n* some other transaction updates the tuple in the local table (on\nsubscriber side) in parallel\n* RelationFindReplTupleByIndex may not find the tuple because it uses\nDirtySnapshot\n* update\\delete is lost\n\nParallel apply mode looks like more dangerous because it uses multiple\nworkers on the subscriber side, so the probability of the issue is higher.\nIn that case, \"some other transaction\" is just another worker applying\nchanges of different transaction in parallel.\n\nBest regards,\nMikhail.", "msg_date": "Thu, 1 Aug 2024 11:25:36 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUG?] 
check_exclusion_or_unique_constraint false negative" }, { "msg_contents": "On Thu, Aug 1, 2024 at 2:55 PM Michail Nikolaev\n<[email protected]> wrote:\n>\n> > Thanks for pointing out the issue!\n>\n> Thanks for your attention!\n>\n> > IIUC, the issue can happen when two concurrent transactions using DirtySnapshot access\n> > the same tuples, which is not specific to the parallel apply\n>\n> Not exactly, it happens for any DirtySnapshot scan over a B-tree index with some other transaction updating the same index page (even using the MVCC snapshot).\n>\n> So, logical replication related scenario looks like this:\n>\n> * subscriber worker receives a tuple update\\delete from the publisher\n> * it calls RelationFindReplTupleByIndex to find the tuple in the local table\n> * some other transaction updates the tuple in the local table (on subscriber side) in parallel\n> * RelationFindReplTupleByIndex may not find the tuple because it uses DirtySnapshot\n> * update\\delete is lost\n>\n> Parallel apply mode looks like more dangerous because it uses multiple workers on the subscriber side, so the probability of the issue is higher.\n> In that case, \"some other transaction\" is just another worker applying changes of different transaction in parallel.\n>\n\nI think it is rather less likely or not possible in a parallel apply\ncase because such conflicting updates (updates on the same tuple)\nshould be serialized at the publisher itself. So one of the updates\nwill be after the commit that has the second update.\n\nI haven't tried the test based on your description of the general\nproblem with DirtySnapshot scan. In case of logical replication, we\nwill LOG update_missing type of conflict and the user may need to take\nsome manual action based on that. I have not tried a test so I could\nbe wrong as well. I am not sure we can do anything specific to logical\nreplication for this but feel free to suggest if you have ideas to\nsolve this problem in general or specific to logical replication.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 2 Aug 2024 10:26:40 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUG?] check_exclusion_or_unique_constraint false negative" }, { "msg_contents": "Hello, Amit!\n\n> I think it is rather less likely or not possible in a parallel apply\n> case because such conflicting updates (updates on the same tuple)\n> should be serialized at the publisher itself. So one of the updates\n> will be after the commit that has the second update.\n\nGlad to hear! But anyway, such logic looks very fragile to me.\n\n> I haven't tried the test based on your description of the general\n> problem with DirtySnapshot scan. In case of logical replication, we\n> will LOG update_missing type of conflict and the user may need to take\n> some manual action based on that.\n\nCurrent it is just DEBUG1, so it will be probably missed by the user.\n\n> * XXX should this be promoted to ereport(LOG) perhaps?\n> */\n> elog(DEBUG1,\n> \"logical replication did not find row to be updated \"\n> \"in replication target relation \\\"%s\\\"\",\n> RelationGetRelationName(localrel));\n> }\n\n> I have not tried a test so I could\n> be wrong as well. 
I am not sure we can do anything specific to logical\n> replication for this but feel free to suggest if you have ideas to\n> solve this problem in general or specific to logical replication.\n\nI've implemented a solution to address the problem more generally, attached\nthe patch (and also the link [1]).\n\nHere's a summary of the changes:\n\n* For each tuple skipped because it was deleted, we now accumulate the\nmaximum xmax.\n* Before the scan begins, we store the value of the latest completed\ntransaction.\n* If no tuples are found in the index, we check the max(xmax) value. If\nthis value is newer than the latest completed transaction stored before the\nscan, it indicates that a tuple was deleted by another transaction after\nthe scan started. To ensure all tuples are correctly processed we then\nrescan the index.\n\n\nAlso added a test case to cover this scenario using the new injection point\nmechanism and\nupdated the b-tree index documentation to include a description of this\ncase.\n\nI'll add this into the next commitfest.\n\nBest regards,\nMikhail.\n\n[1]:\nhttps://github.com/postgres/postgres/compare/master...michail-nikolaev:postgres:concurrent_unique", "msg_date": "Fri, 2 Aug 2024 19:08:39 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUG?] check_exclusion_or_unique_constraint false negative" }, { "msg_contents": "On Fri, Aug 2, 2024 at 10:38 PM Michail Nikolaev\n<[email protected]> wrote:\n>\n> > I think it is rather less likely or not possible in a parallel apply\n> > case because such conflicting updates (updates on the same tuple)\n> > should be serialized at the publisher itself. So one of the updates\n> > will be after the commit that has the second update.\n>\n> Glad to hear! But anyway, such logic looks very fragile to me.\n>\n> > I haven't tried the test based on your description of the general\n> > problem with DirtySnapshot scan. In case of logical replication, we\n> > will LOG update_missing type of conflict and the user may need to take\n> > some manual action based on that.\n>\n> Current it is just DEBUG1, so it will be probably missed by the user.\n>\n> > * XXX should this be promoted to ereport(LOG) perhaps?\n> > */\n> > elog(DEBUG1,\n> > \"logical replication did not find row to be updated \"\n> > \"in replication target relation \\\"%s\\\"\",\n> > RelationGetRelationName(localrel));\n> > }\n>\n\nRight, but we are extending this functionality to detect and resolve\nsuch conflicts [1][2]. I am hoping after that such updates won't be\nmissed.\n\n[1] - https://commitfest.postgresql.org/49/5064/\n[2] - https://commitfest.postgresql.org/49/5021/\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 5 Aug 2024 15:46:20 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUG?] check_exclusion_or_unique_constraint false negative" }, { "msg_contents": "Hello!\n\n> Right, but we are extending this functionality to detect and resolve\n> such conflicts [1][2]. I am hoping after that such updates won't be\n> missed.\n\nYes, this is a nice feature. However, without the DirtySnapshot index scan\nfix, it will fail in numerous instances, especially in master-master\nreplication.\n\nThe update_missing feature is helpful in this case, but it is still not the\ncorrect event because a real tuple exists, and we should receive\nupdate_differ instead. As a result, some conflict resolution systems may\nmalfunction. 
For example, if the resolution method is set to apply_or_skip,\nit will insert the new row, causing two rows to exist. This system is quite\nfragile, and I am sure there are many more complicated scenarios that could\narise.\nBest regards,\nMikhail.\n\nHello!> Right, but we are extending this functionality to detect and resolve> such conflicts [1][2]. I am hoping after that such updates won't be> missed.Yes, this is a nice feature. However, without the DirtySnapshot index scan fix, it will fail in numerous instances, especially in master-master replication.The update_missing feature is helpful in this case, but it is still not the correct event because a real tuple exists, and we should receive update_differ instead. As a result, some conflict resolution systems may malfunction. For example, if the resolution method is set to apply_or_skip, it will insert the new row, causing two rows to exist. This system is quite fragile, and I am sure there are many more complicated scenarios that could arise.Best regards,Mikhail.", "msg_date": "Mon, 5 Aug 2024 13:11:50 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUG?] check_exclusion_or_unique_constraint false negative" }, { "msg_contents": "Hi,\r\n\r\nThanks for reporting the issue !\r\n\r\nI tried to reproduce this in logical replication but failed. If possible,\r\ncould you please share some steps to reproduce it in logicalrep context ?\r\n\r\nIn my test, if the tuple is updated and new tuple is in the same page,\r\nheapam_index_fetch_tuple should find the new tuple using HOT chain. So, it's a\r\nbit unclear to me how the updated tuple is missing. Maybe I missed some other\r\nconditions for this issue.\r\n\r\nIt would be better if we can reproduce this by adding some breakpoints using\r\ngdb, which may help us to write a tap test using injection point to reproduce\r\nthis reliably. I see the tap test you shared used pgbench to reproduce this,\r\nit works, but It would be great if we can analyze the issue more deeply by\r\ndebugging the code.\r\n\r\nAnd I have few questions related the steps you shared:\r\n\r\n> * Session 1 reads a B-tree page using SnapshotDirty and copies item X to the buffer.\r\n> * Session 2 updates item X, inserting a new TID Y into the same page.\r\n> * Session 2 commits its transaction.\r\n> * Session 1 starts to fetch from the heap and tries to fetch X, but it was\r\n> already deleted by session 2. So, it goes to the B-tree for the next TID.\r\n> * The B-tree goes to the next page, skipping Y.\r\n> * Therefore, the search finds nothing, but tuple Y is still alive.\r\n\r\nI am wondering at which point should the update happen ? should it happen after\r\ncalling index_getnext_tid and before index_fetch_heap ? It would be great if\r\nyou could give more details in above steps. Thanks !\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Mon, 12 Aug 2024 03:32:32 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [BUG?] check_exclusion_or_unique_constraint false negative" }, { "msg_contents": "Hello, Hou zj!\n\n> In my test, if the tuple is updated and new tuple is in the same page,\n> heapam_index_fetch_tuple should find the new tuple using HOT chain. So,\nit's a\n> bit unclear to me how the updated tuple is missing. 
Maybe I missed some\nother\n> conditions for this issue.\n\nYeah, I think the pgbench-based reproducer may also cause page splits in\nbtree.\nBut we may add an index to the table to disable HOT.\n\nI have attached a reproducer for this case using a spec and injection\npoints.\n\nI hope it helps, check the attached files.\n\nBest regards,\nMikhail.", "msg_date": "Mon, 12 Aug 2024 13:11:26 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUG?] check_exclusion_or_unique_constraint false negative" }, { "msg_contents": "On Monday, August 12, 2024 7:11 PM Michail Nikolaev <[email protected]> wrote:\r\n> > In my test, if the tuple is updated and new tuple is in the same page,\r\n> > heapam_index_fetch_tuple should find the new tuple using HOT chain. So, it's a\r\n> > bit unclear to me how the updated tuple is missing. Maybe I missed some other\r\n> > conditions for this issue.\r\n> \r\n> Yeah, I think the pgbench-based reproducer may also cause page splits in btree.\r\n> But we may add an index to the table to disable HOT.\r\n> \r\n> I have attached a reproducer for this case using a spec and injection points.\r\n> \r\n> I hope it helps, check the attached files.\r\n\r\nThanks a lot for the steps!\r\n\r\nI successfully reproduced the issue you mentioned in the context of logical\r\nreplication[1]. As you said, it could increase the possibility of tuple missing\r\nwhen applying updates or deletes in the logical apply worker. I think this is a\r\nlong-standing issue and I will investigate the fix you proposed.\r\n\r\nIn addition, I think the bug is not a blocker for the conflict detection\r\nfeature. As the feature simply reports the current behavior of the logical\r\napply worker (either unique violation or tuple missing) without introducing any\r\nnew functionality. Furthermore, I think that the new ExecCheckIndexConstraints\r\ncall after ExecInsertIndexTuples() is not affected by the dirty snapshot bug.\r\nThis is because a tuple has already been inserted into the btree before the\r\ndirty snapshot scan, which means that a concurrent non-HOT update would not be\r\npossible (it would be blocked after finding the just inserted tuple and wait\r\nfor the apply worker to commit the current transaction).\r\n\r\nIt would be good if others could also share their opinion on this.\r\n\r\n\r\n[1] The steps to reproduce the tuple missing in logical replication.\r\n\r\n1. setup pub/sub env, and publish a table with 1 row.\r\n\r\npub:\r\nCREATE TABLE t(a int primary key, b int);\r\nINSERT INTO t VALUES(1,1);\r\nCREATE PUBLICATION pub FOR TABLE t;\r\n\r\nsub:\r\nCREATE TABLE t (a int primary key, b int check (b < 5));\r\nCREATE INDEX t_b_idx ON t(b);\r\nCREATE SUBSCRIPTION sub CONNECTION 'dbname=postgres port=$port_publisher' PUBLICATION pub;\r\n\r\n2. Execute an UPDATE(UPDATE t set b = b + 1) on the publisher and use gdb to\r\nstop the apply worker at the point after index_getnext_tid() and before\r\nindex_fetch_heap().\r\n\r\n3. execute a concurrent update(UPDATE t set b = b + 100) on the subscriber to\r\nupdate a non-key column value and commit the update.\r\n\r\n4. release the apply worker and it would report the update_missing conflict.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Thu, 15 Aug 2024 08:06:03 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [BUG?] 
check_exclusion_or_unique_constraint false negative" }, { "msg_contents": "Hello!\n\n> In addition, I think the bug is not a blocker for the conflict detection\n> feature. As the feature simply reports the current behavior of the logical\n> apply worker (either unique violation or tuple missing) without\nintroducing any\n> new functionality. Furthermore, I think that the new\nExecCheckIndexConstraints\n> call after ExecInsertIndexTuples() is not affected by the dirty snapshot\nbug.\n> This is because a tuple has already been inserted into the btree before\nthe\n> dirty snapshot scan, which means that a concurrent non-HOT update would\nnot be\n> possible (it would be blocked after finding the just inserted tuple and\nwait\n> for the apply worker to commit the current transaction).\n\n> It would be good if others could also share their opinion on this.\n\nYes, you are right. At least, I can't find any scenario for that case.\n\nBest regards,\nMikhail.\n\nHello!> In addition, I think the bug is not a blocker for the conflict detection> feature. As the feature simply reports the current behavior of the logical> apply worker (either unique violation or tuple missing) without introducing any> new functionality. Furthermore, I think that the new ExecCheckIndexConstraints> call after ExecInsertIndexTuples() is not affected by the dirty snapshot bug.> This is because a tuple has already been inserted into the btree before the> dirty snapshot scan, which means that a concurrent non-HOT update would not be> possible (it would be blocked after finding the just inserted tuple and wait> for the apply worker to commit the current transaction).> It would be good if others could also share their opinion on this.Yes, you are right. At least, I can't find any scenario for that case.Best regards,Mikhail.", "msg_date": "Fri, 16 Aug 2024 15:31:55 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUG?] check_exclusion_or_unique_constraint false negative" } ]
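A minimal SQL sketch of the precondition the reports in this thread depend on: the updated column must itself be indexed so the UPDATE is non-HOT, which is what lets a SnapshotDirty index scan step past the old TID and miss the new tuple version. Object names are taken from the subscriber-side setup in the reproduction steps above (the CHECK constraint from those steps is omitted so the +100 update is accepted); the final pg_stat_user_tables query is an added illustration, not part of the original steps.

-- Sketch only: adapted from the subscriber-side objects in the steps above.
CREATE TABLE t (a int PRIMARY KEY, b int);
CREATE INDEX t_b_idx ON t (b);        -- indexing b forces updates of b to be non-HOT
INSERT INTO t VALUES (1, 1);

-- Concurrent session, committed while the other session sits between
-- index_getnext_tid() and index_fetch_heap() in its dirty-snapshot scan:
UPDATE t SET b = b + 100 WHERE a = 1; -- non-HOT: new index entry, new TID

-- Optional check that the update really was non-HOT
-- (n_tup_hot_upd should stay at 0 while n_tup_upd advances):
SELECT n_tup_upd, n_tup_hot_upd
  FROM pg_stat_user_tables
 WHERE relname = 't';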
[ { "msg_contents": "In [1], it is suggested that it might be a good idea to support\nspecifying the tablespace for each merged/split partition.\n\nWe can do the following after this feature is supported:\n\nCREATE TABLESPACE tblspc LOCATION '/tmp/tblspc';\nCREATE TABLE t (i int PRIMARY KEY) PARTITION BY RANGE (i);\nCREATE TABLE tp_0_1 PARTITION OF t FOR VALUES FROM (0) TO (1);\nCREATE TABLE tp_1_2 PARTITION OF t FOR VALUES FROM (1) TO (2);\n\nALTER TABLE t MERGE PARTITIONS (tp_0_1, tp_1_2) INTO tp_0_2 TABLESPACE tblspc;\n\nALTER TABLE t SPLIT PARTITION tp_0_2 INTO\n (PARTITION tp_0_1 FOR VALUES FROM (0) TO (1) TABLESPACE tblspc,\n PARTITION tp_1_2 FOR VALUES FROM (1) TO (2));\n\n[1] https://www.postgresql.org/message-id/[email protected]\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Mon, 15 Jul 2024 13:49:16 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": true, "msg_subject": "Support specify tablespace for each merged/split partition" }, { "msg_contents": "On Mon, Jul 15, 2024 at 11:19 AM Junwang Zhao <[email protected]> wrote:\n>\n> In [1], it is suggested that it might be a good idea to support\n> specifying the tablespace for each merged/split partition.\n>\n> We can do the following after this feature is supported:\n>\n> CREATE TABLESPACE tblspc LOCATION '/tmp/tblspc';\n> CREATE TABLE t (i int PRIMARY KEY) PARTITION BY RANGE (i);\n> CREATE TABLE tp_0_1 PARTITION OF t FOR VALUES FROM (0) TO (1);\n> CREATE TABLE tp_1_2 PARTITION OF t FOR VALUES FROM (1) TO (2);\n>\n> ALTER TABLE t MERGE PARTITIONS (tp_0_1, tp_1_2) INTO tp_0_2 TABLESPACE tblspc;\n>\n> ALTER TABLE t SPLIT PARTITION tp_0_2 INTO\n> (PARTITION tp_0_1 FOR VALUES FROM (0) TO (1) TABLESPACE tblspc,\n> PARTITION tp_1_2 FOR VALUES FROM (1) TO (2));\n>\n> [1] https://www.postgresql.org/message-id/[email protected]\n>\n\n+1 for this enhancement. Here are few comments for the patch:\n\n- INTO <replaceable class=\"parameter\">partition_name</replaceable>\n+ INTO <replaceable\nclass=\"parameter\">partition_name</replaceable> [ TABLESPACE\ntablespace_name ]\n\ntablespace_name should be wrapped in the <replaceable> tag, like partition_name.\n--\n\n static Relation\n-createPartitionTable(RangeVar *newPartName, Relation modelRel,\n- AlterTableUtilityContext *context)\n+createPartitionTable(RangeVar *newPartName, char *tablespacename,\n+\n\nThe comment should mention the tablespace setting in the same way it\nmentions the access method.\n--\n\n /*\n- * PartitionCmd - info for ALTER TABLE/INDEX ATTACH/DETACH PARTITION commands\n+ * PartitionCmd - info for ALTER TABLE/INDEX\nATTACH/DETACH/MERGE/SPLIT PARTITION commands\n */\n\nThis change should be a separate patch since it makes sense\nindependently of your patch. Also, the comments for the \"name\"\nvariable in the same structure need to be updated.\n--\n\n+SELECT tablename, tablespace FROM pg_tables\n+ WHERE tablename IN ('t', 'tp_0_2') AND schemaname = 'partitions_merge_schema'\n+ ORDER BY tablename, tablespace;\n+ tablename | tablespace\n+-----------+------------------\n+ t |\n+ tp_0_2 | regress_tblspace\n+(2 rows)\n+\n+SELECT tablename, indexname, tablespace FROM pg_indexes\n+ WHERE tablename IN ('t', 'tp_0_2') AND schemaname = 'partitions_merge_schema'\n+ ORDER BY tablename, indexname, tablespace;\n+ tablename | indexname | tablespace\n+-----------+-------------+------------\n+ t | t_pkey |\n+ tp_0_2 | tp_0_2_pkey |\n+(2 rows)\n+\n\nThis seems problematic to me. 
The index should be in the same\ntablespace as the table.\n--\n\nPlease add the commitfest[1] entry if not already done.\n\n1] https://commitfest.postgresql.org/\n\nRegards,\nAmul\n\n\n", "msg_date": "Mon, 5 Aug 2024 18:07:41 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support specify tablespace for each merged/split partition" }, { "msg_contents": "Hi Amul,\n\nThanks for your review.\n\nOn Mon, Aug 5, 2024 at 8:38 PM Amul Sul <[email protected]> wrote:\n>\n> On Mon, Jul 15, 2024 at 11:19 AM Junwang Zhao <[email protected]> wrote:\n> >\n> > In [1], it is suggested that it might be a good idea to support\n> > specifying the tablespace for each merged/split partition.\n> >\n> > We can do the following after this feature is supported:\n> >\n> > CREATE TABLESPACE tblspc LOCATION '/tmp/tblspc';\n> > CREATE TABLE t (i int PRIMARY KEY) PARTITION BY RANGE (i);\n> > CREATE TABLE tp_0_1 PARTITION OF t FOR VALUES FROM (0) TO (1);\n> > CREATE TABLE tp_1_2 PARTITION OF t FOR VALUES FROM (1) TO (2);\n> >\n> > ALTER TABLE t MERGE PARTITIONS (tp_0_1, tp_1_2) INTO tp_0_2 TABLESPACE tblspc;\n> >\n> > ALTER TABLE t SPLIT PARTITION tp_0_2 INTO\n> > (PARTITION tp_0_1 FOR VALUES FROM (0) TO (1) TABLESPACE tblspc,\n> > PARTITION tp_1_2 FOR VALUES FROM (1) TO (2));\n> >\n> > [1] https://www.postgresql.org/message-id/[email protected]\n> >\n>\n> +1 for this enhancement. Here are few comments for the patch:\n>\n> - INTO <replaceable class=\"parameter\">partition_name</replaceable>\n> + INTO <replaceable\n> class=\"parameter\">partition_name</replaceable> [ TABLESPACE\n> tablespace_name ]\n>\n> tablespace_name should be wrapped in the <replaceable> tag, like partition_name.\n\nWill add in next version.\n\n> --\n>\n> static Relation\n> -createPartitionTable(RangeVar *newPartName, Relation modelRel,\n> - AlterTableUtilityContext *context)\n> +createPartitionTable(RangeVar *newPartName, char *tablespacename,\n> +\n>\n> The comment should mention the tablespace setting in the same way it\n> mentions the access method.\n\nI'm not good at wording, can you give some advice?\n\n> --\n>\n> /*\n> - * PartitionCmd - info for ALTER TABLE/INDEX ATTACH/DETACH PARTITION commands\n> + * PartitionCmd - info for ALTER TABLE/INDEX\n> ATTACH/DETACH/MERGE/SPLIT PARTITION commands\n> */\n>\n> This change should be a separate patch since it makes sense\n> independently of your patch. Also, the comments for the \"name\"\n> variable in the same structure need to be updated.\n\nWill be split into a separate patch in the next version.\n\n> --\n>\n> +SELECT tablename, tablespace FROM pg_tables\n> + WHERE tablename IN ('t', 'tp_0_2') AND schemaname = 'partitions_merge_schema'\n> + ORDER BY tablename, tablespace;\n> + tablename | tablespace\n> +-----------+------------------\n> + t |\n> + tp_0_2 | regress_tblspace\n> +(2 rows)\n> +\n> +SELECT tablename, indexname, tablespace FROM pg_indexes\n> + WHERE tablename IN ('t', 'tp_0_2') AND schemaname = 'partitions_merge_schema'\n> + ORDER BY tablename, indexname, tablespace;\n> + tablename | indexname | tablespace\n> +-----------+-------------+------------\n> + t | t_pkey |\n> + tp_0_2 | tp_0_2_pkey |\n> +(2 rows)\n> +\n>\n> This seems problematic to me. 
The index should be in the same\n> tablespace as the table.\n\nI'm not sure about this, it seems to me that partition index will alway\ninherit the tablespaceId of its parent(see generateClonedIndexStmt),\ndo you think we should change the signature of this function?\n\nOne thing worth mentioning is that for UNIQUE/PRIMARY KEY,\nit allows setting *USING INDEX TABLESPACE tablespace_name*,\nI don't think we should change the index tablespace in this case,\nwhat do you think?\n\n> --\n>\n> Please add the commitfest[1] entry if not already done.\n>\n> 1] https://commitfest.postgresql.org/\n\nhttps://commitfest.postgresql.org/49/5157/\n\nI have added you as a reviewer, hope you don't mind.\n\n>\n> Regards,\n> Amul\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Mon, 5 Aug 2024 23:35:34 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Support specify tablespace for each merged/split partition" }, { "msg_contents": "On Mon, Aug 5, 2024 at 9:05 PM Junwang Zhao <[email protected]> wrote:\n>\n> Hi Amul,\n>\n> Thanks for your review.\n>\n> On Mon, Aug 5, 2024 at 8:38 PM Amul Sul <[email protected]> wrote:\n> >\n> > On Mon, Jul 15, 2024 at 11:19 AM Junwang Zhao <[email protected]> wrote:\n> > >\n> >[...]\n> > static Relation\n> > -createPartitionTable(RangeVar *newPartName, Relation modelRel,\n> > - AlterTableUtilityContext *context)\n> > +createPartitionTable(RangeVar *newPartName, char *tablespacename,\n> > +\n> >\n> > The comment should mention the tablespace setting in the same way it\n> > mentions the access method.\n>\n> I'm not good at wording, can you give some advice?\n\nMy suggestion is to rewrite the third paragraph as follows, but\nsomeone else might have a better version:\n---\n The new partitions will also be created in the same tablespace as the parent\n if not specified. Also, this function sets the new partition access method\n same as parent table access methods (similarly to CREATE TABLE ... PARTITION\n OF). It checks that parent and child tables have compatible persistence.\n---\n> >\n> > +SELECT tablename, tablespace FROM pg_tables\n> > + WHERE tablename IN ('t', 'tp_0_2') AND schemaname = 'partitions_merge_schema'\n> > + ORDER BY tablename, tablespace;\n> > + tablename | tablespace\n> > +-----------+------------------\n> > + t |\n> > + tp_0_2 | regress_tblspace\n> > +(2 rows)\n> > +\n> > +SELECT tablename, indexname, tablespace FROM pg_indexes\n> > + WHERE tablename IN ('t', 'tp_0_2') AND schemaname = 'partitions_merge_schema'\n> > + ORDER BY tablename, indexname, tablespace;\n> > + tablename | indexname | tablespace\n> > +-----------+-------------+------------\n> > + t | t_pkey |\n> > + tp_0_2 | tp_0_2_pkey |\n> > +(2 rows)\n> > +\n> >\n> > This seems problematic to me. 
The index should be in the same\n> > tablespace as the table.\n>\n> I'm not sure about this, it seems to me that partition index will alway\n> inherit the tablespaceId of its parent(see generateClonedIndexStmt),\n> do you think we should change the signature of this function?\n>\n> One thing worth mentioning is that for UNIQUE/PRIMARY KEY,\n> it allows setting *USING INDEX TABLESPACE tablespace_name*,\n> I don't think we should change the index tablespace in this case,\n> what do you think?\n>\n\nI think you are correct; my understanding is a bit hazy.\n\n>\n> I have added you as a reviewer, hope you don't mind.\n\nThank you.\n\nRegards,\nAmul\n\n\n", "msg_date": "Tue, 6 Aug 2024 15:04:12 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support specify tablespace for each merged/split partition" }, { "msg_contents": "On Tue, Aug 6, 2024 at 5:34 PM Amul Sul <[email protected]> wrote:\n>\n> On Mon, Aug 5, 2024 at 9:05 PM Junwang Zhao <[email protected]> wrote:\n> >\n> > Hi Amul,\n> >\n> > Thanks for your review.\n> >\n> > On Mon, Aug 5, 2024 at 8:38 PM Amul Sul <[email protected]> wrote:\n> > >\n> > > On Mon, Jul 15, 2024 at 11:19 AM Junwang Zhao <[email protected]> wrote:\n> > > >\n> > >[...]\n> > > static Relation\n> > > -createPartitionTable(RangeVar *newPartName, Relation modelRel,\n> > > - AlterTableUtilityContext *context)\n> > > +createPartitionTable(RangeVar *newPartName, char *tablespacename,\n> > > +\n> > >\n> > > The comment should mention the tablespace setting in the same way it\n> > > mentions the access method.\n> >\n> > I'm not good at wording, can you give some advice?\n>\n> My suggestion is to rewrite the third paragraph as follows, but\n> someone else might have a better version:\n> ---\n> The new partitions will also be created in the same tablespace as the parent\n> if not specified. Also, this function sets the new partition access method\n> same as parent table access methods (similarly to CREATE TABLE ... PARTITION\n> OF). It checks that parent and child tables have compatible persistence.\n> ---\n\nI changed to this with minor changes.\n\n> > >\n> > > +SELECT tablename, tablespace FROM pg_tables\n> > > + WHERE tablename IN ('t', 'tp_0_2') AND schemaname = 'partitions_merge_schema'\n> > > + ORDER BY tablename, tablespace;\n> > > + tablename | tablespace\n> > > +-----------+------------------\n> > > + t |\n> > > + tp_0_2 | regress_tblspace\n> > > +(2 rows)\n> > > +\n> > > +SELECT tablename, indexname, tablespace FROM pg_indexes\n> > > + WHERE tablename IN ('t', 'tp_0_2') AND schemaname = 'partitions_merge_schema'\n> > > + ORDER BY tablename, indexname, tablespace;\n> > > + tablename | indexname | tablespace\n> > > +-----------+-------------+------------\n> > > + t | t_pkey |\n> > > + tp_0_2 | tp_0_2_pkey |\n> > > +(2 rows)\n> > > +\n> > >\n> > > This seems problematic to me. 
The index should be in the same\n> > > tablespace as the table.\n> >\n> > I'm not sure about this, it seems to me that partition index will alway\n> > inherit the tablespaceId of its parent(see generateClonedIndexStmt),\n> > do you think we should change the signature of this function?\n> >\n> > One thing worth mentioning is that for UNIQUE/PRIMARY KEY,\n> > it allows setting *USING INDEX TABLESPACE tablespace_name*,\n> > I don't think we should change the index tablespace in this case,\n> > what do you think?\n> >\n>\n> I think you are correct; my understanding is a bit hazy.\n\nThanks for your confirmation.\n\nAttached v2 addressed all the problems you mentioned, thanks.\n\n>\n> >\n> > I have added you as a reviewer, hope you don't mind.\n>\n> Thank you.\n>\n> Regards,\n> Amul\n\n\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Tue, 6 Aug 2024 18:28:16 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Support specify tablespace for each merged/split partition" }, { "msg_contents": "\n\nOn 2024/08/06 19:28, Junwang Zhao wrote:\n> Attached v2 addressed all the problems you mentioned, thanks.\n\nThanks for updating the patches!\n\n\nIn the ALTER TABLE documentation, v1 patch updated the syntax, but the descriptions for MERGE and SPLIT should also be updated to explain the tablespace of new partitions.\n\n\n+\tchar\t *tablespacename; /* name of tablespace, or NULL for default */\n \tPartitionBoundSpec *bound;\t/* FOR VALUES, if attaching */\n\nThis is not the fault of v1 patch, but the comment \"if attaching\" seems incorrect.\n\n\nI also noticed the comment for SinglePartitionSpec refers to a different struct name, PartitionDesc. This should be corrected, and it might be better to include this fix in the v2 patch.\n\n\n- * PartitionCmd - info for ALTER TABLE/INDEX ATTACH/DETACH PARTITION commands\n+ * PartitionCmd - info for ALTER TABLE/INDEX ATTACH/DETACH/MERGE/SPLIT PARTITION commands\n\nHow about changing it to \"info for ALTER TABLE ATTACH/DETACH/MERGE/SPLIT and ALTER INDEX ATTACH commands\" for more precision?\n\n\n-\tRangeVar *name;\t\t\t/* name of partition to attach/detach */\n+\tRangeVar *name;\t\t\t/* name of partition to\n+\t\t\t\t\t\t\t\t * attach/detach/merge/split */\n\nIn the case of MERGE, it refers to the name of the partition that the command merges into. 
So, would \"merge into\" be more appropriate than just \"merge\" in this comment?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 7 Aug 2024 22:54:40 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Support specify tablespace for each merged/split partition" }, { "msg_contents": "Hi Fujii,\n\nThanks for your review.\n\nOn Wed, Aug 7, 2024 at 9:54 PM Fujii Masao <[email protected]> wrote:\n>\n>\n>\n> On 2024/08/06 19:28, Junwang Zhao wrote:\n> > Attached v2 addressed all the problems you mentioned, thanks.\n>\n> Thanks for updating the patches!\n>\n>\n> In the ALTER TABLE documentation, v1 patch updated the syntax, but the descriptions for MERGE and SPLIT should also be updated to explain the tablespace of new partitions.\n\nUpdated.\n\n>\n>\n> + char *tablespacename; /* name of tablespace, or NULL for default */\n> PartitionBoundSpec *bound; /* FOR VALUES, if attaching */\n>\n> This is not the fault of v1 patch, but the comment \"if attaching\" seems incorrect.\n\nI checked the gram.y, bound can be DEFAULT, so I think a simple comment like\n /* a partition bound specification */ may be more proper?\n\n>\n>\n> I also noticed the comment for SinglePartitionSpec refers to a different struct name, PartitionDesc. This should be corrected, and it might be better to include this fix in the v2 patch.\n\nFixed.\n\n>\n>\n> - * PartitionCmd - info for ALTER TABLE/INDEX ATTACH/DETACH PARTITION commands\n> + * PartitionCmd - info for ALTER TABLE/INDEX ATTACH/DETACH/MERGE/SPLIT PARTITION commands\n>\n> How about changing it to \"info for ALTER TABLE ATTACH/DETACH/MERGE/SPLIT and ALTER INDEX ATTACH commands\" for more precision?\n\nYeah, this is more precise, updated.\n\n>\n>\n> - RangeVar *name; /* name of partition to attach/detach */\n> + RangeVar *name; /* name of partition to\n> + * attach/detach/merge/split */\n>\n> In the case of MERGE, it refers to the name of the partition that the command merges into. So, would \"merge into\" be more appropriate than just \"merge\" in this comment?\n\nAgree, changing *merge* to *merge into* directly will unhappy pgindent,\nso I use comma instead of slash instead.\n\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n\n\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Thu, 8 Aug 2024 09:47:20 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Support specify tablespace for each merged/split partition" } ]
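For readers following the index-tablespace question in this thread, here is a consolidated sketch of the example and regression-test output posted above. The TABLESPACE clause on MERGE PARTITIONS is the syntax proposed by the patch under discussion, not released behaviour, and the expected placement is the one shown in the posted test output: the merged partition follows the TABLESPACE clause, while its cloned primary-key index keeps the tablespace of the parent index (per the generateClonedIndexStmt discussion). If the primary key had instead been declared with USING INDEX TABLESPACE, the cloned index would keep that tablespace, which is the case Junwang argues should not be overridden.

-- Sketch only: relies on the patch proposed in this thread.
CREATE TABLESPACE tblspc LOCATION '/tmp/tblspc';
CREATE TABLE t (i int PRIMARY KEY) PARTITION BY RANGE (i);
CREATE TABLE tp_0_1 PARTITION OF t FOR VALUES FROM (0) TO (1);
CREATE TABLE tp_1_2 PARTITION OF t FOR VALUES FROM (1) TO (2);

-- Proposed syntax: place the merged partition explicitly.
ALTER TABLE t MERGE PARTITIONS (tp_0_1, tp_1_2) INTO tp_0_2 TABLESPACE tblspc;

-- Per the posted regression output, tp_0_2 lands in tblspc while tp_0_2_pkey
-- stays with the parent index's (default) tablespace:
SELECT tablename, tablespace FROM pg_tables  WHERE tablename IN ('t', 'tp_0_2');
SELECT indexname, tablespace FROM pg_indexes WHERE tablename IN ('t', 'tp_0_2');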
[ { "msg_contents": "Hi\n\nI have a large database (multi TB) which had a vacuum full running but \nthe database ran out of space during the rebuild of one of the large \ndata tables.\n\nCleaning down the WAL files got the database restarted (an archiving \nproblem led to the initial disk full).\n\nHowever, the disk space is still at 99% as it appears the large table \nrebuild files are still hanging around using space and have not been \ndeleted.\n\nMy problem now is how do I get this space back to return my free space \nback to where it should be?\n\nI tried some scripts to map the data files to relations but this didn't \nwork as removing some files led to startup failure despite them \nappearing to be unrelated to anything in the database - I had to put \nthem back and then startup worked.\n\nAny suggestions here?\n\nThanks\n\nTom\n\n\n\n\n\n\n\nHi\nI have a large database (multi TB) which had a vacuum full\n running but the database ran out of space during the rebuild of\n one of the large data tables.\nCleaning down the WAL files got the database restarted (an\n archiving problem led to the initial disk full).\nHowever, the disk space is still at 99% as it appears the large\n table rebuild files are still hanging around using space and have\n not been deleted.\nMy problem now is how do I get this space back to return my free\n space back to where it should be?\nI tried some scripts to map the data files to relations but this\n didn't work as removing some files led to startup failure despite\n them appearing to be unrelated to anything in the database - I had\n to put them back and then startup worked.\nAny suggestions here?\nThanks\nTom", "msg_date": "Mon, 15 Jul 2024 14:47:28 -0400", "msg_from": "Thomas Simpson <[email protected]>", "msg_from_op": true, "msg_subject": "filesystem full during vacuum - space recovery issues" }, { "msg_contents": "On Mon, 2024-07-15 at 14:47 -0400, Thomas Simpson wrote:\n> I have a large database (multi TB) which had a vacuum full running but the database\n> ran out of space during the rebuild of one of the large data tables.\n> \n> Cleaning down the WAL files got the database restarted (an archiving problem led to\n> the initial disk full).\n> \n> However, the disk space is still at 99% as it appears the large table rebuild files\n> are still hanging around using space and have not been deleted.\n> \n> My problem now is how do I get this space back to return my free space back to where\n> it should be?\n> \n> I tried some scripts to map the data files to relations but this didn't work as\n> removing some files led to startup failure despite them appearing to be unrelated\n> to anything in the database - I had to put them back and then startup worked.\n> \n> Any suggestions here?\n\nThat reads like the sad old story: \"cleaning down\" WAL files - you mean deleting the\nvery files that would have enabled PostgreSQL to recover from the crash that was\ncaused by the full file system.\n\nDid you run \"pg_resetwal\"? If yes, that probably led to data corruption.\n\nThe above are just guesses. Anyway, there is no good way to get rid of the files\nthat were left behind after the crash. The reliable way of doing so is also the way\nto get rid of potential data corruption caused by \"cleaning down\" the database:\npg_dump the whole thing and restore the dump to a new, clean cluster.\n\nYes, that will be a painfully long down time. 
An alternative is to restore a backup\ntaken before the crash.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 16 Jul 2024 02:58:43 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: filesystem full during vacuum - space recovery issues" }, { "msg_contents": "Also, you can use multi process dump and restore using pg_dump plus pigz\nutility for zipping.\n\nThanks\n\nOn Tue, Jul 16, 2024, 4:00 AM Laurenz Albe <[email protected]> wrote:\n\n> On Mon, 2024-07-15 at 14:47 -0400, Thomas Simpson wrote:\n> > I have a large database (multi TB) which had a vacuum full running but\n> the database\n> > ran out of space during the rebuild of one of the large data tables.\n> >\n> > Cleaning down the WAL files got the database restarted (an archiving\n> problem led to\n> > the initial disk full).\n> >\n> > However, the disk space is still at 99% as it appears the large table\n> rebuild files\n> > are still hanging around using space and have not been deleted.\n> >\n> > My problem now is how do I get this space back to return my free space\n> back to where\n> > it should be?\n> >\n> > I tried some scripts to map the data files to relations but this didn't\n> work as\n> > removing some files led to startup failure despite them appearing to be\n> unrelated\n> > to anything in the database - I had to put them back and then startup\n> worked.\n> >\n> > Any suggestions here?\n>\n> That reads like the sad old story: \"cleaning down\" WAL files - you mean\n> deleting the\n> very files that would have enabled PostgreSQL to recover from the crash\n> that was\n> caused by the full file system.\n>\n> Did you run \"pg_resetwal\"? If yes, that probably led to data corruption.\n>\n> The above are just guesses. Anyway, there is no good way to get rid of\n> the files\n> that were left behind after the crash. The reliable way of doing so is\n> also the way\n> to get rid of potential data corruption caused by \"cleaning down\" the\n> database:\n> pg_dump the whole thing and restore the dump to a new, clean cluster.\n>\n> Yes, that will be a painfully long down time. An alternative is to\n> restore a backup\n> taken before the crash.\n>\n> Yours,\n> Laurenz Albe\n>\n>\n>\n\nAlso, you can use multi process dump and restore using pg_dump plus pigz utility for zipping.ThanksOn Tue, Jul 16, 2024, 4:00 AM Laurenz Albe <[email protected]> wrote:On Mon, 2024-07-15 at 14:47 -0400, Thomas Simpson wrote:\n> I have a large database (multi TB) which had a vacuum full running but the database\n> ran out of space during the rebuild of one of the large data tables.\n> \n> Cleaning down the WAL files got the database restarted (an archiving problem led to\n> the initial disk full).\n> \n> However, the disk space is still at 99% as it appears the large table rebuild files\n> are still hanging around using space and have not been deleted.\n> \n> My problem now is how do I get this space back to return my free space back to where\n> it should be?\n> \n> I tried some scripts to map the data files to relations but this didn't work as\n> removing some files led to startup failure despite them appearing to be unrelated\n> to anything in the database - I had to put them back and then startup worked.\n> \n> Any suggestions here?\n\nThat reads like the sad old story: \"cleaning down\" WAL files - you mean deleting the\nvery files that would have enabled PostgreSQL to recover from the crash that was\ncaused by the full file system.\n\nDid you run \"pg_resetwal\"?  
If yes, that probably led to data corruption.\n\nThe above are just guesses.  Anyway, there is no good way to get rid of the files\nthat were left behind after the crash.  The reliable way of doing so is also the way\nto get rid of potential data corruption caused by \"cleaning down\" the database:\npg_dump the whole thing and restore the dump to a new, clean cluster.\n\nYes, that will be a painfully long down time.  An alternative is to restore a backup\ntaken before the crash.\n\nYours,\nLaurenz Albe", "msg_date": "Tue, 16 Jul 2024 04:14:25 +0300", "msg_from": "Imran Khan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: filesystem full during vacuum - space recovery issues" }, { "msg_contents": "Thanks Laurenz & Imran for your comments.\n\nMy responses inline below.\n\nThanks\n\nTom\n\n\nOn 15-Jul-2024 20:58, Laurenz Albe wrote:\n> On Mon, 2024-07-15 at 14:47 -0400, Thomas Simpson wrote:\n>> I have a large database (multi TB) which had a vacuum full running but the database\n>> ran out of space during the rebuild of one of the large data tables.\n>>\n>> Cleaning down the WAL files got the database restarted (an archiving problem led to\n>> the initial disk full).\n>>\n>> However, the disk space is still at 99% as it appears the large table rebuild files\n>> are still hanging around using space and have not been deleted.\n>>\n>> My problem now is how do I get this space back to return my free space back to where\n>> it should be?\n>>\n>> I tried some scripts to map the data files to relations but this didn't work as\n>> removing some files led to startup failure despite them appearing to be unrelated\n>> to anything in the database - I had to put them back and then startup worked.\n>>\n>> Any suggestions here?\n> That reads like the sad old story: \"cleaning down\" WAL files - you mean deleting the\n> very files that would have enabled PostgreSQL to recover from the crash that was\n> caused by the full file system.\n>\n> Did you run \"pg_resetwal\"? If yes, that probably led to data corruption.\n\nNo, I just removed the excess already archived WALs to get space and \nrestarted.  The vacuum full that was running had created files for the \nlarge table it was processing and these are still hanging around eating \nspace without doing anything useful.  The shutdown prevented the \nrollback cleanly removing them which seems to be the core problem.\n\n> The above are just guesses. Anyway, there is no good way to get rid of the files\n> that were left behind after the crash. The reliable way of doing so is also the way\n> to get rid of potential data corruption caused by \"cleaning down\" the database:\n> pg_dump the whole thing and restore the dump to a new, clean cluster.\n>\n> Yes, that will be a painfully long down time. An alternative is to restore a backup\n> taken before the crash.\n\nMy issue now is the dump & reload is taking a huge time; I know the \nhardware is capable of multi-GB/s throughput but the reload is taking a \nlong time - projected to be about 10 days to reload at the current rate \n(about 30Mb/sec).  The old server and new server have a 10G link between \nthem and storage is SSD backed, so the hardware is capable of much much \nmore than it is doing now.\n\nIs there a way to improve the reload performance?  
Tuning of any type - \neven if I need to undo it later once the reload is done.\n\nMy backups were in progress when all the issues happened, so they're not \nsuch a good starting point and I'd actually prefer the clean reload \nsince this DB has been through multiple upgrades (without reloads) until \nnow so I know it's not especially clean. The size has always prevented \nthe full reload before but the database is relatively low traffic now so \nI can afford some time to reload, but ideally not 10 days.\n\n> Yours,\n> Laurenz Albe\n\n\n\n\n\n\nThanks Laurenz & Imran for your comments.\nMy responses inline below.\nThanks\nTom\n\n\nOn 15-Jul-2024 20:58, Laurenz Albe\n wrote:\n\n\nOn Mon, 2024-07-15 at 14:47 -0400, Thomas Simpson wrote:\n\n\nI have a large database (multi TB) which had a vacuum full running but the database\nran out of space during the rebuild of one of the large data tables.\n\nCleaning down the WAL files got the database restarted (an archiving problem led to\nthe initial disk full).\n\nHowever, the disk space is still at 99% as it appears the large table rebuild files\nare still hanging around using space and have not been deleted.\n\nMy problem now is how do I get this space back to return my free space back to where\nit should be?\n\nI tried some scripts to map the data files to relations but this didn't work as\nremoving some files led to startup failure despite them appearing to be unrelated\nto anything in the database - I had to put them back and then startup worked.\n\nAny suggestions here?\n\n\n\nThat reads like the sad old story: \"cleaning down\" WAL files - you mean deleting the\nvery files that would have enabled PostgreSQL to recover from the crash that was\ncaused by the full file system.\n\nDid you run \"pg_resetwal\"? If yes, that probably led to data corruption.\n\n\nNo, I just removed the excess already archived WALs to get space\n and restarted.  The vacuum full that was running had created files\n for the large table it was processing and these are still hanging\n around eating space without doing anything useful.  The shutdown\n prevented the rollback cleanly removing them which seems to be the\n core problem.\n\n\n\nThe above are just guesses. Anyway, there is no good way to get rid of the files\nthat were left behind after the crash. The reliable way of doing so is also the way\nto get rid of potential data corruption caused by \"cleaning down\" the database:\npg_dump the whole thing and restore the dump to a new, clean cluster.\n\nYes, that will be a painfully long down time. An alternative is to restore a backup\ntaken before the crash.\n\n\nMy issue now is the dump & reload is taking a huge time; I\n know the hardware is capable of multi-GB/s throughput but the\n reload is taking a long time - projected to be about 10 days to\n reload at the current rate (about 30Mb/sec).  The old server and\n new server have a 10G link between them and storage is SSD backed,\n so the hardware is capable of much much more than it is doing now.\n\nIs there a way to improve the reload performance?  Tuning of any\n type - even if I need to undo it later once the reload is done.\nMy backups were in progress when all the issues happened, so\n they're not such a good starting point and I'd actually prefer the\n clean reload since this DB has been through multiple upgrades\n (without reloads) until now so I know it's not especially clean. 
\n The size has always prevented the full reload before but the\n database is relatively low traffic now so I can afford some time\n to reload, but ideally not 10 days.\n\n\n\nYours,\nLaurenz Albe", "msg_date": "Wed, 17 Jul 2024 09:24:43 -0400", "msg_from": "Thomas Simpson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: filesystem full during vacuum - space recovery issues" }, { "msg_contents": "On Wed, Jul 17, 2024 at 9:26 AM Thomas Simpson <[email protected]> wrote:\n[snip]\n\n> uge time; I know the hardware is capable of multi-GB/s throughput but the\n> reload is taking a long time - projected to be about 10 days to reload at\n> the current rate (about 30Mb/sec). The old server and new server have a\n> 10G link between them and storage is SSD backed, so the hardware is capable\n> of much much more than it is doing now.\n>\n> Is there a way to improve the reload performance? Tuning of any type -\n> even if I need to undo it later once the reload is done.\n>\nThat would, of course, depend on what you're currently doing. pg_dumpall\nof a Big Database is certainly suboptimal compared to \"pg_dump -Fd\n--jobs=24\".\n\nThis is what I run (which I got mostly from a databasesoup.com blog post)\non the target instance before doing \"pg_restore -Fd --jobs=24\":\ndeclare -i CheckPoint=30\ndeclare -i SharedBuffs=32\ndeclare -i MaintMem=3\ndeclare -i MaxWalSize=36\ndeclare -i WalBuffs=64\npg_ctl restart -wt$TimeOut -mfast \\\n -o \"-c hba_file=$PGDATA/pg_hba_maintmode.conf\" \\\n -o \"-c fsync=off\" \\\n -o \"-c log_statement=none\" \\\n -o \"-c log_temp_files=100kB\" \\\n -o \"-c log_checkpoints=on\" \\\n -o \"-c log_min_duration_statement=120000\" \\\n -o \"-c shared_buffers=${SharedBuffs}GB\" \\\n -o \"-c maintenance_work_mem=${MaintMem}GB\" \\\n -o \"-c synchronous_commit=off\" \\\n -o \"-c archive_mode=off\" \\\n -o \"-c full_page_writes=off\" \\\n -o \"-c checkpoint_timeout=${CheckPoint}min\" \\\n -o \"-c max_wal_size=${MaxWalSize}GB\" \\\n -o \"-c wal_level=minimal\" \\\n -o \"-c max_wal_senders=0\" \\\n -o \"-c wal_buffers=${WalBuffs}MB\" \\\n -o \"-c autovacuum=off\"\n\nAfter the pg_restore -Fd --jobs=24 and vacuumdb --analyze-only --jobs=24:\npg_ctl stop -wt$TimeOut && pg_ctl start -wt$TimeOut\n\nOf course, these parameter values were for *my* hardware.\n\n> My backups were in progress when all the issues happened, so they're not\n> such a good starting point and I'd actually prefer the clean reload since\n> this DB has been through multiple upgrades (without reloads) until now so I\n> know it's not especially clean. The size has always prevented the full\n> reload before but the database is relatively low traffic now so I can\n> afford some time to reload, but ideally not 10 days.\n>\n> Yours,\n> Laurenz Albe\n>\n>\n\nOn Wed, Jul 17, 2024 at 9:26 AM Thomas Simpson <[email protected]> wrote:[snip] uge time; I\n know the hardware is capable of multi-GB/s throughput but the\n reload is taking a long time - projected to be about 10 days to\n reload at the current rate (about 30Mb/sec).  The old server and\n new server have a 10G link between them and storage is SSD backed,\n so the hardware is capable of much much more than it is doing now.\n\nIs there a way to improve the reload performance?  Tuning of any\n type - even if I need to undo it later once the reload is done.That would, of course, depend on what you're currently doing.  
pg_dumpall of a Big Database is certainly suboptimal compared to \"pg_dump -Fd --jobs=24\".This is what I run (which I got mostly from a databasesoup.com blog post) on the target instance before doing \"pg_restore -Fd --jobs=24\":declare -i CheckPoint=30declare -i SharedBuffs=32declare -i MaintMem=3declare -i MaxWalSize=36declare -i WalBuffs=64pg_ctl restart -wt$TimeOut -mfast \\        -o \"-c hba_file=$PGDATA/pg_hba_maintmode.conf\" \\        -o \"-c fsync=off\" \\        -o \"-c log_statement=none\" \\        -o \"-c log_temp_files=100kB\" \\        -o \"-c log_checkpoints=on\" \\        -o \"-c log_min_duration_statement=120000\" \\        -o \"-c shared_buffers=${SharedBuffs}GB\" \\        -o \"-c maintenance_work_mem=${MaintMem}GB\" \\        -o \"-c synchronous_commit=off\" \\        -o \"-c archive_mode=off\" \\        -o \"-c full_page_writes=off\" \\        -o \"-c checkpoint_timeout=${CheckPoint}min\" \\        -o \"-c max_wal_size=${MaxWalSize}GB\" \\        -o \"-c wal_level=minimal\" \\        -o \"-c max_wal_senders=0\" \\        -o \"-c wal_buffers=${WalBuffs}MB\" \\        -o \"-c autovacuum=off\" After the pg_restore -Fd --jobs=24 and vacuumdb --analyze-only --jobs=24:pg_ctl stop -wt$TimeOut && pg_ctl start -wt$TimeOutOf course, these parameter values were for my hardware.My backups were in progress when all the issues happened, so\n they're not such a good starting point and I'd actually prefer the\n clean reload since this DB has been through multiple upgrades\n (without reloads) until now so I know it's not especially clean. \n The size has always prevented the full reload before but the\n database is relatively low traffic now so I can afford some time\n to reload, but ideally not 10 days.\n\n\nYours,\nLaurenz Albe", "msg_date": "Wed, 17 Jul 2024 09:49:26 -0400", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: filesystem full during vacuum - space recovery issues" }, { "msg_contents": "Thanks Ron for the suggestions - I applied some of the settings which \nhelped throughput a little bit but were not an ideal solution for me - \nlet me explain.\n\nDue to the size, I do not have the option to use the directory mode (or \nanything that uses disk space) for dump as that creates multiple \ndirectories (hence why it can do multiple jobs).  I do not have the \nseveral hundred TB of space to hold the output and there is no practical \nway to get it, especially for a transient reload.\n\nI have my original server plus my replica; as the replica also applied \nthe WALs, it too filled up and went down.  
I've basically recreated this \nas a primary server and am using a pipeline to dump from the original \ninto this as I know that has enough space for the final loaded database \nand should have space left over from the clean rebuild (whereas the \noriginal server still has space exhausted due to the leftover files).\n\nIncidentally, this state is also why going to a backup is not helpful \neither as the restore and then re-apply the WALs would just end up \nfilling the disk and recreating the original problem.\n\nEven with the improved throughput, current calculations are pointing to \nalmost 30 days to recreate the database through dump and reload which is \na pretty horrible state to be in.\n\nI think this is perhaps an area of improvement - especially as larger \nPostgreSQL databases become more common, I'm not the only person who \ncould face this issue.\n\nPerhaps an additional dumpall mode that generates multiple output pipes \n(I'm piping via netcat to the other server) - it would need to combine \nwith a multiple listening streams too and some degree of \nordering/feedback to get to the essentially serialized output from the \ncurrent dumpall.  But this feels like PostgreSQL expert developer territory.\n\nThanks\n\nTom\n\n\nOn 17-Jul-2024 09:49, Ron Johnson wrote:\n> On Wed, Jul 17, 2024 at 9:26 AM Thomas Simpson <[email protected]> wrote:\n---8<--snip,snip---8<---\n> That would, of course, depend on what you're currently doing.  \n> pg_dumpall of a Big Database is certainly suboptimal compared to \n> \"pg_dump -Fd --jobs=24\".\n>\n> This is what I run (which I got mostly from a databasesoup.com \n> <http://databasesoup.com> blog post) on the target instance before \n> doing \"pg_restore -Fd --jobs=24\":\n> declare -i CheckPoint=30\n> declare -i SharedBuffs=32\n> declare -i MaintMem=3\n> declare -i MaxWalSize=36\n> declare -i WalBuffs=64\n> pg_ctl restart -wt$TimeOut -mfast \\\n>         -o \"-c hba_file=$PGDATA/pg_hba_maintmode.conf\" \\\n>         -o \"-c fsync=off\" \\\n>         -o \"-c log_statement=none\" \\\n>         -o \"-c log_temp_files=100kB\" \\\n>         -o \"-c log_checkpoints=on\" \\\n>         -o \"-c log_min_duration_statement=120000\" \\\n>         -o \"-c shared_buffers=${SharedBuffs}GB\" \\\n>         -o \"-c maintenance_work_mem=${MaintMem}GB\" \\\n>         -o \"-c synchronous_commit=off\" \\\n>         -o \"-c archive_mode=off\" \\\n>         -o \"-c full_page_writes=off\" \\\n>         -o \"-c checkpoint_timeout=${CheckPoint}min\" \\\n>         -o \"-c max_wal_size=${MaxWalSize}GB\" \\\n>         -o \"-c wal_level=minimal\" \\\n>         -o \"-c max_wal_senders=0\" \\\n>         -o \"-c wal_buffers=${WalBuffs}MB\" \\\n>         -o \"-c autovacuum=off\"\n>\n> After the pg_restore -Fd --jobs=24 and vacuumdb --analyze-only --jobs=24:\n> pg_ctl stop -wt$TimeOut && pg_ctl start -wt$TimeOut\n>\n> Of course, these parameter values were for *my* hardware.\n>\n> My backups were in progress when all the issues happened, so\n> they're not such a good starting point and I'd actually prefer the\n> clean reload since this DB has been through multiple upgrades\n> (without reloads) until now so I know it's not especially clean. 
\n> The size has always prevented the full reload before but the\n> database is relatively low traffic now so I can afford some time\n> to reload, but ideally not 10 days.\n>\n>> Yours,\n>> Laurenz Albe\n>\n\n\n\n\n\n\nThanks Ron for the suggestions - I applied some of the settings\n which helped throughput a little bit but were not an ideal\n solution for me - let me explain.\nDue to the size, I do not have the option to use the directory\n mode (or anything that uses disk space) for dump as that creates\n multiple directories (hence why it can do multiple jobs).  I do\n not have the several hundred TB of space to hold the output and\n there is no practical way to get it, especially for a transient\n reload.\nI have my original server plus my replica; as the replica also\n applied the WALs, it too filled up and went down.  I've basically\n recreated this as a primary server and am using a pipeline to dump\n from the original into this as I know that has enough space for\n the final loaded database and should have space left over from the\n clean rebuild (whereas the original server still has space\n exhausted due to the leftover files).\nIncidentally, this state is also why going to a backup is not\n helpful either as the restore and then re-apply the WALs would\n just end up filling the disk and recreating the original problem.\nEven with the improved throughput, current calculations are\n pointing to almost 30 days to recreate the database through dump\n and reload which is a pretty horrible state to be in.\nI think this is perhaps an area of improvement - especially as\n larger PostgreSQL databases become more common, I'm not the only\n person who could face this issue.\nPerhaps an additional dumpall mode that generates multiple output\n pipes (I'm piping via netcat to the other server) - it would need\n to combine with a multiple listening streams too and some degree\n of ordering/feedback to get to the essentially serialized output\n from the current dumpall.  But this feels like PostgreSQL expert\n developer territory.\nThanks\nTom\n\n\nOn 17-Jul-2024 09:49, Ron Johnson\n wrote:\n\n\n\n\nOn Wed, Jul 17, 2024 at 9:26 AM Thomas Simpson\n <[email protected]>\n wrote:\n\n\n\n ---8<--snip,snip---8<---\n\n\nThat would, of course, depend on what\n you're currently doing.  
pg_dumpall of a Big Database is\n certainly suboptimal compared to \"pg_dump -Fd --jobs=24\".\n \n\nThis is what I run (which I got mostly from a databasesoup.com\n blog post) on the target instance before doing \"pg_restore\n -Fd --jobs=24\":\ndeclare -i CheckPoint=30\n declare -i SharedBuffs=32\n declare -i MaintMem=3\n declare -i MaxWalSize=36\n declare -i WalBuffs=64\n\npg_ctl restart -wt$TimeOut -mfast\n \\\n         -o \"-c hba_file=$PGDATA/pg_hba_maintmode.conf\" \\\n         -o \"-c fsync=off\" \\\n         -o \"-c log_statement=none\" \\\n         -o \"-c log_temp_files=100kB\" \\\n         -o \"-c log_checkpoints=on\" \\\n         -o \"-c log_min_duration_statement=120000\" \\\n         -o \"-c shared_buffers=${SharedBuffs}GB\" \\\n         -o \"-c maintenance_work_mem=${MaintMem}GB\" \\\n         -o \"-c synchronous_commit=off\" \\\n         -o \"-c archive_mode=off\" \\\n         -o \"-c full_page_writes=off\" \\\n         -o \"-c checkpoint_timeout=${CheckPoint}min\" \\\n         -o \"-c max_wal_size=${MaxWalSize}GB\" \\\n         -o \"-c wal_level=minimal\" \\\n         -o \"-c max_wal_senders=0\" \\\n         -o \"-c wal_buffers=${WalBuffs}MB\" \\\n         -o \"-c autovacuum=off\" \n\n\n\nAfter the pg_restore -Fd\n --jobs=24 and vacuumdb\n --analyze-only --jobs=24:\npg_ctl stop -wt$TimeOut &&\n pg_ctl start -wt$TimeOut\n\n\n\nOf course, these parameter values were for my hardware.\n\n\nMy backups were in progress when all the issues\n happened, so they're not such a good starting point and\n I'd actually prefer the clean reload since this DB has\n been through multiple upgrades (without reloads) until\n now so I know it's not especially clean.  The size has\n always prevented the full reload before but the database\n is relatively low traffic now so I can afford some time\n to reload, but ideally not 10 days.\n\n\nYours,\nLaurenz Albe", "msg_date": "Thu, 18 Jul 2024 09:54:07 -0400", "msg_from": "Thomas Simpson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: filesystem full during vacuum - space recovery issues" }, { "msg_contents": "There's no free lunch, and you can't squeeze blood from a turnip.\n\nSingle-threading will *ALWAYS* be slow: if you want speed, temporarily\nthrow more hardware at it: specifically another disk (and possibly more RAM\nand CPU).\n\nOn Thu, Jul 18, 2024 at 9:55 AM Thomas Simpson <[email protected]> wrote:\n\n> Thanks Ron for the suggestions - I applied some of the settings which\n> helped throughput a little bit but were not an ideal solution for me - let\n> me explain.\n>\n> Due to the size, I do not have the option to use the directory mode (or\n> anything that uses disk space) for dump as that creates multiple\n> directories (hence why it can do multiple jobs). I do not have the several\n> hundred TB of space to hold the output and there is no practical way to get\n> it, especially for a transient reload.\n>\n> I have my original server plus my replica; as the replica also applied the\n> WALs, it too filled up and went down. 
I've basically recreated this as a\n> primary server and am using a pipeline to dump from the original into this\n> as I know that has enough space for the final loaded database and should\n> have space left over from the clean rebuild (whereas the original server\n> still has space exhausted due to the leftover files).\n>\n> Incidentally, this state is also why going to a backup is not helpful\n> either as the restore and then re-apply the WALs would just end up filling\n> the disk and recreating the original problem.\n>\n> Even with the improved throughput, current calculations are pointing to\n> almost 30 days to recreate the database through dump and reload which is a\n> pretty horrible state to be in.\n>\n> I think this is perhaps an area of improvement - especially as larger\n> PostgreSQL databases become more common, I'm not the only person who could\n> face this issue.\n>\n> Perhaps an additional dumpall mode that generates multiple output pipes\n> (I'm piping via netcat to the other server) - it would need to combine with\n> a multiple listening streams too and some degree of ordering/feedback to\n> get to the essentially serialized output from the current dumpall. But\n> this feels like PostgreSQL expert developer territory.\n>\n> Thanks\n>\n> Tom\n>\n>\n> On 17-Jul-2024 09:49, Ron Johnson wrote:\n>\n> On Wed, Jul 17, 2024 at 9:26 AM Thomas Simpson <[email protected]> wrote:\n>\n> ---8<--snip,snip---8<---\n>\n> That would, of course, depend on what you're currently doing. pg_dumpall\n> of a Big Database is certainly suboptimal compared to \"pg_dump -Fd\n> --jobs=24\".\n>\n> This is what I run (which I got mostly from a databasesoup.com blog post)\n> on the target instance before doing \"pg_restore -Fd --jobs=24\":\n> declare -i CheckPoint=30\n> declare -i SharedBuffs=32\n> declare -i MaintMem=3\n> declare -i MaxWalSize=36\n> declare -i WalBuffs=64\n> pg_ctl restart -wt$TimeOut -mfast \\\n> -o \"-c hba_file=$PGDATA/pg_hba_maintmode.conf\" \\\n> -o \"-c fsync=off\" \\\n> -o \"-c log_statement=none\" \\\n> -o \"-c log_temp_files=100kB\" \\\n> -o \"-c log_checkpoints=on\" \\\n> -o \"-c log_min_duration_statement=120000\" \\\n> -o \"-c shared_buffers=${SharedBuffs}GB\" \\\n> -o \"-c maintenance_work_mem=${MaintMem}GB\" \\\n> -o \"-c synchronous_commit=off\" \\\n> -o \"-c archive_mode=off\" \\\n> -o \"-c full_page_writes=off\" \\\n> -o \"-c checkpoint_timeout=${CheckPoint}min\" \\\n> -o \"-c max_wal_size=${MaxWalSize}GB\" \\\n> -o \"-c wal_level=minimal\" \\\n> -o \"-c max_wal_senders=0\" \\\n> -o \"-c wal_buffers=${WalBuffs}MB\" \\\n> -o \"-c autovacuum=off\"\n>\n> After the pg_restore -Fd --jobs=24 and vacuumdb --analyze-only --jobs=24:\n> pg_ctl stop -wt$TimeOut && pg_ctl start -wt$TimeOut\n>\n> Of course, these parameter values were for *my* hardware.\n>\n>> My backups were in progress when all the issues happened, so they're not\n>> such a good starting point and I'd actually prefer the clean reload since\n>> this DB has been through multiple upgrades (without reloads) until now so I\n>> know it's not especially clean. 
The size has always prevented the full\n>> reload before but the database is relatively low traffic now so I can\n>> afford some time to reload, but ideally not 10 days.\n>>\n>> Yours,\n>> Laurenz Albe\n>>\n>>\n\nThere's no free lunch, and you can't squeeze blood from a turnip.Single-threading will ALWAYS be slow: if you want speed, temporarily throw more hardware at it: specifically another disk (and possibly more RAM and CPU).On Thu, Jul 18, 2024 at 9:55 AM Thomas Simpson <[email protected]> wrote:\n\nThanks Ron for the suggestions - I applied some of the settings\n which helped throughput a little bit but were not an ideal\n solution for me - let me explain.\nDue to the size, I do not have the option to use the directory\n mode (or anything that uses disk space) for dump as that creates\n multiple directories (hence why it can do multiple jobs).  I do\n not have the several hundred TB of space to hold the output and\n there is no practical way to get it, especially for a transient\n reload.\nI have my original server plus my replica; as the replica also\n applied the WALs, it too filled up and went down.  I've basically\n recreated this as a primary server and am using a pipeline to dump\n from the original into this as I know that has enough space for\n the final loaded database and should have space left over from the\n clean rebuild (whereas the original server still has space\n exhausted due to the leftover files).\nIncidentally, this state is also why going to a backup is not\n helpful either as the restore and then re-apply the WALs would\n just end up filling the disk and recreating the original problem.\nEven with the improved throughput, current calculations are\n pointing to almost 30 days to recreate the database through dump\n and reload which is a pretty horrible state to be in.\nI think this is perhaps an area of improvement - especially as\n larger PostgreSQL databases become more common, I'm not the only\n person who could face this issue.\nPerhaps an additional dumpall mode that generates multiple output\n pipes (I'm piping via netcat to the other server) - it would need\n to combine with a multiple listening streams too and some degree\n of ordering/feedback to get to the essentially serialized output\n from the current dumpall.  But this feels like PostgreSQL expert\n developer territory.\nThanks\nTom\n\n\nOn 17-Jul-2024 09:49, Ron Johnson\n wrote:\n\n\n\nOn Wed, Jul 17, 2024 at 9:26 AM Thomas Simpson\n <[email protected]>\n wrote:\n\n\n\n ---8<--snip,snip---8<---\n\n\nThat would, of course, depend on what\n you're currently doing.  
pg_dumpall of a Big Database is\n certainly suboptimal compared to \"pg_dump -Fd --jobs=24\".\n \n\nThis is what I run (which I got mostly from a databasesoup.com\n blog post) on the target instance before doing \"pg_restore\n -Fd --jobs=24\":\ndeclare -i CheckPoint=30\n declare -i SharedBuffs=32\n declare -i MaintMem=3\n declare -i MaxWalSize=36\n declare -i WalBuffs=64\n\npg_ctl restart -wt$TimeOut -mfast\n \\\n         -o \"-c hba_file=$PGDATA/pg_hba_maintmode.conf\" \\\n         -o \"-c fsync=off\" \\\n         -o \"-c log_statement=none\" \\\n         -o \"-c log_temp_files=100kB\" \\\n         -o \"-c log_checkpoints=on\" \\\n         -o \"-c log_min_duration_statement=120000\" \\\n         -o \"-c shared_buffers=${SharedBuffs}GB\" \\\n         -o \"-c maintenance_work_mem=${MaintMem}GB\" \\\n         -o \"-c synchronous_commit=off\" \\\n         -o \"-c archive_mode=off\" \\\n         -o \"-c full_page_writes=off\" \\\n         -o \"-c checkpoint_timeout=${CheckPoint}min\" \\\n         -o \"-c max_wal_size=${MaxWalSize}GB\" \\\n         -o \"-c wal_level=minimal\" \\\n         -o \"-c max_wal_senders=0\" \\\n         -o \"-c wal_buffers=${WalBuffs}MB\" \\\n         -o \"-c autovacuum=off\" \n\n\n\nAfter the pg_restore -Fd\n --jobs=24 and vacuumdb\n --analyze-only --jobs=24:\npg_ctl stop -wt$TimeOut &&\n pg_ctl start -wt$TimeOut\n\n\n\nOf course, these parameter values were for my hardware.\n\n\nMy backups were in progress when all the issues\n happened, so they're not such a good starting point and\n I'd actually prefer the clean reload since this DB has\n been through multiple upgrades (without reloads) until\n now so I know it's not especially clean.  The size has\n always prevented the full reload before but the database\n is relatively low traffic now so I can afford some time\n to reload, but ideally not 10 days.\n\n\nYours,\nLaurenz Albe", "msg_date": "Thu, 18 Jul 2024 11:16:05 -0400", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: filesystem full during vacuum - space recovery issues" }, { "msg_contents": "On 15/07/2024 19:47, Thomas Simpson wrote:\n>\n> My problem now is how do I get this space back to return my free space \n> back to where it should be?\n>\n> I tried some scripts to map the data files to relations but this \n> didn't work as removing some files led to startup failure despite them \n> appearing to be unrelated to anything in the database - I had to put \n> them back and then startup worked.\n>\nI don't know what you tried to do\n\nWhat would normally happen on a failed VACUUM FULL that fills up the \ndisk so the server crashes is that there are loads of data files \ncontaining the partially rebuilt table. Nothing 'internal' to PostgreSQL \nwill point to those files as the internal pointers all change to the new \ntable in an ACID way, so you should be able to delete them.\n\nYou can usually find these relatively easily by looking in the relevant \ntablespace directory for the base filename for a new huge table (lots \nand lots of files with the same base name - eg looking for files called \n*.1000 will find you base filenames for relations over about 1TB) and \nchecking to see if pg_filenode_relation() can't turn the filenode into a \nrelation. If that's the case that they're not currently in use for a \nrelation, then you should be able to just delete all those files\n\nIs this what you tried, or did your 'script to map data files to \nrelations' do something else? 
You were a bit ambiguous about that part \nof things.\n\nPaul\n\n\n\n", "msg_date": "Thu, 18 Jul 2024 16:19:31 +0100", "msg_from": "Paul Smith* <[email protected]>", "msg_from_op": false, "msg_subject": "Re: filesystem full during vacuum - space recovery issues" }, { "msg_contents": "On 18-Jul-2024 11:19, Paul Smith* wrote:\n> On 15/07/2024 19:47, Thomas Simpson wrote:\n>>\n>> My problem now is how do I get this space back to return my free \n>> space back to where it should be?\n>>\n>> I tried some scripts to map the data files to relations but this \n>> didn't work as removing some files led to startup failure despite \n>> them appearing to be unrelated to anything in the database - I had to \n>> put them back and then startup worked.\n>>\n> I don't know what you tried to do\n>\n> What would normally happen on a failed VACUUM FULL that fills up the \n> disk so the server crashes is that there are loads of data files \n> containing the partially rebuilt table. Nothing 'internal' to \n> PostgreSQL will point to those files as the internal pointers all \n> change to the new table in an ACID way, so you should be able to \n> delete them.\n>\n> You can usually find these relatively easily by looking in the \n> relevant tablespace directory for the base filename for a new huge \n> table (lots and lots of files with the same base name - eg looking for \n> files called *.1000 will find you base filenames for relations over \n> about 1TB) and checking to see if pg_filenode_relation() can't turn \n> the filenode into a relation. If that's the case that they're not \n> currently in use for a relation, then you should be able to just \n> delete all those files\n>\n> Is this what you tried, or did your 'script to map data files to \n> relations' do something else? You were a bit ambiguous about that part \n> of things.\n>\n[BTW, v9.6 which I know is old but this server is stuck there]\n\nYes, I was querying relfilenode from pg_class to get the filename \n(integer) and then comparing a directory listing for files which did not \nmatch the relfilenode as candidates to remove.\n\nI moved these elsewhere (i.e. not delete, just move out the way so I \ncould move them back in case of trouble).\n\nWithout these apparently unrelated files, the database did not start and \ncomplained about them being missing, so I had to put them back.  This \nwas despite not finding any reference to the filename/number in pg_class.\n\nAt that point I gave up since I cannot afford to make the problem worse!\n\nI know I'm stuck with the slow rebuild at this point.  However, I doubt \nI am the only person in the world that needs to dump and reload a large \ndatabase.  
My thought is this is a weak point for PostgreSQL so it makes \nsense to consider ways to improve the dump reload process, especially as \nit's the last-resort upgrade path recommended in the upgrade guide and \nthe general fail-safe route to get out of trouble.\n\nThanks\n\nTom\n\n\n> Paul\n>\n>\n>\n\n\n\n\n\n\n\n\nOn 18-Jul-2024 11:19, Paul Smith*\n wrote:\n\nOn\n 15/07/2024 19:47, Thomas Simpson wrote:\n \n\n\n My problem now is how do I get this space back to return my free\n space back to where it should be?\n \n\n I tried some scripts to map the data files to relations but this\n didn't work as removing some files led to startup failure\n despite them appearing to be unrelated to anything in the\n database - I had to put them back and then startup worked.\n \n\n\n I don't know what you tried to do\n \n\n What would normally happen on a failed VACUUM FULL that fills up\n the disk so the server crashes is that there are loads of data\n files containing the partially rebuilt table. Nothing 'internal'\n to PostgreSQL will point to those files as the internal pointers\n all change to the new table in an ACID way, so you should be able\n to delete them.\n \n\n You can usually find these relatively easily by looking in the\n relevant tablespace directory for the base filename for a new huge\n table (lots and lots of files with the same base name - eg looking\n for files called *.1000 will find you base filenames for relations\n over about 1TB) and checking to see if pg_filenode_relation()\n can't turn the filenode into a relation. If that's the case that\n they're not currently in use for a relation, then you should be\n able to just delete all those files\n \n\n Is this what you tried, or did your 'script to map data files to\n relations' do something else? You were a bit ambiguous about that\n part of things.\n \n\n\n[BTW, v9.6 which I know is old but this server is stuck there]\n\nYes, I was querying relfilenode from pg_class to get the filename\n (integer) and then comparing a directory listing for files which\n did not match the relfilenode as candidates to remove.\nI moved these elsewhere (i.e. not delete, just move out the way\n so I could move them back in case of trouble).\nWithout these apparently unrelated files, the database did not\n start and complained about them being missing, so I had to put\n them back.  This was despite not finding any reference to the\n filename/number in pg_class.\n\nAt that point I gave up since I cannot afford to make the problem\n worse!\nI know I'm stuck with the slow rebuild at this point.  However, I\n doubt I am the only person in the world that needs to dump and\n reload a large database.  My thought is this is a weak point for\n PostgreSQL so it makes sense to consider ways to improve the dump\n reload process, especially as it's the last-resort upgrade path\n recommended in the upgrade guide and the general fail-safe route\n to get out of trouble.\n\nThanks\nTom\n\n\n\nPaul", "msg_date": "Thu, 18 Jul 2024 14:59:35 -0400", "msg_from": "Thomas Simpson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: filesystem full during vacuum - space recovery issues" }, { "msg_contents": "On Thu, Jul 18, 2024 at 3:01 PM Thomas Simpson <[email protected]> wrote:\n[snip]\n\n> [BTW, v9.6 which I know is old but this server is stuck there]\n>\n> [snip]\n\n> I know I'm stuck with the slow rebuild at this point. However, I doubt I\n> am the only person in the world that needs to dump and reload a large\n> database. 
My thought is this is a weak point for PostgreSQL so it makes\n> sense to consider ways to improve the dump reload process, especially as\n> it's the last-resort upgrade path recommended in the upgrade guide and the\n> general fail-safe route to get out of trouble.\n>\n No database does fast single-threaded backups.\n\nOn Thu, Jul 18, 2024 at 3:01 PM Thomas Simpson <[email protected]> wrote:[snip][BTW, v9.6 which I know is old but this server is stuck there]\n\n[snip] I know I'm stuck with the slow rebuild at this point.  However, I\n doubt I am the only person in the world that needs to dump and\n reload a large database.  My thought is this is a weak point for\n PostgreSQL so it makes sense to consider ways to improve the dump\n reload process, especially as it's the last-resort upgrade path\n recommended in the upgrade guide and the general fail-safe route\n to get out of trouble. No database does fast single-threaded backups.", "msg_date": "Thu, 18 Jul 2024 16:32:49 -0400", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: filesystem full during vacuum - space recovery issues" }, { "msg_contents": "On 18-Jul-2024 16:32, Ron Johnson wrote:\n> On Thu, Jul 18, 2024 at 3:01 PM Thomas Simpson <[email protected]> wrote:\n> [snip]\n>\n> [BTW, v9.6 which I know is old but this server is stuck there]\n>\n> [snip]\n>\n> I know I'm stuck with the slow rebuild at this point. However, I\n> doubt I am the only person in the world that needs to dump and\n> reload a large database.  My thought is this is a weak point for\n> PostgreSQL so it makes sense to consider ways to improve the dump\n> reload process, especially as it's the last-resort upgrade path\n> recommended in the upgrade guide and the general fail-safe route\n> to get out of trouble.\n>\n>  No database does fast single-threaded backups.\n\nAgreed.  My thought is that is should be possible for a 'new dumpall' to \nbe multi-threaded.\n\nSomething like :\n\n* Set number of threads on 'source' (perhaps by querying a listening \ndestination for how many threads it is prepared to accept via a control \nport)\n\n* Select each database in turn\n\n* Organize the tables which do not have references themselves\n\n* Send each table separately in each thread (or queue them until a \nthread is available)  ('Stage 1')\n\n* Rendezvous stage 1 completion (pause sending, wait until feedback from \ndestination confirming all completed) so we have a known consistent \nstate that is safe to proceed to subsequent tables\n\n* Work through tables that do refer to the previously sent in the same \nway (since the tables they reference exist and have their data) ('Stage 2')\n\n* Repeat progressively until all tables are done ('Stage 3', 4 etc. as \nnecessary)\n\nThe current dumpall is essentially doing this table organization \ncurrently [minus stage checkpoints/multi-thread] otherwise the dump/load \nwould not work.  It may even be doing a lot of this for 'directory' \nmode?  
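A crude approximation of that staging idea is possible today with stock tools, dumping table data in parallel straight into the new cluster over libpq -- a sketch only, assuming a quiesced source (each parallel pg_dump takes its own snapshot), plain table names, and placeholder host, database name and job count:

# Sketch: pre-data (tables, no indexes or FKs) in one stream, table data in
# parallel pipes, then post-data (indexes, constraints) at the end -- which is
# why no FK ordering is needed in the middle step.
pg_dump --section=pre-data mydb | psql -h new-server -d mydb

psql -qAt -d mydb -c "SELECT schemaname || '.' || tablename FROM pg_tables
                      WHERE schemaname NOT IN ('pg_catalog', 'information_schema')" \
  | xargs -P 8 -I{} sh -c 'pg_dump --data-only -t "{}" mydb | psql -q -h new-server -d mydb'

pg_dump --section=post-data mydb | psql -h new-server -d mydb

What it lacks is any coordination or checkpointing between the sending and receiving side, which is the gap described next.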
The change here is organizing n threads to process them \nconcurrently where possible and coordinating the pipes so they only send \ndata which can be accepted.\n\nThe destination would need to have a multi-thread listen and co-ordinate \nwith the sender on some control channel so feed back completion of each \nstage.\n\nSomething like a destination host and control channel port to establish \nthe pipes and create additional netcat pipes on incremental ports above \nthe control port for each thread used.\n\nDumpall seems like it could be a reasonable start point since it is \nalready doing the complicated bits of serializing the dump data so it \ncan be consistently loaded.\n\nProbably not really an admin question at this point, more a feature \nenhancement.\n\nIs there anything fundamentally wrong that someone with more intimate \nknowledge of dumpall could point out?\n\nThanks\n\nTom\n\n\n\n\n\n\n\n\n\nOn 18-Jul-2024 16:32, Ron Johnson\n wrote:\n\n\n\n\nOn Thu, Jul 18, 2024 at 3:01 PM Thomas Simpson\n <[email protected]>\n wrote:\n[snip]\n\n\n\n[BTW, v9.6 which I know is old but this server is stuck\n there]\n\n\n\n[snip] \n\n\nI know I'm stuck with the slow rebuild at this point. \n However, I doubt I am the only person in the world that\n needs to dump and reload a large database.  My thought\n is this is a weak point for PostgreSQL so it makes sense\n to consider ways to improve the dump reload process,\n especially as it's the last-resort upgrade path\n recommended in the upgrade guide and the general\n fail-safe route to get out of trouble.\n\n\n No database does fast single-threaded backups.\n\n\n\nAgreed.  My thought is that is should be possible for a 'new\n dumpall' to be multi-threaded.\nSomething like :\n* Set number of threads on 'source' (perhaps by querying a\n listening destination for how many threads it is prepared to\n accept via a control port)\n\n* Select each database in turn\n* Organize the tables which do not have references themselves\n* Send each table separately in each thread (or queue them until\n a thread is available)  ('Stage 1')\n\n* Rendezvous stage 1 completion (pause sending, wait until\n feedback from destination confirming all completed) so we have a\n known consistent state that is safe to proceed to subsequent\n tables\n\n* Work through tables that do refer to the previously sent in the\n same way (since the tables they reference exist and have their\n data) ('Stage 2')\n\n* Repeat progressively until all tables are done ('Stage 3', 4\n etc. as necessary)\n\nThe current dumpall is essentially doing this table organization\n currently [minus stage checkpoints/multi-thread] otherwise the\n dump/load would not work.  It may even be doing a lot of this for\n 'directory' mode?  
The change here is organizing n threads to\n process them concurrently where possible and coordinating the\n pipes so they only send data which can be accepted.\nThe destination would need to have a multi-thread listen and\n co-ordinate with the sender on some control channel so feed back\n completion of each stage.\nSomething like a destination host and control channel port to\n establish the pipes and create additional netcat pipes on\n incremental ports above the control port for each thread used.\nDumpall seems like it could be a reasonable start point since it\n is already doing the complicated bits of serializing the dump data\n so it can be consistently loaded.\n\nProbably not really an admin question at this point, more a\n feature enhancement.\nIs there anything fundamentally wrong that someone with more\n intimate knowledge of dumpall could point out?\nThanks\nTom", "msg_date": "Thu, 18 Jul 2024 16:53:23 -0400", "msg_from": "Thomas Simpson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: filesystem full during vacuum - space recovery issues" }, { "msg_contents": "Multi-threaded writing to the same giant text file won't work too well,\nwhen all the data for one table needs to be together.\n\nJust temporarily add another disk for backups.\n\nOn Thu, Jul 18, 2024 at 4:55 PM Thomas Simpson <[email protected]> wrote:\n\n>\n> On 18-Jul-2024 16:32, Ron Johnson wrote:\n>\n> On Thu, Jul 18, 2024 at 3:01 PM Thomas Simpson <[email protected]> wrote:\n> [snip]\n>\n>> [BTW, v9.6 which I know is old but this server is stuck there]\n>>\n> [snip]\n>\n>> I know I'm stuck with the slow rebuild at this point. However, I doubt I\n>> am the only person in the world that needs to dump and reload a large\n>> database. My thought is this is a weak point for PostgreSQL so it makes\n>> sense to consider ways to improve the dump reload process, especially as\n>> it's the last-resort upgrade path recommended in the upgrade guide and the\n>> general fail-safe route to get out of trouble.\n>>\n> No database does fast single-threaded backups.\n>\n> Agreed. My thought is that is should be possible for a 'new dumpall' to\n> be multi-threaded.\n>\n> Something like :\n>\n> * Set number of threads on 'source' (perhaps by querying a listening\n> destination for how many threads it is prepared to accept via a control\n> port)\n>\n> * Select each database in turn\n>\n> * Organize the tables which do not have references themselves\n>\n> * Send each table separately in each thread (or queue them until a thread\n> is available) ('Stage 1')\n>\n> * Rendezvous stage 1 completion (pause sending, wait until feedback from\n> destination confirming all completed) so we have a known consistent state\n> that is safe to proceed to subsequent tables\n>\n> * Work through tables that do refer to the previously sent in the same way\n> (since the tables they reference exist and have their data) ('Stage 2')\n>\n> * Repeat progressively until all tables are done ('Stage 3', 4 etc. as\n> necessary)\n>\n> The current dumpall is essentially doing this table organization currently\n> [minus stage checkpoints/multi-thread] otherwise the dump/load would not\n> work. It may even be doing a lot of this for 'directory' mode? 
The change\n> here is organizing n threads to process them concurrently where possible\n> and coordinating the pipes so they only send data which can be accepted.\n>\n> The destination would need to have a multi-thread listen and co-ordinate\n> with the sender on some control channel so feed back completion of each\n> stage.\n>\n> Something like a destination host and control channel port to establish\n> the pipes and create additional netcat pipes on incremental ports above the\n> control port for each thread used.\n>\n> Dumpall seems like it could be a reasonable start point since it is\n> already doing the complicated bits of serializing the dump data so it can\n> be consistently loaded.\n>\n> Probably not really an admin question at this point, more a feature\n> enhancement.\n>\n> Is there anything fundamentally wrong that someone with more intimate\n> knowledge of dumpall could point out?\n>\n> Thanks\n>\n> Tom\n>\n>\n>\n\nMulti-threaded writing to the same giant text file won't work too well, when all the data for one table needs to be together.Just temporarily add another disk for backups.On Thu, Jul 18, 2024 at 4:55 PM Thomas Simpson <[email protected]> wrote:\n\n\n\nOn 18-Jul-2024 16:32, Ron Johnson\n wrote:\n\n\n\nOn Thu, Jul 18, 2024 at 3:01 PM Thomas Simpson\n <[email protected]>\n wrote:\n[snip]\n\n\n\n[BTW, v9.6 which I know is old but this server is stuck\n there]\n\n\n\n[snip] \n\n\nI know I'm stuck with the slow rebuild at this point. \n However, I doubt I am the only person in the world that\n needs to dump and reload a large database.  My thought\n is this is a weak point for PostgreSQL so it makes sense\n to consider ways to improve the dump reload process,\n especially as it's the last-resort upgrade path\n recommended in the upgrade guide and the general\n fail-safe route to get out of trouble.\n\n\n No database does fast single-threaded backups.\n\n\n\nAgreed.  My thought is that is should be possible for a 'new\n dumpall' to be multi-threaded.\nSomething like :\n* Set number of threads on 'source' (perhaps by querying a\n listening destination for how many threads it is prepared to\n accept via a control port)\n\n* Select each database in turn\n* Organize the tables which do not have references themselves\n* Send each table separately in each thread (or queue them until\n a thread is available)  ('Stage 1')\n\n* Rendezvous stage 1 completion (pause sending, wait until\n feedback from destination confirming all completed) so we have a\n known consistent state that is safe to proceed to subsequent\n tables\n\n* Work through tables that do refer to the previously sent in the\n same way (since the tables they reference exist and have their\n data) ('Stage 2')\n\n* Repeat progressively until all tables are done ('Stage 3', 4\n etc. as necessary)\n\nThe current dumpall is essentially doing this table organization\n currently [minus stage checkpoints/multi-thread] otherwise the\n dump/load would not work.  It may even be doing a lot of this for\n 'directory' mode?  
The change here is organizing n threads to\n process them concurrently where possible and coordinating the\n pipes so they only send data which can be accepted.\nThe destination would need to have a multi-thread listen and\n co-ordinate with the sender on some control channel so feed back\n completion of each stage.\nSomething like a destination host and control channel port to\n establish the pipes and create additional netcat pipes on\n incremental ports above the control port for each thread used.\nDumpall seems like it could be a reasonable start point since it\n is already doing the complicated bits of serializing the dump data\n so it can be consistently loaded.\n\nProbably not really an admin question at this point, more a\n feature enhancement.\nIs there anything fundamentally wrong that someone with more\n intimate knowledge of dumpall could point out?\nThanks\nTom", "msg_date": "Thu, 18 Jul 2024 18:41:14 -0400", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: filesystem full during vacuum - space recovery issues" }, { "msg_contents": "[Added cross post to [email protected] - background is \nmulti-TB database needs recovered via pgdumpall & reload, thoughts on \nways to make pg_dump scale to multi-thread to expedite loading to a new \ncluster.  Straight dump to a file is impractical as the dump will be \n >200TB; hackers may be a better home for the discussion than current \nadmin list]\n\nHi Ron\n\n\nOn 18-Jul-2024 18:41, Ron Johnson wrote:\n> Multi-threaded writing to the same giant text file won't work too \n> well, when all the data for one table needs to be together.\n>\n> Just temporarily add another disk for backups.\n>\nFor clarity, I'm not proposing multi threaded writing to one file; the \nproposal is a new special mode which specifically makes multiple output \nstreams across *network sockets* to a listener which is listening on the \nother side.  The goal is avoiding any files at all and only using \nmultiple network streams to gain multi-threaded processing with some \nco-ordination to keep things organized and consistent.\n\nThis would really be specifically for the use-case of dump/reload \nupgrade or recreate rather than everyday use.  And particularly for very \nlarge databases.\n\nLooking at pg_dump.c it's doing the baseline organization but the \nextension would be adding the required coordination with the \ndestination.  So, for a huge table (I have many) these would go in \ndifferent streams but if there is a dependency (FK relations etc) the \ncheckpoint needs to ensure those are met before proceeding. Worst case \nscenario it would end up using only 1 thread but it would be very \nunusual to have a database where every table depends on another table \nall the way down.\n\nIn theory at least, some gains should be achieved for typical databases \nwhere a degree of parallelism is possible.\n\nThanks\n\nTom\n\n\n\n> On Thu, Jul 18, 2024 at 4:55 PM Thomas Simpson <[email protected]> wrote:\n>\n>\n> On 18-Jul-2024 16:32, Ron Johnson wrote:\n>> On Thu, Jul 18, 2024 at 3:01 PM Thomas Simpson\n>> <[email protected]> wrote:\n>> [snip]\n>>\n>> [BTW, v9.6 which I know is old but this server is stuck there]\n>>\n>> [snip]\n>>\n>> I know I'm stuck with the slow rebuild at this point. \n>> However, I doubt I am the only person in the world that needs\n>> to dump and reload a large database.  
My thought is this is a\n>> weak point for PostgreSQL so it makes sense to consider ways\n>> to improve the dump reload process, especially as it's the\n>> last-resort upgrade path recommended in the upgrade guide and\n>> the general fail-safe route to get out of trouble.\n>>\n>>  No database does fast single-threaded backups.\n>\n> Agreed.  My thought is that is should be possible for a 'new\n> dumpall' to be multi-threaded.\n>\n> Something like :\n>\n> * Set number of threads on 'source' (perhaps by querying a\n> listening destination for how many threads it is prepared to\n> accept via a control port)\n>\n> * Select each database in turn\n>\n> * Organize the tables which do not have references themselves\n>\n> * Send each table separately in each thread (or queue them until a\n> thread is available)  ('Stage 1')\n>\n> * Rendezvous stage 1 completion (pause sending, wait until\n> feedback from destination confirming all completed) so we have a\n> known consistent state that is safe to proceed to subsequent tables\n>\n> * Work through tables that do refer to the previously sent in the\n> same way (since the tables they reference exist and have their\n> data) ('Stage 2')\n>\n> * Repeat progressively until all tables are done ('Stage 3', 4\n> etc. as necessary)\n>\n> The current dumpall is essentially doing this table organization\n> currently [minus stage checkpoints/multi-thread] otherwise the\n> dump/load would not work.  It may even be doing a lot of this for\n> 'directory' mode?  The change here is organizing n threads to\n> process them concurrently where possible and coordinating the\n> pipes so they only send data which can be accepted.\n>\n> The destination would need to have a multi-thread listen and\n> co-ordinate with the sender on some control channel so feed back\n> completion of each stage.\n>\n> Something like a destination host and control channel port to\n> establish the pipes and create additional netcat pipes on\n> incremental ports above the control port for each thread used.\n>\n> Dumpall seems like it could be a reasonable start point since it\n> is already doing the complicated bits of serializing the dump data\n> so it can be consistently loaded.\n>\n> Probably not really an admin question at this point, more a\n> feature enhancement.\n>\n> Is there anything fundamentally wrong that someone with more\n> intimate knowledge of dumpall could point out?\n>\n> Thanks\n>\n> Tom\n>\n>\n\n\n\n\n\n\n[Added cross post to [email protected] -\n background is multi-TB database needs recovered via pgdumpall\n & reload, thoughts on ways to make pg_dump scale to\n multi-thread to expedite loading to a new cluster.  Straight dump\n to a file is impractical as the dump will be >200TB; hackers\n may be a better home for the discussion than current admin list]\n\nHi Ron\n\nOn 18-Jul-2024 18:41, Ron Johnson\n wrote:\n\n\n\n\nMulti-threaded writing to the same giant text\n file won't work too well, when all the data for one table\n needs to be together.\n\n\nJust temporarily add another disk for backups.\n\n\n\nFor clarity, I'm not proposing multi threaded writing to one\n file; the proposal is a new special mode which specifically makes\n multiple output streams across *network sockets* to a listener\n which is listening on the other side.  
The goal is avoiding any\n files at all and only using multiple network streams to gain\n multi-threaded processing with some co-ordination to keep things\n organized and consistent.\nThis would really be specifically for the use-case of dump/reload\n upgrade or recreate rather than everyday use.  And particularly\n for very large databases.\n\nLooking at pg_dump.c it's doing the baseline organization but the\n extension would be adding the required coordination with the\n destination.  So, for a huge table (I have many) these would go in\n different streams but if there is a dependency (FK relations etc)\n the checkpoint needs to ensure those are met before proceeding. \n Worst case scenario it would end up using only 1 thread but it\n would be very unusual to have a database where every table depends\n on another table all the way down.\nIn theory at least, some gains should be achieved for typical\n databases where a degree of parallelism is possible.\n\n Thanks\n Tom\n\n\n\n\n\n\n\n\n\nOn Thu, Jul 18, 2024 at\n 4:55 PM Thomas Simpson <[email protected]>\n wrote:\n\n\n\n\n\nOn 18-Jul-2024 16:32, Ron Johnson wrote:\n\n\n\nOn Thu, Jul 18, 2024 at 3:01 PM Thomas\n Simpson <[email protected]>\n wrote:\n[snip]\n\n\n\n[BTW, v9.6 which I know is old but this\n server is stuck there]\n\n\n\n[snip] \n\n\nI know I'm stuck with the slow rebuild at\n this point.  However, I doubt I am the only\n person in the world that needs to dump and\n reload a large database.  My thought is this\n is a weak point for PostgreSQL so it makes\n sense to consider ways to improve the dump\n reload process, especially as it's the\n last-resort upgrade path recommended in the\n upgrade guide and the general fail-safe route\n to get out of trouble.\n\n\n No database does fast single-threaded backups.\n\n\n\nAgreed.  My thought is that is should be possible for a\n 'new dumpall' to be multi-threaded.\nSomething like :\n* Set number of threads on 'source' (perhaps by\n querying a listening destination for how many threads it\n is prepared to accept via a control port)\n\n* Select each database in turn\n* Organize the tables which do not have references\n themselves\n* Send each table separately in each thread (or queue\n them until a thread is available)  ('Stage 1')\n\n* Rendezvous stage 1 completion (pause sending, wait\n until feedback from destination confirming all\n completed) so we have a known consistent state that is\n safe to proceed to subsequent tables\n\n* Work through tables that do refer to the previously\n sent in the same way (since the tables they reference\n exist and have their data) ('Stage 2')\n\n* Repeat progressively until all tables are done\n ('Stage 3', 4 etc. as necessary)\n\nThe current dumpall is essentially doing this table\n organization currently [minus stage\n checkpoints/multi-thread] otherwise the dump/load would\n not work.  It may even be doing a lot of this for\n 'directory' mode?  
The change here is organizing n\n threads to process them concurrently where possible and\n coordinating the pipes so they only send data which can\n be accepted.\nThe destination would need to have a multi-thread\n listen and co-ordinate with the sender on some control\n channel so feed back completion of each stage.\nSomething like a destination host and control channel\n port to establish the pipes and create additional netcat\n pipes on incremental ports above the control port for\n each thread used.\nDumpall seems like it could be a reasonable start point\n since it is already doing the complicated bits of\n serializing the dump data so it can be consistently\n loaded.\n\nProbably not really an admin question at this point,\n more a feature enhancement.\nIs there anything fundamentally wrong that someone with\n more intimate knowledge of dumpall could point out?\nThanks\nTom", "msg_date": "Thu, 18 Jul 2024 19:08:06 -0400", "msg_from": "Thomas Simpson <[email protected]>", "msg_from_op": true, "msg_subject": "Enhance pg_dump multi-threaded streaming (WAS: Re: filesystem full\n during vacuum - space recovery issues)" }, { "msg_contents": "1) Add new disk, use a new tablespace to move some big tables to it, to get back up and running\n2) Replica server provisioned sufficiently for the db, pg_basebackup to it\n3) Get streaming replication working\n4) Switch over to new server\n\nIn other words, if you don't want terrible downtime, you need yet another server fully provisioned to be able to run your db.\n\n--\nScott Ribe\[email protected]\nhttps://www.linkedin.com/in/scottribe/\n\n\n\n> On Jul 18, 2024, at 4:41 PM, Ron Johnson <[email protected]> wrote:\n> \n> Multi-threaded writing to the same giant text file won't work too well, when all the data for one table needs to be together.\n> \n> Just temporarily add another disk for backups.\n> \n> On Thu, Jul 18, 2024 at 4:55 PM Thomas Simpson <[email protected]> wrote:\n> \n> On 18-Jul-2024 16:32, Ron Johnson wrote:\n>> On Thu, Jul 18, 2024 at 3:01 PM Thomas Simpson <[email protected]> wrote:\n>> [snip]\n>> [BTW, v9.6 which I know is old but this server is stuck there]\n>> [snip] \n>> I know I'm stuck with the slow rebuild at this point. However, I doubt I am the only person in the world that needs to dump and reload a large database. My thought is this is a weak point for PostgreSQL so it makes sense to consider ways to improve the dump reload process, especially as it's the last-resort upgrade path recommended in the upgrade guide and the general fail-safe route to get out of trouble.\n>> No database does fast single-threaded backups.\n> Agreed. My thought is that is should be possible for a 'new dumpall' to be multi-threaded.\n> Something like :\n> * Set number of threads on 'source' (perhaps by querying a listening destination for how many threads it is prepared to accept via a control port)\n> * Select each database in turn\n> * Organize the tables which do not have references themselves\n> * Send each table separately in each thread (or queue them until a thread is available) ('Stage 1')\n> * Rendezvous stage 1 completion (pause sending, wait until feedback from destination confirming all completed) so we have a known consistent state that is safe to proceed to subsequent tables\n> * Work through tables that do refer to the previously sent in the same way (since the tables they reference exist and have their data) ('Stage 2')\n> * Repeat progressively until all tables are done ('Stage 3', 4 etc. 
as necessary)\n> The current dumpall is essentially doing this table organization currently [minus stage checkpoints/multi-thread] otherwise the dump/load would not work. It may even be doing a lot of this for 'directory' mode? The change here is organizing n threads to process them concurrently where possible and coordinating the pipes so they only send data which can be accepted.\n> The destination would need to have a multi-thread listen and co-ordinate with the sender on some control channel so feed back completion of each stage.\n> Something like a destination host and control channel port to establish the pipes and create additional netcat pipes on incremental ports above the control port for each thread used.\n> Dumpall seems like it could be a reasonable start point since it is already doing the complicated bits of serializing the dump data so it can be consistently loaded.\n> Probably not really an admin question at this point, more a feature enhancement.\n> Is there anything fundamentally wrong that someone with more intimate knowledge of dumpall could point out?\n> Thanks\n> Tom\n> \n\n\n\n", "msg_date": "Thu, 18 Jul 2024 20:59:23 -0600", "msg_from": "Scott Ribe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: filesystem full during vacuum - space recovery issues" }, { "msg_contents": "200TB... how do you currently back up your database?\n\nOn Fri, Jul 19, 2024 at 5:08 AM Thomas Simpson <[email protected]> wrote:\n\n> [Added cross post to [email protected] - background is\n> multi-TB database needs recovered via pgdumpall & reload, thoughts on ways\n> to make pg_dump scale to multi-thread to expedite loading to a new\n> cluster. Straight dump to a file is impractical as the dump will be\n> >200TB; hackers may be a better home for the discussion than current admin\n> list]\n>\n> Hi Ron\n>\n> On 18-Jul-2024 18:41, Ron Johnson wrote:\n>\n> Multi-threaded writing to the same giant text file won't work too well,\n> when all the data for one table needs to be together.\n>\n> Just temporarily add another disk for backups.\n>\n> For clarity, I'm not proposing multi threaded writing to one file; the\n> proposal is a new special mode which specifically makes multiple output\n> streams across *network sockets* to a listener which is listening on the\n> other side. The goal is avoiding any files at all and only using multiple\n> network streams to gain multi-threaded processing with some co-ordination\n> to keep things organized and consistent.\n>\n> This would really be specifically for the use-case of dump/reload upgrade\n> or recreate rather than everyday use. And particularly for very large\n> databases.\n>\n> Looking at pg_dump.c it's doing the baseline organization but the\n> extension would be adding the required coordination with the destination.\n> So, for a huge table (I have many) these would go in different streams but\n> if there is a dependency (FK relations etc) the checkpoint needs to ensure\n> those are met before proceeding. 
Worst case scenario it would end up using\n> only 1 thread but it would be very unusual to have a database where every\n> table depends on another table all the way down.\n>\n> In theory at least, some gains should be achieved for typical databases\n> where a degree of parallelism is possible.\n> Thanks\n>\n> Tom\n>\n>\n>\n> On Thu, Jul 18, 2024 at 4:55 PM Thomas Simpson <[email protected]> wrote:\n>\n>>\n>> On 18-Jul-2024 16:32, Ron Johnson wrote:\n>>\n>> On Thu, Jul 18, 2024 at 3:01 PM Thomas Simpson <[email protected]> wrote:\n>> [snip]\n>>\n>>> [BTW, v9.6 which I know is old but this server is stuck there]\n>>>\n>> [snip]\n>>\n>>> I know I'm stuck with the slow rebuild at this point. However, I doubt\n>>> I am the only person in the world that needs to dump and reload a large\n>>> database. My thought is this is a weak point for PostgreSQL so it makes\n>>> sense to consider ways to improve the dump reload process, especially as\n>>> it's the last-resort upgrade path recommended in the upgrade guide and the\n>>> general fail-safe route to get out of trouble.\n>>>\n>> No database does fast single-threaded backups.\n>>\n>> Agreed. My thought is that is should be possible for a 'new dumpall' to\n>> be multi-threaded.\n>>\n>> Something like :\n>>\n>> * Set number of threads on 'source' (perhaps by querying a listening\n>> destination for how many threads it is prepared to accept via a control\n>> port)\n>>\n>> * Select each database in turn\n>>\n>> * Organize the tables which do not have references themselves\n>>\n>> * Send each table separately in each thread (or queue them until a thread\n>> is available) ('Stage 1')\n>>\n>> * Rendezvous stage 1 completion (pause sending, wait until feedback from\n>> destination confirming all completed) so we have a known consistent state\n>> that is safe to proceed to subsequent tables\n>>\n>> * Work through tables that do refer to the previously sent in the same\n>> way (since the tables they reference exist and have their data) ('Stage 2')\n>>\n>> * Repeat progressively until all tables are done ('Stage 3', 4 etc. as\n>> necessary)\n>>\n>> The current dumpall is essentially doing this table organization\n>> currently [minus stage checkpoints/multi-thread] otherwise the dump/load\n>> would not work. It may even be doing a lot of this for 'directory' mode?\n>> The change here is organizing n threads to process them concurrently where\n>> possible and coordinating the pipes so they only send data which can be\n>> accepted.\n>>\n>> The destination would need to have a multi-thread listen and co-ordinate\n>> with the sender on some control channel so feed back completion of each\n>> stage.\n>>\n>> Something like a destination host and control channel port to establish\n>> the pipes and create additional netcat pipes on incremental ports above the\n>> control port for each thread used.\n>>\n>> Dumpall seems like it could be a reasonable start point since it is\n>> already doing the complicated bits of serializing the dump data so it can\n>> be consistently loaded.\n>>\n>> Probably not really an admin question at this point, more a feature\n>> enhancement.\n>>\n>> Is there anything fundamentally wrong that someone with more intimate\n>> knowledge of dumpall could point out?\n>>\n>> Thanks\n>>\n>> Tom\n>>\n>>\n>>\n\n200TB... 
how do you currently back up your database?On Fri, Jul 19, 2024 at 5:08 AM Thomas Simpson <[email protected]> wrote:\n\n[Added cross post to [email protected] -\n background is multi-TB database needs recovered via pgdumpall\n & reload, thoughts on ways to make pg_dump scale to\n multi-thread to expedite loading to a new cluster.  Straight dump\n to a file is impractical as the dump will be >200TB; hackers\n may be a better home for the discussion than current admin list]\n\nHi Ron\n\nOn 18-Jul-2024 18:41, Ron Johnson\n wrote:\n\n\n\nMulti-threaded writing to the same giant text\n file won't work too well, when all the data for one table\n needs to be together.\n\n\nJust temporarily add another disk for backups.\n\n\n\nFor clarity, I'm not proposing multi threaded writing to one\n file; the proposal is a new special mode which specifically makes\n multiple output streams across *network sockets* to a listener\n which is listening on the other side.  The goal is avoiding any\n files at all and only using multiple network streams to gain\n multi-threaded processing with some co-ordination to keep things\n organized and consistent.\nThis would really be specifically for the use-case of dump/reload\n upgrade or recreate rather than everyday use.  And particularly\n for very large databases.\n\nLooking at pg_dump.c it's doing the baseline organization but the\n extension would be adding the required coordination with the\n destination.  So, for a huge table (I have many) these would go in\n different streams but if there is a dependency (FK relations etc)\n the checkpoint needs to ensure those are met before proceeding. \n Worst case scenario it would end up using only 1 thread but it\n would be very unusual to have a database where every table depends\n on another table all the way down.\nIn theory at least, some gains should be achieved for typical\n databases where a degree of parallelism is possible.\n\n Thanks\n Tom\n\n\n\n\n\n\n\n\n\nOn Thu, Jul 18, 2024 at\n 4:55 PM Thomas Simpson <[email protected]>\n wrote:\n\n\n\n\n\nOn 18-Jul-2024 16:32, Ron Johnson wrote:\n\n\n\nOn Thu, Jul 18, 2024 at 3:01 PM Thomas\n Simpson <[email protected]>\n wrote:\n[snip]\n\n\n\n[BTW, v9.6 which I know is old but this\n server is stuck there]\n\n\n\n[snip] \n\n\nI know I'm stuck with the slow rebuild at\n this point.  However, I doubt I am the only\n person in the world that needs to dump and\n reload a large database.  My thought is this\n is a weak point for PostgreSQL so it makes\n sense to consider ways to improve the dump\n reload process, especially as it's the\n last-resort upgrade path recommended in the\n upgrade guide and the general fail-safe route\n to get out of trouble.\n\n\n No database does fast single-threaded backups.\n\n\n\nAgreed.  
My thought is that is should be possible for a\n 'new dumpall' to be multi-threaded.\nSomething like :\n* Set number of threads on 'source' (perhaps by\n querying a listening destination for how many threads it\n is prepared to accept via a control port)\n\n* Select each database in turn\n* Organize the tables which do not have references\n themselves\n* Send each table separately in each thread (or queue\n them until a thread is available)  ('Stage 1')\n\n* Rendezvous stage 1 completion (pause sending, wait\n until feedback from destination confirming all\n completed) so we have a known consistent state that is\n safe to proceed to subsequent tables\n\n* Work through tables that do refer to the previously\n sent in the same way (since the tables they reference\n exist and have their data) ('Stage 2')\n\n* Repeat progressively until all tables are done\n ('Stage 3', 4 etc. as necessary)\n\nThe current dumpall is essentially doing this table\n organization currently [minus stage\n checkpoints/multi-thread] otherwise the dump/load would\n not work.  It may even be doing a lot of this for\n 'directory' mode?  The change here is organizing n\n threads to process them concurrently where possible and\n coordinating the pipes so they only send data which can\n be accepted.\nThe destination would need to have a multi-thread\n listen and co-ordinate with the sender on some control\n channel so feed back completion of each stage.\nSomething like a destination host and control channel\n port to establish the pipes and create additional netcat\n pipes on incremental ports above the control port for\n each thread used.\nDumpall seems like it could be a reasonable start point\n since it is already doing the complicated bits of\n serializing the dump data so it can be consistently\n loaded.\n\nProbably not really an admin question at this point,\n more a feature enhancement.\nIs there anything fundamentally wrong that someone with\n more intimate knowledge of dumpall could point out?\nThanks\nTom", "msg_date": "Fri, 19 Jul 2024 08:21:36 -0400", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enhance pg_dump multi-threaded streaming (WAS: Re: filesystem\n full during vacuum - space recovery issues)" }, { "msg_contents": "Do you actually have 100G networking between the nodes? Because if not, a single CPU should be able to saturate 10G. \n\nLikewise the receiving end would need disk capable of keeping up. Which brings up the question, why not write to disk, but directly to the destination rather than write locally then copy?\n\nDo you require dump-reload because of suspected corruption? That's a tough one. But if not, if the goal is just to get up and running on a new server, why not pg_basebackup, streaming replica, promote? That depends on the level of data modification activity being low enough that pg_basebackup can keep up with WAL as it's generated and apply it faster than new WAL comes in, but given that your server is currently keeping up with writing that much WAL and flushing that many changes, seems likely it would keep up as long as the network connection is fast enough. 
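For reference, a sketch of that replica-and-promote path against a 9.6-era primary -- hosts, the replication role and paths are placeholders, and it copies the data directory verbatim (any leftovers from the failed VACUUM FULL included), so it addresses downtime rather than cleanup:

# Sketch only; assumes a 'replicator' role and 9.6-style recovery.conf.
pg_basebackup -h old-server -U replicator -D /pgdata/replica \
    -X stream --checkpoint=fast --progress

cat > /pgdata/replica/recovery.conf <<'EOF'
standby_mode = 'on'
primary_conninfo = 'host=old-server user=replicator'
EOF

pg_ctl -D /pgdata/replica start       # streams and applies WAL until caught up
# ...stop writes on the old primary, wait for zero lag, then:
pg_ctl -D /pgdata/replica promote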
Anyway, in that scenario, you don't need to care how long pg_basebackup takes.\n\nIf you do need a dump/reload because of suspected corruption, the only thing I can think of is something like doing it a table at a time--partitioning would help here, if practical.\n\n", "msg_date": "Fri, 19 Jul 2024 07:26:47 -0600", "msg_from": "Scott Ribe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enhance pg_dump multi-threaded streaming (WAS: Re: filesystem\n full during vacuum - space recovery issues)" }, { "msg_contents": "Hi Scott,\n\nI realize some of the background was snipped on what I sent to the \nhacker list, I'll try to fill in the details.\n\nShort background is very large database ran out of space during vacuum \nfull taking down the server.  There is a replica which was applying the \nWALs and so it too ran out of space.  On restart after clearing some \nspace, the database came back up but left over the in-progress rebuild \nfiles.  I've cleared that replica and am using it as my rebuild target \njust now.\n\nTrying to identify the 'orphan' files and move them away always led to \nthe database spotting the supposedly unused files having gone and \nrefusing to start, so I had no successful way to clean up and get space \nback.\n\nLast resort after discussion is pg_dumpall & reload.  I'm doing this via \na network pipe (netcat) as I do not have the vast amount of storage \nnecessary for the dump file to be stored (in any format).\n\nOn 19-Jul-2024 09:26, Scott Ribe wrote:\n> Do you actually have 100G networking between the nodes? Because if not, a single CPU should be able to saturate 10G.\nServers connect via 10G WAN; sending is not the issue, it's application \nof the incoming stream on the destination which is bottlenecked.\n>\n> Likewise the receiving end would need disk capable of keeping up. Which brings up the question, why not write to disk, but directly to the destination rather than write locally then copy?\nIn this case, it's not a local write, it's piped via netcat.\n> Do you require dump-reload because of suspected corruption? That's a tough one. But if not, if the goal is just to get up and running on a new server, why not pg_basebackup, streaming replica, promote? That depends on the level of data modification activity being low enough that pg_basebackup can keep up with WAL as it's generated and apply it faster than new WAL comes in, but given that your server is currently keeping up with writing that much WAL and flushing that many changes, seems likely it would keep up as long as the network connection is fast enough. Anyway, in that scenario, you don't need to care how long pg_basebackup takes.\n>\n> If you do need a dump/reload because of suspected corruption, the only thing I can think of is something like doing it a table at a time--partitioning would help here, if practical.\n\nThe basebackup is, to the best of my understanding, essentially just \ncopying the database files.  Since the failed vacuum has left extra \nfiles, my expectation is these too would be copied, leaving me in the \nsame position I started in.  
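Since the reload being described is driven through a netcat pipe rather than an intermediate file, a minimal sketch of that wiring for anyone in the same position -- the port, host names and nc flags vary by netcat variant and are assumptions:

# Destination (new cluster): listen and feed the stream straight into psql.
nc -l 9999 | psql -d postgres         # some nc variants want: nc -l -p 9999
# Source (old 9.6 cluster): a single stream, which is the bottleneck discussed here.
pg_dumpall | nc new-server 9999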
If I'm wrong, please tell me as that would \nbe vastly quicker - it is how I originally set up the replica and it \ntook only a few hours on the 10G link.\n\nThe inability to get a clean start if I move any files out the way leads \nme to be concerned for some underlying corruption/issue and the \nrecommendation earlier in the discussion was opt for dump/reload as the \nfail-safe.\n\nResigned to my fate, my thoughts were to see if there is a way to \nimprove the dump-reload approach for the future.  Since dump-reload is \nthe ultimate upgrade suggestion in the documentation, it seems \nworthwhile to see if there is a way to improve the performance of that \nespecially as very large databases like mine are a thing with \nPostgreSQL.  From a quick review of pg_dump.c (I'm no expert on it \nobviously), it feels like it's already doing most of what needs done and \nthe addition is some sort of multi-thread coordination with a restore \nclient to ensure each thread can successfully complete each task it has \nbefore accepting more work.  I realize that's actually difficult to \nimplement.\n\nThanks\n\nTom\n\n\n\n\n\n\n\nHi Scott,\nI realize some of the background was snipped on what I sent to\n the hacker list, I'll try to fill in the details.\nShort background is very large database ran out of space during\n vacuum full taking down the server.  There is a replica which was\n applying the WALs and so it too ran out of space.  On restart\n after clearing some space, the database came back up but left over\n the in-progress rebuild files.  I've cleared that replica and am\n using it as my rebuild target just now.\n\nTrying to identify the 'orphan' files and move them away always\n led to the database spotting the supposedly unused files having\n gone and refusing to start, so I had no successful way to clean up\n and get space back.\nLast resort after discussion is pg_dumpall & reload.  I'm\n doing this via a network pipe (netcat) as I do not have the vast\n amount of storage necessary for the dump file to be stored (in any\n format).\n\nOn 19-Jul-2024 09:26, Scott Ribe wrote:\n\n\nDo you actually have 100G networking between the nodes? Because if not, a single CPU should be able to saturate 10G. \n\n Servers connect via 10G WAN; sending is not the issue, it's\n application of the incoming stream on the destination which is\n bottlenecked.\n\n\n\nLikewise the receiving end would need disk capable of keeping up. Which brings up the question, why not write to disk, but directly to the destination rather than write locally then copy?\n\n\n In this case, it's not a local write, it's piped via netcat.\n\n\nDo you require dump-reload because of suspected corruption? That's a tough one. But if not, if the goal is just to get up and running on a new server, why not pg_basebackup, streaming replica, promote? That depends on the level of data modification activity being low enough that pg_basebackup can keep up with WAL as it's generated and apply it faster than new WAL comes in, but given that your server is currently keeping up with writing that much WAL and flushing that many changes, seems likely it would keep up as long as the network connection is fast enough. 
Anyway, in that scenario, you don't need to care how long pg_basebackup takes.\n\nIf you do need a dump/reload because of suspected corruption, the only thing I can think of is something like doing it a table at a time--partitioning would help here, if practical.\n\nThe basebackup is, to the best of my understanding, essentially\n just copying the database files.  Since the failed vacuum has left\n extra files, my expectation is these too would be copied, leaving\n me in the same position I started in.  If I'm wrong, please tell\n me as that would be vastly quicker - it is how I originally set up\n the replica and it took only a few hours on the 10G link.\n\nThe inability to get a clean start if I move any files out the\n way leads me to be concerned for some underlying corruption/issue\n and the recommendation earlier in the discussion was opt for\n dump/reload as the fail-safe.\nResigned to my fate, my thoughts were to see if there is a way to\n improve the dump-reload approach for the future.  Since\n dump-reload is the ultimate upgrade suggestion in the\n documentation, it seems worthwhile to see if there is a way to\n improve the performance of that especially as very large databases\n like mine are a thing with PostgreSQL.  From a quick review of\n pg_dump.c (I'm no expert on it obviously), it feels like it's\n already doing most of what needs done and the addition is some\n sort of multi-thread coordination with a restore client to ensure\n each thread can successfully complete each task it has before\n accepting more work.  I realize that's actually difficult to\n implement.\nThanks\nTom", "msg_date": "Fri, 19 Jul 2024 09:46:14 -0400", "msg_from": "Thomas Simpson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Enhance pg_dump multi-threaded streaming (WAS: Re: filesystem\n full during vacuum - space recovery issues)" }, { "msg_contents": "> On Jul 19, 2024, at 7:46 AM, Thomas Simpson <[email protected]> wrote:\n> \n> I realize some of the background was snipped on what I sent to the hacker list, I'll try to fill in the details.\n\nI was gone from my computer for a day and lost track of the thread.\n\nPerhaps logical replication could help you out here?\n\n", "msg_date": "Fri, 19 Jul 2024 13:34:56 -0600", "msg_from": "Scott Ribe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enhance pg_dump multi-threaded streaming (WAS: Re: filesystem\n full during vacuum - space recovery issues)" }, { "msg_contents": "Hi Scott\n\nOn 19-Jul-2024 15:34, Scott Ribe wrote:\n>> On Jul 19, 2024, at 7:46 AM, Thomas Simpson<[email protected]> wrote:\n>>\n>> I realize some of the background was snipped on what I sent to the hacker list, I'll try to fill in the details.\n> I was gone from my computer for a day and lost track of the thread.\n>\n> Perhaps logical replication could help you out here?\n\nI'm not sure - perhaps, but at this point, I've got that dump/reload \nrunning and provided it completes ok (in about 20 days time at current \nrate), I'll be fine with this.\n\nThe database itself is essentially an archive of data so is no longer \nbeing added to at this point, so it's an annoyance for the rebuild time \nrather than a disaster.\n\n[But incidentally, I am working on an even larger project which is \nlikely to make this one seem small, so improvement around large \ndatabases is important to me.]\n\nHowever, my thought is around how to avoid this issue in the future and \nto improve the experience for others faced with the dump-reload which is \nalways the 
fall-back upgrade suggestion between versions.\n\nGetting parallelism should be possible and the current pg_dump does that \nfor directory mode from what I can see - making multiple threads etc.  \naccording to parallel.c in pg_dump, it even looks like most of where my \nthought process was going is actually already there.\n\nThe extension should be adding synchronization/checkpointing between the \ngenerating dump and the receiving reload to ensure objects are not \nprocessed until all their requirements are already present in the new \ndatabase.  This is all based around routing via network streams instead \nof the filesystem as currently happens.\n\nPerhaps this is already in place since the restore can be done in \nparallel, so must need to implement that ordering already?  If someone \nwith a good understanding of dump is able to comment or even give \nsuggestions, I'm not against making an attempt to implement something as \na first attempt.\n\nI see Tom Lane from git blame did a bunch of work around the parallel \ndump back in 2020 - perhaps he could make suggestions either via private \ndirect email or the list ?\n\nThanks\n\nTom\n\n\n\n\n\n\n\nHi Scott\n\nOn 19-Jul-2024 15:34, Scott Ribe wrote:\n\n\n\nOn Jul 19, 2024, at 7:46 AM, Thomas Simpson <[email protected]> wrote:\n\nI realize some of the background was snipped on what I sent to the hacker list, I'll try to fill in the details.\n\n\n\nI was gone from my computer for a day and lost track of the thread.\n\nPerhaps logical replication could help you out here?\n\nI'm not sure - perhaps, but at this point, I've got that\n dump/reload running and provided it completes ok (in about 20 days\n time at current rate), I'll be fine with this.\nThe database itself is essentially an archive of data so is no\n longer being added to at this point, so it's an annoyance for the\n rebuild time rather than a disaster.\n[But incidentally, I am working on an even larger project which\n is likely to make this one seem small, so improvement around large\n databases is important to me.]\n\nHowever, my thought is around how to avoid this issue in the\n future and to improve the experience for others faced with the\n dump-reload which is always the fall-back upgrade suggestion\n between versions.\nGetting parallelism should be possible and the current pg_dump\n does that for directory mode from what I can see - making multiple\n threads etc.  according to parallel.c in pg_dump, it even looks\n like most of where my thought process was going is actually\n already there.\nThe extension should be adding synchronization/checkpointing\n between the generating dump and the receiving reload to ensure\n objects are not processed until all their requirements are already\n present in the new database.  This is all based around routing via\n network streams instead of the filesystem as currently happens.\nPerhaps this is already in place since the restore can be done in\n parallel, so must need to implement that ordering already?  
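(For comparison, the per-database parallelism that already exists goes through the filesystem: a directory-format dump with --jobs, then a parallel pg_restore, which sequences restore items by their dependencies itself -- but it needs the intermediate storage that is impractical at this scale. Paths, database name and job counts below are placeholders.)

pg_dump -Fd --jobs=8 -f /backup/mydb.dir mydb
pg_restore --jobs=8 -d mydb /backup/mydb.dir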
If\n someone with a good understanding of dump is able to comment or\n even give suggestions, I'm not against making an attempt to\n implement something as a first attempt.\nI see Tom Lane from git blame did a bunch of work around the\n parallel dump back in 2020 - perhaps he could make suggestions\n either via private direct email or the list ?\n\nThanks\nTom", "msg_date": "Fri, 19 Jul 2024 16:23:29 -0400", "msg_from": "Thomas Simpson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Enhance pg_dump multi-threaded streaming (WAS: Re: filesystem\n full during vacuum - space recovery issues)" }, { "msg_contents": "Thomas—Why are you using logical backups for a database this large?  A solution like PgBackRest?  Obviously, if you are going to upgrade, but for operational use, that seems to be a slow choice.DougOn Jul 19, 2024, at 4:26 PM, Thomas Simpson <[email protected]> wrote:\n\n \n \nHi Scott\n\nOn 19-Jul-2024 15:34, Scott Ribe wrote:\n\n\n\nOn Jul 19, 2024, at 7:46 AM, Thomas Simpson <[email protected]> wrote:\n\nI realize some of the background was snipped on what I sent to the hacker list, I'll try to fill in the details.\n\n\nI was gone from my computer for a day and lost track of the thread.\n\nPerhaps logical replication could help you out here?\n\nI'm not sure - perhaps, but at this point, I've got that\n dump/reload running and provided it completes ok (in about 20 days\n time at current rate), I'll be fine with this.\nThe database itself is essentially an archive of data so is no\n longer being added to at this point, so it's an annoyance for the\n rebuild time rather than a disaster.\n[But incidentally, I am working on an even larger project which\n is likely to make this one seem small, so improvement around large\n databases is important to me.]\n\nHowever, my thought is around how to avoid this issue in the\n future and to improve the experience for others faced with the\n dump-reload which is always the fall-back upgrade suggestion\n between versions.\nGetting parallelism should be possible and the current pg_dump\n does that for directory mode from what I can see - making multiple\n threads etc.  according to parallel.c in pg_dump, it even looks\n like most of where my thought process was going is actually\n already there.\nThe extension should be adding synchronization/checkpointing\n between the generating dump and the receiving reload to ensure\n objects are not processed until all their requirements are already\n present in the new database.  This is all based around routing via\n network streams instead of the filesystem as currently happens.\nPerhaps this is already in place since the restore can be done in\n parallel, so must need to implement that ordering already?  If\n someone with a good understanding of dump is able to comment or\n even give suggestions, I'm not against making an attempt to\n implement something as a first attempt.\nI see Tom Lane from git blame did a bunch of work around the\n parallel dump back in 2020 - perhaps he could make suggestions\n either via private direct email or the list ?\n\nThanks\nTom", "msg_date": "Fri, 19 Jul 2024 21:21:50 +0000", "msg_from": "Doug Reynolds <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enhance pg_dump multi-threaded streaming (WAS: Re: filesystem\n full during vacuum - space recovery issues)" }, { "msg_contents": "Hi Doug\n\nOn 19-Jul-2024 17:21, Doug Reynolds wrote:\n> Thomas—\n>\n> Why are you using logical backups for a database this large?  A \n> solution like PgBackRest?  
Obviously, if you are going to upgrade, but \n> for operational use, that seems to be a slow choice.\n\nIn normal operation the server runs as a primary-replica and pgbackrest \nhandles backups.  Right when disk space was used up, pgbackrest also \ntook a backup during the failed vacuum so going back to it (or anything \nearlier) would also roll forward the WALs for recovery to date and put \nme right back where I am just now by running out of space part way through.\n\nIt's a pragmatic decision that trying various things short of the \ndump-reload would take a number of days for me to try and see if I could \nget them to work with a high likelihood of needing to resort to \ndump-reload anyway.  I'd already tried a few file matching/moving \nexercises by they all prevented the database starting up so I cut my \nlosses and started the dump-reload this week instead of next week since \nthere's a limited window before this becomes a larger problem.\n\nMy thoughts on improving pg_dump are to help make it a better tool for \nworst case scenarios like this for the future or for those that like the \ndump-reload as part of upgrades but have reasonable size databases.\n\nThanks\n\nTom\n\n\n>\n> Doug\n>\n>> On Jul 19, 2024, at 4:26 PM, Thomas Simpson <[email protected]> wrote:\n>>\n>> \n>>\n>> Hi Scott\n>>\n>> On 19-Jul-2024 15:34, Scott Ribe wrote:\n>>>> On Jul 19, 2024, at 7:46 AM, Thomas Simpson<[email protected]> wrote:\n>>>>\n>>>> I realize some of the background was snipped on what I sent to the hacker list, I'll try to fill in the details.\n>>> I was gone from my computer for a day and lost track of the thread.\n>>>\n>>> Perhaps logical replication could help you out here?\n>>\n>> I'm not sure - perhaps, but at this point, I've got that dump/reload \n>> running and provided it completes ok (in about 20 days time at \n>> current rate), I'll be fine with this.\n>>\n>> The database itself is essentially an archive of data so is no longer \n>> being added to at this point, so it's an annoyance for the rebuild \n>> time rather than a disaster.\n>>\n>> [But incidentally, I am working on an even larger project which is \n>> likely to make this one seem small, so improvement around large \n>> databases is important to me.]\n>>\n>> However, my thought is around how to avoid this issue in the future \n>> and to improve the experience for others faced with the dump-reload \n>> which is always the fall-back upgrade suggestion between versions.\n>>\n>> Getting parallelism should be possible and the current pg_dump does \n>> that for directory mode from what I can see - making multiple threads \n>> etc.  according to parallel.c in pg_dump, it even looks like most of \n>> where my thought process was going is actually already there.\n>>\n>> The extension should be adding synchronization/checkpointing between \n>> the generating dump and the receiving reload to ensure objects are \n>> not processed until all their requirements are already present in the \n>> new database.  This is all based around routing via network streams \n>> instead of the filesystem as currently happens.\n>>\n>> Perhaps this is already in place since the restore can be done in \n>> parallel, so must need to implement that ordering already?  
If \n>> someone with a good understanding of dump is able to comment or even \n>> give suggestions, I'm not against making an attempt to implement \n>> something as a first attempt.\n>>\n>> I see Tom Lane from git blame did a bunch of work around the parallel \n>> dump back in 2020 - perhaps he could make suggestions either via \n>> private direct email or the list ?\n>>\n>> Thanks\n>>\n>> Tom\n>>\n>>\n\n\n\n\n\n\nHi Doug\n\nOn 19-Jul-2024 17:21, Doug Reynolds\n wrote:\n\n\n\nThomas—\n\n\nWhy are you using logical backups for a database\n this large?  A solution like PgBackRest?  Obviously, if you are\n going to upgrade, but for operational use, that seems to be a\n slow choice.\n\nIn normal operation the server runs as a primary-replica and\n pgbackrest handles backups.  Right when disk space was used up,\n pgbackrest also took a backup during the failed vacuum so going\n back to it (or anything earlier) would also roll forward the WALs\n for recovery to date and put me right back where I am just now by\n running out of space part way through.\nIt's a pragmatic decision that trying various things short of the\n dump-reload would take a number of days for me to try and see if I\n could get them to work with a high likelihood of needing to resort\n to dump-reload anyway.  I'd already tried a few file\n matching/moving exercises by they all prevented the database\n starting up so I cut my losses and started the dump-reload this\n week instead of next week since there's a limited window before\n this becomes a larger problem.\n\nMy thoughts on improving pg_dump are to help make it a better\n tool for worst case scenarios like this for the future or for\n those that like the dump-reload as part of upgrades but have\n reasonable size databases.\nThanks\nTom\n\n\n\n\n\nDoug\n\nOn Jul 19, 2024, at 4:26 PM, Thomas\n Simpson <[email protected]> wrote:\n\n\n\n\n\n \nHi Scott\n\nOn 19-Jul-2024 15:34, Scott Ribe\n wrote:\n\n\n\nOn Jul 19, 2024, at 7:46 AM, Thomas Simpson <[email protected]> wrote:\n\nI realize some of the background was snipped on what I sent to the hacker list, I'll try to fill in the details.\n\n\nI was gone from my computer for a day and lost track of the thread.\n\nPerhaps logical replication could help you out here?\n\nI'm not sure - perhaps, but at this point, I've got that\n dump/reload running and provided it completes ok (in about\n 20 days time at current rate), I'll be fine with this.\nThe database itself is essentially an archive of data so is\n no longer being added to at this point, so it's an annoyance\n for the rebuild time rather than a disaster.\n[But incidentally, I am working on an even larger project\n which is likely to make this one seem small, so improvement\n around large databases is important to me.]\n\nHowever, my thought is around how to avoid this issue in\n the future and to improve the experience for others faced\n with the dump-reload which is always the fall-back upgrade\n suggestion between versions.\nGetting parallelism should be possible and the current\n pg_dump does that for directory mode from what I can see -\n making multiple threads etc.  according to parallel.c in\n pg_dump, it even looks like most of where my thought process\n was going is actually already there.\nThe extension should be adding\n synchronization/checkpointing between the generating dump\n and the receiving reload to ensure objects are not processed\n until all their requirements are already present in the new\n database.  
This is all based around routing via network\n streams instead of the filesystem as currently happens.\nPerhaps this is already in place since the restore can be\n done in parallel, so must need to implement that ordering\n already?  If someone with a good understanding of dump is\n able to comment or even give suggestions, I'm not against\n making an attempt to implement something as a first attempt.\nI see Tom Lane from git blame did a bunch of work around\n the parallel dump back in 2020 - perhaps he could make\n suggestions either via private direct email or the list ?\n\nThanks\nTom", "msg_date": "Fri, 19 Jul 2024 22:17:52 -0400", "msg_from": "Thomas Simpson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Enhance pg_dump multi-threaded streaming (WAS: Re: filesystem\n full during vacuum - space recovery issues)" }, { "msg_contents": "\nOn 2024-07-19 Fr 9:46 AM, Thomas Simpson wrote:\n>\n> Hi Scott,\n>\n> I realize some of the background was snipped on what I sent to the \n> hacker list, I'll try to fill in the details.\n>\n> Short background is very large database ran out of space during vacuum \n> full taking down the server.  There is a replica which was applying \n> the WALs and so it too ran out of space.  On restart after clearing \n> some space, the database came back up but left over the in-progress \n> rebuild files.  I've cleared that replica and am using it as my \n> rebuild target just now.\n>\n> Trying to identify the 'orphan' files and move them away always led to \n> the database spotting the supposedly unused files having gone and \n> refusing to start, so I had no successful way to clean up and get \n> space back.\n>\n> Last resort after discussion is pg_dumpall & reload.  I'm doing this \n> via a network pipe (netcat) as I do not have the vast amount of \n> storage necessary for the dump file to be stored (in any format).\n>\n> On 19-Jul-2024 09:26, Scott Ribe wrote:\n>> Do you actually have 100G networking between the nodes? Because if not, a single CPU should be able to saturate 10G.\n> Servers connect via 10G WAN; sending is not the issue, it's \n> application of the incoming stream on the destination which is \n> bottlenecked.\n>> Likewise the receiving end would need disk capable of keeping up. Which brings up the question, why not write to disk, but directly to the destination rather than write locally then copy?\n> In this case, it's not a local write, it's piped via netcat.\n>> Do you require dump-reload because of suspected corruption? That's a tough one. But if not, if the goal is just to get up and running on a new server, why not pg_basebackup, streaming replica, promote? That depends on the level of data modification activity being low enough that pg_basebackup can keep up with WAL as it's generated and apply it faster than new WAL comes in, but given that your server is currently keeping up with writing that much WAL and flushing that many changes, seems likely it would keep up as long as the network connection is fast enough. Anyway, in that scenario, you don't need to care how long pg_basebackup takes.\n>>\n>> If you do need a dump/reload because of suspected corruption, the only thing I can think of is something like doing it a table at a time--partitioning would help here, if practical.\n>\n> The basebackup is, to the best of my understanding, essentially just \n> copying the database files.  Since the failed vacuum has left extra \n> files, my expectation is these too would be copied, leaving me in the \n> same position I started in.  
If I'm wrong, please tell me as that \n> would be vastly quicker - it is how I originally set up the replica \n> and it took only a few hours on the 10G link.\n>\n> The inability to get a clean start if I move any files out the way \n> leads me to be concerned for some underlying corruption/issue and the \n> recommendation earlier in the discussion was opt for dump/reload as \n> the fail-safe.\n>\n> Resigned to my fate, my thoughts were to see if there is a way to \n> improve the dump-reload approach for the future.  Since dump-reload is \n> the ultimate upgrade suggestion in the documentation, it seems \n> worthwhile to see if there is a way to improve the performance of that \n> especially as very large databases like mine are a thing with \n> PostgreSQL.  From a quick review of pg_dump.c (I'm no expert on it \n> obviously), it feels like it's already doing most of what needs done \n> and the addition is some sort of multi-thread coordination with a \n> restore client to ensure each thread can successfully complete each \n> task it has before accepting more work.  I realize that's actually \n> difficult to implement.\n>\n>\n\nThere is a plan for a non-text mode for pg_dumpall. I have started work \non it, and hope to have a WIP patch in a month or so. It's not my \nintention to parallelize it for the first cut, but it could definitely \nbe parallelizable in future. However, it will require writing to disk \nsomewhere, albeit that the data will be compressed. It's well nigh \nimpossible to parallelize text format dumps.\n\nRestoration of custom and directory format dumps has long been \nparallelized. Parallel dumps require directory format, and so will \nnon-text pg_dumpall.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 22 Jul 2024 11:50:10 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enhance pg_dump multi-threaded streaming (WAS: Re: filesystem\n full during vacuum - space recovery issues)" }, { "msg_contents": "On Fri, Jul 19, 2024 at 10:19 PM Thomas Simpson <[email protected]> wrote:\n\n> Hi Doug\n> On 19-Jul-2024 17:21, Doug Reynolds wrote:\n>\n> Thomas—\n>\n> Why are you using logical backups for a database this large? A solution\n> like PgBackRest? Obviously, if you are going to upgrade, but for\n> operational use, that seems to be a slow choice.\n>\n> In normal operation the server runs as a primary-replica and pgbackrest\n> handles backups.\n>\nExpire the oldest pgbackrest, so as to free up space for a multithreaded\npg_dump.\n\n> Right when disk space was used up, pgbackrest also took a backup during\n> the failed vacuum so going back to it (or anything earlier) would also roll\n> forward the WALs for recovery to date and put me right back where I am just\n> now by running out of space part way through.\n>\n\nWho says you have to restore to the failure point? 
That's what the\n\"--target\" option is for.\n\nFor example, if you took a full backup on 7/14 at midnight, and want to\nrestore to 7/18 23:00, run:\ndeclare LL=detail\ndeclare PGData=/path/to/data\ndeclare -i Threads=`nproc`-2\ndeclare BackupSet=20240714-000003F\ndeclare RestoreUntil=\"2024-07-18 23:00\"\npgbackrest restore \\\n --stanza=localhost \\\n --log-level-file=$LL \\\n --log-level-console=$LL \\\n --process-max=${Threads}\n --pg1-path=$PGData \\\n --set=$BackupSet \\\n --type=time --target=\"${RestoreUntil}\"\n\n\n>\n\nOn Fri, Jul 19, 2024 at 10:19 PM Thomas Simpson <[email protected]> wrote:\n\nHi Doug\n\nOn 19-Jul-2024 17:21, Doug Reynolds\n wrote:\n\n\nThomas—\n\n\nWhy are you using logical backups for a database\n this large?  A solution like PgBackRest?  Obviously, if you are\n going to upgrade, but for operational use, that seems to be a\n slow choice.\n\nIn normal operation the server runs as a primary-replica and\n pgbackrest handles backups.Expire the oldest pgbackrest, so as to free up space for a multithreaded pg_dump.   Right when disk space was used up,\n pgbackrest also took a backup during the failed vacuum so going\n back to it (or anything earlier) would also roll forward the WALs\n for recovery to date and put me right back where I am just now by\n running out of space part way through.Who says you have to restore to the failure point?  That's what the \"--target\" option is for.For example, if you took a full backup on 7/14 at midnight, and want to restore to 7/18 23:00, run:declare LL=detaildeclare PGData=/path/to/datadeclare -i Threads=`nproc`-2declare BackupSet=20240714-000003Fdeclare RestoreUntil=\"2024-07-18 23:00\"pgbackrest restore \\    --stanza=localhost \\    --log-level-file=$LL \\    --log-level-console=$LL \\    --process-max=${Threads}    --pg1-path=$PGData \\    --set=$BackupSet \\    --type=time --target=\"${RestoreUntil}\"", "msg_date": "Tue, 23 Jul 2024 01:21:32 -0400", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enhance pg_dump multi-threaded streaming (WAS: Re: filesystem\n full during vacuum - space recovery issues)" }, { "msg_contents": "Hi Andrew,\n\nThis is very interesting.\n\nI had started looking at pg_dumpall trying to work out an approach.  I \nnoticed parallel.c essentially already does all the thread creation and \ncoordination that I knew would be needed. Given that is a solved \nproblem, I started to look further (continued below).\n\n\nOn 22-Jul-2024 11:50, Andrew Dunstan wrote:\n>\n> On 2024-07-19 Fr 9:46 AM, Thomas Simpson wrote:\n>>\n>> Hi Scott,\n>>\n>> I realize some of the background was snipped on what I sent to the \n>> hacker list, I'll try to fill in the details.\n>>\n>> Short background is very large database ran out of space during \n>> vacuum full taking down the server.  There is a replica which was \n>> applying the WALs and so it too ran out of space.  On restart after \n>> clearing some space, the database came back up but left over the \n>> in-progress rebuild files.  I've cleared that replica and am using it \n>> as my rebuild target just now.\n>>\n>> Trying to identify the 'orphan' files and move them away always led \n>> to the database spotting the supposedly unused files having gone and \n>> refusing to start, so I had no successful way to clean up and get \n>> space back.\n>>\n>> Last resort after discussion is pg_dumpall & reload.  
I'm doing this \n>> via a network pipe (netcat) as I do not have the vast amount of \n>> storage necessary for the dump file to be stored (in any format).\n>>\n>> On 19-Jul-2024 09:26, Scott Ribe wrote:\n>>> Do you actually have 100G networking between the nodes? Because if \n>>> not, a single CPU should be able to saturate 10G.\n>> Servers connect via 10G WAN; sending is not the issue, it's \n>> application of the incoming stream on the destination which is \n>> bottlenecked.\n>>> Likewise the receiving end would need disk capable of keeping up. \n>>> Which brings up the question, why not write to disk, but directly to \n>>> the destination rather than write locally then copy?\n>> In this case, it's not a local write, it's piped via netcat.\n>>> Do you require dump-reload because of suspected corruption? That's a \n>>> tough one. But if not, if the goal is just to get up and running on \n>>> a new server, why not pg_basebackup, streaming replica, promote? \n>>> That depends on the level of data modification activity being low \n>>> enough that pg_basebackup can keep up with WAL as it's generated and \n>>> apply it faster than new WAL comes in, but given that your server is \n>>> currently keeping up with writing that much WAL and flushing that \n>>> many changes, seems likely it would keep up as long as the network \n>>> connection is fast enough. Anyway, in that scenario, you don't need \n>>> to care how long pg_basebackup takes.\n>>>\n>>> If you do need a dump/reload because of suspected corruption, the \n>>> only thing I can think of is something like doing it a table at a \n>>> time--partitioning would help here, if practical.\n>>\n>> The basebackup is, to the best of my understanding, essentially just \n>> copying the database files.  Since the failed vacuum has left extra \n>> files, my expectation is these too would be copied, leaving me in the \n>> same position I started in.  If I'm wrong, please tell me as that \n>> would be vastly quicker - it is how I originally set up the replica \n>> and it took only a few hours on the 10G link.\n>>\n>> The inability to get a clean start if I move any files out the way \n>> leads me to be concerned for some underlying corruption/issue and the \n>> recommendation earlier in the discussion was opt for dump/reload as \n>> the fail-safe.\n>>\n>> Resigned to my fate, my thoughts were to see if there is a way to \n>> improve the dump-reload approach for the future.  Since dump-reload \n>> is the ultimate upgrade suggestion in the documentation, it seems \n>> worthwhile to see if there is a way to improve the performance of \n>> that especially as very large databases like mine are a thing with \n>> PostgreSQL.  From a quick review of pg_dump.c (I'm no expert on it \n>> obviously), it feels like it's already doing most of what needs done \n>> and the addition is some sort of multi-thread coordination with a \n>> restore client to ensure each thread can successfully complete each \n>> task it has before accepting more work.  I realize that's actually \n>> difficult to implement.\n>>\n>>\n>\n> There is a plan for a non-text mode for pg_dumpall. I have started \n> work on it, and hope to have a WIP patch in a month or so. It's not my \n> intention to parallelize it for the first cut, but it could definitely \n> be parallelizable in future. However, it will require writing to disk \n> somewhere, albeit that the data will be compressed. 
It's well nigh \n> impossible to parallelize text format dumps.\n>\n> Restoration of custom and directory format dumps has long been \n> parallelized. Parallel dumps require directory format, and so will \n> non-text pg_dumpall.\n>\n\nMy general approach (which I'm sure is naive) was:\n\nAdd to pg_dumpall the concept of backup phase and I have the basic hooks \nin place.  0 = role grants etc.  The stuff before dumping actual \ndatabases.  I intercepted the fprintf(OPF to a hook function that for \nnormal run just ends up doing the same as fprintf but for my parallel \nmode, it has a hook to send the info via the network (still to be done \nbut I think I may need to alter the fprintf stuff with more granularity \nof what is being processed at each output to help this part, such as \noutputRoleCreate, outputComment etc.).\n\nEach subsequent phase is a whole database - increment at each pg_dump \ncall.  The actual pg_dump is to get a new format, -F N for network; \nbased around directory dump as the base, my intention was to make \nmultiple network pipes to send the data in place of the files within the \ndirectory.  Essentially relying on whatever is already done to organize \nparallel dumps to disk to be sufficient for coordinating network streaming.\n\nThe restore side needs to do network listen plus some handshaking to \nconfirm completion of the incoming phases, any necessary dependency \ntracking on restore etc.\n\nMy goal was to actively avoid the disk usage part through the \ncoordination over the network between dump and restore even though my \nstarting point is the pg_backup_directory code.  Any problem on the \nrestore side would feed back and halt the dump side in error so this is \na new failure mode compared with how it works just now.\n\nI'll hold off a bit as I'm very interested in any feedback you have, \nparticularly if you see serious flaws in my though process here.\n\n>\n> cheers\n>\n>\n> andrew\n>\n>\n>\n> -- \n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\nThanks\n\nTom\n\n\n\n\n\n\n\nHi Andrew,\nThis is very interesting.\nI had started looking at pg_dumpall trying to work out an\n approach.  I noticed parallel.c essentially already does all the\n thread creation and coordination that I knew would be needed. \n Given that is a solved problem, I started to look further\n (continued below).\n\nOn 22-Jul-2024 11:50, Andrew Dunstan\n wrote:\n\n\n\n On 2024-07-19 Fr 9:46 AM, Thomas Simpson wrote:\n \n\n\n Hi Scott,\n \n\n I realize some of the background was snipped on what I sent to\n the hacker list, I'll try to fill in the details.\n \n\n Short background is very large database ran out of space during\n vacuum full taking down the server.  There is a replica which\n was applying the WALs and so it too ran out of space.  On\n restart after clearing some space, the database came back up but\n left over the in-progress rebuild files.  I've cleared that\n replica and am using it as my rebuild target just now.\n \n\n Trying to identify the 'orphan' files and move them away always\n led to the database spotting the supposedly unused files having\n gone and refusing to start, so I had no successful way to clean\n up and get space back.\n \n\n Last resort after discussion is pg_dumpall & reload.  I'm\n doing this via a network pipe (netcat) as I do not have the vast\n amount of storage necessary for the dump file to be stored (in\n any format).\n \n\n On 19-Jul-2024 09:26, Scott Ribe wrote:\n \nDo you actually have 100G networking\n between the nodes? 
Because if not, a single CPU should be able\n to saturate 10G.\n \n\n Servers connect via 10G WAN; sending is not the issue, it's\n application of the incoming stream on the destination which is\n bottlenecked.\n \nLikewise the receiving end would need\n disk capable of keeping up. Which brings up the question, why\n not write to disk, but directly to the destination rather than\n write locally then copy?\n \n\n In this case, it's not a local write, it's piped via netcat.\n \nDo you require dump-reload because of\n suspected corruption? That's a tough one. But if not, if the\n goal is just to get up and running on a new server, why not\n pg_basebackup, streaming replica, promote? That depends on the\n level of data modification activity being low enough that\n pg_basebackup can keep up with WAL as it's generated and apply\n it faster than new WAL comes in, but given that your server is\n currently keeping up with writing that much WAL and flushing\n that many changes, seems likely it would keep up as long as\n the network connection is fast enough. Anyway, in that\n scenario, you don't need to care how long pg_basebackup takes.\n \n\n If you do need a dump/reload because of suspected corruption,\n the only thing I can think of is something like doing it a\n table at a time--partitioning would help here, if practical.\n \n\n\n The basebackup is, to the best of my understanding, essentially\n just copying the database files.  Since the failed vacuum has\n left extra files, my expectation is these too would be copied,\n leaving me in the same position I started in.  If I'm wrong,\n please tell me as that would be vastly quicker - it is how I\n originally set up the replica and it took only a few hours on\n the 10G link.\n \n\n The inability to get a clean start if I move any files out the\n way leads me to be concerned for some underlying\n corruption/issue and the recommendation earlier in the\n discussion was opt for dump/reload as the fail-safe.\n \n\n Resigned to my fate, my thoughts were to see if there is a way\n to improve the dump-reload approach for the future.  Since\n dump-reload is the ultimate upgrade suggestion in the\n documentation, it seems worthwhile to see if there is a way to\n improve the performance of that especially as very large\n databases like mine are a thing with PostgreSQL.  From a quick\n review of pg_dump.c (I'm no expert on it obviously), it feels\n like it's already doing most of what needs done and the addition\n is some sort of multi-thread coordination with a restore client\n to ensure each thread can successfully complete each task it has\n before accepting more work.  I realize that's actually difficult\n to implement.\n \n\n\n\n\n There is a plan for a non-text mode for pg_dumpall. I have started\n work on it, and hope to have a WIP patch in a month or so. It's\n not my intention to parallelize it for the first cut, but it could\n definitely be parallelizable in future. However, it will require\n writing to disk somewhere, albeit that the data will be\n compressed. It's well nigh impossible to parallelize text format\n dumps.\n \n\n Restoration of custom and directory format dumps has long been\n parallelized. Parallel dumps require directory format, and so will\n non-text pg_dumpall.\n \n\n\n\nMy general approach (which I'm sure is naive) was:\nAdd to pg_dumpall the concept of backup phase and I have the\n basic hooks in place.  0 = role grants etc.  The stuff before\n dumping actual databases.  
I intercepted the fprintf(OPF to a hook\n function that for normal run just ends up doing the same as\n fprintf but for my parallel mode, it has a hook to send the info\n via the network (still to be done but I think I may need to alter\n the fprintf stuff with more granularity of what is being processed\n at each output to help this part, such as outputRoleCreate,\n outputComment etc.).\n\nEach subsequent phase is a whole database - increment at each\n pg_dump call.  The actual pg_dump is to get a new format, -F N for\n network; based around directory dump as the base, my intention was\n to make multiple network pipes to send the data in place of the\n files within the directory.  Essentially relying on whatever is\n already done to organize parallel dumps to disk to be sufficient\n for coordinating network streaming.\n\nThe restore side needs to do network listen plus some handshaking\n to confirm completion of the incoming phases, any necessary\n dependency tracking on restore etc.\n\nMy goal was to actively avoid the disk usage part through the\n coordination over the network between dump and restore even though\n my starting point is the pg_backup_directory code.  Any problem on\n the restore side would feed back and halt the dump side in error\n so this is a new failure mode compared with how it works just now.\n\nI'll hold off a bit as I'm very interested in any feedback you\n have, particularly if you see serious flaws in my though process\n here.\n\n\n\n cheers\n \n\n\n andrew\n \n\n\n\n --\n \n Andrew Dunstan\n \n EDB: https://www.enterprisedb.com\n\n\n\nThanks\nTom", "msg_date": "Tue, 23 Jul 2024 10:39:58 -0400", "msg_from": "Thomas Simpson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Enhance pg_dump multi-threaded streaming (WAS: Re: filesystem\n full during vacuum - space recovery issues)" } ]
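As a rough point of comparison for the streaming idea discussed in this thread, the sketch below shows how far one can already get with existing tools by piping per-table data dumps straight from the old server into the new one, so nothing is staged on local disk. It is only a workaround sketch for a quiesced database: the connection strings, job count and schema filter are placeholders, the per-table dumps do not share a synchronized snapshot, and dependency handling beyond the pre-data/post-data split is glossed over. It is not the proposed pg_dump network mode.

#!/bin/bash
# Placeholders: adjust SRC/DST connection strings and JOBS for a real run.
SRC="host=oldserver dbname=verylargedb"
DST="host=newserver dbname=verylargedb"
JOBS=8

# Globals (roles, tablespaces) and pre-data schema first, serially.
pg_dumpall --globals-only -d "$SRC" | psql -d "$DST"
pg_dump --section=pre-data -d "$SRC" | psql -d "$DST"

# Stream table data in parallel, one network pipe per table.
# (Table names containing quotes or other odd characters would need extra care.)
psql -At -d "$SRC" -c "SELECT format('%I.%I', schemaname, tablename)
                       FROM pg_tables
                       WHERE schemaname NOT IN ('pg_catalog','information_schema')" |
  xargs -P "$JOBS" -I{} sh -c \
    'pg_dump --data-only --table="{}" -d "$0" | psql -d "$1"' "$SRC" "$DST"

# Indexes, constraints and other post-data objects last.
pg_dump --section=post-data -d "$SRC" | psql -d "$DST"

The built-in pg_dump -Fd -j N / pg_restore -j N pair remains the supported way to parallelize when scratch space for a directory-format dump is available.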
[ { "msg_contents": "I came across a query that returned incorrect results and I traced it\ndown to being caused by duplicate unique key values in an inheritance\ntable. As a simple example, consider\n\ncreate table p (a int primary key, b int);\ncreate table c () inherits (p);\n\ninsert into p select 1, 1;\ninsert into c select 1, 2;\n\nselect a, b from p;\n a | b\n---+---\n 1 | 1\n 1 | 2\n(2 rows)\n\nexplain (verbose, costs off)\nselect a, b from p group by a;\n QUERY PLAN\n--------------------------------------\n HashAggregate\n Output: p.a, p.b\n Group Key: p.a\n -> Append\n -> Seq Scan on public.p p_1\n Output: p_1.a, p_1.b\n -> Seq Scan on public.c p_2\n Output: p_2.a, p_2.b\n(8 rows)\n\nThe parser considers 'p.b' functionally dependent on the group by\ncolumn 'p.a' because 'p.a' is identified as the primary key for table\n'p'. However, this causes confusion for the executor when determining\nwhich 'p.b' value should be returned for each group. In my case, I\nobserved that sorted and hashed aggregation produce different results\nfor the same query.\n\nReading the doc, it seems that this is a documented limitation of the\ninheritance feature that we would have duplicate unique key values in\ninheritance tables. Even adding a unique constraint to the children\ndoes not prevent duplication compared to the parent.\n\nAs a workaround for this issue, I'm considering whether we can skip\nchecking functional dependency on primary keys for inheritance\nparents, given that we cannot guarantee uniqueness on the keys in this\ncase. Maybe something like below.\n\n@@ -1421,7 +1427,9 @@ check_ungrouped_columns_walker(Node *node,\n Assert(var->varno > 0 &&\n (int) var->varno <= list_length(context->pstate->p_rtable));\n rte = rt_fetch(var->varno, context->pstate->p_rtable);\n- if (rte->rtekind == RTE_RELATION)\n+ if (rte->rtekind == RTE_RELATION &&\n+ !(rte->relkind == RELKIND_RELATION &&\n+ rte->inh && has_subclass(rte->relid)))\n {\n if (check_functional_grouping(rte->relid,\n\nAny thoughts?\n\nThanks\nRichard\n\n\n", "msg_date": "Tue, 16 Jul 2024 08:44:57 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Duplicate unique key values in inheritance tables" }, { "msg_contents": "On Tue, 16 Jul 2024 at 12:45, Richard Guo <[email protected]> wrote:\n> As a workaround for this issue, I'm considering whether we can skip\n> checking functional dependency on primary keys for inheritance\n> parents, given that we cannot guarantee uniqueness on the keys in this\n> case. Maybe something like below.\n>\n> @@ -1421,7 +1427,9 @@ check_ungrouped_columns_walker(Node *node,\n> Assert(var->varno > 0 &&\n> (int) var->varno <= list_length(context->pstate->p_rtable));\n> rte = rt_fetch(var->varno, context->pstate->p_rtable);\n> - if (rte->rtekind == RTE_RELATION)\n> + if (rte->rtekind == RTE_RELATION &&\n> + !(rte->relkind == RELKIND_RELATION &&\n> + rte->inh && has_subclass(rte->relid)))\n> {\n> if (check_functional_grouping(rte->relid,\n>\n> Any thoughts?\n\nThe problem with doing that is that it might mean queries that used to\nwork no longer work. 
CREATE VIEW could also fail where it used to\nwork which could render pg_dumps unrestorable.\n\nBecause it's a parser issue, I don't think we can fix it the same way\nas a5be4062f was fixed.\n\nI don't have any ideas on what we can do about this right now, but\nthought it was worth sharing the above.\n\nDavid\n\n\n", "msg_date": "Tue, 16 Jul 2024 13:01:15 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Duplicate unique key values in inheritance tables" }, { "msg_contents": "On Monday, July 15, 2024, David Rowley <[email protected]> wrote:\n\n> On Tue, 16 Jul 2024 at 12:45, Richard Guo <[email protected]> wrote:\n> > As a workaround for this issue, I'm considering whether we can skip\n> > checking functional dependency on primary keys for inheritance\n> > parents, given that we cannot guarantee uniqueness on the keys in this\n> > case.\n>\n> Because it's a parser issue, I don't think we can fix it the same way\n> as a5be4062f was fixed.\n>\n> I don't have any ideas on what we can do about this right now, but\n> thought it was worth sharing the above.\n>\n\nAdd another note to caveats in the docs and call it a feature. We produce\na valid answer for the data model encountered. The non-determinism isn’t\nwrong, it’s just a poorly written query/model with non-deterministic\nresults. Since v15 we have an any_value aggregate - we basically are\napplying this to the dependent columns implicitly. A bit of revisionist\nhistory but I’d rather do that than break said queries. Especially at\nparse time; I’d be a bit more open to execution-time enforcement if\nfunctional dependency on the id turns out to have actually been violated.\nBut people want, and in other products have, any_value implicit aggregation\nin this situation so it’s hard to say it is wrong even if we otherwise take\nthe position that we will not accept it.\n\nDavid J.\n\nOn Monday, July 15, 2024, David Rowley <[email protected]> wrote:On Tue, 16 Jul 2024 at 12:45, Richard Guo <[email protected]> wrote:\n> As a workaround for this issue, I'm considering whether we can skip\n> checking functional dependency on primary keys for inheritance\n> parents, given that we cannot guarantee uniqueness on the keys in this\n> case.\nBecause it's a parser issue, I don't think we can fix it the same way\nas a5be4062f was fixed.\n\nI don't have any ideas on what we can do about this right now, but\nthought it was worth sharing the above.\nAdd another note to caveats in the docs and call it a feature.  We produce a valid answer for the data model encountered.  The non-determinism isn’t wrong, it’s just a poorly written query/model with non-deterministic results. Since v15 we have an any_value aggregate - we basically are applying this to the dependent columns implicitly.  A bit of revisionist history but I’d rather do that than break said queries.  Especially at parse time; I’d be a bit more open to execution-time enforcement if functional dependency on the id turns out to have actually been violated.  But people want, and in other products have, any_value implicit aggregation in this situation so it’s hard to say it is wrong even if we otherwise take the position that we will not accept it.David J.", "msg_date": "Mon, 15 Jul 2024 18:28:54 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Duplicate unique key values in inheritance tables" }, { "msg_contents": "On Tue, 16 Jul 2024 at 13:28, David G. 
Johnston\n<[email protected]> wrote:\n> Add another note to caveats in the docs and call it a feature. We produce a valid answer for the data model encountered. The non-determinism isn’t wrong, it’s just a poorly written query/model with non-deterministic results. Since v15 we have an any_value aggregate - we basically are applying this to the dependent columns implicitly. A bit of revisionist history but I’d rather do that than break said queries. Especially at parse time; I’d be a bit more open to execution-time enforcement if functional dependency on the id turns out to have actually been violated. But people want, and in other products have, any_value implicit aggregation in this situation so it’s hard to say it is wrong even if we otherwise take the position that we will not accept it.\n\nI think it might be best just to ignore it and do nothing. Maybe it\nwould be worth putting something into the docs about it if people from\nuserland come complaining about a bug as the doc mention might stop\nthem wasting their time reporting something we already know about.\nOtherwise, I feel the docs would just draw attention to something that\nI'd personally rather people didn't do. As you say, using any_value()\nwould be the way we'd encourage people to do it if they don't care\nwhich value of the ungrouped column they want, so documenting\nsomething else doesn't seem quite right to me.\n\nDavid\n\n\n", "msg_date": "Thu, 18 Jul 2024 22:25:50 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Duplicate unique key values in inheritance tables" } ]
[ { "msg_contents": "Hello,\n\nI have been considering adding a user script that performs pre-checks\nbefore executing the start, stop, and restart operations in pg_ctl. I\nbelieve it is necessary for pg_ctl to support an extension that can prevent\nvarious issues that might occur when using start and stop. To this end, I\nhave sought a way for users to define and use their own logic. The existing\nbehavior remains unchanged, and the feature can be used optionally when\nneeded.\n\nThe verification of the code was carried out using the methods described\nbelow, and I would like to request additional opinions or feedback. Tests\nwere conducted using make check and through direct testing under various\nscenarios. As this is my first contribution, there might be aspects I\nmissed or incorrectly designed.\n\nI would appreciate it if you could review this.\n\nThank you.\n\n\nMyoungjun Kim / South Korea", "msg_date": "Tue, 16 Jul 2024 15:39:38 +0900", "msg_from": "=?UTF-8?B?6rmA66qF7KSA?= <[email protected]>", "msg_from_op": true, "msg_subject": "[ pg_ctl ] Review Request for Adding Pre-check User Script Feature" }, { "msg_contents": "Hello,\n\nCan you briefly explain what’s the issues you are going through the patch?\n\n\nOn Tue, 16 Jul 2024 at 11:40 AM, 김명준 <[email protected]> wrote:\n\n> Hello,\n>\n> I have been considering adding a user script that performs pre-checks\n> before executing the start, stop, and restart operations in pg_ctl. I\n> believe it is necessary for pg_ctl to support an extension that can prevent\n> various issues that might occur when using start and stop. To this end, I\n> have sought a way for users to define and use their own logic. The existing\n> behavior remains unchanged, and the feature can be used optionally when\n> needed.\n>\n> The verification of the code was carried out using the methods described\n> below, and I would like to request additional opinions or feedback. Tests\n> were conducted using make check and through direct testing under various\n> scenarios. As this is my first contribution, there might be aspects I\n> missed or incorrectly designed.\n>\n> I would appreciate it if you could review this.\n>\n> Thank you.\n>\n>\n> Myoungjun Kim / South Korea\n>\n\nHello,Can you briefly explain what’s the issues you are going through the patch?On Tue, 16 Jul 2024 at 11:40 AM, 김명준 <[email protected]> wrote:Hello,I have been considering adding a user script that performs pre-checks before executing the start, stop, and restart operations in pg_ctl. I believe it is necessary for pg_ctl to support an extension that can prevent various issues that might occur when using start and stop. To this end, I have sought a way for users to define and use their own logic. The existing behavior remains unchanged, and the feature can be used optionally when needed.The verification of the code was carried out using the methods described below, and I would like to request additional opinions or feedback. Tests were conducted using make check and through direct testing under various scenarios. As this is my first contribution, there might be aspects I missed or incorrectly designed.I would appreciate it if you could review this.Thank you.Myoungjun Kim / South Korea", "msg_date": "Tue, 16 Jul 2024 11:48:44 +0500", "msg_from": "Zaid Shabbir <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ pg_ctl ] Review Request for Adding Pre-check User Script\n Feature" }, { "msg_contents": "Hi,\n\n0. For more understanding, can you give me an example about your patch?\n1. 
Instead of using chmod itself, it would be better to\nuse chmod_recursive().\n2. It needs to follow the invent convention - it includes 4 spaces now.\n\nThank you,\n\nKisoon Kwon\n\n2024년 7월 16일 (화) 오후 3:40, 김명준 <[email protected]>님이 작성:\n\n> Hello,\n>\n> I have been considering adding a user script that performs pre-checks\n> before executing the start, stop, and restart operations in pg_ctl. I\n> believe it is necessary for pg_ctl to support an extension that can prevent\n> various issues that might occur when using start and stop. To this end, I\n> have sought a way for users to define and use their own logic. The existing\n> behavior remains unchanged, and the feature can be used optionally when\n> needed.\n>\n> The verification of the code was carried out using the methods described\n> below, and I would like to request additional opinions or feedback. Tests\n> were conducted using make check and through direct testing under various\n> scenarios. As this is my first contribution, there might be aspects I\n> missed or incorrectly designed.\n>\n> I would appreciate it if you could review this.\n>\n> Thank you.\n>\n>\n> Myoungjun Kim / South Korea\n>\n\nHi,0. For more understanding, can you give me an example about your patch?1. Instead of using chmod itself, it would be better to use chmod_recursive().2. It needs to follow the invent convention - it includes 4 spaces now.Thank you,Kisoon Kwon2024년 7월 16일 (화) 오후 3:40, 김명준 <[email protected]>님이 작성:Hello,I have been considering adding a user script that performs pre-checks before executing the start, stop, and restart operations in pg_ctl. I believe it is necessary for pg_ctl to support an extension that can prevent various issues that might occur when using start and stop. To this end, I have sought a way for users to define and use their own logic. The existing behavior remains unchanged, and the feature can be used optionally when needed.The verification of the code was carried out using the methods described below, and I would like to request additional opinions or feedback. Tests were conducted using make check and through direct testing under various scenarios. As this is my first contribution, there might be aspects I missed or incorrectly designed.I would appreciate it if you could review this.Thank you.Myoungjun Kim / South Korea", "msg_date": "Tue, 16 Jul 2024 17:25:50 +0900", "msg_from": "Kisoon Kwon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ pg_ctl ] Review Request for Adding Pre-check User Script\n Feature" }, { "msg_contents": "Hello,\n\nThank you for your response.\n\n 0. 
Here is an example of what I intended.\n What I intended is to add pre-check tasks before executing pg_ctl start,\nstop, and restart using the -A and -Z options.\n\n=========================================\n[test@test]$ cat true.sh\n#!/bin/bash\necho 'true'\nexit 0\n=========================================\n[test@test]$ cat false.sh\n#!/bin/bash\necho 'false'\nexit 1\n=========================================\n[test@test]$ pg_ctl start -A false.sh\nfalse\npg_ctl: pre-check for start failed, aborting start\n=========================================\n[test@test]$ pg_ctl start -A true.sh\ntrue\nwaiting for server to start....2024-07-19 00:16:22.768 UTC [167505] LOG:\n starting PostgreSQL 18devel on\n~\n~\n done\nserver started\n=========================================\n[test@test]$ pg_ctl stop -Z false.sh\nfalse\npg_ctl: pre-check for stop failed, aborting stop\n=========================================\n[test@test]$ pg_ctl stop -Z true.sh\ntrue\nwaiting for server to shut down....2024-07-19 00:21:06.282 UTC [167515]\nLOG: received fast shutdown request\n~\n~\n done\nserver stopped\n=========================================\n[test@test]$ pg_ctl restart -A false.sh -Z false.sh\nfalse\npg_ctl: pre-check script for stop failed, aborting stop\n=========================================\n[test@test]$ pg_ctl restart -A false.sh -Z true.sh\ntrue\nwaiting for server to shut down...2024-07-19 00:24:39.640 UTC [167530] LOG:\n received fast shutdown request\n~\n~\n done\nserver stopped\nfalse\npg_ctl: pre-check script for start failed, aborting start\n=========================================\n\n1. I plan to change it to chmod_recursive() instead of using chmod itself.\n2. I will modify it to use 4 spaces instead of a tab.\n\nThank you,\n\nMyoungjun Kim\n\n2024년 7월 16일 (화) 오후 5:26, Kisoon Kwon <[email protected]>님이 작성:\n\n> Hi,\n>\n> 0. For more understanding, can you give me an example about your patch?\n> 1. Instead of using chmod itself, it would be better to\n> use chmod_recursive().\n> 2. It needs to follow the invent convention - it includes 4 spaces now.\n>\n> Thank you,\n>\n> Kisoon Kwon\n>\n> 2024년 7월 16일 (화) 오후 3:40, 김명준 <[email protected]>님이 작성:\n>\n>> Hello,\n>>\n>> I have been considering adding a user script that performs pre-checks\n>> before executing the start, stop, and restart operations in pg_ctl. I\n>> believe it is necessary for pg_ctl to support an extension that can prevent\n>> various issues that might occur when using start and stop. To this end, I\n>> have sought a way for users to define and use their own logic. The existing\n>> behavior remains unchanged, and the feature can be used optionally when\n>> needed.\n>>\n>> The verification of the code was carried out using the methods described\n>> below, and I would like to request additional opinions or feedback. Tests\n>> were conducted using make check and through direct testing under various\n>> scenarios. As this is my first contribution, there might be aspects I\n>> missed or incorrectly designed.\n>>\n>> I would appreciate it if you could review this.\n>>\n>> Thank you.\n>>\n>>\n>> Myoungjun Kim / South Korea\n>>\n>\n\nHello, Thank you for your response.   0. Here is an example of what I intended. 
\n What I intended is to add pre-check tasks before executing pg_ctl start, stop, and restart using the -A and -Z options.\n\n=========================================[test@test]$ cat true.sh#!/bin/bashecho 'true'exit 0=========================================[test@test]$ cat false.sh#!/bin/bashecho 'false'exit 1=========================================[test@test]$ pg_ctl start -A false.shfalsepg_ctl: pre-check for start failed, aborting start=========================================[test@test]$ pg_ctl start -A true.shtruewaiting for server to start....2024-07-19 00:16:22.768 UTC [167505] LOG:  starting PostgreSQL 18devel on ~~ doneserver started=========================================[test@test]$ pg_ctl stop -Z false.shfalsepg_ctl: pre-check for stop failed, aborting stop=========================================[test@test]$ pg_ctl stop -Z true.shtruewaiting for server to shut down....2024-07-19 00:21:06.282 UTC [167515] LOG:  received fast shutdown request~~ doneserver stopped=========================================[test@test]$ pg_ctl restart -A false.sh -Z false.shfalsepg_ctl: pre-check script for stop failed, aborting stop=========================================[test@test]$ pg_ctl restart -A false.sh -Z true.shtruewaiting for server to shut down...2024-07-19 00:24:39.640 UTC [167530] LOG:  received fast shutdown request~~ doneserver stoppedfalsepg_ctl: pre-check script for start failed, aborting start=========================================1. I plan to change it to chmod_recursive() instead of using chmod itself.2. I will modify it to use 4 spaces instead of a tab.Thank you,Myoungjun Kim2024년 7월 16일 (화) 오후 5:26, Kisoon Kwon <[email protected]>님이 작성:Hi,0. For more understanding, can you give me an example about your patch?1. Instead of using chmod itself, it would be better to use chmod_recursive().2. It needs to follow the invent convention - it includes 4 spaces now.Thank you,Kisoon Kwon2024년 7월 16일 (화) 오후 3:40, 김명준 <[email protected]>님이 작성:Hello,I have been considering adding a user script that performs pre-checks before executing the start, stop, and restart operations in pg_ctl. I believe it is necessary for pg_ctl to support an extension that can prevent various issues that might occur when using start and stop. To this end, I have sought a way for users to define and use their own logic. The existing behavior remains unchanged, and the feature can be used optionally when needed.The verification of the code was carried out using the methods described below, and I would like to request additional opinions or feedback. Tests were conducted using make check and through direct testing under various scenarios. As this is my first contribution, there might be aspects I missed or incorrectly designed.I would appreciate it if you could review this.Thank you.Myoungjun Kim / South Korea", "msg_date": "Fri, 19 Jul 2024 09:30:52 +0900", "msg_from": "=?UTF-8?B?6rmA66qF7KSA?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ pg_ctl ] Review Request for Adding Pre-check User Script\n Feature" } ]
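To make the proposed -A/-Z hooks a little more concrete than the true.sh/false.sh toys above, here is a sketch of the kind of pre-check script they could run; the data directory path and free-space threshold are assumptions, and the only contract relied on is the one shown in the examples above (exit 0 lets pg_ctl proceed, any non-zero exit aborts the operation).

#!/bin/bash
# precheck_start.sh - hypothetical pre-check for "pg_ctl start -A precheck_start.sh".
# PGDATA and MIN_FREE_KB are placeholders for a real deployment.
PGDATA="${PGDATA:-/var/lib/postgresql/data}"
MIN_FREE_KB=1048576   # refuse to start with less than ~1GB free

free_kb=$(df -Pk "$PGDATA" | awk 'NR==2 {print $4}')
if [ "$free_kb" -lt "$MIN_FREE_KB" ]; then
    echo "precheck: only ${free_kb}kB free under ${PGDATA}, need ${MIN_FREE_KB}kB" >&2
    exit 1    # non-zero exit makes pg_ctl abort the start
fi
exit 0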
[ { "msg_contents": "Hi everyone,\nIn the postgrs_fdw deparser code, deparseFromExprForRel() appends an alias\nto the remote query based on the 'use_alias' boolean flag.\n\nFor a simple query, 'use_alias' is determined by\n`bms_membership(scanrel->relids) == BMS_MULTIPLE` condition. Example:\nhttps://github.com/postgres/postgres/blob/master/contrib/postgres_fdw/deparse.c#L1385C2-L1388\n\nBut for JOINs, 'use_alias' is always hardcoded to true. Examples:\n1.\nhttps://github.com/postgres/postgres/blob/master/contrib/postgres_fdw/deparse.c#L2067-L2069\n2.\nhttps://github.com/postgres/postgres/blob/master/contrib/postgres_fdw/deparse.c#L2336-L2337\n3.\nhttps://github.com/postgres/postgres/blob/master/contrib/postgres_fdw/deparse.c#L2419-L2420\n\nThis seems like an extra-protection in case of joins. But it could happen\nthat the join is across 2 different foreign postgres-servers (means each\nforeign server will do SCAN only, and the JOIN will happen at the upper\nlayer). In that case, using aliases in the remote queries seem redundant to\nme.\nPlease correct me if I am missing something. Can we note pass\n`bms_membership(foreignrel->relids) == BMS_MULTIPLE` instead?\n\n\n-- \nRegards\nRajan Pandey\nSoftware Developer, AWS\n\nHi everyone,In the postgrs_fdw deparser code, deparseFromExprForRel() appends an alias to the remote query based on the 'use_alias' boolean flag.For a simple query, 'use_alias' is determined by `bms_membership(scanrel->relids) == BMS_MULTIPLE` condition. Example: https://github.com/postgres/postgres/blob/master/contrib/postgres_fdw/deparse.c#L1385C2-L1388But for JOINs, 'use_alias' is always hardcoded to true. Examples:1. https://github.com/postgres/postgres/blob/master/contrib/postgres_fdw/deparse.c#L2067-L20692. https://github.com/postgres/postgres/blob/master/contrib/postgres_fdw/deparse.c#L2336-L23373. https://github.com/postgres/postgres/blob/master/contrib/postgres_fdw/deparse.c#L2419-L2420This seems like an extra-protection in case of joins. But it could happen that the join is across 2 different foreign postgres-servers (means each foreign server will do SCAN only, and the JOIN will happen at the upper layer). In that case, using aliases in the remote queries seem redundant to me.Please correct me if I am missing something. Can we note pass `bms_membership(foreignrel->relids) == BMS_MULTIPLE` instead?-- RegardsRajan PandeySoftware Developer, AWS", "msg_date": "Tue, 16 Jul 2024 16:19:44 +0530", "msg_from": "Rajan Pandey <[email protected]>", "msg_from_op": true, "msg_subject": "Why is 'use_alias' hardcoded to true in deparseFromExprForRel() for\n some cases" }, { "msg_contents": "Rajan Pandey <[email protected]> writes:\n> This seems like an extra-protection in case of joins. But it could happen\n> that the join is across 2 different foreign postgres-servers (means each\n> foreign server will do SCAN only, and the JOIN will happen at the upper\n> layer). In that case, using aliases in the remote queries seem redundant to\n> me.\n> Please correct me if I am missing something. 
Can we not pass\n> `bms_membership(foreignrel->relids) == BMS_MULTIPLE` instead?\n\nI'd be very very careful about changing that, because presence\nof a join alias affects the parser's behavior in non-obvious ways.\nUnless you've got some compellingly good reason for messing\nwith this, I doubt it is worth the investigatory effort to\nconvince ourselves that it wouldn't introduce any bugs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 16 Jul 2024 08:40:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is 'use_alias' hardcoded to true in deparseFromExprForRel()\n for some cases" } ]
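For anyone who wants to see where the alias actually appears, EXPLAIN (VERBOSE) prints the deparsed query in its "Remote SQL" line. A quick sketch, assuming two postgres_fdw foreign tables ft1 and ft2 with an id column that live on the same foreign server (a join between tables on two different servers is not pushed down at all, so no remote join query - and no alias - is generated in that case):

-- Single-table scan: the deparsed remote query carries no alias.
EXPLAIN (VERBOSE, COSTS OFF)
SELECT * FROM ft1;

-- Pushed-down join: each base relation is aliased in the remote query
-- (r1, r2, ... in current postgres_fdw output).
EXPLAIN (VERBOSE, COSTS OFF)
SELECT *
FROM ft1 JOIN ft2 USING (id);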
[ { "msg_contents": "Hi Hackers,\n\nRecently, I compiled PG17 on the windows. Till PG16 \"ActiveState Perl\", as\ninstructed in the documentation\n<https://www.postgresql.org/docs/16/install-windows-full.html#INSTALL-WINDOWS-FULL-REQUIREMENTS>,\nwas being used successfully on the Windows 10/11 to compile PG.\nHowever, it looks like that \"ActiveState Perl\" is not valid anymore to\ncompile PG17 on Windows 10/11 but documentation\n<https://www.postgresql.org/docs/17/installation-platform-notes.html#WINDOWS-REQUIREMENTS>\nstill\nsuggests it. Therefore, I think documentation needs to be updated.\nMoreover, I had to install \"strawberry's perl\" in order to compile PG17 on\nWindows 10/11. Please check out the thread \"errors building on windows\nusing meson\n<https://www.postgresql.org/message-id/flat/CADK3HHLQ1MNmfXqEvQi36D_MQrheOZPcXv2H3s6otMbSmfwjzg%40mail.gmail.com>\"\nhighlighting the issue.\n\nRegards...\n\n\nYasir Hussain\nBitnine Global Inc.\n\nHi Hackers, Recently, I compiled PG17 on the windows. Till PG16 \"ActiveState Perl\", as instructed in the documentation, was being used successfully on the Windows 10/11 to compile PG. However, it looks like that \"ActiveState Perl\" is not valid anymore to compile PG17 on Windows 10/11 but documentation still suggests it. Therefore, I think documentation needs to be updated. Moreover, I had to install \"strawberry's perl\" in order to compile PG17 on Windows 10/11. Please check out the thread \"errors building on windows using meson\" highlighting the issue. Regards...Yasir HussainBitnine Global Inc.", "msg_date": "Tue, 16 Jul 2024 16:46:46 +0500", "msg_from": "Yasir <[email protected]>", "msg_from_op": true, "msg_subject": "ActiveState Perl is not valid anymore to build PG17 on the Windows\n 10/11 platforms, So Documentation still suggesting it should be updated" }, { "msg_contents": "On 2024-07-16 Tu 7:46 AM, Yasir wrote:\n> Hi Hackers,\n>\n> Recently, I compiled PG17 on the windows. Till PG16 \"ActiveState \n> Perl\", as instructed in the documentation \n> <https://www.postgresql.org/docs/16/install-windows-full.html#INSTALL-WINDOWS-FULL-REQUIREMENTS>, \n> was being used successfully on the Windows 10/11 to compile PG.\n> However, it looks like that \"ActiveState Perl\" is not valid anymore to \n> compile PG17 on Windows 10/11 but documentation \n> <https://www.postgresql.org/docs/17/installation-platform-notes.html#WINDOWS-REQUIREMENTS> still \n> suggests it. Therefore, I think documentation needs to be updated.\n> Moreover, I had to install \"strawberry's perl\" in order to compile \n> PG17 on Windows 10/11. Please check out the thread \"errors building on \n> windows using meson \n> <https://www.postgresql.org/message-id/flat/CADK3HHLQ1MNmfXqEvQi36D_MQrheOZPcXv2H3s6otMbSmfwjzg%40mail.gmail.com>\" \n> highlighting the issue.\n>\n\nSee https://postgr.es/m/[email protected]\n\nI agree we should fix the docco.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-07-16 Tu 7:46 AM, Yasir wrote:\n\n\n\nHi Hackers, \n\n Recently, I compiled PG17 on the windows. Till PG16 \"ActiveState\n Perl\", as instructed in the documentation, was being used\n successfully on the Windows 10/11 to compile PG. \n However, it looks like that \"ActiveState Perl\" is not valid\n anymore to compile PG17 on Windows 10/11 but documentation still suggests it.\n Therefore, I think documentation needs to be updated. \n Moreover, I had to install \"strawberry's perl\" in order to\n compile PG17 on Windows 10/11. 
Please check out the thread \"errors building on windows using meson\"\n highlighting the issue. \n\n\n\n\n\nSee \n https://postgr.es/m/[email protected]\n\nI agree we should fix the docco.\n\n\ncheers\n\n\nandrew\n\n\n\n --\n Andrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 16 Jul 2024 08:23:11 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ActiveState Perl is not valid anymore to build PG17 on the\n Windows 10/11 platforms, So Documentation still suggesting it should be\n updated" } ]
[ { "msg_contents": "Hi hackers,\n\nI'm looking for some input on an issue I've observed. A common pattern\nI've seen is using temporary tables to put data in before updating the\nreal tables. Something roughly like:\n\nOn session start:\nCREATE TEMP TABLE temp_t1 (...) ON COMMIT DELETE ROWS;\n\nOn update:\nBEGIN;\nCOPY temp_t1 FROM STDIN (FORMAT BINARY);\nINSERT INTO t1 (SELECT * FROM temp_t1 ...) ON CONFLICT DO UPDATE SET ...;\n-- potentially some other operations on temp table to put data into real table t1\nCOMMIT;\n\nThis pattern starts to break down under certain exceptional circumstances of\nhigh concurrency. The \"ON COMMIT DELETE ROWS\" does a truncate that is\nfairly expensive and doesn't work well in high-concurrency scenarios. It's\nespecially noticeable under following circumstances:\n- high max_connections setting\n- high number of temp tables per session\n- concurrent writers at fairly short intervals\nImpact is on both TPS on primary as well as that the WAL replay process \non replica becomes completely overloaded (100% cpu even though not\na lot of WAL is being generated)\n\nA very simple pgbench example that showcases degradation (taken\nwith max_connections=2000 to clearly show it).\n\ntest_truncate.sql\n\nbegin;\nset client_min_messages=warning;\ncreate temp table if not exists ut0 (a int, b text, c text) on commit delete rows;\ncommit;\n\ntest_no_truncate.sql\n\nbegin;\nset client_min_messages=warning;\ncreate temp table if not exists ut0 (a int, b text, c text); -- do not do anything on commit\ncommit;\n\npgbench -n -f test_truncate.sql -T 10 postgres -c 1 -j 1\npgbench (18devel)\ntps = 2693.806376 (without initial connection time)\n\npgbench -n -f test_truncate.sql -T 10 postgres -c10 -j 10\npgbench (18devel)\ntps = 12119.358458 (without initial connection time)\n\npgbench -n -f test_no_truncate.sql -T 10 postgres -c 1 -j 1\npgbench (18devel)\ntps = 22685.877281 (without initial connection time)\n\npgbench -n -f test_no_truncate.sql -T 10 postgres -c10 -j 10\npgbench (18devel)\ntps = 200359.458154 (without initial connection time)\n\nFor the test_truncate.sql with 10 threads, WAL replay process is spinning at 100% CPU.\n\nThe reason seems to be that for a large part, the 'on commit delete rows'\nfollows the same code path as a regular truncate.\nThis means:\n* taking an exclusive lock on the temp table (WAL logged)\n* invalidating rel cache (WAL logged)\n\nOn replica process, these exclusive locks are very expensive to replay,\nleading to spending replica most of its cycles trying to get LWLocks on non-fast-path.\nOn primary, degradation on concurrency I believes comes from the rel cache invalidation.\n\nThere doesn't seem to be a good reason for this behavior, as the replica\nhas nothing to do for a temporary table truncation, so this lock doesn't\nneed to be WAL logged.\nThe generated WAL is just fully this stuff. 
LOCK+INVALIDATION+COMMIT.\nAnd it's spinning at 100% CPU trying to replay this.\n\nrmgr: Standby len (rec/tot): 42/ 42, tx: 8154875, lsn: 0/3AFFE270, prev 0/3AFFE218, desc: LOCK xid 8154875 db 5 rel 18188 \nrmgr: Standby len (rec/tot): 42/ 42, tx: 8154875, lsn: 0/3AFFE2A0, prev 0/3AFFE270, desc: LOCK xid 8154875 db 5 rel 18191 \nrmgr: Standby len (rec/tot): 42/ 42, tx: 8154875, lsn: 0/3AFFE2D0, prev 0/3AFFE2A0, desc: LOCK xid 8154875 db 5 rel 18192 \nrmgr: Transaction len (rec/tot): 62/ 62, tx: 8154875, lsn: 0/3AFFE300, prev 0/3AFFE2D0, desc: INVALIDATION ; inval msgs: relcache 18191 relcache 18192\nrmgr: Transaction len (rec/tot): 82/ 82, tx: 8154875, lsn: 0/3AFFE340, prev 0/3AFFE300, desc: COMMIT 2024-07-16 12:56:42.878147 CEST; inval msgs: relcache 18191 relcache 18192\nrmgr: Standby len (rec/tot): 42/ 42, tx: 8154876, lsn: 0/3AFFE398, prev 0/3AFFE340, desc: LOCK xid 8154876 db 5 rel 18188 \nrmgr: Standby len (rec/tot): 42/ 42, tx: 8154876, lsn: 0/3AFFE3C8, prev 0/3AFFE398, desc: LOCK xid 8154876 db 5 rel 18191 \nrmgr: Standby len (rec/tot): 42/ 42, tx: 8154876, lsn: 0/3AFFE3F8, prev 0/3AFFE3C8, desc: LOCK xid 8154876 db 5 rel 18192 \nrmgr: Transaction len (rec/tot): 62/ 62, tx: 8154876, lsn: 0/3AFFE428, prev 0/3AFFE3F8, desc: INVALIDATION ; inval msgs: relcache 18191 relcache 18192\nrmgr: Transaction len (rec/tot): 82/ 82, tx: 8154876, lsn: 0/3AFFE468, prev 0/3AFFE428, desc: COMMIT 2024-07-16 12:56:42.878544 CEST; inval msgs: relcache 18191 relcache 18192\n\nHave people observed this behavior before? Would the community be\nopen to accepting a patch that changes the behavior of ON COMMIT DELETE ROWS\nto not WAL-log the lock taken and/or the relcache inval?\nI think:\n* An exclusive lock needs to be taken on the primary, but does not\nneed to be WAL logged\n* Rel cache invalidation should not be necessary, I think (it currently\nhappens just on the TOAST table+index, not on the regular TEMP table)\n\n-Floris\n\n\n\n", "msg_date": "Tue, 16 Jul 2024 11:47:06 +0000", "msg_from": "Floris Van Nee <[email protected]>", "msg_from_op": true, "msg_subject": "temp table on commit delete rows performance issue" }, { "msg_contents": "Hi,\n\n> I'm looking for some input on an issue I've observed. A common pattern\n> I've seen is using temporary tables to put data in before updating the\n> real tables. Something roughly like:\n>\n> On session start:\n> CREATE TEMP TABLE temp_t1 (...) ON COMMIT DELETE ROWS;\n>\n> On update:\n> BEGIN;\n> COPY temp_t1 FROM STDIN (FORMAT BINARY);\n> INSERT INTO t1 (SELECT * FROM temp_t1 ...) ON CONFLICT DO UPDATE SET ...;\n> -- potentially some other operations on temp table to put data into real table t1\n> COMMIT;\n>\n> This pattern starts to break down under certain exceptional circumstances of\n> high concurrency. The \"ON COMMIT DELETE ROWS\" does a truncate that is\n> fairly expensive and doesn't work well in high-concurrency scenarios. It's\n> especially noticeable under following circumstances:\n> - high max_connections setting\n> - high number of temp tables per session\n> - concurrent writers at fairly short intervals\n> Impact is on both TPS on primary as well as that the WAL replay process\n> on replica becomes completely overloaded (100% cpu even though not\n> a lot of WAL is being generated)\n>\n> [...]\n\nI didn't investigate your particular issue but generally speaking\ncreating a table, even a temporary one, is an expensive operation.\n\nNote that it's far from being a separate file on the disk.
It affects\ncatalog tables, shared buffers, all the corresponding locks, etc. If\nyou have indexes for a temporary table it makes the situation ever\nworse. Sooner or later VACUUM will happen for your bloated catalog,\nand this is not fun under heavy load.\n\nIs there any particular reason why you don't want to simply change the\ntarget table directly? If you do it in a transaction you are safe.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 16 Jul 2024 16:20:57 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: temp table on commit delete rows performance issue" }, { "msg_contents": "\r\n> \r\n> I didn't investigate your particular issue but generally speaking creating a\r\n> table, even a temporary one, is an expensive operation.\r\n> \r\n> Note that it's far from being a seperate file on the disk. It affects catalog\r\n> tables, shared buffers, all the corresponding locks, etc. If you have indexes\r\n> for a temporary table it makes the situation ever worse. Sooner or later\r\n> VACUUM will happen for your bloated catalog, and this is not fun under\r\n> heavy load.\r\n\r\nYes, creation is expensive, but note that this test case does not create \r\nthe temp table for every transaction. It creates it once on startup of the\r\nconnection. So there'll be no catalog bloat as the connections are generally\r\nlong-lived. The part that remains to be expensive (but which was unexpected\r\nto me) is the truncate that does happen for every transaction.\r\n\r\n> \r\n> Is there any particular reason why you don't want to simply change the target\r\n> table directly? If you do it in a transaction you are safe.\r\n> \r\n\r\nThis is a fair point which I didn't explain in my earlier message. We find that for\r\nvarious use cases the temp table approach is a really convenient way of interacting\r\nwith Postgres. You can just easily binary COPY whatever data you want into it without\r\nlooking too much at data size and then handling it from there on the db-side.\r\nDoing the same without this, you'd need to resort to passing the data as query parameters, but \r\nthis has complications like a limit in number of bound params per query. It's definitely possible\r\nto use as well, but the binary COPY to temp table is quite convenient.\r\n\r\n-Floris\r\n\r\n", "msg_date": "Tue, 16 Jul 2024 13:36:33 +0000", "msg_from": "Floris Van Nee <[email protected]>", "msg_from_op": true, "msg_subject": "RE: temp table on commit delete rows performance issue" }, { "msg_contents": "Hi Floris,\n\n> On Jul 16, 2024, at 19:47, Floris Van Nee <[email protected]> wrote:\n> \n> Hi hackers,\n> \n> I'm looking for some input on an issue I've observed. A common pattern\n> I've seen is using temporary tables to put data in before updating the\n> real tables. Something roughly like:\n> \n> On session start:\n> CREATE TEMP TABLE temp_t1 (...) ON COMMIT DELETE ROWS;\n> \n> On update:\n> BEGIN;\n> COPY temp_t1 FROM STDIN (FORMAT BINARY);\n> INSERT INTO t1 (SELECT * FROM temp_t1 ...) ON CONFLICT DO UPDATE SET ...;\n> -- potentially some other operations on temp table to put data into real table t1\n> COMMIT;\n> \n> This pattern starts to break down under certain exceptional circumstances of\n> high concurrency. The \"ON COMMIT DELETE ROWS\" does a truncate that is\n> fairly expensive and doesn't work well in high-concurrency scenarios. 
It's\n> especially noticeable under following circumstances:\n> - high max_connections setting\n> - high number of temp tables per session\n> - concurrent writers at fairly short intervals\n> Impact is on both TPS on primary as well as that the WAL replay process\n> on replica becomes completely overloaded (100% cpu even though not\n> a lot of WAL is being generated)\n>\n> A very simple pgbench example that showcases degradation (taken\n> with max_connections=2000 to clearly show it).\n\nI also encountered a similar performance issue with temporary tables\nand provided a patch to optimize the truncate performance during commit\nin [1].\n\nAdditionally, is it possible to lower the lock level held during truncate for\ntemporary tables?\n\n[1] https://www.postgresql.org/message-id/flat/tencent_924E990F0493010E2C8404A5D677C70C9707%40qq.com\n\nBest Regards,\nFei Changhong\n", "msg_date": "Wed, 17 Jul 2024 10:38:30 +0800", "msg_from": "feichanghong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: temp table on commit delete rows performance issue" }, { "msg_contents": "> I also encountered a similar performance issue with temporary tables\n> and provided a patch to optimize the truncate performance during commit\n> in [1].\n\nInteresting, that is definitely another good way to improve the performance,\nespecially with a large number of temp tables.
I think the two optimizations\ncan actually work well together.\nYour optimization on only truncating the tables that are actually used.\nCombined with a patch like attached which makes sure that no WAL is generated at all\nfor the ON COMMIT DELETE ROWS operation.\n\nOn my test system this reduces WAL generation for the pgbench test case\nI posted previously to 0 (and therefore brought WAL replay process CPU usage\nfrom 100% CPU and lagging behind to only 0% CPU usage)\n\n-Floris", "msg_date": "Thu, 18 Jul 2024 13:36:12 +0000", "msg_from": "Floris Van Nee <[email protected]>", "msg_from_op": true, "msg_subject": "RE: temp table on commit delete rows performance issue" }, { "msg_contents": "Hi Floris,\n\n> On Jul 18, 2024, at 21:36, Floris Van Nee <[email protected]> wrote:\n> \n> \n>> I also encountered the similar performance issue with temporary tables\n>> andprovided a patch to optimize the truncate performance during commit\n>> in [1].\n> \n> Interesting, that is definitely another good way to improve the performance,\n> especially with a large number of temp tables. I think the two optimizations\n> can actually work well together.\n> Your optimization on only truncating the tables that are actually used.\n> Combined with a patch like attached which makes sure that no WAL is generated at all\n> for the ON COMMIT DELETE ROWS operation.\n\nIt seems that in your patch, WAL logging is skipped for all tables, not just\ntemporary tables.\n\nUpon further consideration, do we really need to acquire AccessExclusiveLocks\nfor temporary tables? Since temporary tables can only be accessed within the\ncurrent session, perhaps we can make the following optimizations:\n```\ndiff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c\nindex 00074c8a94..845c9603e2 100644\n--- a/src/backend/catalog/heap.c\n+++ b/src/backend/catalog/heap.c\n@@ -2977,9 +2977,17 @@ RelationTruncateIndexes(Relation heapRelation)\n Oid indexId = lfirst_oid(indlist);\n Relation currentIndex;\n IndexInfo *indexInfo;\n+ LOCKMODE lockmode;\n \n- /* Open the index relation; use exclusive lock, just to be sure */\n- currentIndex = index_open(indexId, AccessExclusiveLock);\n+ /*\n+ * Open the index relation; use exclusive lock, just to be sure.\n+ * AccessExclusiveLock is not necessary for temporary tables.\n+ */\n+ if (heapRelation->rd_rel->relpersistence != RELPERSISTENCE_TEMP)\n+ lockmode = AccessExclusiveLock;\n+ else\n+ lockmode = ExclusiveLock;\n+ currentIndex = index_open(indexId, lockmode);\n \n /*\n * Fetch info needed for index_build. Since we know there are no\n@@ -3026,7 +3034,9 @@ heap_truncate(List *relids)\n Oid rid = lfirst_oid(cell);\n Relation rel;\n \n- rel = table_open(rid, AccessExclusiveLock);\n+ /* AccessExclusiveLock is not necessary for temporary tables. */\n+ rel = table_open(rid, ExclusiveLock);\n+ Assert(rel->rd_rel->relpersistence == RELPERSISTENCE_TEMP);\n relations = lappend(relations, rel);\n }\n \n@@ -3059,6 +3069,7 @@ void\n heap_truncate_one_rel(Relation rel)\n {\n Oid toastrelid;\n+ LOCKMODE lockmode;\n \n /*\n * Truncate the relation. Partitioned tables have no storage, so there is\n@@ -3073,11 +3084,17 @@ heap_truncate_one_rel(Relation rel)\n /* If the relation has indexes, truncate the indexes too */\n RelationTruncateIndexes(rel);\n \n+ /* AccessExclusiveLock is not necessary for temporary tables. 
*/\n+ if (rel->rd_rel->relpersistence != RELPERSISTENCE_TEMP)\n+ lockmode = AccessExclusiveLock;\n+ else\n+ lockmode = ExclusiveLock;\n+\n /* If there is a toast table, truncate that too */\n toastrelid = rel->rd_rel->reltoastrelid;\n if (OidIsValid(toastrelid))\n {\n- Relation toastrel = table_open(toastrelid, AccessExclusiveLock);\n+ Relation toastrel = table_open(toastrelid, lockmode);\n \n table_relation_nontransactional_truncate(toastrel);\n RelationTruncateIndexes(toastrel);\n```\n\nBest Regards,\nFei Changhong\n
", "msg_date": "Thu, 18 Jul 2024 23:04:42 +0800", "msg_from": "feichanghong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: temp table on commit delete rows performance issue" }, { "msg_contents": "\n> It seems that in your patch, WAL logging is skipped for all tables, not just\n> temporary tables.\n\nThis code path is only used in two cases though:\n* For temporary tables with ON COMMIT DROP\n* For truncating tables that were created in the same transaction, or which\nwere already truncated in the same transaction (this is some special case\nin the TRUNCATE command)\nIn both cases I believe it's not necessary to log the lock, as the table doesn't exist\non the replica yet or the exclusive lock has already been obtained and logged previously.\nRegular TRUNCATE commands go through a completely different code path,\nas these need to be rollbackable if the transaction aborts.\n\n> Upon further consideration, do we really need to acquire AccessExclusiveLocks\n> for temporary tables? Since temporary tables can only be accessed within the\n> current session, perhaps we can make the following optimizations:\n\nThis one I'm less sure is correct in all cases. Logically it makes sense that no other\nbackends can access it, however I see some threads [1] that suggest it's technically\npossible for other backends to take locks on these tables, so it's not *that* obvious there\nare no edge cases.\n\n[1] https://postgrespro.com/list/thread-id/2477885\n\n\n", "msg_date": "Thu, 18 Jul 2024 18:55:25 +0000", "msg_from": "Floris Van Nee <[email protected]>", "msg_from_op": true, "msg_subject": "RE: temp table on commit delete rows performance issue" } ]
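A final illustrative aside (not part of the original thread): the per-commit WAL cost of the ON COMMIT DELETE ROWS truncate can be checked from SQL alone, without pg_waldump, by diffing WAL positions around an otherwise trivial transaction. The sketch below reuses the ut0 definition from the pgbench scripts quoted earlier; the exact byte counts depend on wal_level and on whether standby information is being logged.

```sql
-- Minimal sketch (psql), assuming the same temp table as the pgbench scripts above.
CREATE TEMP TABLE IF NOT EXISTS ut0 (a int, b text, c text) ON COMMIT DELETE ROWS;

SELECT pg_current_wal_insert_lsn() AS lsn_before \gset

BEGIN;
INSERT INTO ut0 VALUES (1, 'x', 'y');  -- optional: the ON COMMIT truncate runs at COMMIT either way
COMMIT;

-- Bytes of WAL generated around this commit; with wal_level=replica this includes
-- the logged AccessExclusiveLock and relcache invalidation records discussed above.
SELECT pg_wal_lsn_diff(pg_current_wal_insert_lsn(), :'lsn_before') AS wal_bytes;
```

On a server patched along the lines described earlier in the thread, this should come out at (or near) zero, matching the report that WAL generation for the pgbench test case dropped to 0.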