[
{
"msg_contents": "Hi,\n\nit seems there's a fairly annoying memory leak in trigger code,\nintroduced by\n\n commit fc22b6623b6b3bab3cb057ccd282c2bfad1a0b30\n Author: Peter Eisentraut <[email protected]>\n Date: Sat Mar 30 08:13:09 2019 +0100\n\n Generated columns\n ...\n\nwhich added GetAllUpdatedColumns() and uses it many places instead of\nthe original GetUpdatedColumns():\n\n #define GetUpdatedColumns(relinfo, estate) \\\n\t (exec_rt_fetch((relinfo)->ri_RangeTableIndex,\n estate)->updatedCols)\n\n #define GetAllUpdatedColumns(relinfo, estate) \\\n\t(bms_union(exec_rt_fetch((relinfo)->ri_RangeTableIndex,\n estate)->updatedCols, \\\n exec_rt_fetch((relinfo)->ri_RangeTableIndex,\n estate)->extraUpdatedCols))\n\nNotice this creates a new bitmap on every calls. That's a bit of a\nproblem, because we call this from:\n\n - ExecASUpdateTriggers\n - ExecARUpdateTriggers\n - ExecBSUpdateTriggers\n - ExecBRUpdateTriggers\n - ExecUpdateLockMode\n\nThis means that for an UPDATE with triggers, we may end up calling this\nfor each row, possibly multiple bitmaps. And those bitmaps are allocated\nin ExecutorState, so won't be freed until the end of the query :-(\n\nThe bitmaps are typically fairly small (a couple bytes), but for wider\ntables it can be a couple dozen bytes. But it's primarily driven by\nnumber of updated rows.\n\nIt's easy to leak gigabytes when updating ~10M rows. I've seen cases\nwith a couple tens of GBs leaked, though, but in that case it seems to\nbe caused by UPDATE ... FROM missing a join condition (so in fact the\nmemory leak is proportional to number of rows in the join result, not\nthe number we end up updating).\n\nAttached is a patch, restoring the pre-12 behavior for me.\n\n\nWhile looking for other places allocating stuff in ExecutorState (for\nthe UPDATE case) and leaving it there, I found two more cases:\n\n1) copy_plpgsql_datums\n\n2) make_expanded_record_from_tupdesc\n make_expanded_record_from_exprecord\n\nAll of this is calls from plpgsql_exec_trigger.\n\nFixing the copy_plpgsql_datums case seems fairly simple, the space\nallocated for local copies can be freed during the cleanup. That's what\n0002 does.\n\nI'm not sure what to do about the expanded records. My understanding of\nthe expanded record lifecycle is fairly limited, so my (rather naive)\nattempt to free the memory failed ...\n\n\nI wonder how much we should care about these cases. On the one hand we\noften leave the cleanup up to the memory context, but the assumption is\nthe context is not unnecessarily long-lived. And ExecutorState is not\nthat. And leaking memory per-row does not seem great either.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 23 May 2023 18:23:00 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "memory leak in trigger handling (since PG12)"
},
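
A minimal sketch of the explicit-free approach discussed above, assuming the pre-fix behaviour where ExecGetAllUpdatedCols() returns a freshly built bms_union; the helper name and the membership check are invented for illustration, and this is not the attached patch:

    #include "postgres.h"
    #include "access/sysattr.h"
    #include "executor/executor.h"
    #include "nodes/bitmapset.h"

    /*
     * Hypothetical call-site pattern: build the union of updated columns,
     * use it for the per-row check, then release it so one bitmap per row
     * does not pile up in ExecutorState.
     */
    static bool
    row_touches_column(ResultRelInfo *relinfo, EState *estate, AttrNumber attnum)
    {
        Bitmapset  *updatedCols = ExecGetAllUpdatedCols(relinfo, estate);
        bool        touched;

        touched = bms_is_member(attnum - FirstLowInvalidHeapAttributeNumber,
                                updatedCols);

        bms_free(updatedCols);      /* safe here: the union is a fresh allocation */

        return touched;
    }
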
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> it seems there's a fairly annoying memory leak in trigger code,\n> introduced by\n> ...\n> Attached is a patch, restoring the pre-12 behavior for me.\n\n> While looking for other places allocating stuff in ExecutorState (for\n> the UPDATE case) and leaving it there, I found two more cases:\n\n> 1) copy_plpgsql_datums\n\n> 2) make_expanded_record_from_tupdesc\n> make_expanded_record_from_exprecord\n\n> All of this is calls from plpgsql_exec_trigger.\n\nNot sure about the expanded-record case, but both of your other two\nfixes feel like poor substitutes for pushing the memory into a\nshorter-lived context. In particular I'm quite surprised that\nplpgsql isn't already allocating that workspace in the \"procedure\"\nmemory context.\n\n> I wonder how much we should care about these cases. On the one hand we\n> often leave the cleanup up to the memory context, but the assumption is\n> the context is not unnecessarily long-lived. And ExecutorState is not\n> that. And leaking memory per-row does not seem great either.\n\nI agree per-row leaks in the ExecutorState context are not cool.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 May 2023 12:39:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-23 18:23:00 +0200, Tomas Vondra wrote:\n> This means that for an UPDATE with triggers, we may end up calling this\n> for each row, possibly multiple bitmaps. And those bitmaps are allocated\n> in ExecutorState, so won't be freed until the end of the query :-(\n\nUgh.\n\n\nI've wondered about some form of instrumentation to detect such issues\nbefore. It's obviously a problem that we can have fairly large leaks, like the\none you just discovered, without detecting it for a couple years. It's kinda\nmade worse by the memory context infrastructure, because it hides such issues.\n\nCould it help to have a mode where the executor shutdown hook checks how much\nmemory is allocated in ExecutorState and warns if its too much? There's IIRC a\nfew places that allocate large things directly in it, but most of those\nprobably should be dedicated contexts anyway. Something similar could be\nuseful for some other long-lived contexts.\n\n\n> The bitmaps are typically fairly small (a couple bytes), but for wider\n> tables it can be a couple dozen bytes. But it's primarily driven by\n> number of updated rows.\n\nRandom aside: I've been wondering whether it'd be worth introducing an\nin-place representation of Bitmap (e.g. if the low bit is set, the low 63 bits\nare in-place, if unset, it's a pointer).\n\n\n> Attached is a patch, restoring the pre-12 behavior for me.\n\nHm. Somehow this doesn't seem quite right. Shouldn't we try to use a shorter\nlived memory context instead? Otherwise we'll just end up with the same\nproblem in a few years.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 May 2023 10:14:33 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
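
A rough sketch of what such a warning mode could look like as an extension, assuming MemoryContextMemAllocated() is available; the 1GB threshold, the hook function name, and the module boilerplate are arbitrary, and (as discussed below) it only fires for queries that actually reach executor shutdown:

    #include "postgres.h"
    #include "fmgr.h"
    #include "executor/executor.h"
    #include "utils/memutils.h"

    PG_MODULE_MAGIC;

    static ExecutorEnd_hook_type prev_ExecutorEnd = NULL;

    /* Warn if ExecutorState (es_query_cxt) grew suspiciously large. */
    static void
    leakcheck_ExecutorEnd(QueryDesc *queryDesc)
    {
        Size    allocated =
            MemoryContextMemAllocated(queryDesc->estate->es_query_cxt, true);

        if (allocated > (Size) 1024 * 1024 * 1024)
            elog(WARNING, "ExecutorState grew to %zu bytes", allocated);

        if (prev_ExecutorEnd)
            prev_ExecutorEnd(queryDesc);
        else
            standard_ExecutorEnd(queryDesc);
    }

    void
    _PG_init(void)
    {
        prev_ExecutorEnd = ExecutorEnd_hook;
        ExecutorEnd_hook = leakcheck_ExecutorEnd;
    }
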
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> I've wondered about some form of instrumentation to detect such issues\n> before.\n\nYeah.\n\n> Could it help to have a mode where the executor shutdown hook checks how much\n> memory is allocated in ExecutorState and warns if its too much?\n\nIt'd be very hard to set a limit for what's \"too much\", since the amount\nof stuff created initially will depend on the plan size. In any case\nI think that the important issue is not how much absolute space, but is\nthere per-row leakage. I wonder if we could do something involving\nchecking for continued growth after the first retrieved tuple, or\nsomething like that.\n\n> Random aside: I've been wondering whether it'd be worth introducing an\n> in-place representation of Bitmap (e.g. if the low bit is set, the low 63 bits\n> are in-place, if unset, it's a pointer).\n\nWhy? Unlike Lists, those things are already a single palloc chunk.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 May 2023 13:28:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-23 13:28:30 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > Could it help to have a mode where the executor shutdown hook checks how much\n> > memory is allocated in ExecutorState and warns if its too much?\n>\n> It'd be very hard to set a limit for what's \"too much\", since the amount\n> of stuff created initially will depend on the plan size.\n\nI was thinking of some limit that should really never be reached outside of a\nleak or work_mem based allocations, say 2GB or so.\n\n\n> In any case I think that the important issue is not how much absolute space,\n> but is there per-row leakage. I wonder if we could do something involving\n> checking for continued growth after the first retrieved tuple, or something\n> like that.\n\nAs-is I some nodes do large allocations the query context, but are not\nguaranteed to be reached when gathering the first row. So we would still have\nto move such large allocations out of ExecutorState.\n\n\nI think it might be best to go for a combination of these two\nheuristics. Store the size of es_query_context after standard_ExecutorStart(),\nthat would include the allocation of the executor tree itself. Then in\nstandard_ExecutorEnd(), if the difference in size of ExecutorState is bigger\nthan some constant *and* is bigger than the initial size by some factor, emit\na warning.\n\nThe constant size difference avoids spurious warnings in case of a small plan\nthat just grows due to a few fmgr lookups or such, the factor takes care of\nthe plan complexity?\n\n\n> > Random aside: I've been wondering whether it'd be worth introducing an\n> > in-place representation of Bitmap (e.g. if the low bit is set, the low 63 bits\n> > are in-place, if unset, it's a pointer).\n>\n> Why? Unlike Lists, those things are already a single palloc chunk.\n\nWe do a fair amount of 8 byte allocations - they have quite a bit of overhead,\neven after c6e0fe1f2a0. Not needing allocations for the common case of\nbitmapsets with a max member < 63 seems like it could be worth it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 May 2023 11:01:13 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-05-23 13:28:30 -0400, Tom Lane wrote:\n>> Why? Unlike Lists, those things are already a single palloc chunk.\n\n> We do a fair amount of 8 byte allocations - they have quite a bit of overhead,\n> even after c6e0fe1f2a0. Not needing allocations for the common case of\n> bitmapsets with a max member < 63 seems like it could be worth it.\n\nOh, now I understand what you meant: use the pointer's bits as data.\nDunno that it's a good idea though. You'd pay for the palloc savings\nby needing two or four code paths in every bitmapset function, because\nthe need to reserve one bit would mean you couldn't readily make the\ntwo cases look alike at the bit-pushing level.\n\nAnother big problem is that we'd have to return to treating bitmapsets\nas a special-purpose thing rather than a kind of Node. While that's\nnot very deeply embedded yet, I recall that the alternatives weren't\nattractive.\n\nAlso, returning to the original topic: we'd never find leaks of the\nsort complained of here, because they wouldn't exist in cases with\nfewer than 64 relations per query (or whatever the bitmap is\nrepresenting).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 May 2023 14:33:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
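
The tagged-pointer scheme being debated might look roughly like the toy fragment below (64-bit pointers assumed, names invented, not a proposal for bitmapset.h); it also shows the duplicated inline/pointer code path mentioned above:

    #include "postgres.h"
    #include "nodes/bitmapset.h"

    /* Toy representation: low bit set => members 0..62 are stored inline. */
    typedef uintptr_t CompactSet;

    #define CS_INLINE_TAG   ((uintptr_t) 1)

    static inline bool
    cs_is_member(CompactSet s, int member)
    {
        if (s & CS_INLINE_TAG)
        {
            /* inline path: member i lives at bit i + 1, above the tag bit */
            return member >= 0 && member < 63 &&
                   (s & (CS_INLINE_TAG << (member + 1))) != 0;
        }

        /* pointer path: an ordinary palloc'd Bitmapset (NULL means empty) */
        return bms_is_member(member, (const Bitmapset *) s);
    }

Every other operation needs the same inline/pointer fork, plus promotion to the heap form once a member >= 63 appears, which is the duplication being objected to above.
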
{
"msg_contents": "\n\nOn 5/23/23 18:39, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> it seems there's a fairly annoying memory leak in trigger code,\n>> introduced by\n>> ...\n>> Attached is a patch, restoring the pre-12 behavior for me.\n> \n>> While looking for other places allocating stuff in ExecutorState (for\n>> the UPDATE case) and leaving it there, I found two more cases:\n> \n>> 1) copy_plpgsql_datums\n> \n>> 2) make_expanded_record_from_tupdesc\n>> make_expanded_record_from_exprecord\n> \n>> All of this is calls from plpgsql_exec_trigger.\n> \n> Not sure about the expanded-record case, but both of your other two\n> fixes feel like poor substitutes for pushing the memory into a\n> shorter-lived context. In particular I'm quite surprised that\n> plpgsql isn't already allocating that workspace in the \"procedure\"\n> memory context.\n> \n\nI don't disagree, but which memory context should this use and\nwhen/where should we switch to it?\n\nI haven't seen any obvious memory context candidate in the code calling\nExecGetAllUpdatedCols, so I guess we'd have to pass it from above. Is\nthat a good idea for backbranches ...\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 23 May 2023 22:57:31 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "\n\nOn 5/23/23 19:14, Andres Freund wrote:\n> Hi,\n> \n> On 2023-05-23 18:23:00 +0200, Tomas Vondra wrote:\n>> This means that for an UPDATE with triggers, we may end up calling this\n>> for each row, possibly multiple bitmaps. And those bitmaps are allocated\n>> in ExecutorState, so won't be freed until the end of the query :-(\n> \n> Ugh.\n> \n> \n> I've wondered about some form of instrumentation to detect such issues\n> before. It's obviously a problem that we can have fairly large leaks, like the\n> one you just discovered, without detecting it for a couple years. It's kinda\n> made worse by the memory context infrastructure, because it hides such issues.\n> \n> Could it help to have a mode where the executor shutdown hook checks how much\n> memory is allocated in ExecutorState and warns if its too much? There's IIRC a\n> few places that allocate large things directly in it, but most of those\n> probably should be dedicated contexts anyway. Something similar could be\n> useful for some other long-lived contexts.\n> \n\nNot sure such simple instrumentation would help much, unfortunately :-(\n\nWe only discovered this because the user reported OOM crashes, which\nmeans the executor didn't get to the shutdown hook at all. Yeah, maybe\nif we had such instrumentation it'd get triggered for milder cases, but\nI'd bet the amount of noise is going to be significant.\n\nFor example, there's a nearby thread about hashjoin allocating buffiles\netc. in ExecutorState - we ended up moving that to a separate context,\nbut surely there are more such cases (not just for ExecutorState).\n\nThe really hard thing was determining what causes the memory leak - the\nsimple instrumentation doesn't help with that at all. It tells you there\nmight be a leak, but you don't know where did the allocations came from.\n\nWhat I ended up doing is a simple gdb script that sets breakpoints on\nall palloc/pfree variants, and prints info (including the backtrace) for\neach call on ExecutorState. And then a script that aggregate those to\nidentify which backtraces allocated most chunks that were not freed.\n\nThis was super slow (definitely useless outside development environment)\nbut it made it super obvious where the root cause is.\n\nI experimented with instrumenting the code a bit (as alternative to gdb\nbreakpoints) - still slow, but much faster than gdb. But perhaps useful\nfor special (valgrind-like) testing mode ...\n\nWould require testing with more data, though. I doubt we'd find much\nwith our regression tests, which have tiny data sets.\n\n> ...\n>\n>> Attached is a patch, restoring the pre-12 behavior for me.\n> \n> Hm. Somehow this doesn't seem quite right. Shouldn't we try to use a shorter\n> lived memory context instead? Otherwise we'll just end up with the same\n> problem in a few years.\n>\n\nI agree using a shorter lived memory context would be more elegant, and\nmore in line with how we do things. But it's not clear to me why we'd\nend up with the same problem in a few years with what the patch does.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 23 May 2023 23:26:42 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
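
One crude form of that in-code instrumentation (a sketch only, assuming glibc's backtrace() and a context literally named "ExecutorState"; the function name is invented and it is far too slow for anything but a development build) would be to dump a backtrace for every allocation in the suspect context:

    #include "postgres.h"

    #include <execinfo.h>

    #include "nodes/memnodes.h"

    /* Call from palloc/AllocSetAlloc while hunting a leak; dev builds only. */
    static void
    log_executorstate_alloc(MemoryContext context, Size size)
    {
        if (strcmp(context->name, "ExecutorState") == 0)
        {
            void   *frames[32];
            int     nframes = backtrace(frames, lengthof(frames));

            fprintf(stderr, "alloc of %zu bytes in ExecutorState\n", size);
            backtrace_symbols_fd(frames, nframes, fileno(stderr));
        }
    }

The output can then be aggregated by backtrace, much like the gdb-based script described above.
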
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> The really hard thing was determining what causes the memory leak - the\n> simple instrumentation doesn't help with that at all. It tells you there\n> might be a leak, but you don't know where did the allocations came from.\n\n> What I ended up doing is a simple gdb script that sets breakpoints on\n> all palloc/pfree variants, and prints info (including the backtrace) for\n> each call on ExecutorState. And then a script that aggregate those to\n> identify which backtraces allocated most chunks that were not freed.\n\nFWIW, I've had some success localizing palloc memory leaks with valgrind's\nleak detection mode. The trick is to ask it for a report before the\ncontext gets destroyed. Beats writing your own infrastructure ...\n\n> Would require testing with more data, though. I doubt we'd find much\n> with our regression tests, which have tiny data sets.\n\nYeah, it's not clear whether we could make the still-hypothetical check\nsensitive enough to find leaks using small test cases without getting an\nunworkable number of false positives. Still, might be worth trying.\nIt might be an acceptable tradeoff to have stricter rules for what can\nbe allocated in ExecutorState in order to make this sort of problem\nmore easily detectable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 May 2023 17:39:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "Hi, just two cents:\n\nOn Tue, May 23, 2023 at 8:01 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-05-23 13:28:30 -0400, Tom Lane wrote:\n> > Andres Freund <[email protected]> writes:\n> > > Could it help to have a mode where the executor shutdown hook checks how much\n> > > memory is allocated in ExecutorState and warns if its too much?\n> >\n> > It'd be very hard to set a limit for what's \"too much\", since the amount\n> > of stuff created initially will depend on the plan size.\n>\n> I was thinking of some limit that should really never be reached outside of a\n> leak or work_mem based allocations, say 2GB or so.\n\nRE: instrumentation subthread:\nif that helps then below technique can work somewhat good on normal\nbinaries for end users (given there are debug symbols installed), so\nmaybe we don't need that much infrastructure added in to see the hot\ncode path:\n\nperf probe -x /path/to/postgres 'palloc' 'size=%di:u64' # RDI on\nx86_64(palloc size arg0)\nperf record -avg --call-graph dwarf -e probe_postgres:palloc -aR -p\n<pid> sleep 3 # cannot be longer, huge overhead (~3s=~2GB)\n\nit produces:\n 50.27% (563d0e380670) size=24\n |\n ---palloc\n bms_copy\n ExecUpdateLockMode\n ExecBRUpdateTriggers\n ExecUpdate\n[..]\n\n 49.73% (563d0e380670) size=16\n |\n ---palloc\n bms_copy\n RelationGetIndexAttrBitmap\n ExecUpdateLockMode\n ExecBRUpdateTriggers\n ExecUpdate\n[..]\n\nNow we know that those small palloc() are guilty, but we didn't know\nat the time with Tomas. The problem here is that we do not know in\npalloc() - via its own arguments for which MemoryContext this is going\nto be allocated for. This is problematic for perf, because on RHEL8, I\nwas not able to generate an uprobe that was capable of reaching a\nglobal variable (CurrentMemoryContext) at that time.\n\nAdditionally what was even more frustrating on diagnosing that case on\nthe customer end system, was that such OOMs were crashing other\nPostgreSQL clusters on the same OS. Even knowing the exact guilty\nstatement it was impossible to limit RSS memory usage of that\nparticular backend. So, what you are proposing makes a lot of sense.\nAlso it got me thinking of implementing safety-memory-net-GUC\ndebug_query_limit_backend_memory=X MB that would inject\nsetrlimit(RLIMIT_DATA) through external extension via hook(s) and\nun-set it later, but the man page states it works for mmap() only\nafter Linux 4.7+ so it is future proof but won't work e.g. on RHEL7 -\nmaybe that's still good enough?; Or, well maybe try to hack a palloc()\na little, but that has probably too big overhead, right? (just\nthinking loud).\n\n-Jakub Wartak.\n\n\n",
"msg_date": "Wed, 24 May 2023 10:19:26 +0200",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
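
For what it's worth, the setrlimit() call such a hypothetical debug_query_limit_backend_memory GUC would boil down to is roughly the following sketch (function name invented; as noted above, on Linux RLIMIT_DATA only constrains mmap-backed allocations from 4.7 on):

    #include "postgres.h"

    #include <sys/resource.h>

    /* Hypothetical helper: cap this backend's data segment at limit_mb MB. */
    static bool
    set_backend_data_limit(int limit_mb)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_DATA, &rl) != 0)
        {
            elog(WARNING, "getrlimit(RLIMIT_DATA) failed: %m");
            return false;
        }

        rl.rlim_cur = (rlim_t) limit_mb * 1024 * 1024;   /* keep rlim_max as-is */

        if (setrlimit(RLIMIT_DATA, &rl) != 0)
        {
            elog(WARNING, "setrlimit(RLIMIT_DATA) failed: %m");
            return false;
        }

        return true;
    }
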
{
"msg_contents": "On 5/23/23 23:39, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> The really hard thing was determining what causes the memory leak - the\n>> simple instrumentation doesn't help with that at all. It tells you there\n>> might be a leak, but you don't know where did the allocations came from.\n> \n>> What I ended up doing is a simple gdb script that sets breakpoints on\n>> all palloc/pfree variants, and prints info (including the backtrace) for\n>> each call on ExecutorState. And then a script that aggregate those to\n>> identify which backtraces allocated most chunks that were not freed.\n> \n> FWIW, I've had some success localizing palloc memory leaks with valgrind's\n> leak detection mode. The trick is to ask it for a report before the\n> context gets destroyed. Beats writing your own infrastructure ...\n> \n\nI haven't tried valgrind, so can't compare.\n\nWould it be possible to filter which memory contexts to track? Say we\nknow the leak is in ExecutorState, so we don't need to track allocations\nin other contexts. That was a huge speedup for me, maybe it'd help\nvalgrind too.\n\nAlso, how did you ask for the report before context gets destroyed?\n\n>> Would require testing with more data, though. I doubt we'd find much\n>> with our regression tests, which have tiny data sets.\n> \n> Yeah, it's not clear whether we could make the still-hypothetical check\n> sensitive enough to find leaks using small test cases without getting an\n> unworkable number of false positives. Still, might be worth trying.\n\nI'm not against experimenting with that. Were you thinking about\nsomething that'd be cheap enough to just be enabled always/everywhere,\nor something we'd enable during testing?\n\nThis reminded me a strangeloop talk [1] [2] about the Scalene memory\nprofiler from UMass. That's for Python, but they did some smart tricks\nto reduce the cost of profiling - maybe we could do something similar,\npossibly by extending the memory contexts a bit.\n\n[1] https://youtu.be/vVUnCXKuNOg?t=1405\n[2] https://youtu.be/vVUnCXKuNOg?t=1706\n\n> It might be an acceptable tradeoff to have stricter rules for what can\n> be allocated in ExecutorState in order to make this sort of problem\n> more easily detectable.\n> \n\nWould these rules be just customary, or defined/enforced in the code\nsomehow? I can't quite imagine how would that work, TBH.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 24 May 2023 10:49:14 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "\n\nOn 5/24/23 10:19, Jakub Wartak wrote:\n> Hi, just two cents:\n> \n> On Tue, May 23, 2023 at 8:01 PM Andres Freund <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> On 2023-05-23 13:28:30 -0400, Tom Lane wrote:\n>>> Andres Freund <[email protected]> writes:\n>>>> Could it help to have a mode where the executor shutdown hook checks how much\n>>>> memory is allocated in ExecutorState and warns if its too much?\n>>>\n>>> It'd be very hard to set a limit for what's \"too much\", since the amount\n>>> of stuff created initially will depend on the plan size.\n>>\n>> I was thinking of some limit that should really never be reached outside of a\n>> leak or work_mem based allocations, say 2GB or so.\n> \n> RE: instrumentation subthread:\n> if that helps then below technique can work somewhat good on normal\n> binaries for end users (given there are debug symbols installed), so\n> maybe we don't need that much infrastructure added in to see the hot\n> code path:\n> \n> perf probe -x /path/to/postgres 'palloc' 'size=%di:u64' # RDI on\n> x86_64(palloc size arg0)\n> perf record -avg --call-graph dwarf -e probe_postgres:palloc -aR -p\n> <pid> sleep 3 # cannot be longer, huge overhead (~3s=~2GB)\n> \n> it produces:\n> 50.27% (563d0e380670) size=24\n> |\n> ---palloc\n> bms_copy\n> ExecUpdateLockMode\n> ExecBRUpdateTriggers\n> ExecUpdate\n> [..]\n> \n> 49.73% (563d0e380670) size=16\n> |\n> ---palloc\n> bms_copy\n> RelationGetIndexAttrBitmap\n> ExecUpdateLockMode\n> ExecBRUpdateTriggers\n> ExecUpdate\n> [..]\n> \n> Now we know that those small palloc() are guilty, but we didn't know\n> at the time with Tomas. The problem here is that we do not know in\n> palloc() - via its own arguments for which MemoryContext this is going\n> to be allocated for. This is problematic for perf, because on RHEL8, I\n> was not able to generate an uprobe that was capable of reaching a\n> global variable (CurrentMemoryContext) at that time.\n> \n\nI think there are a couple even bigger issues:\n\n(a) There are other methods that allocate memory - repalloc, palloc0,\nMemoryContextAlloc, ... and so on. But presumably we can trace all of\nthem at once? We'd have to trace reset/deletes too.\n\n(b) This doesn't say if/when the allocated chunks get freed - even with\na fix, we'll still do exactly the same number of allocations, except\nthat we'll free the memory shortly after that. I wonder if we could\nprint a bit more info for each probe, to match the alloc/free requests.\n\n> Additionally what was even more frustrating on diagnosing that case on\n> the customer end system, was that such OOMs were crashing other\n> PostgreSQL clusters on the same OS. Even knowing the exact guilty\n> statement it was impossible to limit RSS memory usage of that\n> particular backend. So, what you are proposing makes a lot of sense.\n> Also it got me thinking of implementing safety-memory-net-GUC\n> debug_query_limit_backend_memory=X MB that would inject\n> setrlimit(RLIMIT_DATA) through external extension via hook(s) and\n> un-set it later, but the man page states it works for mmap() only\n> after Linux 4.7+ so it is future proof but won't work e.g. on RHEL7 -\n> maybe that's still good enough?; Or, well maybe try to hack a palloc()\n> a little, but that has probably too big overhead, right? (just\n> thinking loud).\n> \n\nNot sure about setting a hard limit - that seems pretty tricky and may\neasily backfire. It's already possible to set such memory limit using\ne.g. 
cgroups, or even better use VMs to isolate the instances.\n\nI certainly agree it's annoying that when OOM hits, we end up with no\ninformation about what used the memory. Maybe we could have a threshold\ntriggering a call to MemoryContextStats? So that we have at least some\nmemory usage info in the log in case the OOM killer intervenes.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 24 May 2023 15:17:52 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "On 5/23/23 22:57, Tomas Vondra wrote:\n> \n> \n> On 5/23/23 18:39, Tom Lane wrote:\n>> Tomas Vondra <[email protected]> writes:\n>>> it seems there's a fairly annoying memory leak in trigger code,\n>>> introduced by\n>>> ...\n>>> Attached is a patch, restoring the pre-12 behavior for me.\n>>\n>>> While looking for other places allocating stuff in ExecutorState (for\n>>> the UPDATE case) and leaving it there, I found two more cases:\n>>\n>>> 1) copy_plpgsql_datums\n>>\n>>> 2) make_expanded_record_from_tupdesc\n>>> make_expanded_record_from_exprecord\n>>\n>>> All of this is calls from plpgsql_exec_trigger.\n>>\n>> Not sure about the expanded-record case, but both of your other two\n>> fixes feel like poor substitutes for pushing the memory into a\n>> shorter-lived context. In particular I'm quite surprised that\n>> plpgsql isn't already allocating that workspace in the \"procedure\"\n>> memory context.\n>>\n> \n> I don't disagree, but which memory context should this use and\n> when/where should we switch to it?\n> \n> I haven't seen any obvious memory context candidate in the code\n> calling ExecGetAllUpdatedCols, so I guess we'd have to pass it from\n> above. Is that a good idea for backbranches ...\n> \n\nI looked at this again, and I think GetPerTupleMemoryContext(estate)\nmight do the trick, see the 0002 part. Unfortunately it's not much\nsmaller/simpler than just freeing the chunks, because we end up doing\n\n oldcxt = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));\n updatedCols = ExecGetAllUpdatedCols(relinfo, estate);\n MemoryContextSwitchTo(oldcxt);\n\nand then have to pass updatedCols elsewhere. It's tricky to just switch\nto the context (e.g. in ExecASUpdateTriggers/ExecARUpdateTriggers), as\nAfterTriggerSaveEvent allocates other bits of memory too (in a longer\nlived context). So we'd have to do another switch again. Not sure how\nbackpatch-friendly would that be.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 24 May 2023 15:38:41 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> While looking for other places allocating stuff in ExecutorState (for\n> the UPDATE case) and leaving it there, I found two more cases:\n\n> 1) copy_plpgsql_datums\n\n> 2) make_expanded_record_from_tupdesc\n> make_expanded_record_from_exprecord\n\n> All of this is calls from plpgsql_exec_trigger.\n\nCan you show a test case in which this happens? I added some\ninstrumentation and verified at least within our regression tests,\ncopy_plpgsql_datums' CurrentMemoryContext is always plpgsql's\n\"SPI Proc\" context, so I do not see how there can be a query-lifespan\nleak there, nor how your 0003 would fix it if there is.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 May 2023 11:37:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 5/23/23 23:39, Tom Lane wrote:\n>> FWIW, I've had some success localizing palloc memory leaks with valgrind's\n>> leak detection mode. The trick is to ask it for a report before the\n>> context gets destroyed. Beats writing your own infrastructure ...\n\n> I haven't tried valgrind, so can't compare.\n> Would it be possible to filter which memory contexts to track? Say we\n> know the leak is in ExecutorState, so we don't need to track allocations\n> in other contexts. That was a huge speedup for me, maybe it'd help\n> valgrind too.\n\nI don't think valgrind has a way to do that, but this'd be something you\nset up specially in any case.\n\n> Also, how did you ask for the report before context gets destroyed?\n\nThere are several valgrind client requests that could be helpful:\n\n/* Do a full memory leak check (like --leak-check=full) mid-execution. */\n#define VALGRIND_DO_LEAK_CHECK \\\n VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__DO_LEAK_CHECK, \\\n 0, 0, 0, 0, 0)\n\n/* Same as VALGRIND_DO_LEAK_CHECK but only showing the entries for\n which there was an increase in leaked bytes or leaked nr of blocks\n since the previous leak search. */\n#define VALGRIND_DO_ADDED_LEAK_CHECK \\\n VALGRIND_DO_CLIENT_REQUEST_STMT(VG_USERREQ__DO_LEAK_CHECK, \\\n 0, 1, 0, 0, 0)\n\n/* Return number of leaked, dubious, reachable and suppressed bytes found by\n all previous leak checks. They must be lvalues. */\n#define VALGRIND_COUNT_LEAK_BLOCKS(leaked, dubious, reachable, suppressed) \\\n\nPutting VALGRIND_DO_ADDED_LEAK_CHECK someplace in the executor loop would\nhelp narrow things down pretty quickly, assuming you had a self-contained\nexample demonstrating the leak. I don't recall exactly how I used these\nbut it was something along that line.\n\n>> Yeah, it's not clear whether we could make the still-hypothetical check\n>> sensitive enough to find leaks using small test cases without getting an\n>> unworkable number of false positives. Still, might be worth trying.\n\n> I'm not against experimenting with that. Were you thinking about\n> something that'd be cheap enough to just be enabled always/everywhere,\n> or something we'd enable during testing?\n\nWe seem to have already paid the overhead of counting all palloc\nallocations, so I don't think it'd be too expensive to occasionally check\nthe ExecutorState's mem_allocated and see if it's growing (especially\nif we only do so in assert-enabled builds). The trick is to define the\nrules for what's worth reporting.\n\n>> It might be an acceptable tradeoff to have stricter rules for what can\n>> be allocated in ExecutorState in order to make this sort of problem\n>> more easily detectable.\n\n> Would these rules be just customary, or defined/enforced in the code\n> somehow? I can't quite imagine how would that work, TBH.\n\nIf the check bleated \"WARNING: possible executor memory leak\" during\nregression testing, people would soon become conditioned to doing\nwhatever they have to do to avoid it ;-)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 May 2023 11:57:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
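
A concrete (hypothetical) way to wire that up is to call something like the helper below from a per-row loop while reproducing the leak under valgrind; the 100000-row interval and the function name are arbitrary:

    #include "postgres.h"

    #ifdef USE_VALGRIND
    #include <valgrind/memcheck.h>
    #endif

    /* Periodically ask valgrind to report only newly leaked blocks. */
    static void
    maybe_report_new_leaks(uint64 rows_processed)
    {
    #ifdef USE_VALGRIND
        if (rows_processed % 100000 == 0)
            VALGRIND_DO_ADDED_LEAK_CHECK;
    #else
        (void) rows_processed;
    #endif
    }
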
{
"msg_contents": "\n\nOn 5/24/23 17:37, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> While looking for other places allocating stuff in ExecutorState (for\n>> the UPDATE case) and leaving it there, I found two more cases:\n> \n>> 1) copy_plpgsql_datums\n> \n>> 2) make_expanded_record_from_tupdesc\n>> make_expanded_record_from_exprecord\n> \n>> All of this is calls from plpgsql_exec_trigger.\n> \n> Can you show a test case in which this happens? I added some\n> instrumentation and verified at least within our regression tests,\n> copy_plpgsql_datums' CurrentMemoryContext is always plpgsql's\n> \"SPI Proc\" context, so I do not see how there can be a query-lifespan\n> leak there, nor how your 0003 would fix it if there is.\n> \n\nInteresting. I tried to reproduce it, but without success, and it passes\neven with an assert on the context name. The only explanation I have is\nthat the gdb script I used might have been a bit broken - it used\nconditional breakpoints like this one:\n\n break AllocSetAlloc if strcmp(((MemoryContext) $rdi)->name, \\\n \"ExecutorState\") == 0\n commands\n bt\n cont\n end\n\nbut I just noticed gdb sometimes complains about this:\n\n Error in testing breakpoint condition:\n '__strcmp_avx2' has unknown return type; cast the call to its declared\n return type\n\nThe breakpoint still fires all the commands, which is pretty surprising\nbehavior, but that might explain why I saw copy_plpgsql_data as another\nculprit. And I suspect the make_expanded_record calls might be caused by\nthe same thing.\n\nI'll check deeper tomorrow, when I get access to the original script\netc. We can ignore these cases until then.\n\nSorry for the confusion :-/\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 24 May 2023 19:45:34 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-23 23:26:42 +0200, Tomas Vondra wrote:\n> On 5/23/23 19:14, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2023-05-23 18:23:00 +0200, Tomas Vondra wrote:\n> >> This means that for an UPDATE with triggers, we may end up calling this\n> >> for each row, possibly multiple bitmaps. And those bitmaps are allocated\n> >> in ExecutorState, so won't be freed until the end of the query :-(\n> > \n> > Ugh.\n> > \n> > \n> > I've wondered about some form of instrumentation to detect such issues\n> > before. It's obviously a problem that we can have fairly large leaks, like the\n> > one you just discovered, without detecting it for a couple years. It's kinda\n> > made worse by the memory context infrastructure, because it hides such issues.\n> > \n> > Could it help to have a mode where the executor shutdown hook checks how much\n> > memory is allocated in ExecutorState and warns if its too much? There's IIRC a\n> > few places that allocate large things directly in it, but most of those\n> > probably should be dedicated contexts anyway. Something similar could be\n> > useful for some other long-lived contexts.\n> > \n> \n> Not sure such simple instrumentation would help much, unfortunately :-(\n> \n> We only discovered this because the user reported OOM crashes, which\n> means the executor didn't get to the shutdown hook at all. Yeah, maybe\n> if we had such instrumentation it'd get triggered for milder cases, but\n> I'd bet the amount of noise is going to be significant.\n> \n> For example, there's a nearby thread about hashjoin allocating buffiles\n> etc. in ExecutorState - we ended up moving that to a separate context,\n> but surely there are more such cases (not just for ExecutorState).\n\nYes, that's why I said that we would have to more of those into dedicated\ncontexts - which is a good idea independent of this issue.\n\n\n> The really hard thing was determining what causes the memory leak - the\n> simple instrumentation doesn't help with that at all. It tells you there\n> might be a leak, but you don't know where did the allocations came from.\n> \n> What I ended up doing is a simple gdb script that sets breakpoints on\n> all palloc/pfree variants, and prints info (including the backtrace) for\n> each call on ExecutorState. And then a script that aggregate those to\n> identify which backtraces allocated most chunks that were not freed.\n\nFWIW, for things like this I found \"heaptrack\" to be extremely helpful.\n\nE.g. for a reproducer of the problem here, it gave me the attach \"flame graph\"\nof the peak memory usage - after attaching to a running backend and running an\nUPDATE triggering the leak..\n\nBecause the view I screenshotted shows the stacks contributing to peak memory\nusage, it works nicely to find \"temporary leaks\", even if memory is actually\nfreed after all etc.\n\n\n\n> > Hm. Somehow this doesn't seem quite right. Shouldn't we try to use a shorter\n> > lived memory context instead? Otherwise we'll just end up with the same\n> > problem in a few years.\n> >\n> \n> I agree using a shorter lived memory context would be more elegant, and\n> more in line with how we do things. But it's not clear to me why we'd\n> end up with the same problem in a few years with what the patch does.\n\nBecause it sets up the pattern of manual memory management and continues to\nrun the relevant code within a query-lifetime context.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 24 May 2023 11:14:08 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-24 15:38:41 +0200, Tomas Vondra wrote:\n> I looked at this again, and I think GetPerTupleMemoryContext(estate)\n> might do the trick, see the 0002 part.\n\nYea, that seems like the right thing here.\n\n\n> Unfortunately it's not much\n> smaller/simpler than just freeing the chunks, because we end up doing\n> \n> oldcxt = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));\n> updatedCols = ExecGetAllUpdatedCols(relinfo, estate);\n> MemoryContextSwitchTo(oldcxt);\n\nWe could add a variant of ExecGetAllUpdatedCols that switches the context.\n\n\n> and then have to pass updatedCols elsewhere. It's tricky to just switch\n> to the context (e.g. in ExecASUpdateTriggers/ExecARUpdateTriggers), as\n> AfterTriggerSaveEvent allocates other bits of memory too (in a longer\n> lived context).\n\nHm - on a quick look the allocations in trigger.c itself are done with\nMemoryContextAlloc().\n\nI did find a problematic path, namely that ExecGetChildToRootMap() ends up\nbuilding resultRelInfo->ri_ChildToRootMap in CurrentMemoryContext.\n\nThat seems like a flat out bug to me - we can't just store data in a\nResultRelInfo without ensuring the memory is lives long enough. Nearby places\nlike ExecGetRootToChildMap() do make sure to switch to es_query_cxt.\n\n\nDid you see other bits of memory getting allocated in CurrentMemoryContext?\n\n\n> So we'd have to do another switch again. Not sure how\n> backpatch-friendly would that be.\n\nYea, that's a valid concern. I think it might be reasonable to use something\nlike ExecGetAllUpdatedColsCtx() in the backbranches, and switch to a\nshort-lived context for the trigger invocations in >= 16.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 May 2023 11:55:59 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "\n\nOn 5/24/23 20:55, Andres Freund wrote:\n> Hi,\n> \n> On 2023-05-24 15:38:41 +0200, Tomas Vondra wrote:\n>> I looked at this again, and I think GetPerTupleMemoryContext(estate)\n>> might do the trick, see the 0002 part.\n> \n> Yea, that seems like the right thing here.\n> \n> \n>> Unfortunately it's not much\n>> smaller/simpler than just freeing the chunks, because we end up doing\n>>\n>> oldcxt = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));\n>> updatedCols = ExecGetAllUpdatedCols(relinfo, estate);\n>> MemoryContextSwitchTo(oldcxt);\n> \n> We could add a variant of ExecGetAllUpdatedCols that switches the context.\n> \n\nYeah, we could do that. I was thinking about backpatching, and modifying\n ExecGetAllUpdatedCols signature would be ABI break, but adding a\nvariant should be fine.\n\n> \n>> and then have to pass updatedCols elsewhere. It's tricky to just switch\n>> to the context (e.g. in ExecASUpdateTriggers/ExecARUpdateTriggers), as\n>> AfterTriggerSaveEvent allocates other bits of memory too (in a longer\n>> lived context).\n> \n> Hm - on a quick look the allocations in trigger.c itself are done with\n> MemoryContextAlloc().\n> \n> I did find a problematic path, namely that ExecGetChildToRootMap() ends up\n> building resultRelInfo->ri_ChildToRootMap in CurrentMemoryContext.\n> \n> That seems like a flat out bug to me - we can't just store data in a\n> ResultRelInfo without ensuring the memory is lives long enough. Nearby places\n> like ExecGetRootToChildMap() do make sure to switch to es_query_cxt.\n> \n> \n> Did you see other bits of memory getting allocated in CurrentMemoryContext?\n> \n\nNo, I simply tried to do the context switch and then gave up when it\ncrashed on the ExecGetRootToChildMap info. I haven't looked much\nfurther, but you may be right it's the only bit.\n\nIt didn't occur to me it might be a bug - I think the code simply\nassumes it gets called with suitable memory context, just like we do in\nvarious other places. Maybe it should document the assumption.\n\n> \n>> So we'd have to do another switch again. Not sure how\n>> backpatch-friendly would that be.\n> \n> Yea, that's a valid concern. I think it might be reasonable to use something\n> like ExecGetAllUpdatedColsCtx() in the backbranches, and switch to a\n> short-lived context for the trigger invocations in >= 16.\n> \n\nWFM\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 24 May 2023 21:49:13 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-23 18:23:00 +0200, Tomas Vondra wrote:\n> While looking for other places allocating stuff in ExecutorState (for\n> the UPDATE case) and leaving it there, I found two more cases:\n\nFor a bit I thought there was a similar problem in ExecWithCheckOptions() -\nbut we error out immediately afterwards, so there's no danger of accumulating\nmemory.\n\n\nI find it quite depressing that we have at least four copies of:\n\n\t/*\n\t * If the tuple has been routed, it's been converted to the partition's\n\t * rowtype, which might differ from the root table's. We must convert it\n\t * back to the root table's rowtype so that val_desc in the error message\n\t * matches the input tuple.\n\t */\n\tif (resultRelInfo->ri_RootResultRelInfo)\n\t{\n\t\tResultRelInfo *rootrel = resultRelInfo->ri_RootResultRelInfo;\n\t\tTupleDesc\told_tupdesc;\n\t\tAttrMap *map;\n\n\t\troot_relid = RelationGetRelid(rootrel->ri_RelationDesc);\n\t\ttupdesc = RelationGetDescr(rootrel->ri_RelationDesc);\n\n\t\told_tupdesc = RelationGetDescr(resultRelInfo->ri_RelationDesc);\n\t\t/* a reverse map */\n\t\tmap = build_attrmap_by_name_if_req(old_tupdesc, tupdesc, false);\n\n\t\t/*\n\t\t * Partition-specific slot's tupdesc can't be changed, so allocate a\n\t\t * new one.\n\t\t */\n\t\tif (map != NULL)\n\t\t\tslot = execute_attr_map_slot(map, slot,\n\t\t\t\t\t\t\t\t\t\t MakeTupleTableSlot(tupdesc, &TTSOpsVirtual));\n\t\tmodifiedCols = bms_union(ExecGetInsertedCols(rootrel, estate),\n\t\t\t\t\t\t\t\t ExecGetUpdatedCols(rootrel, estate));\n\t}\n\n\nOnce in ExecPartitionCheckEmitError(), *twice* in ExecConstraints(),\nExecWithCheckOptions(). Some of the partitioning stuff has been added in a\nreally myopic way.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 May 2023 12:54:36 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "On 5/24/23 20:14, Andres Freund wrote:\n> Hi,\n> \n> On 2023-05-23 23:26:42 +0200, Tomas Vondra wrote:\n>> On 5/23/23 19:14, Andres Freund wrote:\n>>> Hi,\n>>>\n>>> On 2023-05-23 18:23:00 +0200, Tomas Vondra wrote:\n>>>> This means that for an UPDATE with triggers, we may end up calling this\n>>>> for each row, possibly multiple bitmaps. And those bitmaps are allocated\n>>>> in ExecutorState, so won't be freed until the end of the query :-(\n>>>\n>>> Ugh.\n>>>\n>>>\n>>> I've wondered about some form of instrumentation to detect such issues\n>>> before. It's obviously a problem that we can have fairly large leaks, like the\n>>> one you just discovered, without detecting it for a couple years. It's kinda\n>>> made worse by the memory context infrastructure, because it hides such issues.\n>>>\n>>> Could it help to have a mode where the executor shutdown hook checks how much\n>>> memory is allocated in ExecutorState and warns if its too much? There's IIRC a\n>>> few places that allocate large things directly in it, but most of those\n>>> probably should be dedicated contexts anyway. Something similar could be\n>>> useful for some other long-lived contexts.\n>>>\n>>\n>> Not sure such simple instrumentation would help much, unfortunately :-(\n>>\n>> We only discovered this because the user reported OOM crashes, which\n>> means the executor didn't get to the shutdown hook at all. Yeah, maybe\n>> if we had such instrumentation it'd get triggered for milder cases, but\n>> I'd bet the amount of noise is going to be significant.\n>>\n>> For example, there's a nearby thread about hashjoin allocating buffiles\n>> etc. in ExecutorState - we ended up moving that to a separate context,\n>> but surely there are more such cases (not just for ExecutorState).\n> \n> Yes, that's why I said that we would have to more of those into dedicated\n> contexts - which is a good idea independent of this issue.\n> \n\nYeah, I think that's a good idea in principle.\n\n> \n>> The really hard thing was determining what causes the memory leak - the\n>> simple instrumentation doesn't help with that at all. It tells you there\n>> might be a leak, but you don't know where did the allocations came from.\n>>\n>> What I ended up doing is a simple gdb script that sets breakpoints on\n>> all palloc/pfree variants, and prints info (including the backtrace) for\n>> each call on ExecutorState. And then a script that aggregate those to\n>> identify which backtraces allocated most chunks that were not freed.\n> \n> FWIW, for things like this I found \"heaptrack\" to be extremely helpful.\n> \n> E.g. for a reproducer of the problem here, it gave me the attach \"flame graph\"\n> of the peak memory usage - after attaching to a running backend and running an\n> UPDATE triggering the leak..\n> \n> Because the view I screenshotted shows the stacks contributing to peak memory\n> usage, it works nicely to find \"temporary leaks\", even if memory is actually\n> freed after all etc.\n> \n\nThat's a nice visualization, but isn't that useful only once you\ndetermine there's a memory leak? Which I think is the hard problem.\n\n> \n> \n>>> Hm. Somehow this doesn't seem quite right. Shouldn't we try to use a shorter\n>>> lived memory context instead? Otherwise we'll just end up with the same\n>>> problem in a few years.\n>>>\n>>\n>> I agree using a shorter lived memory context would be more elegant, and\n>> more in line with how we do things. 
But it's not clear to me why we'd\n>> end up with the same problem in a few years with what the patch does.\n> \n> Because it sets up the pattern of manual memory management and continues to\n> run the relevant code within a query-lifetime context.\n> \n\nOh, you mean someone might add new allocations to this code (or into one\nof the functions executed from it), and that'd leak again? Yeah, true.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 24 May 2023 21:56:22 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-24 21:49:13 +0200, Tomas Vondra wrote:\n> >> and then have to pass updatedCols elsewhere. It's tricky to just switch\n> >> to the context (e.g. in ExecASUpdateTriggers/ExecARUpdateTriggers), as\n> >> AfterTriggerSaveEvent allocates other bits of memory too (in a longer\n> >> lived context).\n> > \n> > Hm - on a quick look the allocations in trigger.c itself are done with\n> > MemoryContextAlloc().\n> > \n> > I did find a problematic path, namely that ExecGetChildToRootMap() ends up\n> > building resultRelInfo->ri_ChildToRootMap in CurrentMemoryContext.\n> > \n> > That seems like a flat out bug to me - we can't just store data in a\n> > ResultRelInfo without ensuring the memory is lives long enough. Nearby places\n> > like ExecGetRootToChildMap() do make sure to switch to es_query_cxt.\n> > \n> > \n> > Did you see other bits of memory getting allocated in CurrentMemoryContext?\n> > \n> \n> No, I simply tried to do the context switch and then gave up when it\n> crashed on the ExecGetRootToChildMap info. I haven't looked much\n> further, but you may be right it's the only bit.\n> \n> It didn't occur to me it might be a bug - I think the code simply\n> assumes it gets called with suitable memory context, just like we do in\n> various other places. Maybe it should document the assumption.\n\nI think architecturally, code storing stuff in a ResultRelInfo - which has\nquery lifetime - ought to be careful to allocate memory with such lifetime.\nNote that the nearby ExecGetRootToChildMap() actually is careful to do so.\n\n\n> >> So we'd have to do another switch again. Not sure how\n> >> backpatch-friendly would that be.\n> > \n> > Yea, that's a valid concern. I think it might be reasonable to use something\n> > like ExecGetAllUpdatedColsCtx() in the backbranches, and switch to a\n> > short-lived context for the trigger invocations in >= 16.\n\n\nHm - stepping back a bit, why are we doing the work in ExecGetAllUpdatedCols()\nover and over? Unless I am missing something, the result doesn't change\nacross rows. And it doesn't look that cheap to compute, leaving aside the\nallocation that bms_union() does.\n\nIt's visible in profiles, not as a top entry, but still.\n\nPerhaps the easiest to backpatch fix is to just avoid recomputing the value?\nBut perhaps it'd be just as problmeatic, because callers might modify\nExecGetAllUpdatedCols()'s return value in place...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 May 2023 13:19:15 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-24 21:56:22 +0200, Tomas Vondra wrote:\n> >> The really hard thing was determining what causes the memory leak - the\n> >> simple instrumentation doesn't help with that at all. It tells you there\n> >> might be a leak, but you don't know where did the allocations came from.\n> >>\n> >> What I ended up doing is a simple gdb script that sets breakpoints on\n> >> all palloc/pfree variants, and prints info (including the backtrace) for\n> >> each call on ExecutorState. And then a script that aggregate those to\n> >> identify which backtraces allocated most chunks that were not freed.\n> > \n> > FWIW, for things like this I found \"heaptrack\" to be extremely helpful.\n> > \n> > E.g. for a reproducer of the problem here, it gave me the attach \"flame graph\"\n> > of the peak memory usage - after attaching to a running backend and running an\n> > UPDATE triggering the leak..\n> > \n> > Because the view I screenshotted shows the stacks contributing to peak memory\n> > usage, it works nicely to find \"temporary leaks\", even if memory is actually\n> > freed after all etc.\n> > \n> \n> That's a nice visualization, but isn't that useful only once you\n> determine there's a memory leak? Which I think is the hard problem.\n\nSo is your gdb approach, unless I am misunderstanding? The view I\nscreenshotted shows the \"peak\" allocated memory, if you have a potential leak,\nyou can see where most of the allocated memory was allocated. Which at least\nprovides you with a good idea of where to look for a problem in more detail.\n\n\n> >>> Hm. Somehow this doesn't seem quite right. Shouldn't we try to use a shorter\n> >>> lived memory context instead? Otherwise we'll just end up with the same\n> >>> problem in a few years.\n> >>>\n> >>\n> >> I agree using a shorter lived memory context would be more elegant, and\n> >> more in line with how we do things. But it's not clear to me why we'd\n> >> end up with the same problem in a few years with what the patch does.\n> > \n> > Because it sets up the pattern of manual memory management and continues to\n> > run the relevant code within a query-lifetime context.\n> > \n> \n> Oh, you mean someone might add new allocations to this code (or into one\n> of the functions executed from it), and that'd leak again? Yeah, true.\n\nYes. It's certainly not obvious this far down that we are called in a\nsemi-long-lived memory context.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 May 2023 13:22:09 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "\n\nOn 5/24/23 22:22, Andres Freund wrote:\n> Hi,\n> \n> On 2023-05-24 21:56:22 +0200, Tomas Vondra wrote:\n>>>> The really hard thing was determining what causes the memory leak - the\n>>>> simple instrumentation doesn't help with that at all. It tells you there\n>>>> might be a leak, but you don't know where did the allocations came from.\n>>>>\n>>>> What I ended up doing is a simple gdb script that sets breakpoints on\n>>>> all palloc/pfree variants, and prints info (including the backtrace) for\n>>>> each call on ExecutorState. And then a script that aggregate those to\n>>>> identify which backtraces allocated most chunks that were not freed.\n>>>\n>>> FWIW, for things like this I found \"heaptrack\" to be extremely helpful.\n>>>\n>>> E.g. for a reproducer of the problem here, it gave me the attach \"flame graph\"\n>>> of the peak memory usage - after attaching to a running backend and running an\n>>> UPDATE triggering the leak..\n>>>\n>>> Because the view I screenshotted shows the stacks contributing to peak memory\n>>> usage, it works nicely to find \"temporary leaks\", even if memory is actually\n>>> freed after all etc.\n>>>\n>>\n>> That's a nice visualization, but isn't that useful only once you\n>> determine there's a memory leak? Which I think is the hard problem.\n> \n> So is your gdb approach, unless I am misunderstanding? The view I\n> screenshotted shows the \"peak\" allocated memory, if you have a potential leak,\n> you can see where most of the allocated memory was allocated. Which at least\n> provides you with a good idea of where to look for a problem in more detail.\n> \n\nRight, it wasn't my ambition to detect memory leaks but to source of the\nleak if there's one. I got a bit distracted by the discussion detecting\nleaks etc.\n\n> \n>>>>> Hm. Somehow this doesn't seem quite right. Shouldn't we try to use a shorter\n>>>>> lived memory context instead? Otherwise we'll just end up with the same\n>>>>> problem in a few years.\n>>>>>\n>>>>\n>>>> I agree using a shorter lived memory context would be more elegant, and\n>>>> more in line with how we do things. But it's not clear to me why we'd\n>>>> end up with the same problem in a few years with what the patch does.\n>>>\n>>> Because it sets up the pattern of manual memory management and continues to\n>>> run the relevant code within a query-lifetime context.\n>>>\n>>\n>> Oh, you mean someone might add new allocations to this code (or into one\n>> of the functions executed from it), and that'd leak again? Yeah, true.\n> \n> Yes. It's certainly not obvious this far down that we are called in a\n> semi-long-lived memory context.\n> \n\nThat's true, but I don't see how adding a ExecGetAllUpdatedCols()\nvariant that allocates stuff in a short-lived context improves this.\nThat'll only cover memory allocated in ExecGetAllUpdatedCols() and\nnothing else.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 25 May 2023 16:27:16 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "On 5/24/23 21:49, Tomas Vondra wrote:\n> \n> \n> On 5/24/23 20:55, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2023-05-24 15:38:41 +0200, Tomas Vondra wrote:\n>>> I looked at this again, and I think GetPerTupleMemoryContext(estate)\n>>> might do the trick, see the 0002 part.\n>>\n>> Yea, that seems like the right thing here.\n>>\n>>\n>>> Unfortunately it's not much\n>>> smaller/simpler than just freeing the chunks, because we end up doing\n>>>\n>>> oldcxt = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));\n>>> updatedCols = ExecGetAllUpdatedCols(relinfo, estate);\n>>> MemoryContextSwitchTo(oldcxt);\n>>\n>> We could add a variant of ExecGetAllUpdatedCols that switches the context.\n>>\n> \n> Yeah, we could do that. I was thinking about backpatching, and modifying\n> ExecGetAllUpdatedCols signature would be ABI break, but adding a\n> variant should be fine.\n> \n>>\n>>> and then have to pass updatedCols elsewhere. It's tricky to just switch\n>>> to the context (e.g. in ExecASUpdateTriggers/ExecARUpdateTriggers), as\n>>> AfterTriggerSaveEvent allocates other bits of memory too (in a longer\n>>> lived context).\n>>\n>> Hm - on a quick look the allocations in trigger.c itself are done with\n>> MemoryContextAlloc().\n>>\n>> I did find a problematic path, namely that ExecGetChildToRootMap() ends up\n>> building resultRelInfo->ri_ChildToRootMap in CurrentMemoryContext.\n>>\n>> That seems like a flat out bug to me - we can't just store data in a\n>> ResultRelInfo without ensuring the memory is lives long enough. Nearby places\n>> like ExecGetRootToChildMap() do make sure to switch to es_query_cxt.\n>>\n>>\n>> Did you see other bits of memory getting allocated in CurrentMemoryContext?\n>>\n> \n> No, I simply tried to do the context switch and then gave up when it\n> crashed on the ExecGetRootToChildMap info. I haven't looked much\n> further, but you may be right it's the only bit.\n> \n> It didn't occur to me it might be a bug - I think the code simply\n> assumes it gets called with suitable memory context, just like we do in\n> various other places. Maybe it should document the assumption.\n> \n>>\n>>> So we'd have to do another switch again. Not sure how\n>>> backpatch-friendly would that be.\n>>\n>> Yea, that's a valid concern. I think it might be reasonable to use something\n>> like ExecGetAllUpdatedColsCtx() in the backbranches, and switch to a\n>> short-lived context for the trigger invocations in >= 16.\n>>\n> \n\nThe attached patch does this - I realized we actually have estate in\nExecGetAllUpdatedCols(), so we don't even need a variant with a\ndifferent signature. That makes the patch much simpler.\n\nThe question is whether we need the signature anyway. There might be a\ncaller expecting the result to be in CurrentMemoryContext (be it\nExecutorState or something else), and this change might break it. But\nI'm not aware of any callers, so maybe that's fine.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 25 May 2023 16:41:14 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "On 5/24/23 22:19, Andres Freund wrote:\n>\n> ...\n> \n> Hm - stepping back a bit, why are we doing the work in ExecGetAllUpdatedCols()\n> over and over? Unless I am missing something, the result doesn't change\n> across rows. And it doesn't look that cheap to compute, leaving aside the\n> allocation that bms_union() does.\n> \n> It's visible in profiles, not as a top entry, but still.\n> \n> Perhaps the easiest to backpatch fix is to just avoid recomputing the value?\n> But perhaps it'd be just as problmeatic, because callers might modify\n> ExecGetAllUpdatedCols()'s return value in place...\n> \n\nYes, I think that's a perfectly valid point - I was actually wondering\nabout that too, but I was focused on fixing this in backbranches so I\nleft this as a future optimization (assuming it can be cached).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 25 May 2023 16:43:55 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "On 5/25/23 16:41, Tomas Vondra wrote:\n> ...\n>\n> The attached patch does this - I realized we actually have estate in\n> ExecGetAllUpdatedCols(), so we don't even need a variant with a\n> different signature. That makes the patch much simpler.\n> \n> The question is whether we need the signature anyway. There might be a\n> caller expecting the result to be in CurrentMemoryContext (be it\n> ExecutorState or something else), and this change might break it. But\n> I'm not aware of any callers, so maybe that's fine.\n> \n\nI've just pushed a fix along these lines, with a comment explaining that\nthe caller is expected to copy the bitmap into a different context if\nper-tuple context is not sufficient.\n\nIMHO this is the simplest backpatchable solution, and I haven't found\nany callers that'd need to do that.\n\nI agree with the idea not to calculate the bitmap over and over, but\nthat's clearly not backpatchable so it's a matter for separate patch.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 7 Jun 2023 19:05:39 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "Tomas Vondra писал 2023-05-25 17:41:\n\n> The attached patch does this - I realized we actually have estate in\n> ExecGetAllUpdatedCols(), so we don't even need a variant with a\n> different signature. That makes the patch much simpler.\n> \n> The question is whether we need the signature anyway. There might be a\n> caller expecting the result to be in CurrentMemoryContext (be it\n> ExecutorState or something else), and this change might break it. But\n> I'm not aware of any callers, so maybe that's fine.\n> \n\nHi.\nUnfortunately, this patch has broken trigdata->tg_updatedcols usage in \nAFTER UPDATE triggers.\nShould it be if not fixed, but at least mentioned in documentation?\n\nAttaching sample code. After creating trigger, an issue can be \nreproduced as this:\n\ncreate table test (i int, j int);\ncreate function report_update_fields() RETURNS TRIGGER AS \n'/location/to/trig_test.so' language C;\ncreate trigger test_update after update ON test FOR EACH ROW EXECUTE \nFUNCTION report_update_fields();\ninsert into test values (1, 12);\nupdate test set j=2;\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Thu, 22 Jun 2023 14:07:42 +0300",
"msg_from": "Alexander Pyhalov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "\n\nOn 6/22/23 13:07, Alexander Pyhalov wrote:\n> Tomas Vondra писал 2023-05-25 17:41:\n> \n>> The attached patch does this - I realized we actually have estate in\n>> ExecGetAllUpdatedCols(), so we don't even need a variant with a\n>> different signature. That makes the patch much simpler.\n>>\n>> The question is whether we need the signature anyway. There might be a\n>> caller expecting the result to be in CurrentMemoryContext (be it\n>> ExecutorState or something else), and this change might break it. But\n>> I'm not aware of any callers, so maybe that's fine.\n>>\n> \n> Hi.\n> Unfortunately, this patch has broken trigdata->tg_updatedcols usage in\n> AFTER UPDATE triggers.\n> Should it be if not fixed, but at least mentioned in documentation?\n> \n\nThat's clearly a bug, we can't just break stuff like that.\n\n> Attaching sample code. After creating trigger, an issue can be\n> reproduced as this:\n> \n> create table test (i int, j int);\n> create function report_update_fields() RETURNS TRIGGER AS\n> '/location/to/trig_test.so' language C;\n> create trigger test_update after update ON test FOR EACH ROW EXECUTE\n> FUNCTION report_update_fields();\n> insert into test values (1, 12);\n> update test set j=2;\n> \n\nI haven't tried the reproducer, but I think I see the issue - we store\nthe bitmap as part of the event to be executed later, but the bitmap is\nin per-tuple context and gets reset. So I guess we need to copy it into\nthe proper long-lived context (e.g. AfterTriggerEvents).\n\nI'll get that fixed.\n\nAnyway, this probably hints we should not recalculate the bitmap over\nand over, but calculate it once and keep it for all events. Not\nsomething we can do in backbranches, though.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 22 Jun 2023 13:46:02 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "On 6/22/23 13:46, Tomas Vondra wrote:\n> ...\n> \n> I haven't tried the reproducer, but I think I see the issue - we store\n> the bitmap as part of the event to be executed later, but the bitmap is\n> in per-tuple context and gets reset. So I guess we need to copy it into\n> the proper long-lived context (e.g. AfterTriggerEvents).\n> \n> I'll get that fixed.\n> \n\nAlexander, can you try if this fixes the issue for you?\n\n\nregard\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 22 Jun 2023 16:16:06 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "Tomas Vondra писал 2023-06-22 17:16:\n> On 6/22/23 13:46, Tomas Vondra wrote:\n>> ...\n>> \n>> I haven't tried the reproducer, but I think I see the issue - we store\n>> the bitmap as part of the event to be executed later, but the bitmap \n>> is\n>> in per-tuple context and gets reset. So I guess we need to copy it \n>> into\n>> the proper long-lived context (e.g. AfterTriggerEvents).\n>> \n>> I'll get that fixed.\n>> \n> \n> Alexander, can you try if this fixes the issue for you?\n> \n> \n> regard\n\nHi.\nThe patch fixes the problem and looks good to me.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n",
"msg_date": "Fri, 23 Jun 2023 09:03:24 +0300",
"msg_from": "Alexander Pyhalov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
},
{
"msg_contents": "\n\nOn 6/23/23 08:03, Alexander Pyhalov wrote:\n> Tomas Vondra писал 2023-06-22 17:16:\n>> On 6/22/23 13:46, Tomas Vondra wrote:\n>>> ...\n>>>\n>>> I haven't tried the reproducer, but I think I see the issue - we store\n>>> the bitmap as part of the event to be executed later, but the bitmap is\n>>> in per-tuple context and gets reset. So I guess we need to copy it into\n>>> the proper long-lived context (e.g. AfterTriggerEvents).\n>>>\n>>> I'll get that fixed.\n>>>\n>>\n>> Alexander, can you try if this fixes the issue for you?\n>>\n>>\n>> regard\n> \n> Hi.\n> The patch fixes the problem and looks good to me.\n\nThanks, I've pushed the fix, including backpatch to 13+ (12 is not\naffected by the oversight, the bitmap was added by 71d60e2aa0).\n\nI think it'd be good to investigate if it's possible to compute the\nbitmap only once - as already suggested by Andres, but that's a matter\nfor separate patch, not a bugfix.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 2 Jul 2023 22:26:35 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: memory leak in trigger handling (since PG12)"
}
] |
[
{
"msg_contents": "Hi!\n\nI noticed that errors due to writable CTEs in read-only or non-volatile context say the offensive command is SELECT.\n\nFor example a writeable CTE in a IMMUTABLE function:\n\n CREATE TABLE t (x INTEGER);\n\n CREATE FUNCTION immutable_func()\n RETURNS INTEGER\n LANGUAGE SQL\n IMMUTABLE\n AS $$\n WITH x AS (\n INSERT INTO t (x) VALUES (1) RETURNING x\n ) SELECT * FROM x;\n $$;\n\n SELECT immutable_func();\n\n ERROR: SELECT is not allowed in a non-volatile function\n\nOr a writeable CTE in read-only transaction:\n\n START TRANSACTION READ ONLY;\n WITH x AS (\n INSERT INTO t (x) VALUES (1) RETURNING x\n )\n SELECT * FROM x;\n\n ERROR: cannot execute SELECT in a read-only transaction\n\nMy first thought was that these error messages should mention INSERT, but after looking into the source I’m not sure anymore. The name of the command is obtained from CreateCommandName(). After briefly looking around it doesn’t seem to be trivial to introduce something along the line of CreateModifyingCommandName().\n\nSo I started by using a different error message at those places where I think it should. I’ve attached a patch for reference, but I’m not happy with it. In particular I’m unsure about the SPI stuff (how to test?) and if there are more cases as those covered by the patch. Ultimately getting hold of the command name might also be beneficial for a new error message.\n\n A WITH clause containing a data-modifying statement is not allowed in a read-only transaction\n\nIt wouldn’t make me sad if somebody who touches the code more often than once every few years can take care of it.\n\n-markus",
"msg_date": "Tue, 23 May 2023 19:12:23 +0200",
"msg_from": "Markus Winand <[email protected]>",
"msg_from_op": true,
"msg_subject": "Wrong command name in writeable-CTE related error messages"
},
{
"msg_contents": "Markus Winand <[email protected]> writes:\n> I noticed that errors due to writable CTEs in read-only or non-volatile context say the offensive command is SELECT.\n\nGood point.\n\n> My first thought was that these error messages should mention INSERT, but after looking into the source I’m not sure anymore. The name of the command is obtained from CreateCommandName(). After briefly looking around it doesn’t seem to be trivial to introduce something along the line of CreateModifyingCommandName().\n\nYeah, you would have to inspect the plan tree pretty carefully to\ndetermine that.\n\nGiven the way the test is written, maybe it'd make sense to forget about\nmentioning the command name, and instead identify the table we are\ncomplaining about:\n\nERROR: table \"foo\" cannot be modified in a read-only transaction\n\nI don't see any huge point in using PreventCommandIfReadOnly if we\ngo that way, so no refactoring is needed: just test XactReadOnly\ndirectly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 May 2023 13:40:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong command name in writeable-CTE related error messages"
},
{
"msg_contents": "> On 23.05.2023, at 19:40, Tom Lane <[email protected]> wrote:\n> \n> Markus Winand <[email protected]> writes:\n>> I noticed that errors due to writable CTEs in read-only or non-volatile context say the offensive command is SELECT.\n> \n> Good point.\n> \n>> My first thought was that these error messages should mention INSERT, but after looking into the source I’m not sure anymore. The name of the command is obtained from CreateCommandName(). After briefly looking around it doesn’t seem to be trivial to introduce something along the line of CreateModifyingCommandName().\n> \n> Yeah, you would have to inspect the plan tree pretty carefully to\n> determine that.\n> \n> Given the way the test is written, maybe it'd make sense to forget about\n> mentioning the command name, and instead identify the table we are\n> complaining about:\n> \n> ERROR: table \"foo\" cannot be modified in a read-only transaction\n\nAttached patch takes the active form:\n\n cannot modify table ”foo\" in a read-only transaction\n\nIt obtains the table name by searching rtable for an RTE_RELATION with rellockmode == RowExclusiveLock. Not sure if there are any cases where that falls apart.\n\n> I don't see any huge point in using PreventCommandIfReadOnly if we\n> go that way, so no refactoring is needed: just test XactReadOnly\n> directly.\n\nAs there are several places where this is needed, the patch introduces some utility functions.\n\nMore interestingly, I found that BEGIN ATOMIC bodies of non-volatile functions happily accept data-modifying statements and FOR UPDATE. While they fail at runtime it was my expectation that this would be caught at CREATE time. The attached patch also takes care of this by walking the Query tree and looking for resultRelation and hasForUpdate — assuming that non-volatile functions should have neither. Let me know if this is desired behavior or not.\n\n-markus",
"msg_date": "Thu, 7 Sep 2023 19:51:24 +0200",
"msg_from": "Markus Winand <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Wrong command name in writeable-CTE related error messages"
}
] |
[
{
"msg_contents": "Hello,\n\nWe (Neon) have noticed that pgbench can be quite slow to populate data\nin regard to higher latency connections. Higher scale factors exacerbate\nthis problem. Some employees work on a different continent than the\ndatabases they might be benchmarking. By moving pgbench to use COPY for\npopulating all tables, we can reduce some of the time pgbench takes for\nthis particular step. \n\nI wanted to come with benchmarks, but unfortunately I won't have them\nuntil next month. I can follow-up in a future email.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Tue, 23 May 2023 12:33:21 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Tue May 23, 2023 at 12:33 PM CDT, Tristan Partin wrote:\n> I wanted to come with benchmarks, but unfortunately I won't have them\n> until next month. I can follow-up in a future email.\n\nI finally got around to benchmarking.\n\nmaster:\n\n$ ./build/src/bin/pgbench/pgbench -i -s 500 CONNECTION_STRING\ndropping old tables...\nNOTICE: table \"pgbench_accounts\" does not exist, skipping\nNOTICE: table \"pgbench_branches\" does not exist, skipping\nNOTICE: table \"pgbench_history\" does not exist, skipping\nNOTICE: table \"pgbench_tellers\" does not exist, skipping\ncreating tables...\ngenerating data (client-side)...\n50000000 of 50000000 tuples (100%) done (elapsed 260.93 s, remaining 0.00 s))\nvacuuming...\ncreating primary keys...\ndone in 1414.26 s (drop tables 0.20 s, create tables 0.82 s, client-side generate 1280.43 s, vacuum 2.55 s, primary keys 130.25 s).\n\npatchset:\n\n$ ./build/src/bin/pgbench/pgbench -i -s 500 CONNECTION_STRING\ndropping old tables...\nNOTICE: table \"pgbench_accounts\" does not exist, skipping\nNOTICE: table \"pgbench_branches\" does not exist, skipping\nNOTICE: table \"pgbench_history\" does not exist, skipping\nNOTICE: table \"pgbench_tellers\" does not exist, skipping\ncreating tables...\ngenerating data (client-side)...\n50000000 of 50000000 tuples (100%) of pgbench_accounts done (elapsed 243.82 s, remaining 0.00 s))\nvacuuming...\ncreating primary keys...\ndone in 375.66 s (drop tables 0.14 s, create tables 0.73 s, client-side generate 246.27 s, vacuum 2.77 s, primary keys 125.75 s).\n\nI am located in Austin, Texas. The server was located in Ireland. Just\nwanted to put some distance between the client and server. To summarize,\nthat is about an 80% reduction in client-side data generation times.\n\nNote that I used Neon to collect these results.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 07 Jun 2023 14:16:00 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Thu, 8 Jun 2023 at 07:16, Tristan Partin <[email protected]> wrote:\n>\n> master:\n>\n> 50000000 of 50000000 tuples (100%) done (elapsed 260.93 s, remaining 0.00 s))\n> vacuuming...\n> creating primary keys...\n> done in 1414.26 s (drop tables 0.20 s, create tables 0.82 s, client-side generate 1280.43 s, vacuum 2.55 s, primary keys 130.25 s).\n>\n> patchset:\n>\n> 50000000 of 50000000 tuples (100%) of pgbench_accounts done (elapsed 243.82 s, remaining 0.00 s))\n> vacuuming...\n> creating primary keys...\n> done in 375.66 s (drop tables 0.14 s, create tables 0.73 s, client-side generate 246.27 s, vacuum 2.77 s, primary keys 125.75 s).\n\nI've also previously found pgbench -i to be slow. It was a while ago,\nand IIRC, it was due to the printfPQExpBuffer() being a bottleneck\ninside pgbench.\n\nOn seeing your email, it makes me wonder if PG16's hex integer\nliterals might help here. These should be much faster to generate in\npgbench and also parse on the postgres side.\n\nI wrote a quick and dirty patch to try that and I'm not really getting\nthe same performance increases as I'd have expected. I also tested\nwith your patch too and it does not look that impressive either when\nrunning pgbench on the same machine as postgres.\n\npgbench copy speedup\n\n** master\ndrowley@amd3990x:~$ pgbench -i -s 1000 postgres\n100000000 of 100000000 tuples (100%) done (elapsed 74.15 s, remaining 0.00 s)\nvacuuming...\ncreating primary keys...\ndone in 95.71 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 74.45 s, vacuum 0.12 s, primary keys 21.13 s).\n\n** David's Patched\ndrowley@amd3990x:~$ pgbench -i -s 1000 postgres\n100000000 of 100000000 tuples (100%) done (elapsed 69.64 s, remaining 0.00 s)\nvacuuming...\ncreating primary keys...\ndone in 90.22 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 69.91 s, vacuum 0.12 s, primary keys 20.18 s).\n\n** Tristan's patch\ndrowley@amd3990x:~$ pgbench -i -s 1000 postgres\n100000000 of 100000000 tuples (100%) of pgbench_accounts done (elapsed\n77.44 s, remaining 0.00 s)\nvacuuming...\ncreating primary keys...\ndone in 98.64 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 77.47 s, vacuum 0.12 s, primary keys 21.04 s).\n\nI'm interested to see what numbers you get. You'd need to test on\nPG16 however. I left the old code in place to generate the decimal\nnumbers for versions < 16.\n\nDavid",
"msg_date": "Thu, 8 Jun 2023 17:33:57 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "I guess that COPY will still be slower than generating the data\nserver-side ( --init-steps=...G... ) ?\n\nWhat I'd really like to see is providing all the pgbench functions\nalso on the server. Specifically the various random(...) functions -\nrandom_exponential(...), random_gaussian(...), random_zipfian(...) so\nthat also custom data generationm could be easily done server-side\nwith matching distributions.\n\nOn Thu, Jun 8, 2023 at 7:34 AM David Rowley <[email protected]> wrote:\n>\n> On Thu, 8 Jun 2023 at 07:16, Tristan Partin <[email protected]> wrote:\n> >\n> > master:\n> >\n> > 50000000 of 50000000 tuples (100%) done (elapsed 260.93 s, remaining 0.00 s))\n> > vacuuming...\n> > creating primary keys...\n> > done in 1414.26 s (drop tables 0.20 s, create tables 0.82 s, client-side generate 1280.43 s, vacuum 2.55 s, primary keys 130.25 s).\n> >\n> > patchset:\n> >\n> > 50000000 of 50000000 tuples (100%) of pgbench_accounts done (elapsed 243.82 s, remaining 0.00 s))\n> > vacuuming...\n> > creating primary keys...\n> > done in 375.66 s (drop tables 0.14 s, create tables 0.73 s, client-side generate 246.27 s, vacuum 2.77 s, primary keys 125.75 s).\n>\n> I've also previously found pgbench -i to be slow. It was a while ago,\n> and IIRC, it was due to the printfPQExpBuffer() being a bottleneck\n> inside pgbench.\n>\n> On seeing your email, it makes me wonder if PG16's hex integer\n> literals might help here. These should be much faster to generate in\n> pgbench and also parse on the postgres side.\n>\n> I wrote a quick and dirty patch to try that and I'm not really getting\n> the same performance increases as I'd have expected. I also tested\n> with your patch too and it does not look that impressive either when\n> running pgbench on the same machine as postgres.\n>\n> pgbench copy speedup\n>\n> ** master\n> drowley@amd3990x:~$ pgbench -i -s 1000 postgres\n> 100000000 of 100000000 tuples (100%) done (elapsed 74.15 s, remaining 0.00 s)\n> vacuuming...\n> creating primary keys...\n> done in 95.71 s (drop tables 0.00 s, create tables 0.01 s, client-side\n> generate 74.45 s, vacuum 0.12 s, primary keys 21.13 s).\n>\n> ** David's Patched\n> drowley@amd3990x:~$ pgbench -i -s 1000 postgres\n> 100000000 of 100000000 tuples (100%) done (elapsed 69.64 s, remaining 0.00 s)\n> vacuuming...\n> creating primary keys...\n> done in 90.22 s (drop tables 0.00 s, create tables 0.01 s, client-side\n> generate 69.91 s, vacuum 0.12 s, primary keys 20.18 s).\n>\n> ** Tristan's patch\n> drowley@amd3990x:~$ pgbench -i -s 1000 postgres\n> 100000000 of 100000000 tuples (100%) of pgbench_accounts done (elapsed\n> 77.44 s, remaining 0.00 s)\n> vacuuming...\n> creating primary keys...\n> done in 98.64 s (drop tables 0.00 s, create tables 0.01 s, client-side\n> generate 77.47 s, vacuum 0.12 s, primary keys 21.04 s).\n>\n> I'm interested to see what numbers you get. You'd need to test on\n> PG16 however. I left the old code in place to generate the decimal\n> numbers for versions < 16.\n>\n> David\n\n\n",
"msg_date": "Thu, 8 Jun 2023 13:37:11 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Thu Jun 8, 2023 at 12:33 AM CDT, David Rowley wrote:\n> On Thu, 8 Jun 2023 at 07:16, Tristan Partin <[email protected]> wrote:\n> >\n> > master:\n> >\n> > 50000000 of 50000000 tuples (100%) done (elapsed 260.93 s, remaining 0.00 s))\n> > vacuuming...\n> > creating primary keys...\n> > done in 1414.26 s (drop tables 0.20 s, create tables 0.82 s, client-side generate 1280.43 s, vacuum 2.55 s, primary keys 130.25 s).\n> >\n> > patchset:\n> >\n> > 50000000 of 50000000 tuples (100%) of pgbench_accounts done (elapsed 243.82 s, remaining 0.00 s))\n> > vacuuming...\n> > creating primary keys...\n> > done in 375.66 s (drop tables 0.14 s, create tables 0.73 s, client-side generate 246.27 s, vacuum 2.77 s, primary keys 125.75 s).\n>\n> I've also previously found pgbench -i to be slow. It was a while ago,\n> and IIRC, it was due to the printfPQExpBuffer() being a bottleneck\n> inside pgbench.\n>\n> On seeing your email, it makes me wonder if PG16's hex integer\n> literals might help here. These should be much faster to generate in\n> pgbench and also parse on the postgres side.\n\nDo you have a link to some docs? I have not heard of the feature.\nDefinitely feels like a worthy cause.\n\n> I wrote a quick and dirty patch to try that and I'm not really getting\n> the same performance increases as I'd have expected. I also tested\n> with your patch too and it does not look that impressive either when\n> running pgbench on the same machine as postgres.\n\nI didn't expect my patch to increase performance in all workloads. I was\nmainly aiming to fix high-latency connections. Based on your results\nthat looks like a 4% reduction in performance of client-side data\ngeneration. I had thought maybe it is worth having a flag to keep the\nold way too, but I am not sure a 4% hit is really that big of a deal.\n\n> pgbench copy speedup\n>\n> ** master\n> drowley@amd3990x:~$ pgbench -i -s 1000 postgres\n> 100000000 of 100000000 tuples (100%) done (elapsed 74.15 s, remaining 0.00 s)\n> vacuuming...\n> creating primary keys...\n> done in 95.71 s (drop tables 0.00 s, create tables 0.01 s, client-side\n> generate 74.45 s, vacuum 0.12 s, primary keys 21.13 s).\n>\n> ** David's Patched\n> drowley@amd3990x:~$ pgbench -i -s 1000 postgres\n> 100000000 of 100000000 tuples (100%) done (elapsed 69.64 s, remaining 0.00 s)\n> vacuuming...\n> creating primary keys...\n> done in 90.22 s (drop tables 0.00 s, create tables 0.01 s, client-side\n> generate 69.91 s, vacuum 0.12 s, primary keys 20.18 s).\n>\n> ** Tristan's patch\n> drowley@amd3990x:~$ pgbench -i -s 1000 postgres\n> 100000000 of 100000000 tuples (100%) of pgbench_accounts done (elapsed\n> 77.44 s, remaining 0.00 s)\n> vacuuming...\n> creating primary keys...\n> done in 98.64 s (drop tables 0.00 s, create tables 0.01 s, client-side\n> generate 77.47 s, vacuum 0.12 s, primary keys 21.04 s).\n>\n> I'm interested to see what numbers you get. You'd need to test on\n> PG16 however. I left the old code in place to generate the decimal\n> numbers for versions < 16.\n\nI will try to test this soon and follow up on the thread. I definitely\nsee no problems with your patch as is though. I would be more than happy\nto rebase my patches on yours.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 08 Jun 2023 11:38:01 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Tue, May 23, 2023 at 1:33 PM Tristan Partin <[email protected]> wrote:\n\n> We (Neon) have noticed that pgbench can be quite slow to populate data\n> in regard to higher latency connections. Higher scale factors exacerbate\n> this problem. Some employees work on a different continent than the\n> databases they might be benchmarking. By moving pgbench to use COPY for\n> populating all tables, we can reduce some of the time pgbench takes for\n> this particular step.\n>\n\nWhen latency is continent size high, pgbench should be run with server-side\ntable generation instead of using COPY at all, for any table. The default\nCOPY based pgbench generation is only intended for testing where the client\nand server are very close on the network.\n\nUnfortunately there's no simple command line option to change just that one\nthing about how pgbench runs. You have to construct a command line that\ndocuments each and every step you want instead. You probably just want\nthis form:\n\n $ pgbench -i -I dtGvp -s 500\n\nThat's server-side table generation with all the usual steps. I use this\ninstead of COPY in pgbench-tools so much now, basically whenever I'm\ntalking to a cloud system, that I have a simple 0/1 config option to switch\nbetween the modes, and this long weird one is the default now.\n\nTry that out, and once you see the numbers my bet is you'll see extending\nwhich tables get COPY isn't needed by your use case anymore. Basically, if\nyou are close enough to use COPY instead of server-side generation, you are\nclose enough that every table besides accounts will not add up to enough\ntime to worry about optimizing the little ones.\n\n--\nGreg Smith [email protected]\nDirector of Open Source Strategy\n\nOn Tue, May 23, 2023 at 1:33 PM Tristan Partin <[email protected]> wrote:We (Neon) have noticed that pgbench can be quite slow to populate data\nin regard to higher latency connections. Higher scale factors exacerbate\nthis problem. Some employees work on a different continent than the\ndatabases they might be benchmarking. By moving pgbench to use COPY for\npopulating all tables, we can reduce some of the time pgbench takes for\nthis particular step. When latency is continent size high, pgbench should be run with \nserver-side table generation instead of using COPY at all, for any \ntable. The default COPY based pgbench generation is only intended for \ntesting where the client and server are very close on the network.Unfortunately\n there's no simple command line option to change just that one thing \nabout how pgbench runs. You have to construct a command line that \ndocuments each and every step you want instead. You probably just want \nthis form: $ pgbench -i -I dtGvp -s 500That's\n server-side table generation with all the usual steps. I use this \ninstead of COPY in pgbench-tools so much now, basically whenever I'm \ntalking to a cloud system, that I have a simple 0/1 config option to \nswitch between the modes, and this long weird one is the default now.Try\n that out, and once you see the numbers my bet is you'll see extending \nwhich tables get COPY isn't needed by your use case anymore. Basically,\n if you are close enough to use COPY instead of server-side generation, \nyou are close enough that every table besides accounts will not add up \nto enough time to worry about optimizing the little ones.--Greg Smith [email protected] of Open Source Strategy",
"msg_date": "Fri, 9 Jun 2023 09:24:31 -0400",
"msg_from": "Gregory Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Fri Jun 9, 2023 at 8:24 AM CDT, Gregory Smith wrote:\n> On Tue, May 23, 2023 at 1:33 PM Tristan Partin <[email protected]> wrote:\n>\n> > We (Neon) have noticed that pgbench can be quite slow to populate data\n> > in regard to higher latency connections. Higher scale factors exacerbate\n> > this problem. Some employees work on a different continent than the\n> > databases they might be benchmarking. By moving pgbench to use COPY for\n> > populating all tables, we can reduce some of the time pgbench takes for\n> > this particular step.\n> >\n>\n> When latency is continent size high, pgbench should be run with server-side\n> table generation instead of using COPY at all, for any table. The default\n> COPY based pgbench generation is only intended for testing where the client\n> and server are very close on the network.\n>\n> Unfortunately there's no simple command line option to change just that one\n> thing about how pgbench runs. You have to construct a command line that\n> documents each and every step you want instead. You probably just want\n> this form:\n>\n> $ pgbench -i -I dtGvp -s 500\n>\n> That's server-side table generation with all the usual steps. I use this\n> instead of COPY in pgbench-tools so much now, basically whenever I'm\n> talking to a cloud system, that I have a simple 0/1 config option to switch\n> between the modes, and this long weird one is the default now.\n>\n> Try that out, and once you see the numbers my bet is you'll see extending\n> which tables get COPY isn't needed by your use case anymore. Basically, if\n> you are close enough to use COPY instead of server-side generation, you are\n> close enough that every table besides accounts will not add up to enough\n> time to worry about optimizing the little ones.\n\nThanks for your input Greg. I'm sure you're correct that server-side data\ngeneration would probably fix the problem. I guess I am trying to\nunderstand if there are any downsides to just committing this anyway. I\nhaven't done any more testing, but David's email only showed a 4%\nperformance drop in the workload he tested. Combine this with his hex\npatch and we would see an overall performance improvement when\neverything is said and done.\n\nIt seems like this patch would still be good for client-side high scale\nfactor data generation (when server and client are close), but I would\nneed to do testing to confirm.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 09 Jun 2023 09:55:25 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "David,\n\nI think you should submit this patch standalone. I don't see any reason\nthis shouldn't be reviewed and committed when fully fleshed out.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 09 Jun 2023 09:59:47 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Fri, Jun 9, 2023 at 6:24 AM Gregory Smith <[email protected]> wrote:\n>\n> Unfortunately there's no simple command line option to change just that one thing about how pgbench runs. You have to construct a command line that documents each and every step you want instead. You probably just want this form:\n>\n> $ pgbench -i -I dtGvp -s 500\n\nThe steps are severely under-documented in pgbench --help output.\nGrepping that output I could not find any explanation of these steps,\nso I dug through the code and found them in runInitSteps(). Just as I\nwas thinking of writing a patch to remedy that, just to be sure, I\nchecked online docs and sure enough, they are well-documented under\npgbench [1].\n\nI think at least a pointer to the the pgbench docs should be mentioned\nin the pgbench --help output; an average user may not rush to read the\ncode to find the explanation, but a hint to where to find more details\nabout what the letters in --init-steps mean, would save them a lot of\ntime.\n\n[1]: https://www.postgresql.org/docs/15/pgbench.html\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Fri, 9 Jun 2023 10:25:14 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Fri Jun 9, 2023 at 12:25 PM CDT, Gurjeet Singh wrote:\n> On Fri, Jun 9, 2023 at 6:24 AM Gregory Smith <[email protected]> wrote:\n> >\n> > Unfortunately there's no simple command line option to change just that one thing about how pgbench runs. You have to construct a command line that documents each and every step you want instead. You probably just want this form:\n> >\n> > $ pgbench -i -I dtGvp -s 500\n>\n> The steps are severely under-documented in pgbench --help output.\n> Grepping that output I could not find any explanation of these steps,\n> so I dug through the code and found them in runInitSteps(). Just as I\n> was thinking of writing a patch to remedy that, just to be sure, I\n> checked online docs and sure enough, they are well-documented under\n> pgbench [1].\n>\n> I think at least a pointer to the the pgbench docs should be mentioned\n> in the pgbench --help output; an average user may not rush to read the\n> code to find the explanation, but a hint to where to find more details\n> about what the letters in --init-steps mean, would save them a lot of\n> time.\n>\n> [1]: https://www.postgresql.org/docs/15/pgbench.html\n>\n> Best regards,\n> Gurjeet\n> http://Gurje.et\n\nI think this would be nice to have if you wanted to submit a patch.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 09 Jun 2023 12:58:31 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Fri, Jun 9, 2023 at 1:25 PM Gurjeet Singh <[email protected]> wrote:\n\n> > $ pgbench -i -I dtGvp -s 500\n>\n> The steps are severely under-documented in pgbench --help output.\n>\n\nI agree it's not easy to find information. I just went through double\nchecking I had the order recently enough to remember what I did. The man\npages have this:\n\n> Each step is invoked in the specified order. The default is dtgvp.\n\nWhich was what I wanted to read. Meanwhile the --help output says:\n\n> -I, --init-steps=[dtgGvpf]+ (default \"dtgvp\")\n\n%T$%%Which has the information without a lot of context for what it's used\nfor. I'd welcome some text expansion that added a minimal but functional\nimprovement to that the existing help output; I don't have such text.\n\nOn Fri, Jun 9, 2023 at 1:25 PM Gurjeet Singh <[email protected]> wrote:> $ pgbench -i -I dtGvp -s 500\n\nThe steps are severely under-documented in pgbench --help output.\nI agree it's not easy to find information. I just went through double checking I had the order recently enough to remember what I did. The man pages have this:> Each step is invoked in the specified order. The default is dtgvp.Which was what I wanted to read. Meanwhile the --help output says:> -I, --init-steps=[dtgGvpf]+ (default \"dtgvp\")%T$%%Which has the information without a lot of context for what it's used for. I'd welcome some text expansion that added a minimal but functional improvement to that the existing help output; I don't have such text.",
"msg_date": "Fri, 9 Jun 2023 20:41:50 -0400",
"msg_from": "Gregory Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Fri, Jun 9, 2023 at 5:42 PM Gregory Smith <[email protected]> wrote:\n>\n> On Fri, Jun 9, 2023 at 1:25 PM Gurjeet Singh <[email protected]> wrote:\n>>\n>> > $ pgbench -i -I dtGvp -s 500\n>>\n>> The steps are severely under-documented in pgbench --help output.\n>\n>\n> I agree it's not easy to find information. I just went through double checking I had the order recently enough to remember what I did. The man pages have this:\n>\n> > Each step is invoked in the specified order. The default is dtgvp.\n>\n> Which was what I wanted to read. Meanwhile the --help output says:\n>\n> > -I, --init-steps=[dtgGvpf]+ (default \"dtgvp\")\n>\n> %T$%%Which has the information without a lot of context for what it's used for. I'd welcome some text expansion that added a minimal but functional improvement to that the existing help output; I don't have such text.\n\nPlease see attached 2 variants of the patch. Variant 1 simply tells\nthe reader to consult pgbench documentation. The second variant\nprovides a description for each of the letters, as the documentation\ndoes. The committer can pick the one they find suitable.\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Fri, 9 Jun 2023 18:20:12 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Fri, Jun 9, 2023 at 6:20 PM Gurjeet Singh <[email protected]> wrote:\n> On Fri, Jun 9, 2023 at 5:42 PM Gregory Smith <[email protected]> wrote:\n> > On Fri, Jun 9, 2023 at 1:25 PM Gurjeet Singh <[email protected]> wrote:\n> >>\n> >> > $ pgbench -i -I dtGvp -s 500\n> >>\n> >> The steps are severely under-documented in pgbench --help output.\n> >\n> > I agree it's not easy to find information. I just went through double checking I had the order recently enough to remember what I did. The man pages have this:\n> >\n> > > Each step is invoked in the specified order. The default is dtgvp.\n> >\n> > Which was what I wanted to read. Meanwhile the --help output says:\n> >\n> > > -I, --init-steps=[dtgGvpf]+ (default \"dtgvp\")\n> >\n> > %T$%%Which has the information without a lot of context for what it's used for. I'd welcome some text expansion that added a minimal but functional improvement to that the existing help output; I don't have such text.\n>\n> Please see attached 2 variants of the patch. Variant 1 simply tells\n> the reader to consult pgbench documentation. The second variant\n> provides a description for each of the letters, as the documentation\n> does. The committer can pick the one they find suitable.\n\n(I was short on time, so had to keep it short in the above email. To\njustify this additional email, I have added 2 more options to choose\nfrom. :)\n\nThe text \", in the specified order\" is an important detail, that\nshould be included irrespective of the rest of the patch.\n\nMy preference would be to use the first variant, since the second one\nfeels too wordy for --help output. I believe we'll have to choose\nbetween these two; the alternatives will not make anyone happy.\n\nThese two variants show the two extremes; bare minimum vs. everything\nbut the kitchen sink. So one may feel the need to find a middle ground\nand provide a \"sufficient, but not too much\" variant. I have attempted\nthat in variants 3 and 4; see attached.\n\nThe third variant does away with the list of steps, and uses a\nparagraph to describe the letters. And the fourth variant makes that\nparagraph terse.\n\nIn the order of preference I'd choose variant 1, then 2. Variants 3\nand 4 feel like a significant degradation from variant 2.\n\nAttached samples.txt shows the snippets of --help output of each of\nthe variants/patches, for comparison.\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Fri, 9 Jun 2023 20:52:29 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "I think I am partial to number 2. Removing a context switch always leads\nto more productivity.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 13 Jun 2023 10:51:48 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Thu Jun 8, 2023 at 11:38 AM CDT, Tristan Partin wrote:\n> On Thu Jun 8, 2023 at 12:33 AM CDT, David Rowley wrote:\n> > On Thu, 8 Jun 2023 at 07:16, Tristan Partin <[email protected]> wrote:\n> > >\n> > > master:\n> > >\n> > > 50000000 of 50000000 tuples (100%) done (elapsed 260.93 s, remaining 0.00 s))\n> > > vacuuming...\n> > > creating primary keys...\n> > > done in 1414.26 s (drop tables 0.20 s, create tables 0.82 s, client-side generate 1280.43 s, vacuum 2.55 s, primary keys 130.25 s).\n> > >\n> > > patchset:\n> > >\n> > > 50000000 of 50000000 tuples (100%) of pgbench_accounts done (elapsed 243.82 s, remaining 0.00 s))\n> > > vacuuming...\n> > > creating primary keys...\n> > > done in 375.66 s (drop tables 0.14 s, create tables 0.73 s, client-side generate 246.27 s, vacuum 2.77 s, primary keys 125.75 s).\n> >\n> > I've also previously found pgbench -i to be slow. It was a while ago,\n> > and IIRC, it was due to the printfPQExpBuffer() being a bottleneck\n> > inside pgbench.\n> >\n> > On seeing your email, it makes me wonder if PG16's hex integer\n> > literals might help here. These should be much faster to generate in\n> > pgbench and also parse on the postgres side.\n> >\n> > I wrote a quick and dirty patch to try that and I'm not really getting\n> > the same performance increases as I'd have expected. I also tested\n> > with your patch too and it does not look that impressive either when\n> > running pgbench on the same machine as postgres.\n>\n> I didn't expect my patch to increase performance in all workloads. I was\n> mainly aiming to fix high-latency connections. Based on your results\n> that looks like a 4% reduction in performance of client-side data\n> generation. I had thought maybe it is worth having a flag to keep the\n> old way too, but I am not sure a 4% hit is really that big of a deal.\n>\n> > pgbench copy speedup\n> >\n> > ** master\n> > drowley@amd3990x:~$ pgbench -i -s 1000 postgres\n> > 100000000 of 100000000 tuples (100%) done (elapsed 74.15 s, remaining 0.00 s)\n> > vacuuming...\n> > creating primary keys...\n> > done in 95.71 s (drop tables 0.00 s, create tables 0.01 s, client-side\n> > generate 74.45 s, vacuum 0.12 s, primary keys 21.13 s).\n> >\n> > ** David's Patched\n> > drowley@amd3990x:~$ pgbench -i -s 1000 postgres\n> > 100000000 of 100000000 tuples (100%) done (elapsed 69.64 s, remaining 0.00 s)\n> > vacuuming...\n> > creating primary keys...\n> > done in 90.22 s (drop tables 0.00 s, create tables 0.01 s, client-side\n> > generate 69.91 s, vacuum 0.12 s, primary keys 20.18 s).\n> >\n> > ** Tristan's patch\n> > drowley@amd3990x:~$ pgbench -i -s 1000 postgres\n> > 100000000 of 100000000 tuples (100%) of pgbench_accounts done (elapsed\n> > 77.44 s, remaining 0.00 s)\n> > vacuuming...\n> > creating primary keys...\n> > done in 98.64 s (drop tables 0.00 s, create tables 0.01 s, client-side\n> > generate 77.47 s, vacuum 0.12 s, primary keys 21.04 s).\n> >\n> > I'm interested to see what numbers you get. You'd need to test on\n> > PG16 however. I left the old code in place to generate the decimal\n> > numbers for versions < 16.\n>\n> I will try to test this soon and follow up on the thread. I definitely\n> see no problems with your patch as is though. I would be more than happy\n> to rebase my patches on yours.\n\nFinally got around to doing more benchmarking. 
Using an EC2 instance\nhosted in Ireland, and my client laptop in Austin, Texas.\n\nWorkload: pgbench -i -s 500\n\nmaster (9aee26a491)\ndone in 1369.41 s (drop tables 0.21 s, create tables 0.72 s, client-side generate 1336.44 s, vacuum 1.02 s, primary keys 31.03 s).\ndone in 1318.31 s (drop tables 0.21 s, create tables 0.72 s, client-side generate 1282.67 s, vacuum 1.02 s, primary keys 33.69 s).\n\ncopy\ndone in 307.42 s (drop tables 0.21 s, create tables 0.82 s, client-side generate 270.95 s, vacuum 1.02 s, primary keys 34.42 s).\n\ndavid\ndone in 1311.14 s (drop tables 0.72 s, create tables 0.72 s, client-side generate 1274.98 s, vacuum 0.94 s, primary keys 33.79 s).\ndone in 1340.18 s (drop tables 0.14 s, create tables 0.59 s, client-side generate 1304.78 s, vacuum 0.92 s, primary keys 33.75 s).\n\ncopy + david\ndone in 348.70 s (drop tables 0.23 s, create tables 0.72 s, client-side generate 312.94 s, vacuum 0.92 s, primary keys 33.90 s).\n\nI ran two tests for master and your patch David. For the last test, I\nadapted your patch onto mine. I am still seeing the huge performance\ngains on my branch.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 13 Jun 2023 12:47:33 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "Here is a v2. It cleans up the output when printing to a tty. The\nlast \"x of y tuples\" line gets overwritten now, so the final output\nlooks like:\n\ndropping old tables...\ncreating tables...\ngenerating data (client-side)...\nvacuuming...\ncreating primary keys...\ndone in 0.14 s (drop tables 0.01 s, create tables 0.01 s, client-side generate 0.05 s, vacuum 0.03 s, primary keys 0.03 s).\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 14 Jun 2023 09:41:14 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "Again, I forget to actually attach. Holy guacamole.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Wed, 14 Jun 2023 10:58:06 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 10:58:06AM -0500, Tristan Partin wrote:\n> Again, I forget to actually attach. Holy guacamole.\n\nLooks rather OK seen from here, applied 0001 while browsing the\nseries. I have a few comments about 0002.\n\n static void\n-initGenerateDataClientSide(PGconn *con)\n+initBranch(PGconn *con, PQExpBufferData *sql, int64 curr)\n+{\n+ /* \"filler\" column defaults to NULL */\n+ printfPQExpBuffer(sql,\n+ INT64_FORMAT \"\\t0\\t\\n\",\n+ curr + 1);\n+}\n+\n+static void\n+initTeller(PGconn *con, PQExpBufferData *sql, int64 curr)\n+{\n+ /* \"filler\" column defaults to NULL */\n+ printfPQExpBuffer(sql,\n+ INT64_FORMAT \"\\t\" INT64_FORMAT \"\\t0\\t\\n\",\n+ curr + 1, curr / ntellers + 1);\n+}\n+\n+static void\n+initAccount(PGconn *con, PQExpBufferData *sql, int64 curr)\n+{\n+ /* \"filler\" column defaults to blank padded empty string */\n+ printfPQExpBuffer(sql,\n+ INT64_FORMAT \"\\t\" INT64_FORMAT \"\\t0\\t\\n\",\n+ curr + 1, curr / naccounts + 1);\n+}\n\nI was looking at that, and while this strikes me as a duplication for\nthe second and third ones, I'm OK with the use of a callback to fill\nin the data, even if naccounts and ntellers are equivalent to the\n\"base\" number given to initPopulateTable(), because the \"filler\"\ncolumn has different expectations for each table. These routines\ndon't need a PGconn argument, by the way.\n\n /* use COPY with FREEZE on v14 and later without partitioning */\n if (partitions == 0 && PQserverVersion(con) >= 140000)\n- copy_statement = \"copy pgbench_accounts from stdin with (freeze on)\";\n+ copy_statement_fmt = \"copy %s from stdin with (freeze on)\";\n else\n- copy_statement = \"copy pgbench_accounts from stdin\";\n+ copy_statement_fmt = \"copy %s from stdin\";\n\nThis seems a bit incorrect because partitioning only applies to\npgbench_accounts, no? This change means that the teller and branch\ntables would not benefit from FREEZE under a pgbench compiled with\nthis patch and a backend version of 14 or newer if --partitions is\nused. So, perhaps \"partitions\" should be an argument of\ninitPopulateTable() specified for each table? \n--\nMichael",
"msg_date": "Tue, 11 Jul 2023 14:03:15 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Tue Jul 11, 2023 at 12:03 AM CDT, Michael Paquier wrote:\n> On Wed, Jun 14, 2023 at 10:58:06AM -0500, Tristan Partin wrote:\n> static void\n> -initGenerateDataClientSide(PGconn *con)\n> +initBranch(PGconn *con, PQExpBufferData *sql, int64 curr)\n> +{\n> + /* \"filler\" column defaults to NULL */\n> + printfPQExpBuffer(sql,\n> + INT64_FORMAT \"\\t0\\t\\n\",\n> + curr + 1);\n> +}\n> +\n> +static void\n> +initTeller(PGconn *con, PQExpBufferData *sql, int64 curr)\n> +{\n> + /* \"filler\" column defaults to NULL */\n> + printfPQExpBuffer(sql,\n> + INT64_FORMAT \"\\t\" INT64_FORMAT \"\\t0\\t\\n\",\n> + curr + 1, curr / ntellers + 1);\n> +}\n> +\n> +static void\n> +initAccount(PGconn *con, PQExpBufferData *sql, int64 curr)\n> +{\n> + /* \"filler\" column defaults to blank padded empty string */\n> + printfPQExpBuffer(sql,\n> + INT64_FORMAT \"\\t\" INT64_FORMAT \"\\t0\\t\\n\",\n> + curr + 1, curr / naccounts + 1);\n> +}\n>\n> These routines don't need a PGconn argument, by the way.\n\nNice catch!\n\n> /* use COPY with FREEZE on v14 and later without partitioning */\n> if (partitions == 0 && PQserverVersion(con) >= 140000)\n> - copy_statement = \"copy pgbench_accounts from stdin with (freeze on)\";\n> + copy_statement_fmt = \"copy %s from stdin with (freeze on)\";\n> else\n> - copy_statement = \"copy pgbench_accounts from stdin\";\n> + copy_statement_fmt = \"copy %s from stdin\";\n>\n> This seems a bit incorrect because partitioning only applies to\n> pgbench_accounts, no? This change means that the teller and branch\n> tables would not benefit from FREEZE under a pgbench compiled with\n> this patch and a backend version of 14 or newer if --partitions is\n> used. So, perhaps \"partitions\" should be an argument of\n> initPopulateTable() specified for each table? \n\nWhoops, looks like I didn't read the comment for what the partitions\nvariable means. It only applies to pgbench_accounts as you said. I don't\nthink passing it in as an argument would make much sense. Let me know\nwhat you think about the change in this new version, which only hits the\nfirst portion of the `if` statement if the table is pgbench_accounts.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Tue, 11 Jul 2023 09:46:43 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 09:46:43AM -0500, Tristan Partin wrote:\n> On Tue Jul 11, 2023 at 12:03 AM CDT, Michael Paquier wrote:\n>> This seems a bit incorrect because partitioning only applies to\n>> pgbench_accounts, no? This change means that the teller and branch\n>> tables would not benefit from FREEZE under a pgbench compiled with\n>> this patch and a backend version of 14 or newer if --partitions is\n>> used. So, perhaps \"partitions\" should be an argument of\n>> initPopulateTable() specified for each table? \n> \n> Whoops, looks like I didn't read the comment for what the partitions\n> variable means. It only applies to pgbench_accounts as you said. I don't\n> think passing it in as an argument would make much sense. Let me know\n> what you think about the change in this new version, which only hits the\n> first portion of the `if` statement if the table is pgbench_accounts.\n\n- /* use COPY with FREEZE on v14 and later without partitioning */\n- if (partitions == 0 && PQserverVersion(con) >= 140000)\n- copy_statement = \"copy pgbench_accounts from stdin with (freeze on)\";\n+ if (partitions == 0 && strcmp(table, \"pgbench_accounts\") == 0 && PQserverVersion(con) >= 140000)\n+ copy_statement_fmt = \"copy %s from stdin with (freeze on)\";\n\nThis would use the freeze option only on pgbench_accounts when no\npartitioning is defined, but my point was a bit different. We could\nuse the FREEZE option on the teller and branch tables as well, no?\nOkay, the impact is limited compared to accounts in terms of amount of\ndata loaded, but perhaps some people like playing with large scaling\nfactors where this could show a benefit in the initial data loading.\n\nIn passing, I have noticed the following sentence in pgbench.sgml:\n Using <literal>g</literal> causes logging to print one message\n every 100,000 rows while generating data for the \n <structname>pgbench_accounts</structname> table.\nIt would become incorrect as the same code path would be used for the\nteller and branch tables.\n--\nMichael",
"msg_date": "Wed, 12 Jul 2023 15:06:19 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Wed Jul 12, 2023 at 1:06 AM CDT, Michael Paquier wrote:\n> On Tue, Jul 11, 2023 at 09:46:43AM -0500, Tristan Partin wrote:\n> > On Tue Jul 11, 2023 at 12:03 AM CDT, Michael Paquier wrote:\n> >> This seems a bit incorrect because partitioning only applies to\n> >> pgbench_accounts, no? This change means that the teller and branch\n> >> tables would not benefit from FREEZE under a pgbench compiled with\n> >> this patch and a backend version of 14 or newer if --partitions is\n> >> used. So, perhaps \"partitions\" should be an argument of\n> >> initPopulateTable() specified for each table? \n> > \n> > Whoops, looks like I didn't read the comment for what the partitions\n> > variable means. It only applies to pgbench_accounts as you said. I don't\n> > think passing it in as an argument would make much sense. Let me know\n> > what you think about the change in this new version, which only hits the\n> > first portion of the `if` statement if the table is pgbench_accounts.\n>\n> - /* use COPY with FREEZE on v14 and later without partitioning */\n> - if (partitions == 0 && PQserverVersion(con) >= 140000)\n> - copy_statement = \"copy pgbench_accounts from stdin with (freeze on)\";\n> + if (partitions == 0 && strcmp(table, \"pgbench_accounts\") == 0 && PQserverVersion(con) >= 140000)\n> + copy_statement_fmt = \"copy %s from stdin with (freeze on)\";\n>\n> This would use the freeze option only on pgbench_accounts when no\n> partitioning is defined, but my point was a bit different. We could\n> use the FREEZE option on the teller and branch tables as well, no?\n> Okay, the impact is limited compared to accounts in terms of amount of\n> data loaded, but perhaps some people like playing with large scaling\n> factors where this could show a benefit in the initial data loading.\n\nPerhaps, should they all be keyed off the same option? Seemed like in\nyour previous comment you wanted multiple options. Sorry for not reading\nyour comment correctly.\n\n> In passing, I have noticed the following sentence in pgbench.sgml:\n> Using <literal>g</literal> causes logging to print one message\n> every 100,000 rows while generating data for the \n> <structname>pgbench_accounts</structname> table.\n> It would become incorrect as the same code path would be used for the\n> teller and branch tables.\n\nI will amend the documentation to mention all tables rather than being\nspecific to pgbench_accounts in the next patch revision pending what you\nwant to do about the partition CLI argument.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 12 Jul 2023 09:29:35 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 09:29:35AM -0500, Tristan Partin wrote:\n> On Wed Jul 12, 2023 at 1:06 AM CDT, Michael Paquier wrote:\n>> This would use the freeze option only on pgbench_accounts when no\n>> partitioning is defined, but my point was a bit different. We could\n>> use the FREEZE option on the teller and branch tables as well, no?\n>> Okay, the impact is limited compared to accounts in terms of amount of\n>> data loaded, but perhaps some people like playing with large scaling\n>> factors where this could show a benefit in the initial data loading.\n> \n> Perhaps, should they all be keyed off the same option? Seemed like in\n> your previous comment you wanted multiple options. Sorry for not reading\n> your comment correctly.\n\nI would have though that --partition should only apply to the\npgbench_accounts table, while FREEZE should apply where it is possible\nto use it, aka all the COPY queries except when pgbench_accounts is a\npartition. Would you do something different, like not applying FREEZE\nto pgbench_tellers and pgbench_branches as these have much less tuples\nthan pgbench_accounts?\n--\nMichael",
"msg_date": "Thu, 13 Jul 2023 12:52:49 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Wed Jul 12, 2023 at 10:52 PM CDT, Michael Paquier wrote:\n> On Wed, Jul 12, 2023 at 09:29:35AM -0500, Tristan Partin wrote:\n> > On Wed Jul 12, 2023 at 1:06 AM CDT, Michael Paquier wrote:\n> >> This would use the freeze option only on pgbench_accounts when no\n> >> partitioning is defined, but my point was a bit different. We could\n> >> use the FREEZE option on the teller and branch tables as well, no?\n> >> Okay, the impact is limited compared to accounts in terms of amount of\n> >> data loaded, but perhaps some people like playing with large scaling\n> >> factors where this could show a benefit in the initial data loading.\n> > \n> > Perhaps, should they all be keyed off the same option? Seemed like in\n> > your previous comment you wanted multiple options. Sorry for not reading\n> > your comment correctly.\n>\n> I would have though that --partition should only apply to the\n> pgbench_accounts table, while FREEZE should apply where it is possible\n> to use it, aka all the COPY queries except when pgbench_accounts is a\n> partition. Would you do something different, like not applying FREEZE\n> to pgbench_tellers and pgbench_branches as these have much less tuples\n> than pgbench_accounts?\n\nMichael,\n\nI think I completely misunderstood you. From what I can tell, you want\nto use FREEZE wherever possible. I think the new patch covers your\ncomment and fixes the documentation. I am hoping that I have finally\ngotten what you are looking for.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Wed, 19 Jul 2023 13:01:29 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "Didn't actually include the changes in the previous patch.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Wed, 19 Jul 2023 13:03:21 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 01:03:21PM -0500, Tristan Partin wrote:\n> Didn't actually include the changes in the previous patch.\n\n-initGenerateDataClientSide(PGconn *con)\n+initBranch(PQExpBufferData *sql, int64 curr)\n {\n- PQExpBufferData sql;\n+ /* \"filler\" column defaults to NULL */\n+ printfPQExpBuffer(sql,\n+ INT64_FORMAT \"\\t0\\t\\n\",\n+ curr + 1);\n+}\n+\n+static void\n+initTeller(PQExpBufferData *sql, int64 curr)\n+{\n+ /* \"filler\" column defaults to NULL */\n+ printfPQExpBuffer(sql,\n+ INT64_FORMAT \"\\t\" INT64_FORMAT \"\\t0\\t\\n\",\n+ curr + 1, curr / ntellers + 1);\n\nHmm. Something's not right here, see:\n=# select count(*) has_nulls from pgbench_accounts where filler is null;\n has_nulls \n-----------\n 0\n(1 row)\n=# select count(*) > 0 has_nulls from pgbench_tellers where filler is null;\n has_nulls \n-----------\n f\n(1 row)\n=# select count(*) > 0 has_nulls from pgbench_branches where filler is null;\n has_nulls \n-----------\n f\n(1 row)\n\nNote as well this comment in initCreateTables():\n /*\n * Note: TPC-B requires at least 100 bytes per row, and the \"filler\"\n * fields in these table declarations were intended to comply with that.\n * The pgbench_accounts table complies with that because the \"filler\"\n * column is set to blank-padded empty string. But for all other tables\n * the columns default to NULL and so don't actually take any space. We\n * could fix that by giving them non-null default values. However, that\n * would completely break comparability of pgbench results with prior\n * versions. Since pgbench has never pretended to be fully TPC-B compliant\n * anyway, we stick with the historical behavior.\n */\n\nSo this patch causes pgbench to not stick with its historical\nbehavior, and the change is incompatible with the comments because the\ntellers and branches tables don't use NULL for their filler attribute\nanymore.\n--\nMichael",
"msg_date": "Thu, 20 Jul 2023 12:07:14 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Wed Jul 19, 2023 at 10:07 PM CDT, Michael Paquier wrote:\n> So this patch causes pgbench to not stick with its historical\n> behavior, and the change is incompatible with the comments because the\n> tellers and branches tables don't use NULL for their filler attribute\n> anymore.\n\nGreat find. This was a problem of me just not understanding the COPY \ncommand properly. Relevant documentation snippet:\n\n> Specifies the string that represents a null value. The default is \\N \n> (backslash-N) in text format, and an unquoted empty string in CSV \n> format. You might prefer an empty string even in text format for cases \n> where you don't want to distinguish nulls from empty strings. This \n> option is not allowed when using binary format.\n\nThis new revision populates the column with the NULL value.\n\n> psql (17devel)\n> Type \"help\" for help.\n> \n> tristan957=# select count(1) from pgbench_branches;\n> count \n> -------\n> 1\n> (1 row)\n> \n> tristan957=# select count(1) from pgbench_branches where filler is null;\n> count \n> -------\n> 1\n> (1 row)\n\nThanks for your testing Michael. I went ahead and added a test to make \nsure that this behavior doesn't regress accidentally, but I am \nstruggling to get the test to fail using the previous version of this \npatch. Do you have any advice? This is my first time writing a test for \nPostgres. I can recreate the issue outside of the test script, but not \nwithin it for whatever reason.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Thu, 20 Jul 2023 14:22:51 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 02:22:51PM -0500, Tristan Partin wrote:\n> Thanks for your testing Michael. I went ahead and added a test to make sure\n> that this behavior doesn't regress accidentally, but I am struggling to get\n> the test to fail using the previous version of this patch. Do you have any\n> advice? This is my first time writing a test for Postgres. I can recreate\n> the issue outside of the test script, but not within it for whatever reason.\n\nWe're all here to learn, and writing TAP tests is important these days\nfor a lot of patches.\n\n+# Check that the pgbench_branches and pgbench_tellers filler columns are filled\n+# with NULLs\n+foreach my $table ('pgbench_branches', 'pgbench_tellers') {\n+ my ($ret, $out, $err) = $node->psql(\n+ 'postgres',\n+ \"SELECT COUNT(1) FROM $table;\n+ SELECT COUNT(1) FROM $table WHERE filler is NULL;\",\n+ extra_params => ['--no-align', '--tuples-only']);\n+\n+ is($ret, 0, \"psql $table counts status is 0\");\n+ is($err, '', \"psql $table counts stderr is empty\");\n+ if ($out =~ m/^(\\d+)\\n(\\d+)$/g) {\n+ is($1, $2, \"psql $table filler column filled with NULLs\");\n+ } else {\n+ fail(\"psql $table stdout m/^(\\\\d+)\\n(\\\\d+)$/g\");\n+ }\n+}\n\nThis is IMO hard to parse, and I'd rather add some checks for the\naccounts and history tables as well. Let's use four simple SQL\nqueries like what I posted upthread (no data for history inserted\nafter initialization), as of the attached. I'd be tempted to apply\nthat first as a separate patch, because we've never add coverage for\nthat and we have specific expectations in the code from this filler\ncolumn for tpc-b. And this needs to cover both client-side and\nserver-side data generation.\n\nNote that the indentation was a bit incorrect, so fixed while on it.\n\nAttached is a v7, with these tests (should be a patch on its own but\nI'm lazy to split this morning) and some more adjustments that I have\ndone while going through the patch. What do you think?\n--\nMichael",
"msg_date": "Fri, 21 Jul 2023 11:14:57 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Thu Jul 20, 2023 at 9:14 PM CDT, Michael Paquier wrote:\n> Attached is a v7, with these tests (should be a patch on its own but\n> I'm lazy to split this morning) and some more adjustments that I have\n> done while going through the patch. What do you think?\n\nv7 looks good from my perspective. Thanks for working through this patch \nwith me. Much appreciated.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 21 Jul 2023 12:22:06 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Fri, Jul 21, 2023 at 12:22:06PM -0500, Tristan Partin wrote:\n> v7 looks good from my perspective. Thanks for working through this patch\n> with me. Much appreciated.\n\nCool. I have applied the new tests for now to move on with this\nthread.\n--\nMichael",
"msg_date": "Sun, 23 Jul 2023 20:21:51 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "On Sun, Jul 23, 2023 at 08:21:51PM +0900, Michael Paquier wrote:\n> Cool. I have applied the new tests for now to move on with this\n> thread.\n\nI have done a few more things on this patch today, including\nmeasurements with a local host and large scaling numbers. One of my\nhosts was taking for instance around 36.8s with scale=500 using the\nINSERTs and 36.5s with the COPY when loading data (average of 4\nruns) with -I dtg.\n\nGreg's upthread point is true as well that for high latency the\nserver-side generation is the most adapted option, but I don't see a\nreason to not switch to a COPY as this option is hidden behind -I,\njust for the sake of improving the default option set. So, at the\nend, applied.\n--\nMichael",
"msg_date": "Mon, 24 Jul 2023 14:09:01 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
},
{
"msg_contents": "Michael,\n\nOnce again I appreciate your patience with this patchset. Thanks for \nyour help and reviews.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 24 Jul 2023 08:54:17 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use COPY for populating all pgbench tables"
}
] |
[
{
"msg_contents": "Hi all,\n\nAs touched on in past threads, our SCRAM implementation is slightly\nnonstandard and doesn't always protect the entirety of the\nauthentication handshake:\n\n- the username in the startup packet is not covered by the SCRAM\ncrypto and can be tampered with if channel binding is not in effect,\nor by an attacker holding only the server's key\n\n- low iteration counts accepted by the client make it easier than it\nprobably should be for a MITM to brute-force passwords (note that\nPG16's scram_iterations GUC, being server-side, does not mitigate\nthis)\n\n- by default, a SCRAM exchange can be exited by the server prior to\nsending its verifier, skipping the client's server authentication step\n(this is mitigated by requiring channel binding, and PG16 adds\nrequire_auth=scram-sha-256 to help as well)\n\nThese aren't currently considered security vulnerabilities, but it'd\nbe good for the documentation to call them out, considering mutual\nauthentication is one of the design goals of the SCRAM RFC. (I'd also\nlike to shore up these problems, eventually, to make SCRAM-based\nmutual authn viable with Postgres. But that work has stalled a bit on\nmy end.)\n\nHere's a patch to explicitly warn people away from SCRAM as a form of\nserver authentication, and nudge them towards a combination with\nverified TLS or gssenc. I've tried to keep the text version-agnostic,\nto make a potential backport easier. Is this a good place for the\nwarning to go? Should I call out that GSS can't use channel binding,\nor promote the use of TLS versus GSS for SCRAM, or just keep it\nsimple?\n\nThanks,\n--Jacob",
"msg_date": "Tue, 23 May 2023 12:26:52 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": true,
"msg_subject": "Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "Greetings,\n\n* Jacob Champion ([email protected]) wrote:\n> As touched on in past threads, our SCRAM implementation is slightly\n> nonstandard and doesn't always protect the entirety of the\n> authentication handshake:\n> \n> - the username in the startup packet is not covered by the SCRAM\n> crypto and can be tampered with if channel binding is not in effect,\n> or by an attacker holding only the server's key\n\nEncouraging channel binding when using TLS certainly makes a lot of\nsense, particularly in environments where the trust anchors are not\nstrongly managed (that is- if you trust the huge number of root\ncertificates that a system may have installed). Perhaps explicitly\nencouraging users to *not* trust every root server installed on their\nclient for their PG connections would also be a good idea. We should\nprobably add language to that effect to this section.\n\nWe do default to having channel binding being used though. Breaking\ndown exactly the cases at risk (perhaps including what version certain\nthings were introduced) and what direct action administrators can take\nto ensure their systems are as secure as possible (and maybe mention\nwhat things that doesn't address still, so they're aware of what risks\nstill exist) would be good.\n\n> - low iteration counts accepted by the client make it easier than it\n> probably should be for a MITM to brute-force passwords (note that\n> PG16's scram_iterations GUC, being server-side, does not mitigate\n> this)\n\nThis would be good to improve on.\n\n> - by default, a SCRAM exchange can be exited by the server prior to\n> sending its verifier, skipping the client's server authentication step\n> (this is mitigated by requiring channel binding, and PG16 adds\n> require_auth=scram-sha-256 to help as well)\n\nYeah, encouraging this would also be good and should be mentioned as\nsomething to do when one is using SCRAM. Clearly that would go into a\nPG16 version of this.\n\n> These aren't currently considered security vulnerabilities, but it'd\n> be good for the documentation to call them out, considering mutual\n> authentication is one of the design goals of the SCRAM RFC. (I'd also\n> like to shore up these problems, eventually, to make SCRAM-based\n> mutual authn viable with Postgres. But that work has stalled a bit on\n> my end.)\n\nImproving the documentation certainly is a good plan.\n\n> Here's a patch to explicitly warn people away from SCRAM as a form of\n> server authentication, and nudge them towards a combination with\n> verified TLS or gssenc. I've tried to keep the text version-agnostic,\n> to make a potential backport easier. Is this a good place for the\n> warning to go? Should I call out that GSS can't use channel binding,\n> or promote the use of TLS versus GSS for SCRAM, or just keep it\n> simple?\n\nAdding channel binding to GSS (which does support it, we just don't\nimplement it today..) when using TLS or another encryption method for\ntransport would be a good improvement to make, particularly right now as\nwe don't yet support encryption with SSPI, meaning that you need to use\nTLS to get session encryption when one of the systems is on Windows. 
I\ndo hope to add support for encryption with SSPI during this next release\ncycle and would certainly welcome help from anyone else who is\ninterested in this.\n\nI have to admit that the patch as presented strikes me as a warning\nwithout really providing steps for how to address the issues mentioned\nthough; there's a reference to what was just covered at the bottom but\nnothing really new. The current documentation leads with the warnings\nand then goes into steps to take to address the risk, and I'd say it\nwould make more sense to put this wording about SCRAM not being a\ncomplete solution to this issue above the steps that one can take to\nreduce the spoofing risk, similar to how the documentation currently is.\n\nPerhaps more succinctly- maybe we should be making adjustments to the\ncurrent language instead of just adding a new paragraph.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 23 May 2023 17:02:50 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "On Tue, May 23, 2023 at 05:02:50PM -0400, Stephen Frost wrote:\n> * Jacob Champion ([email protected]) wrote:\n>> As touched on in past threads, our SCRAM implementation is slightly\n>> nonstandard and doesn't always protect the entirety of the\n>> authentication handshake:\n>>\n>> - the username in the startup packet is not covered by the SCRAM\n>> crypto and can be tampered with if channel binding is not in effect,\n>> or by an attacker holding only the server's key\n>\n> Encouraging channel binding when using TLS certainly makes a lot of\n> sense, particularly in environments where the trust anchors are not\n> strongly managed (that is- if you trust the huge number of root\n> certificates that a system may have installed). Perhaps explicitly\n> encouraging users to *not* trust every root server installed on their\n> client for their PG connections would also be a good idea. We should\n> probably add language to that effect to this section.\n\nCurrently the user name is not treated by the backend, like that in\nauth-scram.c:\n\n /*\n * Read username. Note: this is ignored. We use the username from the\n * startup message instead, still it is kept around if provided as it\n * proves to be useful for debugging purposes.\n */\n state->client_username = read_attr_value(&p, 'n');\n\nCould it make sense to cross-check that with the contents of the\nstartup package instead, with or without channel binding?\n\n>> - low iteration counts accepted by the client make it easier than it\n>> probably should be for a MITM to brute-force passwords (note that\n>> PG16's scram_iterations GUC, being server-side, does not mitigate\n>> this)\n>\n> This would be good to improve on.\n\nHmm. Interesting.\n\n> > - by default, a SCRAM exchange can be exited by the server prior to\n> > sending its verifier, skipping the client's server authentication step\n> > (this is mitigated by requiring channel binding, and PG16 adds\n> > require_auth=scram-sha-256 to help as well)\n>\n> Yeah, encouraging this would also be good and should be mentioned as\n> something to do when one is using SCRAM. Clearly that would go into a\n> PG16 version of this.\n\nAgreed to improve the docs about ways to mitigate any risks.\n--\nMichael",
"msg_date": "Wed, 24 May 2023 10:37:35 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "Greetings,\n\n* Michael Paquier ([email protected]) wrote:\n> On Tue, May 23, 2023 at 05:02:50PM -0400, Stephen Frost wrote:\n> > * Jacob Champion ([email protected]) wrote:\n> >> As touched on in past threads, our SCRAM implementation is slightly\n> >> nonstandard and doesn't always protect the entirety of the\n> >> authentication handshake:\n> >>\n> >> - the username in the startup packet is not covered by the SCRAM\n> >> crypto and can be tampered with if channel binding is not in effect,\n> >> or by an attacker holding only the server's key\n> >\n> > Encouraging channel binding when using TLS certainly makes a lot of\n> > sense, particularly in environments where the trust anchors are not\n> > strongly managed (that is- if you trust the huge number of root\n> > certificates that a system may have installed). Perhaps explicitly\n> > encouraging users to *not* trust every root server installed on their\n> > client for their PG connections would also be a good idea. We should\n> > probably add language to that effect to this section.\n> \n> Currently the user name is not treated by the backend, like that in\n> auth-scram.c:\n> \n> /*\n> * Read username. Note: this is ignored. We use the username from the\n> * startup message instead, still it is kept around if provided as it\n> * proves to be useful for debugging purposes.\n> */\n> state->client_username = read_attr_value(&p, 'n');\n> \n> Could it make sense to cross-check that with the contents of the\n> startup package instead, with or without channel binding?\n\nNot without breaking things we support today and for what seems like an\nunclear benefit given that we've got channel binding today (though\nperhaps we need to make sure there's ways to force it on both sides to\nbe on and to encourage everyone to do that- which is what this change is\ngenerally about, I think).\n\nAs I recall, the reason we do it the way we do is because the SASL spec\nthat SCRAM is implemented under requires the username to be utf8 encoded\nwhile we support other encodings, and I don't think we want to break\nnon-utf8 usage.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 23 May 2023 21:46:58 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "On Tue, May 23, 2023 at 09:46:58PM -0400, Stephen Frost wrote:\n> Not without breaking things we support today and for what seems like an\n> unclear benefit given that we've got channel binding today (though\n> perhaps we need to make sure there's ways to force it on both sides to\n> be on and to encourage everyone to do that- which is what this change is\n> generally about, I think).\n> \n> As I recall, the reason we do it the way we do is because the SASL spec\n> that SCRAM is implemented under requires the username to be utf8 encoded\n> while we support other encodings, and I don't think we want to break\n> non-utf8 usage.\n\nYup, I remember this one, the encoding not being enforced by the\nprotocol has been quite an issue when this was implemented, still I\nwas wondering whether that's something that could be worth revisiting\nat some degree.\n--\nMichael",
"msg_date": "Wed, 24 May 2023 10:56:01 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "Greetings,\n\n* Michael Paquier ([email protected]) wrote:\n> On Tue, May 23, 2023 at 09:46:58PM -0400, Stephen Frost wrote:\n> > Not without breaking things we support today and for what seems like an\n> > unclear benefit given that we've got channel binding today (though\n> > perhaps we need to make sure there's ways to force it on both sides to\n> > be on and to encourage everyone to do that- which is what this change is\n> > generally about, I think).\n> > \n> > As I recall, the reason we do it the way we do is because the SASL spec\n> > that SCRAM is implemented under requires the username to be utf8 encoded\n> > while we support other encodings, and I don't think we want to break\n> > non-utf8 usage.\n> \n> Yup, I remember this one, the encoding not being enforced by the\n> protocol has been quite an issue when this was implemented, still I\n> was wondering whether that's something that could be worth revisiting\n> at some degree.\n\nTo the extent that there was an issue when it was implemented ... it's\nnow been implemented and so that was presumably overcome (though I don't\nreally specifically recall what the issues were there? Seems like it\nwouldn't matter at this point though), so I don't understand why we\nwould wish to revisit it.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 23 May 2023 22:01:03 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "On Tue, May 23, 2023 at 10:01:03PM -0400, Stephen Frost wrote:\n> To the extent that there was an issue when it was implemented ... it's\n> now been implemented and so that was presumably overcome (though I don't\n> really specifically recall what the issues were there? Seems like it\n> wouldn't matter at this point though), so I don't understand why we\n> would wish to revisit it.\n\nWell, there are IMO two sides issues here:\n1) libpq sends an empty username, which can be an issue if attempting\nto make this layer more pluggable with other things that are more\ncompilation than PG with the SCRAM exchange (OpenLDAP is one, there\ncould be others).\n2) The backend ignoring the username means that it is not mixed in the\nhashes.\n\nRelying on channel binding mitigates the range of problems for 2), now\none thing is that the default sslmode does not enforce an SSL\nconnection, either, though this default has been discussed a lot. So\nI am really wondering whether there wouldn't be some room for a more\ncompliant, strict HBA option with scram-sha-256 where we'd only allow\nUTF-8 compliant strings. Just some food for thoughts.\n\nAnd we could do better about the current state that's 1), anyway.\n--\nMichael",
"msg_date": "Wed, 24 May 2023 13:37:15 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "> On 23 May 2023, at 23:02, Stephen Frost <[email protected]> wrote:\n> * Jacob Champion ([email protected]) wrote:\n\n>> - low iteration counts accepted by the client make it easier than it\n>> probably should be for a MITM to brute-force passwords (note that\n>> PG16's scram_iterations GUC, being server-side, does not mitigate\n>> this)\n> \n> This would be good to improve on.\n\nThe mechanics of this are quite straighforward, the problem IMHO lies in how to\ninform and educate users what a reasonable iteration count is, not to mention\nwhat an iteration count is in the first place.\n\n> Perhaps more succinctly- maybe we should be making adjustments to the\n> current language instead of just adding a new paragraph.\n\n+1\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 24 May 2023 14:04:26 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "On 5/24/23 05:04, Daniel Gustafsson wrote:\n>> On 23 May 2023, at 23:02, Stephen Frost <[email protected]> wrote:\n>> * Jacob Champion ([email protected]) wrote:\n> \n>>> - low iteration counts accepted by the client make it easier than it\n>>> probably should be for a MITM to brute-force passwords (note that\n>>> PG16's scram_iterations GUC, being server-side, does not mitigate\n>>> this)\n>>\n>> This would be good to improve on.\n> \n> The mechanics of this are quite straighforward, the problem IMHO lies in how to\n> inform and educate users what a reasonable iteration count is, not to mention\n> what an iteration count is in the first place.\n\nYeah, the education around this is tricky.\n>> Perhaps more succinctly- maybe we should be making adjustments to the\n>> current language instead of just adding a new paragraph.\n> \n> +1\n\nI'm 100% on board for doing that as well, but the \"instead of\"\nsuggestion makes me think I didn't explain my goal very well. For\nexample, Stephen, you said\n\n> I have to admit that the patch as presented strikes me as a warning\n> without really providing steps for how to address the issues mentioned\n> though; there's a reference to what was just covered at the bottom but\n> nothing really new.\n\nbut the last sentence of my patch *is* the suggested step:\n\n> + ... It's recommended to employ strong server\n> + authentication, using one of the above anti-spoofing measures, to prevent\n> + these attacks.\n\nIn other words: if you're thinking about authenticating the server via\nSCRAM, over an otherwise anonymous key exchange, you should probably\nreconsider. The current architecture of libpq doesn't really seem to be\nset up for this, and my conversations with security@ have been focusing\naround the argument that people who want strong guarantees should be\nauthenticating the server using a TLS certificate before doing anything\nelse, so we don't need to consider our departures from the spec to be\nvulnerabilities. I think that's a reasonable stance to take, as long as\nwe're also explicitly warning people away from the mutual-auth use case.\n\nTo change this, aspects of the code that we don't currently consider\nsecurity problems would probably need to be reclassified. If we're\ndepending on SCRAM for server authentication, we can't trust a single\nbyte that the server sends to us until after the SCRAM verifier comes\nback and checks out. Compared to all the other authentication types,\nthat's unusual; I don't think it's really been a focus for the project\nbefore.\n\n--Jacob\n\n\n",
"msg_date": "Thu, 25 May 2023 09:40:16 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "On 5/23/23 21:37, Michael Paquier wrote:\n> On Tue, May 23, 2023 at 10:01:03PM -0400, Stephen Frost wrote:\n>> To the extent that there was an issue when it was implemented ... it's\n>> now been implemented and so that was presumably overcome (though I don't\n>> really specifically recall what the issues were there? Seems like it\n>> wouldn't matter at this point though), so I don't understand why we\n>> would wish to revisit it.\n> \n> Well, there are IMO two sides issues here:\n> 1) libpq sends an empty username, which can be an issue if attempting\n> to make this layer more pluggable with other things that are more\n> compilation than PG with the SCRAM exchange (OpenLDAP is one, there\n> could be others).\n> 2) The backend ignoring the username means that it is not mixed in the\n> hashes.\n\n+1\n\n> Relying on channel binding mitigates the range of problems for 2),\n\n(Except for username tampering with possession of the server key alone.\nAs spec'd, that shouldn't be possible. But for the vast majority of\nusers, I think that's probably not on the list of plausible concerns.)\n\n> now\n> one thing is that the default sslmode does not enforce an SSL\n> connection, either, though this default has been discussed a lot. So\n> I am really wondering whether there wouldn't be some room for a more\n> compliant, strict HBA option with scram-sha-256 where we'd only allow\n> UTF-8 compliant strings. Just some food for thoughts.\n> \n> And we could do better about the current state that's 1), anyway.\n\nI would definitely like to see improvements in this area. I don't think\nwe'd need to break compatibility with clients, either (unless the server\noperator explicitly wanted to). The hard part is mainly on the server\nside, when dealing with a mismatch between the startup packet and the\nSCRAM header.\n\n--Jacob\n\n\n",
"msg_date": "Thu, 25 May 2023 09:54:35 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "Greetings,\n\n* Jacob Champion ([email protected]) wrote:\n> On 5/24/23 05:04, Daniel Gustafsson wrote:\n> >> On 23 May 2023, at 23:02, Stephen Frost <[email protected]> wrote:\n> >> Perhaps more succinctly- maybe we should be making adjustments to the\n> >> current language instead of just adding a new paragraph.\n> > \n> > +1\n> \n> I'm 100% on board for doing that as well, but the \"instead of\"\n> suggestion makes me think I didn't explain my goal very well. For\n> example, Stephen, you said\n> \n> > I have to admit that the patch as presented strikes me as a warning\n> > without really providing steps for how to address the issues mentioned\n> > though; there's a reference to what was just covered at the bottom but\n> > nothing really new.\n> \n> but the last sentence of my patch *is* the suggested step:\n> \n> > + ... It's recommended to employ strong server\n> > + authentication, using one of the above anti-spoofing measures, to prevent\n> > + these attacks.\n\nI was referring specifically to that ordering as not being ideal or in\nline with the rest of the flow of that section. We should integrate the\nconcerns higher in the section where we outline the reason these things\nmatter and then follow those with the specific steps for how to address\nthem, not give a somewhat unclear reference from the very bottom back up\nto something above.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 25 May 2023 13:29:12 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "On Thu, May 25, 2023 at 10:29 AM Stephen Frost <[email protected]> wrote:\n> I was referring specifically to that ordering as not being ideal or in\n> line with the rest of the flow of that section. We should integrate the\n> concerns higher in the section where we outline the reason these things\n> matter and then follow those with the specific steps for how to address\n> them, not give a somewhat unclear reference from the very bottom back up\n> to something above.\n\nAh, I misunderstood. I'll give that a shot.\n\nThanks!\n--Jacob\n\n\n",
"msg_date": "Thu, 25 May 2023 10:45:14 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "On 5/25/23 1:29 PM, Stephen Frost wrote:\r\n> Greetings,\r\n> \r\n> * Jacob Champion ([email protected]) wrote:\r\n>> On 5/24/23 05:04, Daniel Gustafsson wrote:\r\n>>>> On 23 May 2023, at 23:02, Stephen Frost <[email protected]> wrote:\r\n>>>> Perhaps more succinctly- maybe we should be making adjustments to the\r\n>>>> current language instead of just adding a new paragraph.\r\n>>>\r\n>>> +1\r\n>>\r\n>> I'm 100% on board for doing that as well, but the \"instead of\"\r\n>> suggestion makes me think I didn't explain my goal very well. For\r\n>> example, Stephen, you said\r\n>>\r\n>>> I have to admit that the patch as presented strikes me as a warning\r\n>>> without really providing steps for how to address the issues mentioned\r\n>>> though; there's a reference to what was just covered at the bottom but\r\n>>> nothing really new.\r\n>>\r\n>> but the last sentence of my patch *is* the suggested step:\r\n>>\r\n>>> + ... It's recommended to employ strong server\r\n>>> + authentication, using one of the above anti-spoofing measures, to prevent\r\n>>> + these attacks.\r\n> \r\n> I was referring specifically to that ordering as not being ideal or in\r\n> line with the rest of the flow of that section. We should integrate the\r\n> concerns higher in the section where we outline the reason these things\r\n> matter and then follow those with the specific steps for how to address\r\n> them, not give a somewhat unclear reference from the very bottom back up\r\n> to something above.\r\n\r\nCaught up on this exchange. Some of my comments may be out-of-order.\r\n\r\nOverall, +1 to tightening the language around the docs in this area.\r\n\r\nHowever, to paraphrase Stephen, I think the language, as currently \r\nwritten, makes the problem sound scarier than it actually is, and I \r\nagree that we should just inline it above. There may also be some \r\nfollow-up development work we can do to mitigate the issues.\r\n\r\nI think it's fine for us to recommend using channel binding, but if \r\nwe're concerned about server spoofing / rogue servers, we should also \r\nrecommend using sslmode=verify-full to ensure server-identity. That \r\ndoesn't necessarily help in the case the server itself has gone rogue, \r\nbut that mitigates the MITM risk. The SCRAM RFC is also very clear that \r\nyou should be using TLS[1].\r\n\r\nI really don't think the \"startup packet\" is an actual issue, but I \r\nthink recommending good TLS / channel binding mitigates this.\r\n\r\nFor the iteration count, I'm generally ambivalent here. I think again, \r\nif you're using good TLS, this is most likely mitigated. If you're \r\nconnecting to a rogue server using good TLS, you likely have other \r\nissues to deal with. However, there may be a libpq feature here that \r\nlets a client specify a minimum iteration count it will accept for \r\npurposes of SCRAM.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://www.rfc-editor.org/rfc/rfc5802#section-9",
"msg_date": "Thu, 25 May 2023 13:48:28 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "On Thu, May 25, 2023 at 10:48 AM Jonathan S. Katz <[email protected]> wrote:\n> Overall, +1 to tightening the language around the docs in this area.\n>\n> However, to paraphrase Stephen, I think the language, as currently\n> written, makes the problem sound scarier than it actually is, and I\n> agree that we should just inline it above.\n\nHow does v2 look? I more or less divided the current text into a local\nsection and a network section. (I'm still not clear on where in the\ncurrent text you're wanting me to inline a sudden aside on SCRAM; it\ndoesn't really seem to fit in any of the existing paragraphs.)\n\n--Jacob",
"msg_date": "Thu, 25 May 2023 12:27:06 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "On 5/25/23 3:27 PM, Jacob Champion wrote:\r\n> On Thu, May 25, 2023 at 10:48 AM Jonathan S. Katz <[email protected]> wrote:\r\n>> Overall, +1 to tightening the language around the docs in this area.\r\n>>\r\n>> However, to paraphrase Stephen, I think the language, as currently\r\n>> written, makes the problem sound scarier than it actually is, and I\r\n>> agree that we should just inline it above.\r\n> \r\n> How does v2 look? I more or less divided the current text into a local\r\n> section and a network section. (I'm still not clear on where in the\r\n> current text you're wanting me to inline a sudden aside on SCRAM; it\r\n> doesn't really seem to fit in any of the existing paragraphs.)\r\n\r\nI read through the proposal and like this much better. I missed \r\nStephen's point on the \"where\" to put it in this section; I actually \r\ndon't know if I agree with that (he says while painting the bikeshed), \r\ngiven the we spend two paragraphs describing how to prevent spoofing in \r\ngeneral over the network, vs. just during SCRAM authentication.\r\n\r\nI rewrote this to just focus on server spoofing that can occur with \r\nSCRAM authentication and did some wordsmithing. I was torn on keeping in \r\nthe part of offline analysis of an intercepted hash, given one can do \r\nthis with md5 as well, but perhaps it helps elaborate on the consequences.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Thu, 25 May 2023 21:09:57 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "On Thu, May 25, 2023 at 6:10 PM Jonathan S. Katz <[email protected]> wrote:\n> I read through the proposal and like this much better.\n\nGreat!\n\n> I rewrote this to just focus on server spoofing that can occur with\n> SCRAM authentication and did some wordsmithing. I was torn on keeping in\n> the part of offline analysis of an intercepted hash, given one can do\n> this with md5 as well, but perhaps it helps elaborate on the consequences.\n\nThis part:\n\n> + To prevent server spoofing from occurring when using\n> + <link linkend=\"auth-password\">scram-sha-256</link> password authentication\n> + over a network, you should ensure you are connecting using SSL.\n\nseems to backtrack on the recommendation -- you have to use\nsslmode=verify-full, not just SSL, to avoid handing a weak(er) hash to\nan untrusted party.\n\n--Jacob\n\n\n",
"msg_date": "Fri, 26 May 2023 15:47:18 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "On 5/26/23 6:47 PM, Jacob Champion wrote:\r\n> On Thu, May 25, 2023 at 6:10 PM Jonathan S. Katz <[email protected]> wrote:\r\n\r\n>> + To prevent server spoofing from occurring when using\r\n>> + <link linkend=\"auth-password\">scram-sha-256</link> password authentication\r\n>> + over a network, you should ensure you are connecting using SSL.\r\n> \r\n> seems to backtrack on the recommendation -- you have to use\r\n> sslmode=verify-full, not just SSL, to avoid handing a weak(er) hash to\r\n> an untrusted party.\r\n\r\nThe above assumes that the reader reviewed the previous paragraph and \r\nfollowed the guidelines there. However, we can make it explicit. Please \r\nsee attached.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Sun, 28 May 2023 14:21:53 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "On Sun, May 28, 2023 at 02:21:53PM -0400, Jonathan S. Katz wrote:\n> The above assumes that the reader reviewed the previous paragraph and\n> followed the guidelines there. However, we can make it explicit. Please see\n> attached.\n\nYeah, I was under the same impression as Jacob that we don't insist\nenough in this new paragraph about what sslmode ought to be when I\ninitially read the patch. However, looking at the html page produced,\nwhat you have to refer to the previous paragraph looks OK to me.\n--\nMichael",
"msg_date": "Mon, 29 May 2023 14:38:00 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "On Sun, May 28, 2023 at 2:22 PM Jonathan S. Katz <[email protected]> wrote:\n> The above assumes that the reader reviewed the previous paragraph and\n> followed the guidelines there. However, we can make it explicit. Please\n> see attached.\n\nLGTM!\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Wed, 31 May 2023 10:08:39 -0400",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "On Wed, May 31, 2023 at 10:08:39AM -0400, Jacob Champion wrote:\n> LGTM!\n\nOkay. Does anybody have any comments and/or objections? \n--\nMichael",
"msg_date": "Wed, 31 May 2023 17:14:33 -0400",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "> On 31 May 2023, at 23:14, Michael Paquier <[email protected]> wrote:\n> On Wed, May 31, 2023 at 10:08:39AM -0400, Jacob Champion wrote:\n\n>> LGTM!\n> \n> Okay. Does anybody have any comments and/or objections? \n\nLGTM. As a small nitpick, I think this sentence is a little bit misleading:\n\n\t\"..can use offline analysis to determine the hashed password from\n\t the client\"\n\nIt's true that an attacker kan use offline analysis but it makes it sound\neasier than it might be in practice. I would have written \"to potentially\ndetermine\".\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 1 Jun 2023 10:22:28 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "On Thu, Jun 01, 2023 at 10:22:28AM +0200, Daniel Gustafsson wrote:\n> It's true that an attacker kan use offline analysis but it makes it sound\n> easier than it might be in practice. I would have written \"to potentially\n> determine\".\n\nHmm. Okay by me.\n--\nMichael",
"msg_date": "Thu, 1 Jun 2023 08:28:32 -0400",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "On Thu, Jun 01, 2023 at 08:28:32AM -0400, Michael Paquier wrote:\n> Hmm. Okay by me.\n\nTook me some time to get back to it, but applied this way.\n--\nMichael",
"msg_date": "Sat, 3 Jun 2023 17:50:38 -0400",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "On Sat, Jun 3, 2023 at 2:50 PM Michael Paquier <[email protected]> wrote:\n> Took me some time to get back to it, but applied this way.\n\nThanks all!\n\n--Jacob\n\n\n",
"msg_date": "Mon, 5 Jun 2023 08:22:01 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
},
{
"msg_contents": "On 6/5/23 11:22 AM, Jacob Champion wrote:\r\n> On Sat, Jun 3, 2023 at 2:50 PM Michael Paquier <[email protected]> wrote:\r\n>> Took me some time to get back to it, but applied this way.\r\n> \r\n> Thanks all!\r\n\r\n+1; thank you!\r\n\r\nJonathan",
"msg_date": "Mon, 5 Jun 2023 13:04:21 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Docs: Encourage strong server verification with SCRAM"
}
] |
[
{
"msg_contents": "This is a short patch which cleans up code for unlogged LSNs. It replaces the existing spinlock with\natomic ops. It could provide a performance benefit for future uses of\nunlogged LSNS, but for now it is simply a cleaner implementation.",
"msg_date": "Tue, 23 May 2023 20:24:45 +0000",
"msg_from": "John Morris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Atomic ops for unlogged LSN"
},
{
"msg_contents": "On Tue, May 23, 2023 at 08:24:45PM +0000, John Morris wrote:\n> This is a short patch which cleans up code for unlogged LSNs. It\n> replaces the existing spinlock with atomic ops. It could provide a\n> performance benefit for future uses of unlogged LSNS, but for now\n> it is simply a cleaner implementation.\n\nSeeing the code paths where gistGetFakeLSN() is called, I guess that\nthe answer will be no, still are you seeing a measurable difference in\nsome cases?\n\n- /* increment the unloggedLSN counter, need SpinLock */\n- SpinLockAcquire(&XLogCtl->ulsn_lck);\n- nextUnloggedLSN = XLogCtl->unloggedLSN++;\n- SpinLockRelease(&XLogCtl->ulsn_lck);\n-\n- return nextUnloggedLSN;\n+ return pg_atomic_fetch_add_u64(&XLogCtl->unloggedLSN, 1);\n\nSpinlocks provide a full memory barrier, which may not the case with\nadd_u64() or read_u64(). Could memory ordering be a problem in these\ncode paths?\n--\nMichael",
"msg_date": "Wed, 24 May 2023 07:26:25 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Atomic ops for unlogged LSN"
},
{
"msg_contents": "On Tue, May 23, 2023 at 6:26 PM Michael Paquier <[email protected]> wrote:\n> Spinlocks provide a full memory barrier, which may not the case with\n> add_u64() or read_u64(). Could memory ordering be a problem in these\n> code paths?\n\nIt could be a huge problem if what you say were true, but unless I'm\nmissing something, the comments in atomics.h say that it isn't. The\ncomments for the 64-bit functions say to look at the 32-bit functions,\nand there it says:\n\n/*\n * pg_atomic_add_fetch_u32 - atomically add to variable\n *\n * Returns the value of ptr after the arithmetic operation.\n *\n * Full barrier semantics.\n */\n\nWhich is probably a good thing, because I'm not sure what good it\nwould be to have an operation that both reads and modifies an atomic\nvariable but has no barrier semantics.\n\nThat's not to say that I entirely understand the point of this patch.\nIt seems like a generally reasonable thing to do AFAICT but somehow I\nwonder if there's more to the story here, because it doesn't feel like\nit would be easy to measure the benefit of this patch in isolation.\nThat's not a reason to reject it, though, just something that makes me\ncurious.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 May 2023 08:22:08 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Atomic ops for unlogged LSN"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-24 08:22:08 -0400, Robert Haas wrote:\n> On Tue, May 23, 2023 at 6:26 PM Michael Paquier <[email protected]> wrote:\n> > Spinlocks provide a full memory barrier, which may not the case with\n> > add_u64() or read_u64(). Could memory ordering be a problem in these\n> > code paths?\n> \n> It could be a huge problem if what you say were true, but unless I'm\n> missing something, the comments in atomics.h say that it isn't. The\n> comments for the 64-bit functions say to look at the 32-bit functions,\n> and there it says:\n> \n> /*\n> * pg_atomic_add_fetch_u32 - atomically add to variable\n> *\n> * Returns the value of ptr after the arithmetic operation.\n> *\n> * Full barrier semantics.\n> */\n> \n> Which is probably a good thing, because I'm not sure what good it\n> would be to have an operation that both reads and modifies an atomic\n> variable but has no barrier semantics.\n\nI was a bit confused by Michael's comment as well, due to the section of code\nquoted. But he does have a point: pg_atomic_read_u32() does indeed *not* have\nbarrier semantics (it'd be way more expensive), and the patch does contain\nthis hunk:\n\n> @@ -6784,9 +6775,7 @@ CreateCheckPoint(int flags)\n> \t * unused on non-shutdown checkpoints, but seems useful to store it always\n> \t * for debugging purposes.\n> \t */\n> -\tSpinLockAcquire(&XLogCtl->ulsn_lck);\n> -\tControlFile->unloggedLSN = XLogCtl->unloggedLSN;\n> -\tSpinLockRelease(&XLogCtl->ulsn_lck);\n> +\tControlFile->unloggedLSN = pg_atomic_read_u64(&XLogCtl->unloggedLSN);\n> \n> \tUpdateControlFile();\n> \tLWLockRelease(ControlFileLock);\n\nSo we indeed loose some \"barrier strength\" - but I think that's fine. For one,\nit's a debugging-only value. But more importantly, I don't see what reordering\nthe barrier could prevent - a barrier is useful for things like sequencing two\nmemory accesses to happen in the intended order - but unloggedLSN doesn't\ninteract with another variable that's accessed within the ControlFileLock\nafaict.\n\n\n> That's not to say that I entirely understand the point of this patch.\n> It seems like a generally reasonable thing to do AFAICT but somehow I\n> wonder if there's more to the story here, because it doesn't feel like\n> it would be easy to measure the benefit of this patch in isolation.\n> That's not a reason to reject it, though, just something that makes me\n> curious.\n\nI doubt it's a meaningful, if even measurable win. But removing atomic ops and\nreducing shared memory space isn't a bad thing, even if there's no immediate\nbenefit...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 May 2023 14:49:58 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Atomic ops for unlogged LSN"
},
{
"msg_contents": "On Wed, May 24, 2023 at 02:49:58PM -0700, Andres Freund wrote:\n> I was a bit confused by Michael's comment as well, due to the section of code\n> quoted. But he does have a point: pg_atomic_read_u32() does indeed *not* have\n> barrier semantics (it'd be way more expensive), and the patch does contain\n> this hunk:\n\nThanks for the correction. The part about _add was incorrect.\n\n> So we indeed loose some \"barrier strength\" - but I think that's fine. For one,\n> it's a debugging-only value. But more importantly, I don't see what reordering\n> the barrier could prevent - a barrier is useful for things like sequencing two\n> memory accesses to happen in the intended order - but unloggedLSN doesn't\n> interact with another variable that's accessed within the ControlFileLock\n> afaict.\n\nThis stuff is usually tricky enough that I am never completely sure\nwhether it is fine without reading again README.barrier, which is\nwhere unloggedLSN is saved in the control file under its LWLock.\nSomething that I find confusing in the patch is that it does not\ndocument the reason why this is OK.\n--\nMichael",
"msg_date": "Thu, 25 May 2023 07:41:21 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Atomic ops for unlogged LSN"
},
{
"msg_contents": "On Thu, May 25, 2023 at 07:41:21AM +0900, Michael Paquier wrote:\n> On Wed, May 24, 2023 at 02:49:58PM -0700, Andres Freund wrote:\n>> So we indeed loose some \"barrier strength\" - but I think that's fine. For one,\n>> it's a debugging-only value.\n\nIs it? I see uses in GiST indexing (62401db), so it's not immediately\nobvious to me how it is debugging-only. If it is, then I think this patch\nought to clearly document it so that nobody else tries to use it for\nnon-debugging-only stuff.\n\n>> But more importantly, I don't see what reordering\n>> the barrier could prevent - a barrier is useful for things like sequencing two\n>> memory accesses to happen in the intended order - but unloggedLSN doesn't\n>> interact with another variable that's accessed within the ControlFileLock\n>> afaict.\n> \n> This stuff is usually tricky enough that I am never completely sure\n> whether it is fine without reading again README.barrier, which is\n> where unloggedLSN is saved in the control file under its LWLock.\n> Something that I find confusing in the patch is that it does not\n> document the reason why this is OK.\n\nMy concern would be whether GetFakeLSNForUnloggedRel or CreateCheckPoint\nmight see an old value of unloggedLSN. From the following note in\nREADME.barrier, it sounds like this would be a problem even if we ensured\nfull barrier semantics:\n\n 3. No ordering guarantees. While memory barriers ensure that any given\n process performs loads and stores to shared memory in order, they don't\n guarantee synchronization. In the queue example above, we can use memory\n barriers to be sure that readers won't see garbage, but there's nothing to\n say whether a given reader will run before or after a given writer. If this\n matters in a given situation, some other mechanism must be used instead of\n or in addition to memory barriers.\n\nIIUC we know that shared memory accesses cannot be reordered to precede\naquisition or follow release of a spinlock (thanks to 0709b7e), which is\nwhy this isn't a problem in the current implementation.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 12 Jun 2023 16:05:39 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Atomic ops for unlogged LSN"
},
{
"msg_contents": "Greetings,\n\n* Nathan Bossart ([email protected]) wrote:\n> On Thu, May 25, 2023 at 07:41:21AM +0900, Michael Paquier wrote:\n> > On Wed, May 24, 2023 at 02:49:58PM -0700, Andres Freund wrote:\n> >> So we indeed loose some \"barrier strength\" - but I think that's fine. For one,\n> >> it's a debugging-only value.\n> \n> Is it? I see uses in GiST indexing (62401db), so it's not immediately\n> obvious to me how it is debugging-only. If it is, then I think this patch\n> ought to clearly document it so that nobody else tries to use it for\n> non-debugging-only stuff.\n\nI don't see it as a debugging value. I'm not sure where that came\nfrom..? We do use it in places and if anything, I expect it to be used\nmore, not less, in the future as a persistant generally increasing\nvalue (could go backwards on a crash-restart but in such case all\nunlogged tables are truncated).\n\n> >> But more importantly, I don't see what reordering\n> >> the barrier could prevent - a barrier is useful for things like sequencing two\n> >> memory accesses to happen in the intended order - but unloggedLSN doesn't\n> >> interact with another variable that's accessed within the ControlFileLock\n> >> afaict.\n> > \n> > This stuff is usually tricky enough that I am never completely sure\n> > whether it is fine without reading again README.barrier, which is\n> > where unloggedLSN is saved in the control file under its LWLock.\n> > Something that I find confusing in the patch is that it does not\n> > document the reason why this is OK.\n> \n> My concern would be whether GetFakeLSNForUnloggedRel or CreateCheckPoint\n> might see an old value of unloggedLSN. From the following note in\n> README.barrier, it sounds like this would be a problem even if we ensured\n> full barrier semantics:\n> \n> 3. No ordering guarantees. While memory barriers ensure that any given\n> process performs loads and stores to shared memory in order, they don't\n> guarantee synchronization. In the queue example above, we can use memory\n> barriers to be sure that readers won't see garbage, but there's nothing to\n> say whether a given reader will run before or after a given writer. If this\n> matters in a given situation, some other mechanism must be used instead of\n> or in addition to memory barriers.\n> \n> IIUC we know that shared memory accesses cannot be reordered to precede\n> aquisition or follow release of a spinlock (thanks to 0709b7e), which is\n> why this isn't a problem in the current implementation.\n\nThe concern around unlogged LSN, imv anyway, is less about shared memory\naccess and making sure that all callers understand that it can move\nbackwards on a crash/restart. I don't think that's an issue for current\nusers but we just need to make sure to try and comment sufficiently\nregarding that such that new users understand that about this particular\nvalue.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 12 Jun 2023 19:24:18 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Atomic ops for unlogged LSN"
},
{
"msg_contents": "On Mon, Jun 12, 2023 at 07:24:18PM -0400, Stephen Frost wrote:\n> * Nathan Bossart ([email protected]) wrote:\n>> Is it? I see uses in GiST indexing (62401db), so it's not immediately\n>> obvious to me how it is debugging-only. If it is, then I think this patch\n>> ought to clearly document it so that nobody else tries to use it for\n>> non-debugging-only stuff.\n> \n> I don't see it as a debugging value. I'm not sure where that came\n> from..? We do use it in places and if anything, I expect it to be used\n> more, not less, in the future as a persistant generally increasing\n> value (could go backwards on a crash-restart but in such case all\n> unlogged tables are truncated).\n\nThis is my understanding as well.\n\n>> My concern would be whether GetFakeLSNForUnloggedRel or CreateCheckPoint\n>> might see an old value of unloggedLSN. From the following note in\n>> README.barrier, it sounds like this would be a problem even if we ensured\n>> full barrier semantics:\n\nNever mind. I think I'm forgetting that the atomics support in Postgres\ndeals with cache coherency.\n\n> The concern around unlogged LSN, imv anyway, is less about shared memory\n> access and making sure that all callers understand that it can move\n> backwards on a crash/restart. I don't think that's an issue for current\n> users but we just need to make sure to try and comment sufficiently\n> regarding that such that new users understand that about this particular\n> value.\n\nRight.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 12 Jun 2023 21:35:46 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Atomic ops for unlogged LSN"
},
{
"msg_contents": "Greetings,\n\n* Nathan Bossart ([email protected]) wrote:\n> On Mon, Jun 12, 2023 at 07:24:18PM -0400, Stephen Frost wrote:\n> > * Nathan Bossart ([email protected]) wrote:\n> >> Is it? I see uses in GiST indexing (62401db), so it's not immediately\n> >> obvious to me how it is debugging-only. If it is, then I think this patch\n> >> ought to clearly document it so that nobody else tries to use it for\n> >> non-debugging-only stuff.\n> > \n> > I don't see it as a debugging value. I'm not sure where that came\n> > from..? We do use it in places and if anything, I expect it to be used\n> > more, not less, in the future as a persistant generally increasing\n> > value (could go backwards on a crash-restart but in such case all\n> > unlogged tables are truncated).\n> \n> This is my understanding as well.\n> \n> >> My concern would be whether GetFakeLSNForUnloggedRel or CreateCheckPoint\n> >> might see an old value of unloggedLSN. From the following note in\n> >> README.barrier, it sounds like this would be a problem even if we ensured\n> >> full barrier semantics:\n> \n> Never mind. I think I'm forgetting that the atomics support in Postgres\n> deals with cache coherency.\n> \n> > The concern around unlogged LSN, imv anyway, is less about shared memory\n> > access and making sure that all callers understand that it can move\n> > backwards on a crash/restart. I don't think that's an issue for current\n> > users but we just need to make sure to try and comment sufficiently\n> > regarding that such that new users understand that about this particular\n> > value.\n> \n> Right.\n\nAwesome. Was there any other feedback on this change which basically is\njust getting rid of a spinlock and replacing it with using atomics?\nIt's still in needs-review status but there's been a number of\ncomments/reviews (drive-by, at least) but without any real ask for any\nchanges to be made.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 17 Jul 2023 19:08:03 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Atomic ops for unlogged LSN"
},
{
"msg_contents": "On Mon, Jul 17, 2023 at 07:08:03PM -0400, Stephen Frost wrote:\n> Awesome. Was there any other feedback on this change which basically is\n> just getting rid of a spinlock and replacing it with using atomics?\n> It's still in needs-review status but there's been a number of\n> comments/reviews (drive-by, at least) but without any real ask for any\n> changes to be made.\n\nLGTM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 17 Jul 2023 16:15:52 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Atomic ops for unlogged LSN"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-17 16:15:52 -0700, Nathan Bossart wrote:\n> On Mon, Jul 17, 2023 at 07:08:03PM -0400, Stephen Frost wrote:\n> > Awesome. Was there any other feedback on this change which basically is\n> > just getting rid of a spinlock and replacing it with using atomics?\n> > It's still in needs-review status but there's been a number of\n> > comments/reviews (drive-by, at least) but without any real ask for any\n> > changes to be made.\n> \n> LGTM\n\nWhy don't we just use a barrier when around reading the value? It's not like\nCreateCheckPoint() is frequent?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 17 Jul 2023 17:08:35 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Atomic ops for unlogged LSN"
},
{
"msg_contents": ">> Why don't we just use a barrier when around reading the value? It's not like\n>> CreateCheckPoint() is frequent?\n\nOne reason is that a barrier isn’t needed, and adding unnecessary barriers can also be confusing.\n\nWith respect to the “debug only” comment in the original code, whichever value is written to the structure during a checkpoint will be reset when restarting. Nobody will ever see that value. We could just as easily write a zero.\n\nShutting down is a different story. It isn’t stated, but we require the unlogged LSN be quiescent before shutting down. This patch doesn’t change that requirement.\n\nWe could also argue that memory ordering doesn’t matter when filling in a conventional, unlocked structure. But the fact we have only two cases 1) checkpoint where the value is ignored, and 2) shutdown where it is quiescent, makes the additional argument unnecessary.\n\nWould you be more comfortable if I added comments describing the two cases? My intent was to be minimalistic, but the original code could use better explanation.\n\n\n\n\n\n\n\n\n\n>> Why don't we just use a barrier when around reading the value? It's not like\n>> CreateCheckPoint() is frequent?\n \n\n\n\nOne reason is that a barrier isn’t needed, and adding unnecessary barriers can also be confusing.\n \nWith respect to the “debug only” comment in the original code, whichever value is written to the structure during a checkpoint will be reset when restarting. Nobody will ever see that value. We could just\n as easily write a zero.\n \nShutting down is a different story. It isn’t stated, but we require the unlogged LSN be quiescent before shutting down. This patch doesn’t change that requirement.\n \nWe could also argue that memory ordering doesn’t matter when filling in a conventional, unlocked structure. But the fact we have only two cases 1) checkpoint where the value is ignored, and 2) shutdown where\n it is quiescent, makes the additional argument unnecessary.\n \nWould you be more comfortable if I added comments describing the two cases? My intent was to be minimalistic, but the original code could use better explanation.",
"msg_date": "Wed, 19 Jul 2023 18:55:10 +0000",
"msg_from": "John Morris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Atomic ops for unlogged LSN"
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 12:25 AM John Morris\n<[email protected]> wrote:\n>\n> >> Why don't we just use a barrier when around reading the value? It's not like\n> >> CreateCheckPoint() is frequent?\n>\n> One reason is that a barrier isn’t needed, and adding unnecessary barriers can also be confusing.\n>\n> With respect to the “debug only” comment in the original code, whichever value is written to the structure during a checkpoint will be reset when restarting. Nobody will ever see that value. We could just as easily write a zero.\n\n>\n> Shutting down is a different story. It isn’t stated, but we require the unlogged LSN be quiescent before shutting down. This patch doesn’t change that requirement.\n>\n> We could also argue that memory ordering doesn’t matter when filling in a conventional, unlocked structure. But the fact we have only two cases 1) checkpoint where the value is ignored, and 2) shutdown where it is quiescent, makes the additional argument unnecessary.\n>\n> Would you be more comfortable if I added comments describing the two cases? My intent was to be minimalistic, but the original code could use better explanation.\n\nHere, the case for unloggedLSN is that there can exist multiple\nbackends incrementing/writing it (via GetFakeLSNForUnloggedRel), and\nthere can exist one reader at a time in CreateCheckPoint with\nLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);. Is incrementing\nunloggedLSN atomically (with an implied memory barrier from\npg_atomic_fetch_add_u64) helping out synchronize multiple writers and\nreaders? With a spinlock, writers-readers synchronization is\nguaranteed. With an atomic variable, it is guaranteed that the readers\ndon't see a torn-value, but no synchronization is provided.\n\nIf the above understanding is right, what happens if readers see an\nold unloggedLSN value - reader here stores the old unloggedLSN value\nto control file and the server restarts (clean exit). So, the old\nvalue is loaded back to unloggedLSN upon restart and the callers of\nGetFakeLSNForUnloggedRel() will see an old/repeated value too. Is it a\nproblem?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 20 Jul 2023 12:58:52 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Atomic ops for unlogged LSN"
},
{
"msg_contents": "> what happens if … reader here stores the old unloggedLSN value\n> to control file and the server restarts (clean exit). So, the old\n>value is loaded back to unloggedLSN upon restart and the callers of\n> GetFakeLSNForUnloggedRel() will see an old/repeated value too. Is it a\n> problem?\n\nFirst, a clarification. The value being saved is the “next” unlogged LSN,\nnot one which has already been used.\n(we are doing “fetch and add”, not “add and fetch”)\n\nYou have a good point about shutdown and startup. It is vital we\ndon’t repeat an unlogged LSN. This situation could easily happen\nIf other readers were active while we were shutting down.\n\n>With an atomic variable, it is guaranteed that the readers\n>don't see a torn-value, but no synchronization is provided.\n\nThe atomic increment also ensures the sequence\nof values is correct, specifically we don’t see\nrepeated values like we might with a conventional increment.\nAs a side effect, the instruction enforces a memory barrier, but we are not\nrelying on a barrier in this case.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n> what happens if … reader here stores the old unloggedLSN value\n> to control file and the server restarts (clean exit). So, the old\n>value is loaded back to unloggedLSN upon restart and the callers of\n> GetFakeLSNForUnloggedRel() will see an old/repeated value too. Is it a\n> problem?\n \nFirst, a clarification. The value being saved is the “next” unlogged LSN,\nnot one which has already been used.\n(we are doing “fetch and add”, not “add and fetch”)\n \nYou have a good point about shutdown and startup. It is vital we\ndon’t repeat an unlogged LSN. This situation could easily happen\nIf other readers were active while we were shutting down.\n \n>With an atomic variable, it is guaranteed that the readers\n>don't see a torn-value, but no synchronization is provided.\n\n\nThe atomic increment also ensures the sequence\nof values is correct, specifically we don’t see \nrepeated values like we might with a conventional increment.\nAs a side effect, the instruction enforces a memory barrier, but we are not\nrelying on a barrier in this case.",
"msg_date": "Thu, 20 Jul 2023 16:32:22 +0000",
"msg_from": "John Morris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Atomic ops for unlogged LSN"
},
{
"msg_contents": "Based on your feedback, I’ve updated the patch with additional comments.\n\n 1. Explain the two cases when writing to the control file, and\n 2. a bit more emphasis on unloggedLSNs not being valid after a crash.\n\nThanks to y’all.\n\n * John",
"msg_date": "Thu, 20 Jul 2023 23:13:29 +0000",
"msg_from": "John Morris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Atomic ops for unlogged LSN"
},
{
"msg_contents": "On Fri, Jul 21, 2023 at 4:43 AM John Morris <[email protected]> wrote:\n>\n> Based on your feedback, I’ve updated the patch with additional comments.\n>\n> Explain the two cases when writing to the control file, and\n> a bit more emphasis on unloggedLSNs not being valid after a crash.\n\nGiven that the callers already have to deal with the unloggedLSN being\nreset after a crash, meaning, they can't expect an always increasing\nvalue from unloggedLSN, I think converting it to an atomic variable\nseems a reasonable change. The other advantage we get here is the\nfreeing up shared memory space for spinlock ulsn_lck.\n\nThe attached v2 patch LGTM and marked the CF entry RfC -\nhttps://commitfest.postgresql.org/43/4330/.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 21 Jul 2023 12:33:10 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Atomic ops for unlogged LSN"
}
] |
[
{
"msg_contents": "With the idea that someone else will someday need to create the Postgres\nmajor release notes, here are the instructions and editor macros I use,\nand the intermediate files I created to generate the Postgres 16 major\nrelease notes.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Wed, 24 May 2023 00:34:06 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Instructions for creating the Postgres major release notes"
}
] |
[
{
"msg_contents": "Hi all,\n\n$Subject says it all. Based on an analysis of the code, I can note\nthe following when it comes to the removal of 1.0.1:\n- A lot of code related to channel binding is now simplified as\nX509_get_signature_nid() always exists, mostly around its CFLAGS.\n- This removes the comments related to 1.0.1e introduced by 74242c2.\n\nThen for 1.0.2, the following flags can be gone:\nHAVE_ASN1_STRING_GET0_DATA\nHAVE_BIO_GET_DATA\nHAVE_BIO_METH_NEW\nHAVE_HMAC_CTX_FREE\nHAVE_HMAC_CTX_NEW\n\nIt would be nice to remove CRYPTO_lock(), but from what I can see the\nfunction still exists in LibreSSL, meaning that libpq locking wouldn't\nbe thread-safe if these function's paths are removed.\n\nAnother related question is how much do we care about builds with\nLibreSSL with MSVC? This patch sets takes the simple path of assuming\nthat this has never been really tested, and if you look at the MSVC\nscripts on HEAD we rely on a version number from OpenSSL, which is not\nsomething LibreSSL copes nicely with already, as documented in\nconfigure.ac.\n\nOpenSSL 1.0.2 has been EOLd at the end of 2019, and 1.0.1 had its last\nminor release in September 2019, so with Postgres 17 targetted in\nSeptember/October 2024, we would be five years behind that.\n\nLast comes the buildfarm, and I suspect that a few animals are\nbuilding with 1.0.2. Among what I have spotted:\n- wobbegong and vulpes, on Fedora 27, though I could not find\nreferences about the version used there.\n- bonito, on Fedora 29.\n- SUSE 12 SP{4,5} have 1.0.2 as their newer version.\n- butterflyfish may not like that, if I recall correctly, as it should\nhave 1.0.2.\n\nSo, it seems to me that 1.0.1 would be a rather silent move for the\nbuildfarm, and 1.0.2 could lead to some noise. Note as well that\n1.1.0 support has been stopped by upstream at the same time as 1.0.1,\nwith a last minor release in September 2019, though that feels like a\nhuge jump at this stage. On a code basis, removing 1.0.1 leads to the\nmost cleanup. The removal of 1.0.2 would be nice, but the tweaks\nneeded for LibreSSL make it less appealing.\n\nAttached are two patches to remove support for 1.0.1 and 1.0.2 for\nnow, kept separated for clarity, to be considered as something to do\nat the beginning of the v17 cycle. 0001 is in a rather good shape\nseen from here.\n\nNow, for 0002 and the removal of 1.0.2, I am seeing two things once\nOPENSSL_API_COMPAT is bumped from 0x10002000L to 0x10100000L:\n- be-secure-openssl.c requires an extra openssl/bn.h, which is not a\nbig deal, from what I get.\n- Much more interesting: OpenSSL_add_all_algorithms() has two macros\nthat get removed, see include/openssl/evp.h:\n# ifdef OPENSSL_LOAD_CONF\n# define OpenSSL_add_all_algorithms() \\\n OPENSSL_init_crypto(OPENSSL_INIT_ADD_ALL_CIPHERS \\\n | OPENSSL_INIT_ADD_ALL_DIGESTS \\\n | OPENSSL_INIT_LOAD_CONFIG, NULL)\n# else\n# define OpenSSL_add_all_algorithms() \\\n OPENSSL_init_crypto(OPENSSL_INIT_ADD_ALL_CIPHERS \\\n | OPENSSL_INIT_ADD_ALL_DIGESTS, NULL)\n# endif\n\nThe part where I am not quite sure of what to do is about\nOPENSSL_LOAD_CONF. 
We could call directly OPENSSL_init_crypto() and\nadd OPENSSL_INIT_LOAD_CONFIG if building with OPENSSL_LOAD_CONF, but\nit feels a bit ugly to copy-paste this code from OpenSSL itself.\nNote that patch 0002 still has OPENSSL_API_COMPAT at 0x10002000L.\nOPENSSL_init_crypto() looks to be in LibreSSL, and it is new in\nOpenSSL 1.1.0, so switching the minimum to OpenSSL 1.1.0 should not\nrequire a new cflags on this one.\n\nThoughts?\n--\nMichael",
"msg_date": "Wed, 24 May 2023 17:22:14 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 24 May 2023, at 10:22, Michael Paquier <[email protected]> wrote:\n\n> The removal of 1.0.2 would be nice, but the tweaks\n> needed for LibreSSL make it less appealing.\n\n1.0.2 is also an LTS version available commercially for premium support\ncustomers of OpenSSL (1.1.1 will become an LTS version as well), with 1.0.2zh\nslated for release next week. This raises the likelyhood of Postgres\ninstallations using 1.0.2 in production still, and for some time to come.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 24 May 2023 11:36:56 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Wed, May 24, 2023 at 11:36:56AM +0200, Daniel Gustafsson wrote:\n> 1.0.2 is also an LTS version available commercially for premium support\n> customers of OpenSSL (1.1.1 will become an LTS version as well), with 1.0.2zh\n> slated for release next week. This raises the likelyhood of Postgres\n> installations using 1.0.2 in production still, and for some time to come.\n\nGood point. Indeed, that makes it pretty clear that not dropping\n1.0.2 would be the best option for the time being, so 0001 would be\nenough.\n\nI am wondering if we should worry about having a buildfarm member that\ncould test these binaries, though, in case they have compatibility\nissues.. But it would be harder to debug without the code at hand, as\nwell.\n--\nMichael",
"msg_date": "Wed, 24 May 2023 18:52:09 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 24 May 2023, at 11:52, Michael Paquier <[email protected]> wrote:\n> \n> On Wed, May 24, 2023 at 11:36:56AM +0200, Daniel Gustafsson wrote:\n>> 1.0.2 is also an LTS version available commercially for premium support\n>> customers of OpenSSL (1.1.1 will become an LTS version as well), with 1.0.2zh\n>> slated for release next week. This raises the likelyhood of Postgres\n>> installations using 1.0.2 in production still, and for some time to come.\n> \n> Good point. Indeed, that makes it pretty clear that not dropping\n> 1.0.2 would be the best option for the time being, so 0001 would be\n> enough.\n\nI think thats the right move re 1.0.2 support. 1.0.2 is also the version in\nRHEL7 which is in ELS until 2026.\n\nWhen we moved the goalposts to 1.0.1 (commit 7b283d0e1d1) we referred to RHEL6\nusing 1.0.1, and RHEL6 goes out of ELS in late June 2024 seems possible to drop\n1.0.1 support during v17. I haven't studied the patch yet but I'll have a look\nat it.\n\n> I am wondering if we should worry about having a buildfarm member that\n> could test these binaries, though, in case they have compatibility\n> issues.. But it would be harder to debug without the code at hand, as\n> well.\n\nIt would be nice it the OpenSSL project could grant us an LTS license for a\nbuildfarm animal to ensure compatibility but I have no idea how realistic that\nis (or how much the LTS version of 1.0.2 has diverged from the last available\npublic 1.0.2 version).\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 24 May 2023 13:03:04 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> It would be nice it the OpenSSL project could grant us an LTS license for a\n> buildfarm animal to ensure compatibility but I have no idea how realistic that\n> is (or how much the LTS version of 1.0.2 has diverged from the last available\n> public 1.0.2 version).\n\nSurely the answer must be \"not much\". The entire point of an LTS\nversion is to not have to change dusty calling applications.\n\nWe had definitely better have some animals still using 1.0.2, but\nI don't see much reason to think that the last public release\nwouldn't be good enough.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 May 2023 10:15:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Wed, May 24, 2023 at 01:03:04PM +0200, Daniel Gustafsson wrote:\n> When we moved the goalposts to 1.0.1 (commit 7b283d0e1d1) we referred to RHEL6\n> using 1.0.1, and RHEL6 goes out of ELS in late June 2024 seems possible to drop\n> 1.0.1 support during v17. I haven't studied the patch yet but I'll have a look\n> at it.\n\nGreat, thanks for the help.\n--\nMichael",
"msg_date": "Thu, 25 May 2023 07:18:15 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 25 May 2023, at 00:18, Michael Paquier <[email protected]> wrote:\n> \n> On Wed, May 24, 2023 at 01:03:04PM +0200, Daniel Gustafsson wrote:\n>> When we moved the goalposts to 1.0.1 (commit 7b283d0e1d1) we referred to RHEL6\n>> using 1.0.1, and RHEL6 goes out of ELS in late June 2024 seems possible to drop\n>> 1.0.1 support during v17. I haven't studied the patch yet but I'll have a look\n>> at it.\n\nPatch looks to be in pretty good shape, just a few minor comments:\n\n-#ifdef HAVE_BE_TLS_GET_CERTIFICATE_HASH\n+#ifdef USE_OPENSSL\nSince this is only calling the pgtls abstraction API and not directly into the\nSSL implementation we should use USE_SSL here instead. Same for the\ncorresponding hunks in the frontend code.\n\n\n+\t * Note that if we don't support channel binding if we're not using SSL at\n+\t * all, we would not have advertised the PLUS variant in the first place.\nSeems like a word fell off when merging these sentences. This should probably\nbe \"..support channel binding, or if we're no..\" or something similar.\n\n\n-#if defined(USE_OPENSSL) && (defined(HAVE_X509_GET_SIGNATURE_NID) || defined(HAVE_X509_GET_SIGNATURE_INFO))\n-#define HAVE_PGTLS_GET_PEER_CERTIFICATE_HASH\n+#ifdef USE_OPENSSL\n extern char *pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len);\n #endif\nNo need for any guard at all now is there? All supported SSL implementations\nhave it, and I doubt we'd accept a new one that doesn't (or which can't\nimplement the function and error out).\n\n\n- # Functions introduced in OpenSSL 1.0.2. LibreSSL does not have\n- # SSL_CTX_set_cert_cb().\n- AC_CHECK_FUNCS([X509_get_signature_nid SSL_CTX_set_cert_cb])\n+ # LibreSSL does not have SSL_CTX_set_cert_cb().\n+ AC_CHECK_FUNCS([SSL_CTX_set_cert_cb])\nThe comment about when the function was introduced is still interesting and\nshould remain IMHO.\n\nThe changes to the Windows buildsystem worry me a bit, but they don't move the\ngoalposts in either direction wrt to LibreSSL on Windows so for the purpose of\nthis patch we don't need to do more. Longer term we should either make\nLibreSSL work on Windows builds, or explicitly not support it on Windows.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 25 May 2023 10:16:27 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 24 May 2023, at 16:15, Tom Lane <[email protected]> wrote:\n\n> We had definitely better have some animals still using 1.0.2, but\n> I don't see much reason to think that the last public release\n> wouldn't be good enough.\n\nThere are still RHEL7 animals like chub who use 1.0.2 so I'm not worried. We\nmight want to consider displaying the OpenSSL version number during configure\n(meson already does it) in all branches to make it easier to figure out which\nversions we have coverage for?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 25 May 2023 10:27:02 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Thu, May 25, 2023 at 10:27:02AM +0200, Daniel Gustafsson wrote:\n> There are still RHEL7 animals like chub who use 1.0.2 so I'm not worried. We\n> might want to consider displaying the OpenSSL version number during configure\n> (meson already does it) in all branches to make it easier to figure out which\n> versions we have coverage for?\n\n+1.\n--\nMichael",
"msg_date": "Fri, 26 May 2023 08:47:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Thu, May 25, 2023 at 10:16:27AM +0200, Daniel Gustafsson wrote:\n> -#ifdef HAVE_BE_TLS_GET_CERTIFICATE_HASH\n> +#ifdef USE_OPENSSL\n> Since this is only calling the pgtls abstraction API and not directly into the\n> SSL implementation we should use USE_SSL here instead. Same for the\n> corresponding hunks in the frontend code.\n\nMakes sense, done.\n\n> +\t * Note that if we don't support channel binding if we're not using SSL at\n> +\t * all, we would not have advertised the PLUS variant in the first place.\n> Seems like a word fell off when merging these sentences. This should probably\n> be \"..support channel binding, or if we're no..\" or something similar.\n\nIndeed, something has been eaten when merging these lines.\n\n> -#if defined(USE_OPENSSL) && (defined(HAVE_X509_GET_SIGNATURE_NID) || defined(HAVE_X509_GET_SIGNATURE_INFO))\n> -#define HAVE_PGTLS_GET_PEER_CERTIFICATE_HASH\n> +#ifdef USE_OPENSSL\n> extern char *pgtls_get_peer_certificate_hash(PGconn *conn, size_t *len);\n> #endif\n> No need for any guard at all now is there? All supported SSL implementations\n> have it, and I doubt we'd accept a new one that doesn't (or which can't\n> implement the function and error out).\n\nYup. It looks that you are right. A build without SSL is not\ncomplaining once removed in libpq-int.h or libpq-be.h.\n\n> - # Functions introduced in OpenSSL 1.0.2. LibreSSL does not have\n> - # SSL_CTX_set_cert_cb().\n> - AC_CHECK_FUNCS([X509_get_signature_nid SSL_CTX_set_cert_cb])\n> + # LibreSSL does not have SSL_CTX_set_cert_cb().\n> + AC_CHECK_FUNCS([SSL_CTX_set_cert_cb])\n> The comment about when the function was introduced is still interesting and\n> should remain IMHO.\n\nOkay. Kept as well in meson.build.\n\n> The changes to the Windows buildsystem worry me a bit, but they don't move the\n> goalposts in either direction wrt to LibreSSL on Windows so for the purpose of\n> this patch we don't need to do more. Longer term we should either make\n> LibreSSL work on Windows builds, or explicitly not support it on Windows.\n\nYes, I was wondering what to do about that in the long term. With the\nargument that the MSVC scripts may be gone over meson, it is not\nreally appealing to dig into that.\n\nSomething that was missing in 0001 is the bump of OPENSSL_API_COMPAT\nin meson.build. This was in 0002.\n\nPlease find attached an updated patch only for the removal of 1.0.1.\nThanks for the review. \n--\nMichael",
"msg_date": "Fri, 26 May 2023 11:08:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 26 May 2023, at 04:08, Michael Paquier <[email protected]> wrote:\n> On Thu, May 25, 2023 at 10:16:27AM +0200, Daniel Gustafsson wrote:\n\n>> The changes to the Windows buildsystem worry me a bit, but they don't move the\n>> goalposts in either direction wrt to LibreSSL on Windows so for the purpose of\n>> this patch we don't need to do more. Longer term we should either make\n>> LibreSSL work on Windows builds, or explicitly not support it on Windows.\n> \n> Yes, I was wondering what to do about that in the long term. With the\n> argument that the MSVC scripts may be gone over meson, it is not\n> really appealing to dig into that.\n\nYeah, let's revisit this during the v17 cycle when the future of these scripts\nwill become clearer.\n\n> Something that was missing in 0001 is the bump of OPENSSL_API_COMPAT\n> in meson.build. This was in 0002.\n> \n> Please find attached an updated patch only for the removal of 1.0.1.\n> Thanks for the review. \n\nLGTM.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 26 May 2023 10:09:35 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Thu, May 25, 2023 at 7:09 PM Michael Paquier <[email protected]> wrote:\n> Please find attached an updated patch only for the removal of 1.0.1.\n> Thanks for the review.\n\nNice! Sorry about the new complications with LibreSSL. :(\n\n> - # Functions introduced in OpenSSL 1.0.2. LibreSSL does not have\n> + # Function introduced in OpenSSL 1.0.2. LibreSSL does not have\n> # SSL_CTX_set_cert_cb().\n> - AC_CHECK_FUNCS([X509_get_signature_nid SSL_CTX_set_cert_cb])\n> + AC_CHECK_FUNCS([SSL_CTX_set_cert_cb])\n\nCan X509_get_signature_nid be moved to the required section up above?\nAs it is now, anyone configuring with -Dssl=auto can still pick up a\n1.0.1 build, which Meson accepts, and then the build fails downstream.\nIf we require the function instead, Meson will ignore 1.0.1 (or, for\n-Dssl=openssl, complain before we compile).\n\nt/001_ssltests.pl has a reference to 1.0.1 that can probably be\nentirely deleted:\n\n # ... (Nor for OpenSSL\n # 1.0.1, but that's old enough that accommodating it isn't worth the cost.)\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Fri, 26 May 2023 09:10:17 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Fri, May 26, 2023 at 09:10:17AM -0700, Jacob Champion wrote:\n> Can X509_get_signature_nid be moved to the required section up above?\n> As it is now, anyone configuring with -Dssl=auto can still pick up a\n> 1.0.1 build, which Meson accepts, and then the build fails downstream.\n> If we require the function instead, Meson will ignore 1.0.1 (or, for\n> -Dssl=openssl, complain before we compile).\n\nYes, I was wondering a bit if something more should be marked as\nrequired, but just saw more value in removing all references to this\nfunction. Making the build fail straight when setting up things is OK\nby me, but I am not convinced that X509_get_signature_nid() would be\nthe right choice for the job, as it is an OpenSSL artifact originally,\nAFAIK.\n\nThe same problem exists with OpenSSL 1.0.0 on HEAD when building with\nmeson? CRYPTO_new_ex_data() and SSL_new() exist there.\n\n> t/001_ssltests.pl has a reference to 1.0.1 that can probably be\n> entirely deleted:\n> \n> # ... (Nor for OpenSSL\n> # 1.0.1, but that's old enough that accommodating it isn't worth the cost.)\n\nNot touching that is intentional. It sounded useful to me as an\nhistoric reference for LibreSSL ;)\n--\nMichael",
"msg_date": "Sat, 27 May 2023 11:02:41 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 27 May 2023, at 04:02, Michael Paquier <[email protected]> wrote:\n\n> Making the build fail straight when setting up things is OK\n> by me, but I am not convinced that X509_get_signature_nid() would be\n> the right choice for the job, as it is an OpenSSL artifact originally,\n> AFAIK.\n\nI think we should avoid the is-defined-in dance and just pull out the version\nnumbers for comparisons. While it's true that LibreSSL doesn't play well with\nOpenSSL versions, they do define their own which can be checked for to\ndistinguish the libraries.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 2 Jun 2023 10:35:43 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Fri, Jun 02, 2023 at 10:35:43AM +0200, Daniel Gustafsson wrote:\n> I think we should avoid the is-defined-in dance and just pull out the version\n> numbers for comparisons. While it's true that LibreSSL doesn't play well with\n> OpenSSL versions, they do define their own which can be checked for to\n> distinguish the libraries.\n\nAgreed. How about tackling that in a separate patch? At this stage,\nI would like to not care about ./configure and do it only with meson,\nbut there is value in reporting the version number in ./configure to\nsee the version coverage in the buildfarm.\n--\nMichael",
"msg_date": "Fri, 2 Jun 2023 17:21:54 -0400",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 2 Jun 2023, at 23:21, Michael Paquier <[email protected]> wrote:\n> \n> On Fri, Jun 02, 2023 at 10:35:43AM +0200, Daniel Gustafsson wrote:\n>> I think we should avoid the is-defined-in dance and just pull out the version\n>> numbers for comparisons. While it's true that LibreSSL doesn't play well with\n>> OpenSSL versions, they do define their own which can be checked for to\n>> distinguish the libraries.\n> \n> Agreed. How about tackling that in a separate patch?\n\nAbsolutely, let's keep these goalposts in place and deal with that separately.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 2 Jun 2023 23:23:19 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Fri, Jun 02, 2023 at 11:23:19PM +0200, Daniel Gustafsson wrote:\n> Absolutely, let's keep these goalposts in place and deal with that separately.\n\nI have not gone back to this part yet, though I plan to do so. As we\nare at the beginning of the development cycle, I have applied the\npatch to remove support for 1.0.1 for now on HEAD. Let's see what the\nbuildfarm tells.\n--\nMichael",
"msg_date": "Mon, 3 Jul 2023 13:26:44 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Mon, Jul 3, 2023 at 4:26 PM Michael Paquier <[email protected]> wrote:\n> I have not gone back to this part yet, though I plan to do so. As we\n> are at the beginning of the development cycle, I have applied the\n> patch to remove support for 1.0.1 for now on HEAD. Let's see what the\n> buildfarm tells.\n\ncurculio (OpenBSD 5.9) is failing with \"undefined reference to\n`X509_get_signature_nid'\", but that's OK, Mikael already supplied a\nmodern OpenBSD system to replace it (schnauzer, which is green) and he\nwas planning to shut curculio down (see Direct I/O thread where that\ncame up because its GCC 4.2 compiler doesn't understand our stack\nalignment directives; it will also break comprehensively when I push\nthe nearby all-supported-computers-have-locale_t patch from the\ncheck_strxfrm_bug thread).\n\n\n",
"msg_date": "Tue, 4 Jul 2023 06:40:49 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 3 Jul 2023, at 20:40, Thomas Munro <[email protected]> wrote:\n> \n> On Mon, Jul 3, 2023 at 4:26 PM Michael Paquier <[email protected]> wrote:\n>> I have not gone back to this part yet, though I plan to do so. As we\n>> are at the beginning of the development cycle, I have applied the\n>> patch to remove support for 1.0.1 for now on HEAD. Let's see what the\n>> buildfarm tells.\n> \n> curculio (OpenBSD 5.9) is failing with \"undefined reference to\n> `X509_get_signature_nid'\", but that's OK, Mikael already supplied a\n> modern OpenBSD system to replace it\n\nThanks for the report! OpenBSD 5.9 was released in 2016 and is thus well over\n5 years EOL, so I agree that it doesn't warrant a code change from us to\nsupport this.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 20:53:52 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "\n\nOn 2023-07-03 20:53, Daniel Gustafsson wrote:\n\n>> curculio (OpenBSD 5.9) is failing with \"undefined reference to\n>> `X509_get_signature_nid'\", but that's OK, Mikael already supplied a\n>> modern OpenBSD system to replace it\n> \n> Thanks for the report! OpenBSD 5.9 was released in 2016 and is thus well over\n> 5 years EOL, so I agree that it doesn't warrant a code change from us to\n> support this.\n\nI have retired curculio now. So it will stop reporting in from now on.\n\n/Mikael\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 22:23:02 +0200",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Mon, Jul 03, 2023 at 10:23:02PM +0200, Mikael Kjellström wrote:\n> On 2023-07-03 20:53, Daniel Gustafsson wrote:\n>>> curculio (OpenBSD 5.9) is failing with \"undefined reference to\n>>> `X509_get_signature_nid'\", but that's OK, Mikael already supplied a\n>>> modern OpenBSD system to replace it\n>> \n>> Thanks for the report! OpenBSD 5.9 was released in 2016 and is thus well over\n>> 5 years EOL, so I agree that it doesn't warrant a code change from us to\n>> support this.\n\nOpenBSD 5.9 was EOL in 2017 as far as I know.\n\n> I have retired curculio now. So it will stop reporting in from now on.\n\nThanks Mikael!\n--\nMichael",
"msg_date": "Tue, 4 Jul 2023 07:13:03 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Tue, Jul 04, 2023 at 06:40:49AM +1200, Thomas Munro wrote:\n> curculio (OpenBSD 5.9) is failing with \"undefined reference to\n> `X509_get_signature_nid'\", but that's OK, Mikael already supplied a\n> modern OpenBSD system to replace it (schnauzer, which is green) and he\n> was planning to shut curculio down (see Direct I/O thread where that\n> came up because its GCC 4.2 compiler doesn't understand our stack\n> alignment directives; it will also break comprehensively when I push\n> the nearby all-supported-computers-have-locale_t patch from the\n> check_strxfrm_bug thread).\n\nThe second and third animals to fail are skate and snapper, both using\nDebian 7 Wheezy. As far as I know, it was an LTS supported until\n2018. The owner of both machines is added in CC. I guess that we\nthis stuff could just remove --with-openssl from the configure\nswitches.\n--\nMichael",
"msg_date": "Tue, 4 Jul 2023 07:16:47 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Tue, Jul 04, 2023 at 07:16:47AM +0900, Michael Paquier wrote:\n> The second and third animals to fail are skate and snapper, both using\n> Debian 7 Wheezy. As far as I know, it was an LTS supported until\n> 2018. The owner of both machines is added in CC. I guess that we\n> this stuff could just remove --with-openssl from the configure\n> switches.\n\nlapwing has reported a failure and runs a Debian 7, so adding Julien\nin CC about the removal of --with-openssl or similar in this animal.\n--\nMichael",
"msg_date": "Tue, 4 Jul 2023 15:03:01 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "Hi,\n\nOn Tue, Jul 04, 2023 at 03:03:01PM +0900, Michael Paquier wrote:\n> On Tue, Jul 04, 2023 at 07:16:47AM +0900, Michael Paquier wrote:\n> > The second and third animals to fail are skate and snapper, both using\n> > Debian 7 Wheezy. As far as I know, it was an LTS supported until\n> > 2018. The owner of both machines is added in CC. I guess that we\n> > this stuff could just remove --with-openssl from the configure\n> > switches.\n>\n> lapwing has reported a failure and runs a Debian 7, so adding Julien\n> in CC about the removal of --with-openssl or similar in this animal.\n\nThanks, I actually saw that and already took care of removing openssl support a\ncouple of hours ago, and also added a new note on the animal to remember when\nit was removed. It should come back to green at the next scheduled run.\n\n\n",
"msg_date": "Tue, 4 Jul 2023 14:15:18 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Tue, Jul 04, 2023 at 02:15:18PM +0800, Julien Rouhaud wrote:\n> Thanks, I actually saw that and already took care of removing openssl support a\n> couple of hours ago, and also added a new note on the animal to remember when\n> it was removed. It should come back to green at the next scheduled run.\n\nThanks!\n--\nMichael",
"msg_date": "Tue, 4 Jul 2023 15:22:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Wed, May 24, 2023 at 11:03 PM Daniel Gustafsson <[email protected]> wrote:\n> > On 24 May 2023, at 11:52, Michael Paquier <[email protected]> wrote:\n> > On Wed, May 24, 2023 at 11:36:56AM +0200, Daniel Gustafsson wrote:\n> >> 1.0.2 is also an LTS version available commercially for premium support\n> >> customers of OpenSSL (1.1.1 will become an LTS version as well), with 1.0.2zh\n> >> slated for release next week. This raises the likelyhood of Postgres\n> >> installations using 1.0.2 in production still, and for some time to come.\n> >\n> > Good point. Indeed, that makes it pretty clear that not dropping\n> > 1.0.2 would be the best option for the time being, so 0001 would be\n> > enough.\n>\n> I think thats the right move re 1.0.2 support. 1.0.2 is also the version in\n> RHEL7 which is in ELS until 2026.\n\nI don't mind either way if we rip out OpenSSL 1.0.2 support now or\nlater, other than a general feeling that cryptography must be about\nthe worst possible category of software to keep supporting for years\nafter it has been declared EOL.\n\nBut.. I don't like the idea that our *next* release's library version\nhorizon is controlled by Red Hat's \"ELS\" phase. The\nyum.postgresql.org team aren't packaging 17 for RHEL7 AFAICS, which is\nas it should be if you ask me, because the 10 year maintenance phase\nends before 17 will ship. These hypothetical users that want to run\nan OS even older than that and don't know how to get modern crypto\nlibraries on it but insist on a shiny new PostgreSQL release and build\nit from source because there are no packages available... don't exist?\n\n\n",
"msg_date": "Thu, 7 Sep 2023 23:30:15 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 7 Sep 2023, at 13:30, Thomas Munro <[email protected]> wrote:\n\n> I don't like the idea that our *next* release's library version\n> horizon is controlled by Red Hat's \"ELS\" phase.\n\nAgreed. If we instead fence it by \"only non-EOL version\" then 1.1.1 is also on\nthe chopping block for v17 as it goes EOL in 4 days from now with 1.1.1w (which\ncontains a CVE, going out with a bang). Not sure what the best strategy is,\nbut whichever we opt for I think the most important point is to document it\nclearly.\n\n> These hypothetical users that want to run\n> an OS even older than that and don't know how to get modern crypto\n> libraries on it but insist on a shiny new PostgreSQL release and build\n> it from source because there are no packages available... don't exist?\n\nSadly I wouldn't be the least bit surprised if there are 1.0.2 users on modern\noperating systems, especially given its LTS status (which OpenSSL hasn't even\ncapped but sells by \"for as long as it remains commercially viable to do so\"\nbasis). That being said, my gut feeling is that 3.x has gotten pretty good\nmarket penetration.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 7 Sep 2023 13:44:11 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Thu, Sep 07, 2023 at 01:44:11PM +0200, Daniel Gustafsson wrote:\n> Sadly I wouldn't be the least bit surprised if there are 1.0.2 users on modern\n> operating systems, especially given its LTS status (which OpenSSL hasn't even\n> capped but sells by \"for as long as it remains commercially viable to do so\"\n> basis).\n\nYes, I would not be surprised by that either. TBH, I don't like much\nthe fact that we rely on OpenSSL to decide when we should cut it. \nParticularly since all the changes given to it after it got EOL'd are\nclose source at this point.\n\n> That being said, my gut feeling is that 3.x has gotten pretty good\n> market penetration.\n\nPerhaps.\n--\nMichael",
"msg_date": "Fri, 8 Sep 2023 08:48:12 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Fri, Sep 8, 2023 at 11:48 AM Michael Paquier <[email protected]> wrote:\n> On Thu, Sep 07, 2023 at 01:44:11PM +0200, Daniel Gustafsson wrote:\n> > Sadly I wouldn't be the least bit surprised if there are 1.0.2 users on modern\n> > operating systems, especially given its LTS status (which OpenSSL hasn't even\n> > capped but sells by \"for as long as it remains commercially viable to do so\"\n> > basis).\n>\n> Yes, I would not be surprised by that either. TBH, I don't like much\n> the fact that we rely on OpenSSL to decide when we should cut it.\n> Particularly since all the changes given to it after it got EOL'd are\n> close source at this point.\n\nRight, just like there are people running ancient PostgreSQL and\nbuying support. That's not relevant to PostgreSQL 17 IMHO, which\nshould target contemporary distributions.\n\nBTW I'm not asking anyone to do anything here, I just didn't want to\nallow the \"RHEL ELS\" and \"closed source OpenSSL [sic]\" theories\nmentioned on this thread to go unchallenged. Next time I'm trying to\nclean up some other cruft in our tree, I don't want this thread to be\ncited as evidence that that is our policy, because I don't buy it, it\ndoesn't make any sense. Of course there is someone, somewhere selling\nsupport for anything you can think of. There are companies that\nsupport VAXen. There's a company in Irvine, California selling and\nsupporting modern drop-in replacements for PDP 11s for production use.\n\n\n",
"msg_date": "Fri, 8 Sep 2023 12:19:28 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 11:44 PM Daniel Gustafsson <[email protected]> wrote:\n> > On 7 Sep 2023, at 13:30, Thomas Munro <[email protected]> wrote:\n> > I don't like the idea that our *next* release's library version\n> > horizon is controlled by Red Hat's \"ELS\" phase.\n>\n> Agreed. If we instead fence it by \"only non-EOL version\" then 1.1.1 is also on\n> the chopping block for v17 as it goes EOL in 4 days from now with 1.1.1w (which\n> contains a CVE, going out with a bang). Not sure what the best strategy is,\n> but whichever we opt for I think the most important point is to document it\n> clearly.\n\nI was reminded of this thread by ambient security paranoia. As it\nstands, we require 1.0.2 (but we very much hope that package\nmaintainers and others in control of builds don't decide to use it).\nShould we skip 1.1.1 and move to requiring 3 for v17?\n\nUpstream says:\n\n\"The latest stable version is the 3.2 series supported until 23rd\nNovember 2025. Also available is the 3.1 series supported until 14th\nMarch 2025, and the 3.0 series which is a Long Term Support (LTS)\nversion and is supported until 7th September 2026. All older versions\n(including 1.1.1, 1.1.0, 1.0.2, 1.0.0 and 0.9.8) are now out of\nsupport and should not be used.\"\n\nI understand that some distros eg RHEL8 will continue to ship and\npresumably patch 1.1.1 until some date later than upstream's EOL, for\nstability and the benefit of people that really need it for a bit\nlonger, but that's in parallel with their package for 3, right? New\nthings should surely be able to require new things. I think we'd have\nto reject the argument that we should support it just because they\nship it until the year 2030, or that upstream will support anything\nfor $50,000/year. I mean, they only do that because some old apps\nneed it, to which I say 40P01 DEADLOCK DETECTED.\n\n\n",
"msg_date": "Sun, 31 Mar 2024 09:48:31 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> I was reminded of this thread by ambient security paranoia. As it\n> stands, we require 1.0.2 (but we very much hope that package\n> maintainers and others in control of builds don't decide to use it).\n> Should we skip 1.1.1 and move to requiring 3 for v17?\n\nI'd be kind of sad if I couldn't test SSL stuff anymore on my\nprimary workstation, which has\n\n$ rpm -q openssl\nopenssl-1.1.1k-12.el8_9.x86_64\n\nI think it's probably true that <=1.0.2 is not in any distro that\nwe still need to pay attention to, but I reject the contention\nthat RHEL8 is not in that set.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 30 Mar 2024 16:59:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Sun, Mar 31, 2024 at 9:59 AM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > I was reminded of this thread by ambient security paranoia. As it\n> > stands, we require 1.0.2 (but we very much hope that package\n> > maintainers and others in control of builds don't decide to use it).\n> > Should we skip 1.1.1 and move to requiring 3 for v17?\n>\n> I'd be kind of sad if I couldn't test SSL stuff anymore on my\n> primary workstation, which has\n>\n> $ rpm -q openssl\n> openssl-1.1.1k-12.el8_9.x86_64\n>\n> I think it's probably true that <=1.0.2 is not in any distro that\n> we still need to pay attention to, but I reject the contention\n> that RHEL8 is not in that set.\n\nHmm, OK so it doesn't have 3 available in parallel from base repos.\nBut it's also about to reach end of \"full support\" in 2 months[1], so\nif we applied the policies we discussed in the LLVM-vacuuming thread\n(to wit: build farm - EOL'd OSes), then... One question I'm unclear\non is whether v17 will be packaged for RHEL8.\n\n[1] https://access.redhat.com/product-life-cycles?product=Red%20Hat%20Enterprise%20Linux\n\n\n",
"msg_date": "Sun, 31 Mar 2024 10:27:44 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 30 Mar 2024, at 22:27, Thomas Munro <[email protected]> wrote:\n> On Sun, Mar 31, 2024 at 9:59 AM Tom Lane <[email protected]> wrote:\n\nThanks a lot for bringing this up again Thomas, 1.0.2 has bitten me so many\ntimes and I'd be thrilled to get rid of it.\n\n>> I think it's probably true that <=1.0.2 is not in any distro that\n>> we still need to pay attention to, but I reject the contention\n>> that RHEL8 is not in that set.\n> \n> Hmm, OK so it doesn't have 3 available in parallel from base repos.\n> But it's also about to reach end of \"full support\" in 2 months[1], so\n> if we applied the policies we discussed in the LLVM-vacuuming thread\n> (to wit: build farm - EOL'd OSes), then... One question I'm unclear\n> on is whether v17 will be packaged for RHEL8.\n\nWhile 1.1.1 is EOL in upstream, it won't buy us much to deprecate past it since\nwe don't really make use of 3.x specific functionality. I wouldn't mind not\nbeing on the hook to support an EOL version of OpenSSL for another 5 years, but\nit also won't shift the needle too much. For v18 I'd like to work on modernize\nour OpenSSL code to make more use of 3+ features/API's and that could be a good\npoint to cull 1.1.1 support.\n\nSettling for removing support for 1.0.2, which is antiques roadshow material at\nthis point (no TLSv1.3 support for example), removes most of the compatibility\nmess we have in libpq. 1.1.1 was not a deprecation point in OpenSSL but we can\ndefine 1.1.0 as our compatibility level to build warning-free.\n\nThe attached removes 1.0.2 support (meson build parts untested yet) with a few\nsmall touch ups of related documentation. I haven't yet done the research on\nwhere that leaves LibreSSL since we don't really define anywhere what we\nsupport (so for we've gotten by assuming it's kind of sort 1.0.2 for the parts\nwe care about which is skating on fairly thin ice).\n\n--\nDaniel Gustafsson",
"msg_date": "Tue, 2 Apr 2024 20:55:41 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 2 Apr 2024, at 20:55, Daniel Gustafsson <[email protected]> wrote:\n\n> The attached removes 1.0.2 support (meson build parts untested yet)\n\nAnd a rebased version which applies over the hmac_openssl.c changes earlier\ntoday that I hadn't pulled in.\n\n--\nDaniel Gustafsson",
"msg_date": "Tue, 2 Apr 2024 23:16:39 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Tue, Apr 2, 2024 at 11:55 AM Daniel Gustafsson <[email protected]> wrote:\n> The attached removes 1.0.2 support (meson build parts untested yet) with a few\n> small touch ups of related documentation. I haven't yet done the research on\n> where that leaves LibreSSL since we don't really define anywhere what we\n> support (so for we've gotten by assuming it's kind of sort 1.0.2 for the parts\n> we care about which is skating on fairly thin ice).\n\nAs far as I can tell, no versions of LibreSSL so far provide\nX509_get_signature_info(), so this patch is probably a bit too\naggressive.\n\n--Jacob\n\n\n",
"msg_date": "Wed, 3 Apr 2024 07:04:00 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "Jacob Champion <[email protected]> writes:\n> As far as I can tell, no versions of LibreSSL so far provide\n> X509_get_signature_info(), so this patch is probably a bit too\n> aggressive.\n\nAnother problem with cutting support is how many buildfarm members\nwill we lose. I scraped recent configure logs and got the attached\nresults. I count 3 machines running 1.0.1, 18 running some flavor\nof 1.0.2, and 7 running various LibreSSL versions. We could\nprobably retire or update the 1.0.1 installations, but the rest\nwould represent a heavier lift. Notably, it seems that what macOS\nis shipping is LibreSSL.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 03 Apr 2024 11:29:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Wed, Apr 3, 2024 at 8:29 AM Tom Lane <[email protected]> wrote:\n> I count 3 machines running 1.0.1, 18 running some flavor\n> of 1.0.2, and 7 running various LibreSSL versions.\n\nI don't know all the tradeoffs with buildfarm wrangling, but IMO all\nthose 1.0.2 installations are the most problematic, so I dug in a bit:\n\n arowana CentOS 7\n batfish Ubuntu 16.04.3\n boa RHEL 7\n buri CentOS 7\n butterflyfish Photon 2.0\n clam RHEL 7.1\n cuon Ubuntu 16.04\n dhole CentOS 7.4\n hake OpenIndiana hipster\n mantid CentOS 7.9\n margay Solaris 11.4.42\n massasauga Amazon Linux 2\n myna Photon 3.0\n parula Amazon Linux 2\n rhinoceros CentOS 7.1\n shelduck SUSE 12SP5\n siskin RHEL 7.9\n snakefly Amazon Linux 2\n\nThe RHEL7-alikes are the biggest set, but that's already discussed\nabove. Looks like SUSE 12 goes EOL later this year (October 2024), and\nit ships OpenSSL 1.1.1 as an option. Already-dead distros are Ubuntu\n16.04 (April 2021), Photon 2 (January 2023), and Photon 3 (March\n2024). That leaves AL2, OpenIndiana Hipster, and Solaris 11.4, all of\nwhich appear to have newer versions of OpenSSL shipped and selectable.\n\n--Jacob\n\n\n",
"msg_date": "Wed, 3 Apr 2024 10:25:36 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "Jacob Champion <[email protected]> writes:\n> The RHEL7-alikes are the biggest set, but that's already discussed\n> above. Looks like SUSE 12 goes EOL later this year (October 2024), and\n> it ships OpenSSL 1.1.1 as an option. Already-dead distros are Ubuntu\n> 16.04 (April 2021), Photon 2 (January 2023), and Photon 3 (March\n> 2024). That leaves AL2, OpenIndiana Hipster, and Solaris 11.4, all of\n> which appear to have newer versions of OpenSSL shipped and selectable.\n\nThe discussion we had last year concluded that we were OK with\ndropping 1.0.1 support when RHEL6 goes out of extended support\n(June 2024 per this thread, I didn't check it). Seems like we\nshould have the same policy for RHEL7. Also, calling Photon 3\ndead because it went EOL three days ago seems over-hasty.\n\nBottom line for me is that pulling 1.0.1 support now is OK,\nbut I think pulling 1.0.2 is premature.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Apr 2024 13:38:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Wed, Apr 3, 2024 at 10:38 AM Tom Lane <[email protected]> wrote:\n> Also, calling Photon 3\n> dead because it went EOL three days ago seems over-hasty.\n\nWell, March 1, but either way I thought \"dead\" for the purposes of\nthis thread meant \"you can't build the very latest version of Postgres\non it\", not \"we've forgotten it exists\". Back branches will continue\nto need support and testing.\n\n> Bottom line for me is that pulling 1.0.1 support now is OK,\n> but I think pulling 1.0.2 is premature.\n\nOkay, but IIUC, waiting for it to drop out of extended support means\nwe deal with it for four more years. That seems excessive.\n\n--Jacob\n\n\n",
"msg_date": "Wed, 3 Apr 2024 10:55:44 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "Jacob Champion <[email protected]> writes:\n> On Wed, Apr 3, 2024 at 10:38 AM Tom Lane <[email protected]> wrote:\n>> Bottom line for me is that pulling 1.0.1 support now is OK,\n>> but I think pulling 1.0.2 is premature.\n\n> Okay, but IIUC, waiting for it to drop out of extended support means\n> we deal with it for four more years. That seems excessive.\n\nwikipedia says that RHEL7 ends ELS as of June 2026 [1].\n\n\t\t\tregards, tom lane\n\n[1] https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Product_life_cycle\n\n\n",
"msg_date": "Wed, 03 Apr 2024 14:13:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Wed, Apr 3, 2024 at 11:13 AM Tom Lane <[email protected]> wrote:\n> wikipedia says that RHEL7 ends ELS as of June 2026 [1].\n\nI may have misunderstood something in here then:\n\n https://www.redhat.com/en/blog/announcing-4-years-extended-life-cycle-support-els-red-hat-enterprise-linux-7\n\n> ELS for RHEL 7 is now available for 4 years, starting on July 1, 2024.\n\nAm I missing something?\n\n--Jacob\n\n\n",
"msg_date": "Wed, 3 Apr 2024 11:17:41 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 3 Apr 2024, at 19:38, Tom Lane <[email protected]> wrote:\n> \n> Jacob Champion <[email protected]> writes:\n>> The RHEL7-alikes are the biggest set, but that's already discussed\n>> above. Looks like SUSE 12 goes EOL later this year (October 2024), and\n>> it ships OpenSSL 1.1.1 as an option. Already-dead distros are Ubuntu\n>> 16.04 (April 2021), Photon 2 (January 2023), and Photon 3 (March\n>> 2024). That leaves AL2, OpenIndiana Hipster, and Solaris 11.4, all of\n>> which appear to have newer versions of OpenSSL shipped and selectable.\n> \n> The discussion we had last year concluded that we were OK with\n> dropping 1.0.1 support when RHEL6 goes out of extended support\n> (June 2024 per this thread, I didn't check it). Seems like we\n> should have the same policy for RHEL7. Also, calling Photon 3\n> dead because it went EOL three days ago seems over-hasty.\n> \n> Bottom line for me is that pulling 1.0.1 support now is OK,\n> but I think pulling 1.0.2 is premature.\n\nIs Red Hat building and and shipping v17 packages for RHEL7 ELS customers? If\nnot then it seems mostly academical to tie our dependencies to RHEL ELS unless\nI'm missing something.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 3 Apr 2024 21:08:10 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 3 Apr 2024, at 17:29, Tom Lane <[email protected]> wrote:\n> \n> Jacob Champion <[email protected]> writes:\n>> As far as I can tell, no versions of LibreSSL so far provide\n>> X509_get_signature_info(), so this patch is probably a bit too\n>> aggressive.\n> \n> Another problem with cutting support is how many buildfarm members\n> will we lose. I scraped recent configure logs and got the attached\n> results. I count 3 machines running 1.0.1,\n\nSupport for 1.0.1 was removed with 8e278b657664 in July 2023 so those are not\nbuilding with OpenSSL enabled already.\n\n> 18 running some flavor of 1.0.2,\n\nmassasauga and snakefly run the ssl_passphrase_callback-check test but none of\nthese run the ssl-check tests AFAICT, so we have very low coverage as is. The\nfact that very few animals run the ssl tests is a pet peeve of mine, it would\nbe nice if we could get broader coverage there.\n\nWorth noting is that the OpenSSL check in configure.ac only reports what the\nversion of the OpenSSL binary in $PATH is, not which version of the library\nthat we build against (using --with-libs/--with-includes etc).\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 3 Apr 2024 21:12:47 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On 2024-04-03 We 15:12, Daniel Gustafsson wrote:\n> The\n> fact that very few animals run the ssl tests is a pet peeve of mine, it would\n> be nice if we could get broader coverage there.\n>\n\nWell, the only reason for that is that the SSL tests need to be listed \nin PG_TEST_EXTRA, and the only reason for that is that there's a \npossible hazard on multi-user servers. But I bet precious few buildfarm \nanimals run in such an environment. Mine don't - I'm the only user.\n\nMaybe we could send out an email to the buildfarm-owners list asking \npeople to consider allowing the ssl tests.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-04-03 We 15:12, Daniel\n Gustafsson wrote:\n\n The\nfact that very few animals run the ssl tests is a pet peeve of mine, it would\nbe nice if we could get broader coverage there.\n\n\n\n\n\nWell, the only reason for that is that the SSL tests need to be\n listed in PG_TEST_EXTRA, and the only reason for that is that\n there's a possible hazard on multi-user servers. But I bet\n precious few buildfarm animals run in such an environment. Mine\n don't - I'm the only user. \n\nMaybe we could send out an email to the buildfarm-owners list\n asking people to consider allowing the ssl tests.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 3 Apr 2024 15:48:31 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> On 3 Apr 2024, at 19:38, Tom Lane <[email protected]> wrote:\n>> Bottom line for me is that pulling 1.0.1 support now is OK,\n>> but I think pulling 1.0.2 is premature.\n\n> Is Red Hat building and and shipping v17 packages for RHEL7 ELS customers? If\n> not then it seems mostly academical to tie our dependencies to RHEL ELS unless\n> I'm missing something.\n\nTrue, they won't be doing that, and neither will Devrim. So maybe\nwe can leave RHEL7 out of the discussion, in which case there's\nnot a lot of reason to keep 1.0.2 support. We'll need to notify\nbuildfarm owners to adjust their configurations.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Apr 2024 18:06:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 3 Apr 2024, at 21:48, Andrew Dunstan <[email protected]> wrote:\n> On 2024-04-03 We 15:12, Daniel Gustafsson wrote:\n>> The\n>> fact that very few animals run the ssl tests is a pet peeve of mine, it would\n>> be nice if we could get broader coverage there.\n> \n> Well, the only reason for that is that the SSL tests need to be listed in PG_TEST_EXTRA, and the only reason for that is that there's a possible hazard on multi-user servers. But I bet precious few buildfarm animals run in such an environment. Mine don't - I'm the only user. \n> \n> Maybe we could send out an email to the buildfarm-owners list asking people to consider allowing the ssl tests.\n\nI think that sounds like a good idea.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 4 Apr 2024 00:26:17 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 4 Apr 2024, at 00:06, Tom Lane <[email protected]> wrote:\n> \n> Daniel Gustafsson <[email protected]> writes:\n>> On 3 Apr 2024, at 19:38, Tom Lane <[email protected]> wrote:\n>>> Bottom line for me is that pulling 1.0.1 support now is OK,\n>>> but I think pulling 1.0.2 is premature.\n> \n>> Is Red Hat building and and shipping v17 packages for RHEL7 ELS customers? If\n>> not then it seems mostly academical to tie our dependencies to RHEL ELS unless\n>> I'm missing something.\n> \n> True, they won't be doing that, and neither will Devrim. So maybe\n> we can leave RHEL7 out of the discussion, in which case there's\n> not a lot of reason to keep 1.0.2 support. We'll need to notify\n> buildfarm owners to adjust their configurations.\n\nThe patch will also need to be adjusted to work with LibreSSL, but I know Jacob\nwas looking into that so ideally we should have something to review before\nthe weekend.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 4 Apr 2024 00:27:27 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On 30.03.24 22:27, Thomas Munro wrote:\n> On Sun, Mar 31, 2024 at 9:59 AM Tom Lane <[email protected]> wrote:\n>> Thomas Munro <[email protected]> writes:\n>>> I was reminded of this thread by ambient security paranoia. As it\n>>> stands, we require 1.0.2 (but we very much hope that package\n>>> maintainers and others in control of builds don't decide to use it).\n>>> Should we skip 1.1.1 and move to requiring 3 for v17?\n>>\n>> I'd be kind of sad if I couldn't test SSL stuff anymore on my\n>> primary workstation, which has\n>>\n>> $ rpm -q openssl\n>> openssl-1.1.1k-12.el8_9.x86_64\n>>\n>> I think it's probably true that <=1.0.2 is not in any distro that\n>> we still need to pay attention to, but I reject the contention\n>> that RHEL8 is not in that set.\n> \n> Hmm, OK so it doesn't have 3 available in parallel from base repos.\n> But it's also about to reach end of \"full support\" in 2 months[1], so\n> if we applied the policies we discussed in the LLVM-vacuuming thread\n> (to wit: build farm - EOL'd OSes), then... One question I'm unclear\n> on is whether v17 will be packaged for RHEL8.\n\nThe rest of the thread talks about the end of support of RHEL 7, but you \nare here talking about RHEL 8. It is true that \"full support\" for RHEL \n8 ended in May 2024, but that is the not the one we are tracking. We \nare tracking the 10-year one, which I suppose is now called \"maintenance \nsupport\".\n\nSo if the above package list is correct, then we ought to keep \nsupporting openssl 1.1.* until 2029.\n\n\n\n",
"msg_date": "Thu, 4 Apr 2024 00:51:22 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Wed, Apr 03, 2024 at 01:38:50PM -0400, Tom Lane wrote:\n> The discussion we had last year concluded that we were OK with\n> dropping 1.0.1 support when RHEL6 goes out of extended support\n> (June 2024 per this thread, I didn't check it). Seems like we\n> should have the same policy for RHEL7. Also, calling Photon 3\n> dead because it went EOL three days ago seems over-hasty.\n\nYeah. A bunch of users of Photon are VMware (or you could say\nBroadcom) product appliances, and I'd suspect that quite a lot of them\nrely on Photon 3 for their base OS image. Upgrading that stuff is not\neasy work in my experience because they need to cope with a bunch of\nembedded services.\n\n> Bottom line for me is that pulling 1.0.1 support now is OK,\n> but I think pulling 1.0.2 is premature.\n\nYeah, I guess so. At least that seems like the safest conclusion\ncurrently here. The build-time check on X509_get_signature_info()\nwould still be required.\n\nI'd love being able to rip out the internal locking logic currently in\nlibpq as LibreSSL has traces of CRYPTO_lock(), as far as I've checked,\nand we rely on its existence.\n--\nMichael",
"msg_date": "Thu, 4 Apr 2024 08:24:25 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Thu, Apr 4, 2024 at 11:51 AM Peter Eisentraut <[email protected]> wrote:\n> On 30.03.24 22:27, Thomas Munro wrote:\n> > Hmm, OK so it doesn't have 3 available in parallel from base repos.\n> > But it's also about to reach end of \"full support\" in 2 months[1], so\n> > if we applied the policies we discussed in the LLVM-vacuuming thread\n> > (to wit: build farm - EOL'd OSes), then... One question I'm unclear\n> > on is whether v17 will be packaged for RHEL8.\n>\n> The rest of the thread talks about the end of support of RHEL 7, but you\n> are here talking about RHEL 8. It is true that \"full support\" for RHEL\n> 8 ended in May 2024, but that is the not the one we are tracking. We\n> are tracking the 10-year one, which I suppose is now called \"maintenance\n> support\".\n\nI might have confused myself with the two EOLs and some wishful\nthinking. I am a lot less worked up about this general topic now that\nRHEL has moved to \"rolling\" LLVM updates in minor releases, removing a\nphysical-pain-inducing 10-year vacuuming horizon (that's 20 LLVM major\nreleases and they only fix bugs in one...). I will leave openssl\ndiscussions to those more knowledgeable about that.\n\n> So if the above package list is correct, then we ought to keep\n> supporting openssl 1.1.* until 2029.\n\nThat's a shame. But it sounds like the developer burden isn't so\ndifferent from 1.1.1 to 3.x, so maybe it's not such a big deal from\nour point of view. (I have no opinion on the security ramifications\nof upstream's EOL, but as a layman it sounds completely bonkers to use\nit. I wonder why the packaging community wouldn't just arrange to\nhave a supported-by-upstream 3.x package in their RPM repo when they\nsupply the newest PostgreSQL versions for the oldest RHEL, but again\nnot my area so I'll shut up).\n\n\n",
"msg_date": "Thu, 4 Apr 2024 12:50:52 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 4 Apr 2024, at 00:51, Peter Eisentraut <[email protected]> wrote:\n> \n> On 30.03.24 22:27, Thomas Munro wrote:\n>> On Sun, Mar 31, 2024 at 9:59 AM Tom Lane <[email protected]> wrote:\n>>> Thomas Munro <[email protected]> writes:\n>>>> I was reminded of this thread by ambient security paranoia. As it\n>>>> stands, we require 1.0.2 (but we very much hope that package\n>>>> maintainers and others in control of builds don't decide to use it).\n>>>> Should we skip 1.1.1 and move to requiring 3 for v17?\n>>> \n>>> I'd be kind of sad if I couldn't test SSL stuff anymore on my\n>>> primary workstation, which has\n>>> \n>>> $ rpm -q openssl\n>>> openssl-1.1.1k-12.el8_9.x86_64\n>>> \n>>> I think it's probably true that <=1.0.2 is not in any distro that\n>>> we still need to pay attention to, but I reject the contention\n>>> that RHEL8 is not in that set.\n>> Hmm, OK so it doesn't have 3 available in parallel from base repos.\n>> But it's also about to reach end of \"full support\" in 2 months[1], so\n>> if we applied the policies we discussed in the LLVM-vacuuming thread\n>> (to wit: build farm - EOL'd OSes), then... One question I'm unclear\n>> on is whether v17 will be packaged for RHEL8.\n> \n> The rest of the thread talks about the end of support of RHEL 7, but you are here talking about RHEL 8. It is true that \"full support\" for RHEL 8 ended in May 2024, but that is the not the one we are tracking. We are tracking the 10-year one, which I suppose is now called \"maintenance support\".\n> \n> So if the above package list is correct, then we ought to keep supporting openssl 1.1.* until 2029.\n\nNot 1.1.* but 1.1.1+. In the old OpenSSL version numbering scheme, releases\nchanging the last digit would contain new features and releases that updated\nthe appended letter only fixed bugs.\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 4 Apr 2024 08:49:46 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 4 Apr 2024, at 01:24, Michael Paquier <[email protected]> wrote:\n> \n> On Wed, Apr 03, 2024 at 01:38:50PM -0400, Tom Lane wrote:\n>> The discussion we had last year concluded that we were OK with\n>> dropping 1.0.1 support when RHEL6 goes out of extended support\n>> (June 2024 per this thread, I didn't check it). Seems like we\n>> should have the same policy for RHEL7. Also, calling Photon 3\n>> dead because it went EOL three days ago seems over-hasty.\n> \n> Yeah. A bunch of users of Photon are VMware (or you could say\n> Broadcom) product appliances, and I'd suspect that quite a lot of them\n> rely on Photon 3 for their base OS image. Upgrading that stuff is not\n> easy work in my experience because they need to cope with a bunch of\n> embedded services.\n\nThat's true, but Photon3 won't package new major versions of PostgreSQL (the\nlatest RPM is 13.14). Anyone who builds v17 on Photon 3 on their own can just\nas well be expected to build an updated OpenSSL no? This is equivalent to\nRHEL7 which was discussed elsewhere in this thread.\n\nIf we are going to pin version dependencies for postgres X to available OS\nrelease packages then it, IMHO, is reasonable to be for OS's that realistically\nwill package X (either by the vendor or a trusted external packager like PGDG).\n\nIt's possible, but not guaranteed, that RHEL8 ships v17 packages in ther\nApplication Streams Life Cycle model, they have packaged v15 so far with\nretirement in 2028 so it seems likely there will be another package to retire\nin 2029 when RHEL8 finally goes away (whether that will be v16 or v17 is also\nspeculation). Thus, pinning on 1.1.1 is grounded in packaging reality, even\nthough I sincerely hope that noone who isn't paying for support is running\n1.1.1 now, let alone in 4 years from now.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 4 Apr 2024 09:56:22 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 4 Apr 2024, at 01:50, Thomas Munro <[email protected]> wrote:\n\n> That's a shame. But it sounds like the developer burden isn't so\n> different from 1.1.1 to 3.x, so maybe it's not such a big deal from\n> our point of view.\n\nIt isn't as of now since OpenSSL still supply the deprecated API's we use, but\nthere is no guarantee that they will do that for 5 more years.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 4 Apr 2024 09:59:08 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Wed, Apr 3, 2024 at 3:27 PM Daniel Gustafsson <[email protected]> wrote:\n> The patch will also need to be adjusted to work with LibreSSL, but I know Jacob\n> was looking into that so ideally we should have something to review before\n> the weekend.\n\nv3 does that by putting back checks for symbols that aren't part of\nLibreSSL (tested back to 2.7, which is where the 1.1.x APIs started to\narrive). It also makes adjustments for the new OPENSSL_API_COMPAT\nversion, getting rid of OpenSSL_add_all_algorithms() and adding a\nmissing header.\n\nThis patch has a deficiency where 1.1.0 itself isn't actually rejected\nat configure time; Daniel's working on an explicit check for the\nOPENSSL/LIBRESSL_VERSION_NUMBER that should fix that up. There's an\nopen question about which version we should pin for LibreSSL, which\nshould ultimately come down to which versions of OpenBSD we want PG17\nto support.\n\nThanks,\n--Jacob",
"msg_date": "Thu, 4 Apr 2024 11:03:35 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "(Adding Mikael Kjellstrom in CC as OpenBSD owner)\n\nOn Thu, Apr 04, 2024 at 11:03:35AM -0700, Jacob Champion wrote:\n> v3 does that by putting back checks for symbols that aren't part of\n> LibreSSL (tested back to 2.7, which is where the 1.1.x APIs started to\n> arrive).\n\nFrom where did you pull the LibreSSL sources? Directly from the\nOpenBSD tree?\n\n> It also makes adjustments for the new OPENSSL_API_COMPAT\n> version, getting rid of OpenSSL_add_all_algorithms() and adding a\n> missing header.\n\nAh, right. OpenSSL_add_all_algorithms() is documented as having no\neffect in 1.1.0.\n\n> This patch has a deficiency where 1.1.0 itself isn't actually rejected\n> at configure time; Daniel's working on an explicit check for the\n> OPENSSL/LIBRESSL_VERSION_NUMBER that should fix that up. There's an\n> open question about which version we should pin for LibreSSL, which\n> should ultimately come down to which versions of OpenBSD we want PG17\n> to support.\n\nI would be OK to draw a line to what we test in the buildfarm if it\ncomes to that, down to OpenBSD 6.9. This version is already not\nsupported, and we had a number of issues with older versions and\ntimestamps going backwards.\n\n-/* Define to 1 if you have the `CRYPTO_lock' function. */\n-#undef HAVE_CRYPTO_LOCK\n\nI'm happy to see that gone for good.\n\n+ # Functions introduced in OpenSSL 1.1.0/LibreSSL 2.7.0.\n+ ['OPENSSL_init_ssl', {'required': true}],\n+ ['BIO_meth_new', {'required': true}],\n+ ['ASN1_STRING_get0_data', {'required': true}],\n+ ['HMAC_CTX_new', {'required': true}],\n+ ['HMAC_CTX_free', {'required': true}],\n\nThese should be removed to save cycles in ./configure and meson, no?\nWe don't have any more of their HAVE_* flags in the tree with this\npatch applied.\n\n- cdata.set('OPENSSL_API_COMPAT', '0x10002000L',\n+ cdata.set('OPENSSL_API_COMPAT', '0x10100000L',\n\nSeems to me that this should also document in meson.build why 1.1.0 is\nchosen, same as ./configure.\n\nIt seems to me that src/common/protocol_openssl.c could be cleaned up;\nI see SSL_CTX_set_min_proto_version and SSL_CTX_set_max_proto_version\nlisted in LibreSSL (looking at some past version of\nhttps://github.com/libressl/libressl.git that I still have around).\n\nThere is an extra thing in pg_strong_random.c once we cut OpenSSL <\n1.1.1.. Do we still need pg_strong_random_init() and its RAND_poll()\nwhen it comes to LibreSSL? This is a sensitive area, so we should be\ncareful.\n--\nMichael",
"msg_date": "Fri, 5 Apr 2024 10:37:03 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On 2024-04-05 03:37, Michael Paquier wrote:\n> (Adding Mikael Kjellstrom in CC as OpenBSD owner)\n\nMy 2 OpenBSD animals (morepork \n<https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=morepork&br=HEAD> \nOpenBSD 6.9, schnauzer \n<https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=schnauzer&br=HEAD> \nOpenBSD 7.3) is using OpenSSL and not LibreSSL.\n\nThe versions are:\n\nOpenBSD 6.9 - openssl-1.1.1k\nOpenBSD 7.3 - openssl-3.1.0p1\n\nand that is what I installed them with when I setup both animals.\n\nIs there anything you need me change or do?\n\n/Mikael\n\n\n\n\n\n\n\nOn 2024-04-05 03:37, Michael Paquier\n wrote:\n\n\n(Adding Mikael Kjellstrom in CC as OpenBSD owner)\n\n\n\n My 2 OpenBSD animals (morepork OpenBSD 6.9, schnauzer OpenBSD 7.3) is using OpenSSL and\n not LibreSSL.\n\n The versions are:\n\n OpenBSD 6.9 - openssl-1.1.1k\n OpenBSD 7.3 - openssl-3.1.0p1\n\n and that is what I installed them with when I setup both animals.\n\n Is there anything you need me change or do?\n\n /Mikael",
"msg_date": "Fri, 5 Apr 2024 08:11:43 +0200",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=C3=B6m?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Thu, Apr 4, 2024 at 6:37 PM Michael Paquier <[email protected]> wrote:\n> From where did you pull the LibreSSL sources? Directly from the\n> OpenBSD tree?\n\nI've been building LibreSSL Portable: https://github.com/libressl/portable\n\n> Ah, right. OpenSSL_add_all_algorithms() is documented as having no\n> effect in 1.1.0.\n\nI think it still has an effect, but it's unnecessary unless you need\nsome specific initialization other than the defaults -- for which we\nnow have OPENSSL_init_crypto(). Best I could tell, pgcrypto had no\nsuch requirements (but extra eyes on that would be appreciated).\n\n> I would be OK to draw a line to what we test in the buildfarm if it\n> comes to that, down to OpenBSD 6.9.\n\nThat would correspond to LibreSSL 3.3 if I'm not mistaken. Any\nparticular reason for 6.9 as the dividing line, and not something\nlater? And by \"test in the buildfarm\", do you mean across all\nversions, or just what we support for PG17? (For the record, I don't\nthink there's any reason to drop older LibreSSL testing for earlier\nbranches.)\n\n> This version is already not\n> supported, and we had a number of issues with older versions and\n> timestamps going backwards.\n\nSpeaking of which: for completeness I should note that LibreSSL 3.2\n(OpenBSD 6.8) is failing src/test/ssl because of alternate error\nmessages. That failure exists on HEAD and is not introduced in this\npatchset. LibreSSL 3.3, which passes, has the following changelog\nnote:\n\n * If x509_verify() fails, ensure that the error is set on both\n the x509_verify_ctx() and its store context to make some failures\n visible from SSL_get_verify_result().\n\nSo that would explain that. If we drop support for 3.2 and earlier\nthen there's nothing to be done anyway.\n\n> -/* Define to 1 if you have the `CRYPTO_lock' function. */\n> -#undef HAVE_CRYPTO_LOCK\n>\n> I'm happy to see that gone for good.\n\n+1\n\n> + # Functions introduced in OpenSSL 1.1.0/LibreSSL 2.7.0.\n> + ['OPENSSL_init_ssl', {'required': true}],\n> + ['BIO_meth_new', {'required': true}],\n> + ['ASN1_STRING_get0_data', {'required': true}],\n> + ['HMAC_CTX_new', {'required': true}],\n> + ['HMAC_CTX_free', {'required': true}],\n>\n> These should be removed to save cycles in ./configure and meson, no?\n> We don't have any more of their HAVE_* flags in the tree with this\n> patch applied.\n\nTrue, but they are required, and I needed something in there to reject\nearlier builds. Daniel's suggested approach with _VERSION_NUMBER\nshould be able to replace this I think.\n\n> - cdata.set('OPENSSL_API_COMPAT', '0x10002000L',\n> + cdata.set('OPENSSL_API_COMPAT', '0x10100000L',\n>\n> Seems to me that this should also document in meson.build why 1.1.0 is\n> chosen, same as ./configure.\n\nGood point.\n\n> It seems to me that src/common/protocol_openssl.c could be cleaned up;\n> I see SSL_CTX_set_min_proto_version and SSL_CTX_set_max_proto_version\n> listed in LibreSSL (looking at some past version of\n> https://github.com/libressl/libressl.git that I still have around).\n>\n> There is an extra thing in pg_strong_random.c once we cut OpenSSL <\n> 1.1.1.. Do we still need pg_strong_random_init() and its RAND_poll()\n> when it comes to LibreSSL? This is a sensitive area, so we should be\n> careful.\n\nIt would be cool if there are more cleanups that can happen. I agree\nwe need to be careful around removal, though, especially now that we\nknow that LibreSSL testing is spotty... I will look more into these\nlater today.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Fri, 5 Apr 2024 09:41:46 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 5 Apr 2024, at 03:37, Michael Paquier <[email protected]> wrote:\n> On Thu, Apr 04, 2024 at 11:03:35AM -0700, Jacob Champion wrote:\n\n>> v3 does that by putting back checks for symbols that aren't part of\n>> LibreSSL (tested back to 2.7, which is where the 1.1.x APIs started to\n>> arrive).\n> \n> From where did you pull the LibreSSL sources? Directly from the\n> OpenBSD tree?\n> \n>> It also makes adjustments for the new OPENSSL_API_COMPAT\n>> version, getting rid of OpenSSL_add_all_algorithms() and adding a\n>> missing header.\n> \n> Ah, right. OpenSSL_add_all_algorithms() is documented as having no\n> effect in 1.1.0.\n\nThis API was deprecated and made into a no-op in OpenBSD 6.4 which corresponds\nto LibreSSL 2.8.3.\n\n>> This patch has a deficiency where 1.1.0 itself isn't actually rejected\n>> at configure time; Daniel's working on an explicit check for the\n>> OPENSSL/LIBRESSL_VERSION_NUMBER that should fix that up. There's an\n>> open question about which version we should pin for LibreSSL, which\n>> should ultimately come down to which versions of OpenBSD we want PG17\n>> to support.\n> \n> I would be OK to draw a line to what we test in the buildfarm if it\n> comes to that, down to OpenBSD 6.9. This version is already not\n> supported, and we had a number of issues with older versions and\n> timestamps going backwards.\n\nThere is a member on 6.8 as well, and while 6.8 work fine the tests all fail\ndue to the error messages being different. Rather than adding alternate output\nfor an EOL version of OpenBSD (which currently don't even run the ssl checks in\nthe BF) I think using 6.9 as the minimum makes sense.\n\n> + # Functions introduced in OpenSSL 1.1.0/LibreSSL 2.7.0.\n> + ['OPENSSL_init_ssl', {'required': true}],\n> + ['BIO_meth_new', {'required': true}],\n> + ['ASN1_STRING_get0_data', {'required': true}],\n> + ['HMAC_CTX_new', {'required': true}],\n> + ['HMAC_CTX_free', {'required': true}],\n> \n> These should be removed to save cycles in ./configure and meson, no?\n\nCorrect, they are removed in favor of a compile test for OpenSSL version.\n\n> - cdata.set('OPENSSL_API_COMPAT', '0x10002000L',\n> + cdata.set('OPENSSL_API_COMPAT', '0x10100000L',\n> \n> Seems to me that this should also document in meson.build why 1.1.0 is\n> chosen, same as ./configure.\n\nDone.\n\n> It seems to me that src/common/protocol_openssl.c could be cleaned up;\n> I see SSL_CTX_set_min_proto_version and SSL_CTX_set_max_proto_version\n> listed in LibreSSL (looking at some past version of\n> https://github.com/libressl/libressl.git that I still have around).\n\nBoth SSL_CTX_set_min_proto_version and SSL_CTX_set_max_proto_version are\navailable in at least OpenBSD 6.3 which is LibreSSL 2.7.5. With this we can\nthus remove the whole file.\n\n> There is an extra thing in pg_strong_random.c once we cut OpenSSL <\n> 1.1.1.. Do we still need pg_strong_random_init() and its RAND_poll()\n> when it comes to LibreSSL? This is a sensitive area, so we should be\n> careful.\n\nRe-reading the thread which added this comment, and the OpenSSL docs and code,\nI'm leaning towards leaving this in. 
The overhead is marginal and fork safety\nhas been broken at least once in OpenSSL since 1.1.1:\n\n https://github.com/openssl/openssl/issues/12377\n\nThat particular bug was thankfully caught before it shipped, but mitigating the\nrisk is this cheap enough that is seems reasonable to keep this in.\n\nAttached is a WIP patch to get more eyes on it, the Meson test for 1.1.1 fails\non Windows in CI which I will investigate next.\n\n--\nDaniel Gustafsson",
"msg_date": "Fri, 5 Apr 2024 18:59:54 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 5 Apr 2024, at 18:41, Jacob Champion <[email protected]> wrote:\n> On Thu, Apr 4, 2024 at 6:37 PM Michael Paquier <[email protected]> wrote:\n\n>> I would be OK to draw a line to what we test in the buildfarm if it\n>> comes to that, down to OpenBSD 6.9.\n> \n> That would correspond to LibreSSL 3.3 if I'm not mistaken. Any\n> particular reason for 6.9 as the dividing line, and not something\n> later? And by \"test in the buildfarm\", do you mean across all\n> versions, or just what we support for PG17? (For the record, I don't\n> think there's any reason to drop older LibreSSL testing for earlier\n> branches.)\n\nWe should draw the line on something we can reliably test, so 6.9 seems fine to\nme (unless there is evidence of older versions being common in the wild).\nOpenBSD themselves support 2 backbranches so 6.9 is still far beyond the EOL\nmark upstream.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 5 Apr 2024 19:03:28 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Fri, Apr 5, 2024 at 9:59 AM Daniel Gustafsson <[email protected]> wrote:\n> Attached is a WIP patch to get more eyes on it, the Meson test for 1.1.1 fails\n> on Windows in CI which I will investigate next.\n\nThe changes for SSL_OP_NO_CLIENT_RENEGOTIATION and\nSSL_R_VERSION_TOO_LOW look good to me.\n\n> - Remove support for OpenSSL 1.0.2 and 1.1.0\n> + Remove support for OpenSSL 1.0.2\n\nI modified the commit message while working on v3 and forgot to put it\nback before posting, sorry.\n\n> +++ b/src/include/pg_config.h.in\n> @@ -84,9 +84,6 @@\n> /* Define to 1 if you have the <crtdefs.h> header file. */\n> #undef HAVE_CRTDEFS_H\n>\n> -/* Define to 1 if you have the `CRYPTO_lock' function. */\n> -#undef HAVE_CRYPTO_LOCK\n\nAn autoreconf run on my machine pulls in more changes (getting rid of\nthe symbols we no longer check for).\n\n--Jacob\n\n\n",
"msg_date": "Fri, 5 Apr 2024 13:55:22 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On 05.04.24 18:59, Daniel Gustafsson wrote:\n> Attached is a WIP patch to get more eyes on it, the Meson test for 1.1.1 fails\n> on Windows in CI which I will investigate next.\n\nI'm not a fan of the new PGAC_CHECK_OPENSSL. It creates a second place \nwhere the OpenSSL version number has to be updated. We had this \ncarefully constructed so that there is only one place that \nOPENSSL_API_COMPAT is defined and that is the only place that needs to \nbe updated. We put the setting of OPENSSL_API_COMPAT into configure so \nthat the subsequent OpenSSL tests would use it, and if the API number \nhigher than what the library supports, the tests should fail. So I \ndon't understand why the configure changes have to be so expansive.\n\n\n",
"msg_date": "Fri, 5 Apr 2024 23:26:00 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 5 Apr 2024, at 23:26, Peter Eisentraut <[email protected]> wrote:\n> \n> On 05.04.24 18:59, Daniel Gustafsson wrote:\n>> Attached is a WIP patch to get more eyes on it, the Meson test for 1.1.1 fails\n>> on Windows in CI which I will investigate next.\n> \n> I'm not a fan of the new PGAC_CHECK_OPENSSL. It creates a second place where the OpenSSL version number has to be updated. We had this carefully constructed so that there is only one place that OPENSSL_API_COMPAT is defined and that is the only place that needs to be updated. We put the setting of OPENSSL_API_COMPAT into configure so that the subsequent OpenSSL tests would use it, and if the API number higher than what the library supports, the tests should fail.\n\nBut does that actually work? If I change the API_COMPAT to the 1.1.1 version\nnumber and run configure against 1.0.2 it passes just fine. Am I missing some\nclever trick here?\n\nThe reason to expand the check is to ensure that we have the version we want\nfor both OpenSSL and LibreSSL, and deprecating OpenSSL versions isn't all that\ncommonly done so having to change the version in the check didn't seem that\ninvasive to me.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 5 Apr 2024 23:48:30 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Fri, Apr 5, 2024 at 2:48 PM Daniel Gustafsson <[email protected]> wrote:\n> But does that actually work? If I change the API_COMPAT to the 1.1.1 version\n> number and run configure against 1.0.2 it passes just fine. Am I missing some\n> clever trick here?\n\nSimilarly, I changed my API_COMPAT to a nonsense 0x90100000L and\n- a 1.1.1 build succeeds\n- a 3.0 build fails\n- LibreSSL doesn't appear to care or check that #define at all\n\n--Jacob\n\n\n",
"msg_date": "Fri, 5 Apr 2024 14:53:22 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 5 Apr 2024, at 22:55, Jacob Champion <[email protected]> wrote:\n> \n> On Fri, Apr 5, 2024 at 9:59 AM Daniel Gustafsson <[email protected]> wrote:\n>> Attached is a WIP patch to get more eyes on it, the Meson test for 1.1.1 fails\n>> on Windows in CI which I will investigate next.\n\nThe attached version fixes the Windows 1.1.1 check which was missing the\ninclude directory.\n\n> The changes for SSL_OP_NO_CLIENT_RENEGOTIATION and\n> SSL_R_VERSION_TOO_LOW look good to me.\n\nThanks!\n\n>> +++ b/src/include/pg_config.h.in\n>> @@ -84,9 +84,6 @@\n>> /* Define to 1 if you have the <crtdefs.h> header file. */\n>> #undef HAVE_CRTDEFS_H\n>> \n>> -/* Define to 1 if you have the `CRYPTO_lock' function. */\n>> -#undef HAVE_CRYPTO_LOCK\n> \n> An autoreconf run on my machine pulls in more changes (getting rid of\n> the symbols we no longer check for).\n\nAh yes, missed updating before formatting the patch. Done in the attached.\n\n--\nDaniel Gustafsson",
"msg_date": "Sat, 6 Apr 2024 00:31:57 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Fri, Apr 5, 2024 at 3:32 PM Daniel Gustafsson <[email protected]> wrote:\n> > An autoreconf run on my machine pulls in more changes (getting rid of\n> > the symbols we no longer check for).\n>\n> Ah yes, missed updating before formatting the patch. Done in the attached.\n\nThe commit subject may still need to be reverted to how you had it\noriginally, unless you are keeping my omission of \"1.1.0\" on purpose.\n\nI've tested (with Meson) on LibreSSL 3.3 to 3.8, and verified that 2.7\nto 3.2 now fail to configure. Similarly, OpenSSL 1.0.2 and 1.1.0 fail\nto configure, and I ran tests with 1.1.1 to 3.3. I did a quick smoke\ntest with autoconf to make sure that old versions are rejected there\ntoo, but I didn't run all the tests again.\n\nMaybe there was a good reason for them to do it, but I'm kind of\namazed that LibreSSL camped on the 2.x version number, then revved to\na 3.x line anyway and camped on those numbers too, so that Meson can't\njust rely on pkgconfig to figure out which version we have.\n\n--Jacob\n\n\n",
"msg_date": "Fri, 5 Apr 2024 17:14:51 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On 05.04.24 23:48, Daniel Gustafsson wrote:\n> The reason to expand the check is to ensure that we have the version we want\n> for both OpenSSL and LibreSSL, and deprecating OpenSSL versions isn't all that\n> commonly done so having to change the version in the check didn't seem that\n> invasive to me.\n\nWhy do we need to check for the versions at all? We should just check \nfor the functions we need. At least that's always been the normal \napproach in configure.\n\n\n",
"msg_date": "Sat, 6 Apr 2024 08:02:23 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 6 Apr 2024, at 08:02, Peter Eisentraut <[email protected]> wrote:\n> \n> On 05.04.24 23:48, Daniel Gustafsson wrote:\n>> The reason to expand the check is to ensure that we have the version we want\n>> for both OpenSSL and LibreSSL, and deprecating OpenSSL versions isn't all that\n>> commonly done so having to change the version in the check didn't seem that\n>> invasive to me.\n> \n> Why do we need to check for the versions at all? We should just check for the functions we need. At least that's always been the normal approach in configure.\n\nWe could, but finding a stable set of functions which identifies the version of\nOpenSSL *and* LibreSSL that we want, and their successors, while not matching\nany older versions seemed more opaque than testing two numeric values. The\nsuggested check is modelled on the LDAP check which tests for an explicit\nversion in a header file (albeit not for erroring out).\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Sat, 6 Apr 2024 09:12:01 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "Daniel Gustafsson <[email protected]> writes:\n>> On 6 Apr 2024, at 08:02, Peter Eisentraut <[email protected]> wrote:\n>> Why do we need to check for the versions at all? We should just check for the functions we need. At least that's always been the normal approach in configure.\n\n> We could, but finding a stable set of functions which identifies the version of\n> OpenSSL *and* LibreSSL that we want, and their successors, while not matching\n> any older versions seemed more opaque than testing two numeric values.\n\nI don't think you responded to Peter's point at all. The way autoconf\nis designed to work is explicitly NOT to try to identify the exact\nversion of $whatever. Rather, the idea is to probe for the API\nfeatures that you want to rely on: functions, macros, struct fields,\nor the like. If you can't point to an important API difference\nbetween 1.0.2 and 1.1.1, why drop support for 1.0.2?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Apr 2024 10:04:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 6 Apr 2024, at 16:04, Tom Lane <[email protected]> wrote:\n> Daniel Gustafsson <[email protected]> writes:\n>>> On 6 Apr 2024, at 08:02, Peter Eisentraut <[email protected]> wrote:\n\n>>> Why do we need to check for the versions at all? We should just check for the functions we need. At least that's always been the normal approach in configure.\n> \n>> We could, but finding a stable set of functions which identifies the version of\n>> OpenSSL *and* LibreSSL that we want, and their successors, while not matching\n>> any older versions seemed more opaque than testing two numeric values.\n> \n> I don't think you responded to Peter's point at all. The way autoconf\n> is designed to work is explicitly NOT to try to identify the exact\n> version of $whatever. Rather, the idea is to probe for the API\n> features that you want to rely on: functions, macros, struct fields,\n> or the like. If you can't point to an important API difference\n> between 1.0.2 and 1.1.1, why drop support for 1.0.2?\n\nMy apologies, I thought I did but clearly failed. My point was that this is a\nspecial/corner case where we try to find one of two different libraries (with\ndifferent ideas about backwards compatability etc) for supporting a single\nthing. So instead I tested for the explicit versions like how we already test\nfor the exact Perl version in config/perl.m4 (albeit that a library and a\nprogram are two different things of course).\n\nIn bumping we want to move to 1.1.1 since that's the first version with the\nrewritten RNG which is fork-safe by design, something PostgreSQL clearly\nbenefits from. There is no new API for this to gate on though. For LibreSSL\nwe want 3.3.2 to a) ensure we have coverage in the BF and b) since it's the\nfirst version where the tests pass due to error message alignment with OpenSSL.\nThe combination of these gets rid of lots of specialcased #ifdef soup. I\nwasn't however able to find a specific API call which is unique to the two\nversion which we rely on.\n\nTesting for the presence of an API provided and introduced by both libraries in\nthe version we're interested in, but which we don't use, is the alternative but\nI thought that would be more frowned upon. EVP_PKEY_new_CMAC_key() was\nintroduced in 1.1.1 and LibreSSL 3.3.2, so an AC_CHECK_FUNCS for that, as in\nthe attached, achieves the version check but pollutes pg_config.h with a define\nwhich will never be used which seemed a bit ugly.\n\n--\nDaniel Gustafsson",
"msg_date": "Sat, 6 Apr 2024 19:47:43 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Sat, Apr 06, 2024 at 07:47:43PM +0200, Daniel Gustafsson wrote:\n> My apologies, I thought I did but clearly failed. My point was that this is a\n> special/corner case where we try to find one of two different libraries (with\n> different ideas about backwards compatability etc) for supporting a single\n> thing. So instead I tested for the explicit versions like how we already test\n> for the exact Perl version in config/perl.m4 (albeit that a library and a\n> program are two different things of course).\n\n+ # Function introduced in OpenSSL 1.1.1 and LibreSSL 3.3.2\n+ AC_CHECK_FUNCS([EVP_PKEY_new_CMAC_key], [], [AC_MSG_ERROR([OpenSSL 1.1.1 or later is required for SSL support])]) \n\nI can see why you want to do that, but we've always relied on\ncompilation failures while documenting the versions supported. I\ndon't disagree to your point of detecting that earlier, but it sounds\nlike this should be a separate patch separate from the one removing\nsupport for OpenSSL 1.0.2 and 1.1.0, at least, because you are solving\ntwo problems at once.\n\n> In bumping we want to move to 1.1.1 since that's the first version with the\n> rewritten RNG which is fork-safe by design, something PostgreSQL clearly\n> benefits from. There is no new API for this to gate on though. For LibreSSL\n> we want 3.3.2 to a) ensure we have coverage in the BF and b) since it's the\n> first version where the tests pass due to error message alignment with OpenSSL.\n> The combination of these gets rid of lots of specialcased #ifdef soup. I\n> wasn't however able to find a specific API call which is unique to the two\n> version which we rely on.\n\nBased on the state of the patch, all the symbols cleaned up in\npg_config.h.in would be removed only by dropping support for\n1.0.2. The routines of protocol_openssl.c exist in 1.1.0. libpq\ninternal locking can be removed by dropping support for 1.0.2. So\na bunch of ifdefs are removed with 1.0.2 support, but that's much,\nmuch less cleaned once 1.1.0 is removed. And pg_strong_random.c stuff\nwould still remain around.\n\n> Testing for the presence of an API provided and introduced by both libraries in\n> the version we're interested in, but which we don't use, is the alternative but\n> I thought that would be more frowned upon. EVP_PKEY_new_CMAC_key() was\n> introduced in 1.1.1 and LibreSSL 3.3.2, so an AC_CHECK_FUNCS for that, as in\n> the attached, achieves the version check but pollutes pg_config.h with a define\n> which will never be used which seemed a bit ugly.\n\nTwo lines in pg_config.h is a minimal cost. That does not look like a\nbig deal to me.\n\n-\t * Make sure processes do not share OpenSSL randomness state. This is no\n-\t * longer required in OpenSSL 1.1.1 and later versions, but until we drop\n-\t * support for version < 1.1.1 we need to do this.\n+\t * Make sure processes do not share OpenSSL randomness state. This is in\n+\t * theory no longer be required in OpenSSL 1.1.1 and later versions, but\n+\t * there is no harm in taking extra precautions.\n\nI was wondering if this should also document what you've mentioned,\naka that OpenSSL still found ways to break the randomness state and\nthat this is a cheap insurance against future mistakes that could\nhappen in this area.\n--\nMichael",
"msg_date": "Mon, 8 Apr 2024 07:46:18 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 8 Apr 2024, at 00:46, Michael Paquier <[email protected]> wrote:\n> \n> On Sat, Apr 06, 2024 at 07:47:43PM +0200, Daniel Gustafsson wrote:\n>> My apologies, I thought I did but clearly failed. My point was that this is a\n>> special/corner case where we try to find one of two different libraries (with\n>> different ideas about backwards compatability etc) for supporting a single\n>> thing. So instead I tested for the explicit versions like how we already test\n>> for the exact Perl version in config/perl.m4 (albeit that a library and a\n>> program are two different things of course).\n> \n> + # Function introduced in OpenSSL 1.1.1 and LibreSSL 3.3.2\n> + AC_CHECK_FUNCS([EVP_PKEY_new_CMAC_key], [], [AC_MSG_ERROR([OpenSSL 1.1.1 or later is required for SSL support])]) \n> \n> I can see why you want to do that, but we've always relied on\n> compilation failures while documenting the versions supported. I\n> don't disagree to your point of detecting that earlier, but it sounds\n> like this should be a separate patch separate from the one removing\n> support for OpenSSL 1.0.2 and 1.1.0, at least, because you are solving\n> two problems at once.\n> \n>> In bumping we want to move to 1.1.1 since that's the first version with the\n>> rewritten RNG which is fork-safe by design, something PostgreSQL clearly\n>> benefits from. There is no new API for this to gate on though. For LibreSSL\n>> we want 3.3.2 to a) ensure we have coverage in the BF and b) since it's the\n>> first version where the tests pass due to error message alignment with OpenSSL.\n>> The combination of these gets rid of lots of specialcased #ifdef soup. I\n>> wasn't however able to find a specific API call which is unique to the two\n>> version which we rely on.\n> \n> Based on the state of the patch, all the symbols cleaned up in\n> pg_config.h.in would be removed only by dropping support for\n> 1.0.2. The routines of protocol_openssl.c exist in 1.1.0. libpq\n> internal locking can be removed by dropping support for 1.0.2. So\n> a bunch of ifdefs are removed with 1.0.2 support, but that's much,\n> much less cleaned once 1.1.0 is removed. \n\nIf we are settling for removing 1.0.2 we need to make sure there is 1.1.0\nbuildfarm animals that actually run the sslcheck. 1.1.0 was never an LTS\nrelease so it's presence in distributions is far less widespread. I would\nprefer 1.1.1 but, either way, we have ample time to discuss that during the v18\ncycle.\n\n> And pg_strong_random.c stuff would still remain around.\n\nIt stays but as belts-and-suspenders safety against bugs, not as a requirement.\n\n>> Testing for the presence of an API provided and introduced by both libraries in\n>> the version we're interested in, but which we don't use, is the alternative but\n>> I thought that would be more frowned upon. EVP_PKEY_new_CMAC_key() was\n>> introduced in 1.1.1 and LibreSSL 3.3.2, so an AC_CHECK_FUNCS for that, as in\n>> the attached, achieves the version check but pollutes pg_config.h with a define\n>> which will never be used which seemed a bit ugly.\n> \n> Two lines in pg_config.h is a minimal cost. That does not look like a\n> big deal to me.\n> \n> -\t * Make sure processes do not share OpenSSL randomness state. This is no\n> -\t * longer required in OpenSSL 1.1.1 and later versions, but until we drop\n> -\t * support for version < 1.1.1 we need to do this.\n> +\t * Make sure processes do not share OpenSSL randomness state. 
This is in\n> +\t * theory no longer be required in OpenSSL 1.1.1 and later versions, but\n> +\t * there is no harm in taking extra precautions.\n> \n> I was wondering if this should also document what you've mentioned,\n> aka that OpenSSL still found ways to break the randomness state and\n> that this is a cheap insurance against future mistakes that could\n> happen in this area.\n\nI'm not quite sure how stable Github links are over time, but I guess we could\npost the commit sha for the bug in OpenSSL (as well as the fix) which is a\nstable documentation of how it can be subtly broken.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 8 Apr 2024 01:01:35 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On 06.04.24 19:47, Daniel Gustafsson wrote:\n> In bumping we want to move to 1.1.1 since that's the first version with the\n> rewritten RNG which is fork-safe by design, something PostgreSQL clearly\n> benefits from.\n\nI think it might be better to separate this into two steps:\n\n1. Move to 1.1.0. This is an API update. Change OPENSSL_API_COMPAT, \nand remove a bunch of code that no longer needs to be conditional. We \ncould check for a representative function like OPENSSL_init_ssl() in \nconfigure/meson, or we could just let the compilation fail with older \nversions.\n\n2. Move to 1.1.1. I understand this has to do with the fork-safety of \npg_strong_random(), and it's not an API change but a behavior change. \nLet's make this association clearer in the code. For example, add a \nversion check or assertion about this into pg_strong_random() itself.\n\nI don't know how LibreSSL interacts with either of these two points. \nThat's something that could be clearer.\n\nSome more detailed review on the v6 patch:\n\n* doc/src/sgml/libpq.sgml\n\nThis small documentation patch could be committed forthwith.\n\n* src/backend/libpq/be-secure-openssl.c\n\n+#include <openssl/bn.h>\n\nThis patch doesn't appear to add anything, so why does it need a new \ninclude?\n\nCould the additions of SSL_OP_NO_CLIENT_RENEGOTIATION and \nSSL_R_VERSION_TOO_LOW be separate patches?\n\n* src/common/hmac_openssl.c\n\nThere appears to be some unrelated refactoring happening here?\n\n* src/include/common/openssl.h\n\nIs the comment no longer applicable to OpenSSL, only to LibreSSL?\n\n* src/port/pg_strong_random.c\n\nI would prefer to remove pg_strong_random_init() if it's no longer \nuseful. I mean, if we leave it as is, and we are not removing any \ncallers, then we are effectively continuing to support OpenSSL <1.1.1, \nright?\n\n\n\n",
"msg_date": "Wed, 10 Apr 2024 09:31:16 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Wed, Apr 10, 2024 at 12:31 AM Peter Eisentraut <[email protected]> wrote:\n> * src/backend/libpq/be-secure-openssl.c\n>\n> +#include <openssl/bn.h>\n>\n> This patch doesn't appear to add anything, so why does it need a new\n> include?\n\nThis one was mine -- it was an indirect header dependency that was\neffectively removed in 1.1.0 and later, due to the bump to\nOPENSSL_API_COMPAT [1]. We have to depend on it directly now.\n\n--Jacob\n\n[1] https://github.com/openssl/openssl/blob/b372b1f764/include/openssl/dh.h#L20-L22\n\n\n",
"msg_date": "Wed, 10 Apr 2024 06:10:56 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Wed, Apr 10, 2024 at 09:31:16AM +0200, Peter Eisentraut wrote:\n> I think it might be better to separate this into two steps:\n> \n> 1. Move to 1.1.0. This is an API update. Change OPENSSL_API_COMPAT, and\n> remove a bunch of code that no longer needs to be conditional. We could\n> check for a representative function like OPENSSL_init_ssl() in\n> configure/meson, or we could just let the compilation fail with older\n> versions.\n> \n> 2. Move to 1.1.1. I understand this has to do with the fork-safety of\n> pg_strong_random(), and it's not an API change but a behavior change. Let's\n> make this association clearer in the code. For example, add a version check\n> or assertion about this into pg_strong_random() itself.\n\n+1 for a split and a two-step move. The areas cleaned up are not\nreally dependent.\n\n> I don't know how LibreSSL interacts with either of these two points. That's\n> something that could be clearer.\n\nNot looked at that, unfortunately. Cutting to one specific version of\nLibreSSL would help.\n\n> I would prefer to remove pg_strong_random_init() if it's no longer useful.\n> I mean, if we leave it as is, and we are not removing any callers, then we\n> are effectively continuing to support OpenSSL <1.1.1, right?\n\nI'd rather see it gone too, at the end, but I also get that the\nconcerns from Daniel are worth keeping in mind.\n--\nMichael",
"msg_date": "Thu, 11 Apr 2024 07:43:24 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 10 Apr 2024, at 09:31, Peter Eisentraut <[email protected]> wrote:\n\n> I think it might be better to separate this into two steps:\n\nFair enough.\n\n> 1. Move to 1.1.0. This is an API update. Change OPENSSL_API_COMPAT, and remove a bunch of code that no longer needs to be conditional. We could check for a representative function like OPENSSL_init_ssl() in configure/meson, or we could just let the compilation fail with older versions.\n\nThe attached 0002 bumps the minimum required version to 1.1.0 with a hard error\nin autoconf/meson, and removes all the 1.0.2 support code.\n\nI think the documentation for PQinitOpenSSL should be reworded from \"you have\nto do this, unless you run 1.1.0 or later\" to \"If you run 1.1.0, you need to do\nthis). As it is now the important bit of the paragrapg is at the end rather\nthan in the beginning. Trying this I didn't find a wording which seemed like\nan improvement though, suggestions are welcome.\n\n> 2. Move to 1.1.1. I understand this has to do with the fork-safety of pg_strong_random(), and it's not an API change but a behavior change. Let's make this association clearer in the code. For example, add a version check or assertion about this into pg_strong_random() itself.\n\n0005 moves the fork safety init inline with calling pg_strong_random, and\nremoves it for everyone else. This allows 1.1.0 to be supported as we\neffectively are at the 1.1.0 API level, at the cost of calls for strong random\nbeing slower on 1.1.0. An unscientific guess based on packaged OpenSSL\nversions and the EOL and ELS/LTS status of 1.1.0, is that the number of\nproduction installs of postgres 17 using OpenSSL 1.1.0 is close to zero.\n\n> I don't know how LibreSSL interacts with either of these two points. That's something that could be clearer.\n\nThe oldest LibreSSL we have in the buildfarm is 3.2 (from OpenBSD 6.8), which\nthe attached version supports and passes tests with. LibreSSL has been\nproviding fork safety since 2.0.2 which is well into the past.\n\n> * doc/src/sgml/libpq.sgml\n> \n> This small documentation patch could be committed forthwith.\n\nAgreed, separated into 0001 in the attached and can be committed regardless of\nthe remaining ones.\n\n> * src/backend/libpq/be-secure-openssl.c\n> \n> +#include <openssl/bn.h>\n> \n> This patch doesn't appear to add anything, so why does it need a new include?\n\nAs mentioned downthread, an indirect inclusion was removed so we need to\nexplicitly include it.\n\n> Could the additions of SSL_OP_NO_CLIENT_RENEGOTIATION and SSL_R_VERSION_TOO_LOW be separate patches?\n\nThey can, done in the attached.\n\nSSL_R_VERSION_TOO_LOW was introduced quite recently in LibreSSL 3.6.3 (OpenBSD\n7.2), but splitting the check for TOO_LOW and TOO_HIGH into two seems pretty\nuncontroversial and a good way to get ever so slightly better error handling on\nrecent LibreSSL.\n\nSSL renegotiation has been supported for much longer in LibreSSL so adding that\nto make OpenSSL and LibreSSL support slightly more on par seems seems like a\ngood idea regardless of version bump.\n\n> * src/common/hmac_openssl.c\n> \n> There appears to be some unrelated refactoring happening here?\n\nI assume you mean changing back to FRONTEND from USE_RESOWNER_FOR_HMAC, the\nlatter was added recently in 38698dd38 and have never shipped, so it seemed\nmore backpatch-friendly to move back. 
Given the amount of changes it probably\nwon't move the needle though so reverted.\n\n> * src/include/common/openssl.h\n> \n> Is the comment no longer applicable to OpenSSL, only to LibreSSL?\n\nOpenSSL has since 0.9.8 defined TLS_MAX_VERSION which points highest version\nTLS protocol supported, but AFAIK there is no such construction in LibreSSL.\nAssuming I didn't totally misunderstand the comment of course.\n\n> * src/port/pg_strong_random.c\n> \n> I would prefer to remove pg_strong_random_init() if it's no longer useful. I mean, if we leave it as is, and we are not removing any callers, then we are effectively continuing to support OpenSSL <1.1.1, right?\n\nThe attached removes pg_strong_random_init and instead calls it explicitly for\n1.1.0 users by checking the OpenSSL version.\n\nIs the attached split in line with how you were thinking about it?\n\n--\nDaniel Gustafsson",
"msg_date": "Fri, 12 Apr 2024 14:42:57 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Fri, Apr 12, 2024 at 02:42:57PM +0200, Daniel Gustafsson wrote:\n>> On 10 Apr 2024, at 09:31, Peter Eisentraut <[email protected]> wrote:\n>> 2. Move to 1.1.1. I understand this has to do with the fork-safety of pg_strong_random(), and it's not an API change but a behavior change. Let's make this association clearer in the code. For example, add a version check or assertion about this into pg_strong_random() itself.\n\n> 0005 moves the fork safety init inline with calling pg_strong_random, and\n> removes it for everyone else. This allows 1.1.0 to be supported as we\n> effectively are at the 1.1.0 API level, at the cost of calls for strong random\n> being slower on 1.1.0. An unscientific guess based on packaged OpenSSL\n> versions and the EOL and ELS/LTS status of 1.1.0, is that the number of\n> production installs of postgres 17 using OpenSSL 1.1.0 is close to zero.\n\nIt is only necessary to call RAND_poll once after forking. Wouldn't\nit be OK to use a static flag and use the initialization once?\n\n>> * src/port/pg_strong_random.c\n>> \n>> I would prefer to remove pg_strong_random_init() if it's no longer\n>> useful. I mean, if we leave it as is, and we are not removing any\n>> callers, then we are effectively continuing to support OpenSSL\n>> <1.1.1, right?\n> \n> The attached removes pg_strong_random_init and instead calls it explicitly for\n> 1.1.0 users by checking the OpenSSL version.\n> \n> Is the attached split in line with how you were thinking about it?\n\nIf I may, 0001 looks sensible here. The bits from 0003 and 0004 could\nbe applied before 0002, as well.\n\n--- a/src/backend/postmaster/fork_process.c\n+++ b/src/backend/postmaster/fork_process.c\n@@ -110,9 +110,6 @@ fork_process(void)\n \t\t\t\tclose(fd);\n \t\t\t}\n \t\t}\n-\n-\t\t/* do post-fork initialization for random number generation */\n-\t\tpg_strong_random_init();\n\nPerhaps you intented this diff to be in 0005 rather than in 0002?\nWith 0002 applied, only support for 1.0.2 is removed, not 1.1.0 yet.\n\n pg_strong_random(void *buf, size_t len)\n {\n \tint\t\t\ti;\n \n+#if (OPENSSL_VERSION_NUMBER <= 0x10100000L)\n+\t/*\n+\t * Make sure processes do not share OpenSSL randomness state. This is not\n+\t * requred on LibreSSL and no longer required in OpenSSL 1.1.1 and later\n+\t * versions.\n+\t */\n+\tRAND_poll();\n+#endif\n\ns/requred/required/. Rather than calling always RAND_poll(), this\ncould use a static flag to call it only once when pg_strong_random is\ncalled for the first time. I would not mind seeing this part entirely\ngone with the removal of support for 1.1.0.\n--\nMichael",
"msg_date": "Mon, 15 Apr 2024 14:04:12 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
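To make the suggestion in the message above concrete, here is a minimal C sketch (not the committed PostgreSQL code; the function name and flag are illustrative) of a pg_strong_random() that re-seeds once per process, gated on pre-1.1.1 OpenSSL. Build with -lcrypto; the exact version test is discussed further down the thread.

/* Illustrative sketch only -- assumes pre-1.1.1 OpenSSL needs an explicit re-seed. */
#include <stdbool.h>
#include <stddef.h>

#include <openssl/opensslv.h>
#include <openssl/rand.h>

bool
pg_strong_random_sketch(void *buf, size_t len)
{
#if OPENSSL_VERSION_NUMBER < 0x10101000L	/* anything before 1.1.1 */
	/*
	 * Older OpenSSL can share RNG state across a fork, so mix in
	 * process-local entropy, but only on the first call.
	 */
	static bool rand_initialized = false;

	if (!rand_initialized)
	{
		RAND_poll();
		rand_initialized = true;
	}
#endif

	/* RAND_bytes() returns 1 on success. */
	return RAND_bytes(buf, (int) len) == 1;
}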
{
"msg_contents": "> On 15 Apr 2024, at 07:04, Michael Paquier <[email protected]> wrote:\n> On Fri, Apr 12, 2024 at 02:42:57PM +0200, Daniel Gustafsson wrote:\n\n>> Is the attached split in line with how you were thinking about it?\n> \n> If I may, 0001 looks sensible here. The bits from 0003 and 0004 could\n> be applied before 0002, as well.\n\nAgreed, once we are in post-freeze I think those three are mostly ready to go.\n\n> - /* do post-fork initialization for random number generation */\n> - pg_strong_random_init();\n> \n> Perhaps you intented this diff to be in 0005 rather than in 0002?\n> With 0002 applied, only support for 1.0.2 is removed, not 1.1.0 yet.\n\nYes, nice catch, that was a mistake in splitting up the patch into multiple\npieces, it should be in the 0005 patch for strong random. Fixed.\n\n> s/requred/required/.\n\nFixed.\n\n> Rather than calling always RAND_poll(), this\n> could use a static flag to call it only once when pg_strong_random is\n> called for the first time.\n\nAgreed, we can good that. Fixed.\n\n> I would not mind seeing this part entirely\n> gone with the removal of support for 1.1.0.\n\nIf we want to keep autoconf from checking versions and just check compatibility\n(with our code) then we will remain at 1.1.0 compatibility. The only 1.1.1 API\nwe use is not present in LibreSSL so we can't really place a hard restriction\non that. It might be that keeping it for now, and removing it later during the\nv18 cycle as we modernize our OpenSSL code (which I hope to find time to work\non) and make use of newer 1.1.1 API:s. That way we can keep our autoconf/meson\nchecks consistent across library checks. If we end up with no new API:s to\ncheck for by the time the last commitfest of v18 rolls around, we can revisit\nthe decision then.\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 15 Apr 2024 11:07:05 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Mon, Apr 15, 2024 at 11:07:05AM +0200, Daniel Gustafsson wrote:\n> On 15 Apr 2024, at 07:04, Michael Paquier <[email protected]> wrote:\n>> On Fri, Apr 12, 2024 at 02:42:57PM +0200, Daniel Gustafsson wrote:\n>>> Is the attached split in line with how you were thinking about it?\n>> \n>> If I may, 0001 looks sensible here. The bits from 0003 and 0004 could\n>> be applied before 0002, as well.\n> \n> Agreed, once we are in post-freeze I think those three are mostly\n> ready to go.\n\nIs there a point to wait for 0001, 0003 and 0004, though, and\nshouldn't these three be backpatched? 0001 is certainly OK as a\ndoc-only change to be consistent across the board without waiting 5\nmore years. 0003 and 0004 are conditional and depend on if\nSSL_R_VERSION_TOO_LOW and SSL_OP_NO_CLIENT_RENEGOTIATION are defined\nat compile-time. 0003 is much more important, and note that\n01e6f1a842f4 has been backpatched all the way down. 0004 is nice,\nstill not strongly mandatory.\n\n>> Rather than calling always RAND_poll(), this\n>> could use a static flag to call it only once when pg_strong_random is\n>> called for the first time.\n> \n> Agreed, we can good that. Fixed.\n\n+#if (OPENSSL_VERSION_NUMBER <= 0x10100000L)\n+\tstatic bool rand_initialized = false;\n\nThis does not look right. At the top of upstream's branch\nOpenSSL_1_1_0-stable, OPENSSL_VERSION_NUMBER is 0x101000d0L, so the\ninitialization would be missed for any version in the 1.1.0 series\nexcept the GA one without this code being compiled, no?\n\n>> I would not mind seeing this part entirely\n>> gone with the removal of support for 1.1.0.\n> \n> If we want to keep autoconf from checking versions and just check compatibility\n> (with our code) then we will remain at 1.1.0 compatibility. The only 1.1.1 API\n> we use is not present in LibreSSL so we can't really place a hard restriction\n> on that. It might be that keeping it for now, and removing it later during the\n> v18 cycle as we modernize our OpenSSL code (which I hope to find time to work\n> on) and make use of newer 1.1.1 API:s. That way we can keep our autoconf/meson\n> checks consistent across library checks. If we end up with no new API:s to\n> check for by the time the last commitfest of v18 rolls around, we can revisit\n> the decision then.\n\nOkay, fine by me.\n--\nMichael",
"msg_date": "Tue, 16 Apr 2024 08:03:41 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
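For readers following along, the version-number arithmetic above can be checked with a small standalone program (hypothetical, not part of any patch). Pre-3.0 OpenSSL encodes OPENSSL_VERSION_NUMBER as 0xMNNFFPPS -- major, minor, fix, patch letter, status -- so every released 1.1.0x ends in a non-zero status nibble and slips past a "<= 0x10100000L" test:

/* Hypothetical standalone demo, not PostgreSQL source. */
#include <stdio.h>

static void
check(const char *name, long version)
{
	int		old_guard = (version <= 0x10100000L);	/* guard quoted above */
	int		fixed_guard = (version < 0x10101000L);	/* anything before 1.1.1 */

	printf("%-10s 0x%08lx  old-guard=%d  fixed-guard=%d\n",
		   name, version, old_guard, fixed_guard);
}

int
main(void)
{
	check("1.1.0 GA", 0x1010000fL);		/* old=0 (missed!), fixed=1 */
	check("1.1.0m", 0x101000dfL);		/* old=0 (missed!), fixed=1 */
	check("1.1.1 GA", 0x1010100fL);		/* old=0, fixed=0 */
	return 0;
}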
{
"msg_contents": "> On 16 Apr 2024, at 01:03, Michael Paquier <[email protected]> wrote:\n> \n> On Mon, Apr 15, 2024 at 11:07:05AM +0200, Daniel Gustafsson wrote:\n>> On 15 Apr 2024, at 07:04, Michael Paquier <[email protected]> wrote:\n>>> On Fri, Apr 12, 2024 at 02:42:57PM +0200, Daniel Gustafsson wrote:\n>>>> Is the attached split in line with how you were thinking about it?\n>>> \n>>> If I may, 0001 looks sensible here. The bits from 0003 and 0004 could\n>>> be applied before 0002, as well.\n>> \n>> Agreed, once we are in post-freeze I think those three are mostly\n>> ready to go.\n> \n> Is there a point to wait for 0001, 0003 and 0004, though, and\n> shouldn't these three be backpatched? 0001 is certainly OK as a\n> doc-only change to be consistent across the board without waiting 5\n> more years. 0003 and 0004 are conditional and depend on if\n> SSL_R_VERSION_TOO_LOW and SSL_OP_NO_CLIENT_RENEGOTIATION are defined\n> at compile-time. 0003 is much more important, and note that\n> 01e6f1a842f4 has been backpatched all the way down. 0004 is nice,\n> still not strongly mandatory.\n\nI forgot (and didn't check) that we backpatched 01e6f1a842f4, with that in mind\nI agree that we should backpatch 0003 as well to put LibreSSL on par as much as\nwe can. 0004 is a fix for the LibreSSL support, not adding anything new, so\npushing that to master now makes sense. Unless objections are raised I'll push\n0001, 0003 and 0004 shortly. 0002 and 0005 can hopefully be addressed in the\nJuly commitfest.\n\n>>> Rather than calling always RAND_poll(), this\n>>> could use a static flag to call it only once when pg_strong_random is\n>>> called for the first time.\n>> \n>> Agreed, we can good that. Fixed.\n> \n> +#if (OPENSSL_VERSION_NUMBER <= 0x10100000L)\n> + static bool rand_initialized = false;\n> \n> This does not look right. At the top of upstream's branch\n> OpenSSL_1_1_0-stable, OPENSSL_VERSION_NUMBER is 0x101000d0L, so the\n> initialization would be missed for any version in the 1.1.0 series\n> except the GA one without this code being compiled, no?\n\nMeh, I was clearly not caffeinated as that's a thinko with a typo attached to\nit. The check should be \"< 0x10101000\" to catch any version prior to 1.1.1.\nFixed.\n\n--\nDaniel Gustafsson",
"msg_date": "Tue, 16 Apr 2024 10:17:26 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On 16.04.24 10:17, Daniel Gustafsson wrote:\n> I forgot (and didn't check) that we backpatched 01e6f1a842f4, with that in mind\n> I agree that we should backpatch 0003 as well to put LibreSSL on par as much as\n> we can. 0004 is a fix for the LibreSSL support, not adding anything new, so\n> pushing that to master now makes sense. Unless objections are raised I'll push\n> 0001, 0003 and 0004 shortly. 0002 and 0005 can hopefully be addressed in the\n> July commitfest.\n\nReview of the latest batch:\n\n* v9-0001-Doc-Use-past-tense-for-things-which-happened-in-t.patch\n\nOk\n\n8 v9-0002-Remove-support-for-OpenSSL-1.0.2.patch\n\nOk, but maybe make the punctuation consistent here:\n\n+ # Function introduced in OpenSSL 1.0.2, not in LibreSSL.\n+ ['SSL_CTX_set_cert_cb'],\n+\n+ # Function introduced in OpenSSL 1.1.1, not in LibreSSL\n ['X509_get_signature_info'],\n\n* v9-0003-Support-disallowing-SSL-renegotiation-in-LibreSSL.patch\n\nok\n\n* v9-0004-Support-SSL_R_VERSION_TOO_LOW-on-LibreSSL.patch\n\nSeems ok, but the reason isn't clear to me. Are there LibreSSL versions \nthat have SSL_R_VERSION_TOO_LOW but not SSL_R_VERSION_TOO_HIGH? Maybe \nthis could be explained better.\n\nAlso, \"OpenSSL 7.2\" in the commit message probably meant \"OpenBSD\"?\n\n* v9-0005-Remove-pg_strong_random-initialization.patch\n\nI don't understand the reason for this phrase in the commit message: \n\"1.1.1 is being increasingly phased out from production use\". Did you \nmean 1.1.0 there?\n\nConditionally sticking the RAND_poll() into pg_strong_random(), does \nthat have the effect we want? It wouldn't reinitialize after a fork, \nAFAICT.\n\n\nIf everything is addressed, I agree that 0001, 0003, and 0004 can go \ninto PG17, the rest later.\n\n\n\n",
"msg_date": "Thu, 18 Apr 2024 12:53:43 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
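A tiny demo (hypothetical, not PostgreSQL code) of the fork concern raised above: a static "already seeded" flag set before fork() is inherited by the child, so a purely call-time guard never re-seeds in the child process.

#include <stdbool.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static bool rand_initialized = false;

static void
maybe_seed(const char *who)
{
	if (!rand_initialized)
	{
		printf("%s: would call RAND_poll() here\n", who);
		rand_initialized = true;
	}
	else
		printf("%s: flag already set, RAND_poll() skipped\n", who);
}

int
main(void)
{
	maybe_seed("parent");		/* seeds and sets the flag */

	if (fork() == 0)
	{
		/* the child inherits rand_initialized == true */
		maybe_seed("child");	/* prints "skipped" -- Peter's point */
		return 0;
	}

	wait(NULL);
	return 0;
}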
{
"msg_contents": "> On 18 Apr 2024, at 12:53, Peter Eisentraut <[email protected]> wrote:\n\n> Review of the latest batch:\n\nThanks for reviewing!\n\n> 8 v9-0002-Remove-support-for-OpenSSL-1.0.2.patch\n> \n> Ok, but maybe make the punctuation consistent here:\n\nFixed.\n\n> * v9-0004-Support-SSL_R_VERSION_TOO_LOW-on-LibreSSL.patch\n> \n> Seems ok, but the reason isn't clear to me. Are there LibreSSL versions that have SSL_R_VERSION_TOO_LOW but not SSL_R_VERSION_TOO_HIGH? Maybe this could be explained better.\n\nLibreSSL doesn't support SSL_R_VERSION_TOO_HIGH at all, they only support\n_TOO_LOW starting with the OpenBSD 7.2 release. I've expanded the commit\nmessage to document this.\n\n> Also, \"OpenSSL 7.2\" in the commit message probably meant \"OpenBSD\"?\n\nAh yes, fixed.\n\n> * v9-0005-Remove-pg_strong_random-initialization.patch\n> \n> I don't understand the reason for this phrase in the commit message: \"1.1.1 is being increasingly phased out from production use\". Did you mean 1.1.0 there?\n\nCorrect, I got lost among the version numbers it seems. Fixed.\n\n> Conditionally sticking the RAND_poll() into pg_strong_random(), does that have the effect we want? It wouldn't reinitialize after a fork, AFAICT.\n\nNo I think you're right, my previous version would have worked (but was ugly)\nbut this doesn't guarantee that. Thinking more about it maybe it's best to\njust keep the init function and have a version check for 1.1.0 there, making it\nan empty no-op for all other cases. When we move past 1.1.0 due to a new API\nrequirement we can blow it all away.\n\n> If everything is addressed, I agree that 0001, 0003, and 0004 can go into PG17, the rest later.\n\nAgreed, 0002 and 0005 are clearly for the v18 cycle.\n\n--\nDaniel Gustafsson",
"msg_date": "Thu, 18 Apr 2024 22:26:33 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
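A rough sketch, with the same illustrative naming as before, of the compromise described above: keep the post-fork hook but let it compile down to nothing unless built against pre-1.1.1 OpenSSL (LibreSSL reports 0x20000000L for OPENSSL_VERSION_NUMBER, so it is excluded as well).

/* Illustrative sketch only, not the actual patch. */
#include <openssl/opensslv.h>
#include <openssl/rand.h>

/* Called in each child process right after fork(). */
void
pg_strong_random_init_sketch(void)
{
#if OPENSSL_VERSION_NUMBER < 0x10101000L
	/* Pre-1.1.1 OpenSSL may share RNG state with the parent. */
	RAND_poll();
#endif
}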
{
"msg_contents": "On Thu, Apr 18, 2024 at 12:53:43PM +0200, Peter Eisentraut wrote:\n> If everything is addressed, I agree that 0001, 0003, and 0004 can go into\n> PG17, the rest later.\n\nAbout the PG17 bits, would you agree about a backpatch? Or perhaps\nyou disagree?\n--\nMichael",
"msg_date": "Fri, 19 Apr 2024 14:37:35 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 19 Apr 2024, at 07:37, Michael Paquier <[email protected]> wrote:\n> \n> On Thu, Apr 18, 2024 at 12:53:43PM +0200, Peter Eisentraut wrote:\n>> If everything is addressed, I agree that 0001, 0003, and 0004 can go into\n>> PG17, the rest later.\n> \n> About the PG17 bits, would you agree about a backpatch? Or perhaps\n> you disagree?\n\nIf we want to 0001 can be baclpatched to v15, 0004 to v13 and 0003 all the way.\nI don't have strong opinions either way.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 19 Apr 2024 08:55:32 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On 19.04.24 07:37, Michael Paquier wrote:\n> On Thu, Apr 18, 2024 at 12:53:43PM +0200, Peter Eisentraut wrote:\n>> If everything is addressed, I agree that 0001, 0003, and 0004 can go into\n>> PG17, the rest later.\n> \n> About the PG17 bits, would you agree about a backpatch? Or perhaps\n> you disagree?\n\nI don't think any of these need to be backpatched, at least right now.\n\n0001 is just a cosmetic documentation tweak, has no reason to be \nbackpatched.\n\n0003 adds new functionality for LibreSSL. While the code looks \nstraightforward, we have little knowledge about how it works in \npractice. How is the buildfarm coverage of LibreSSL (with SSL tests \nenabled!)? If people are keen on this, it might be better to get it \ninto PG17 and at least let to go through a few months of beta testing.\n\n0004 effectively just enhances an error message for LibreSSL; there is \nlittle reason to backpatch this.\n\n\n\n",
"msg_date": "Fri, 19 Apr 2024 10:06:26 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 19 Apr 2024, at 10:06, Peter Eisentraut <[email protected]> wrote:\n> \n> On 19.04.24 07:37, Michael Paquier wrote:\n>> On Thu, Apr 18, 2024 at 12:53:43PM +0200, Peter Eisentraut wrote:\n>>> If everything is addressed, I agree that 0001, 0003, and 0004 can go into\n>>> PG17, the rest later.\n>> About the PG17 bits, would you agree about a backpatch? Or perhaps\n>> you disagree?\n> \n> I don't think any of these need to be backpatched, at least right now.\n> \n> 0001 is just a cosmetic documentation tweak, has no reason to be backpatched.\n> \n> 0003 adds new functionality for LibreSSL. While the code looks straightforward, we have little knowledge about how it works in practice. How is the buildfarm coverage of LibreSSL (with SSL tests enabled!)? If people are keen on this, it might be better to get it into PG17 and at least let to go through a few months of beta testing.\n> \n> 0004 effectively just enhances an error message for LibreSSL; there is little reason to backpatch this.\n\nHearing no objections to this plan (and the posted v10), I'll go ahead with\n0001, 0003 and 0004 into v17 tomorrow.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 23 Apr 2024 22:08:13 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Tue, Apr 23, 2024 at 10:08:13PM +0200, Daniel Gustafsson wrote:\n> Hearing no objections to this plan (and the posted v10), I'll go ahead with\n> 0001, 0003 and 0004 into v17 tomorrow.\n\nWFM, thanks.\n--\nMichael",
"msg_date": "Wed, 24 Apr 2024 07:20:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 24 Apr 2024, at 00:20, Michael Paquier <[email protected]> wrote:\n> \n> On Tue, Apr 23, 2024 at 10:08:13PM +0200, Daniel Gustafsson wrote:\n>> Hearing no objections to this plan (and the posted v10), I'll go ahead with\n>> 0001, 0003 and 0004 into v17 tomorrow.\n> \n> WFM, thanks.\n\nDone. Attached are the two remaining patches, rebased over HEAD, for removing\nsupport for OpenSSL 1.0.2 in v18. Parking them in the commitfest for now.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 24 Apr 2024 13:31:12 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Wed, Apr 24, 2024 at 01:31:12PM +0200, Daniel Gustafsson wrote:\n> Done. Attached are the two remaining patches, rebased over HEAD, for removing\n> support for OpenSSL 1.0.2 in v18. Parking them in the commitfest for now.\n\nYou have mentioned once upthread the documentation of PQinitOpenSSL():\n However, this is unnecessary when using <productname>OpenSSL</productname>\n version 1.1.0 or later, as duplicate initializations are no longer problematic.\n\nWith 1.0.2's removal in place, this could be simplified more and the\npatch does not touch it. This relates also to pq_init_crypto_lib,\nwhich is gone with 0001. Of course, it is not possible to just remove\nPQinitOpenSSL() or old application may fail linking. Removing\npq_init_crypto_lib reduces any confusion around this part of the\ninitialization.\n\n0002 looks OK here.\n--\nMichael",
"msg_date": "Thu, 25 Apr 2024 12:49:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 25 Apr 2024, at 05:49, Michael Paquier <[email protected]> wrote:\n> \n> On Wed, Apr 24, 2024 at 01:31:12PM +0200, Daniel Gustafsson wrote:\n>> Done. Attached are the two remaining patches, rebased over HEAD, for removing\n>> support for OpenSSL 1.0.2 in v18. Parking them in the commitfest for now.\n> \n> You have mentioned once upthread the documentation of PQinitOpenSSL():\n> However, this is unnecessary when using <productname>OpenSSL</productname>\n> version 1.1.0 or later, as duplicate initializations are no longer problematic.\n> \n> With 1.0.2's removal in place, this could be simplified more and the\n> patch does not touch it. This relates also to pq_init_crypto_lib,\n> which is gone with 0001. Of course, it is not possible to just remove\n> PQinitOpenSSL() or old application may fail linking. Removing\n> pq_init_crypto_lib reduces any confusion around this part of the\n> initialization.\n\nThat's a good point, there is potential for more code removal here. The\nattached 0001 takes a stab at it while it's fresh in mind, I'll revisit before\nthe July CF to see if there is more that can be done.\n\n--\nDaniel Gustafsson\n\n\n\n\n",
"msg_date": "Sat, 27 Apr 2024 20:32:19 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 27 Apr 2024, at 20:32, Daniel Gustafsson <[email protected]> wrote:\n\n> That's a good point, there is potential for more code removal here. The\n> attached 0001 takes a stab at it while it's fresh in mind, I'll revisit before\n> the July CF to see if there is more that can be done.\n\n..and again with the attachment. Not enough coffee.\n\n--\nDaniel Gustafsson",
"msg_date": "Sat, 27 Apr 2024 20:33:55 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Sat, Apr 27, 2024 at 08:33:55PM +0200, Daniel Gustafsson wrote:\n> > On 27 Apr 2024, at 20:32, Daniel Gustafsson <[email protected]> wrote:\n> \n> > That's a good point, there is potential for more code removal here. The\n> > attached 0001 takes a stab at it while it's fresh in mind, I'll revisit before\n> > the July CF to see if there is more that can be done.\n> \n> ..and again with the attachment. Not enough coffee.\n\nMy remark was originally about pq_init_crypto_lib that does the\nlocking initialization, and your new patch a bit more, as of:\n\n- /* This stuff need be done only once. */\n- if (!SSL_initialized)\n- {\n-#ifdef HAVE_OPENSSL_INIT_SSL\n- OPENSSL_init_ssl(OPENSSL_INIT_LOAD_CONFIG, NULL);\n-#else\n- OPENSSL_config(NULL);\n- SSL_library_init();\n- SSL_load_error_strings();\n-#endif\n- SSL_initialized = true;\n- }\n\nOPENSSL_init_ssl() has replaced SSL_library_init(), marked as\ndeprecated, and even this step is mentioned as not required anymore\nwith 1.1.0~:\nhttps://www.openssl.org/docs/man1.1.1/man3/OPENSSL_init_ssl.html\n\nSame with OPENSSL_init_crypto(), replacing OPENSSL_config(), again not\nrequired in 1.1.0~:\nhttps://www.openssl.org/docs/man1.1.1/man3/OPENSSL_init_crypto.html\n\nSSL_load_error_strings() is recommended as not to use in 1.1.0,\nreplaced by the others:\nhttps://www.openssl.org/docs/man3.2/man3/SSL_load_error_strings.html\n\nWhile OpenSSL will be able to cope with that, how much of that applies\nto LibreSSL? SSL_load_error_strings(), OPENSSL_init_ssl(),\nOPENSSL_CONFIG() are OK based on the docs:\nhttps://man.archlinux.org/man/extra/libressl/libressl-OPENSSL_config.3.en\nhttps://man.archlinux.org/man/extra/libressl/libressl-OPENSSL_init_ssl.3.en\nhttps://man.archlinux.org/man/extra/libressl/libressl-ERR_load_crypto_strings.3.en\n\nSo +1 to remove all this code after a closer lookup. I would\nrecommend to update the documentation of PQinitSSL and PQinitOpenSSL\nto tell that these become useless and are deprecated.\n\n ERR_clear_error();\n-\n#ifdef USE_RESOWNER_FOR_HMAC\n\nSome noise diff.\n--\nMichael",
"msg_date": "Wed, 1 May 2024 13:21:32 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
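As a minimal illustration of why the removed block above is redundant on 1.1.0 and later (and on the LibreSSL versions cited): libssl now initializes itself on first use, and loading openssl.cnf -- what OPENSSL_config(NULL) used to do -- can be requested explicitly. A standalone example, not PostgreSQL code (build with -lssl -lcrypto):

#include <stdio.h>

#include <openssl/ssl.h>

int
main(void)
{
	/* Optional: ask for config loading; no SSL_library_init() et al. needed. */
	OPENSSL_init_ssl(OPENSSL_INIT_LOAD_CONFIG, NULL);

	SSL_CTX *ctx = SSL_CTX_new(TLS_method());

	printf("SSL_CTX_new: %s\n", ctx != NULL ? "ok" : "failed");
	if (ctx != NULL)
		SSL_CTX_free(ctx);
	return 0;
}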
{
"msg_contents": "> On 1 May 2024, at 06:21, Michael Paquier <[email protected]> wrote:\n\n> My remark was originally about pq_init_crypto_lib that does the\n> locking initialization, and your new patch a bit more, as of:\n> \n> ...\n> \n> So +1 to remove all this code after a closer lookup.\n\nThanks for review.\n\n> I would\n> recommend to update the documentation of PQinitSSL and PQinitOpenSSL\n> to tell that these become useless and are deprecated.\n\nThey are no-ops when linking against v18, but writing an extension which\ntargets all supported versions of postgres along with their respective\nsupported OpenSSL versions make them still required, or am I missing something?\n\n> ERR_clear_error();\n> -\n> #ifdef USE_RESOWNER_FOR_HMAC\n> \n> Some noise diff.\n\nWill fix with a new rebase when once the tree has settled down from the\npost-freeze fixups in this area.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 3 May 2024 10:39:15 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On 03.05.24 10:39, Daniel Gustafsson wrote:\n>> I would\n>> recommend to update the documentation of PQinitSSL and PQinitOpenSSL\n>> to tell that these become useless and are deprecated.\n> They are no-ops when linking against v18, but writing an extension which\n> targets all supported versions of postgres along with their respective\n> supported OpenSSL versions make them still required, or am I missing something?\n\nI don't think extensions come into play here, since this is libpq, so \nonly the shared library interface compatibility matters.\n\n\n\n",
"msg_date": "Fri, 3 May 2024 21:02:12 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 03.05.24 10:39, Daniel Gustafsson wrote:\n>> They are no-ops when linking against v18, but writing an extension which\n>> targets all supported versions of postgres along with their respective\n>> supported OpenSSL versions make them still required, or am I missing something?\n\n> I don't think extensions come into play here, since this is libpq, so \n> only the shared library interface compatibility matters.\n\nYeah, server-side extensions don't really seem to be at hazard,\nbut doesn't the argument apply to client-side applications and\nlibraries that want to work across different PG/OpenSSL versions?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 May 2024 15:21:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
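For the client-side case Tom mentions, an application that has to span old and new libpq/OpenSSL combinations can simply keep the call; against newer builds it is harmless. A hedged sketch (the connection string is just an example), built with -lpq:

#include <stdio.h>

#include <libpq-fe.h>

int
main(void)
{
	/*
	 * Explicitly tell libpq to initialize both libssl and libcrypto
	 * itself.  This matches the default behavior and is a no-op with
	 * the newer libraries discussed in this thread, but keeping the
	 * call is harmless for applications that also run against older
	 * libpq/OpenSSL combinations.
	 */
	PQinitOpenSSL(1, 1);

	PGconn *conn = PQconnectdb("sslmode=prefer dbname=postgres");

	if (PQstatus(conn) != CONNECTION_OK)
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
	else
		printf("connected, SSL in use: %s\n", PQsslInUse(conn) ? "yes" : "no");

	PQfinish(conn);
	return 0;
}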
{
"msg_contents": "> On 3 May 2024, at 21:21, Tom Lane <[email protected]> wrote:\n> \n> Peter Eisentraut <[email protected]> writes:\n>> On 03.05.24 10:39, Daniel Gustafsson wrote:\n>>> They are no-ops when linking against v18, but writing an extension which\n>>> targets all supported versions of postgres along with their respective\n>>> supported OpenSSL versions make them still required, or am I missing something?\n> \n>> I don't think extensions come into play here, since this is libpq, so \n>> only the shared library interface compatibility matters.\n> \n> Yeah, server-side extensions don't really seem to be at hazard,\n> but doesn't the argument apply to client-side applications and\n> libraries that want to work across different PG/OpenSSL versions?\n\nRight, I was using \"extension\" a bit carelessly, what I meant was client-side\napplications using libpq. \n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Sat, 4 May 2024 10:08:48 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Fri, May 03, 2024 at 10:39:15AM +0200, Daniel Gustafsson wrote:\n> They are no-ops when linking against v18, but writing an extension which\n> targets all supported versions of postgres along with their respective\n> supported OpenSSL versions make them still required, or am I missing something?\n\nYeah, that depends on how much version you expect your application to\nwork on. Still it seems to me that there's value in mentioning that\nif your application does not care about anything older than OpenSSL \n1.1.0, like PG 18 assuming that this patch is merged, then these calls\nare pointless for HEAD. The routine definitions would be around only\nfor the .so compatibility.\n--\nMichael",
"msg_date": "Tue, 7 May 2024 08:31:29 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 7 May 2024, at 01:31, Michael Paquier <[email protected]> wrote:\n> \n> On Fri, May 03, 2024 at 10:39:15AM +0200, Daniel Gustafsson wrote:\n>> They are no-ops when linking against v18, but writing an extension which\n>> targets all supported versions of postgres along with their respective\n>> supported OpenSSL versions make them still required, or am I missing something?\n> \n> Yeah, that depends on how much version you expect your application to\n> work on. Still it seems to me that there's value in mentioning that\n> if your application does not care about anything older than OpenSSL \n> 1.1.0, like PG 18 assuming that this patch is merged, then these calls\n> are pointless for HEAD. The routine definitions would be around only\n> for the .so compatibility.\n\nFair enough. I've taken a stab at documenting that the functions are\ndeprecated, while at the same time documenting when and how they can be used\n(and be useful). The attached also removes one additional comment in the\ntestcode which is now obsolete (since removing 1.0.1 support really), and fixes\nthe spurious whitespace you detected upthread.\n\n--\nDaniel Gustafsson",
"msg_date": "Tue, 7 May 2024 12:36:24 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Tue, May 07, 2024 at 12:36:24PM +0200, Daniel Gustafsson wrote:\n> Fair enough. I've taken a stab at documenting that the functions are\n> deprecated, while at the same time documenting when and how they can be used\n> (and be useful). The attached also removes one additional comment in the\n> testcode which is now obsolete (since removing 1.0.1 support really), and fixes\n> the spurious whitespace you detected upthread.\n\n+ This function is deprecated and only present for backwards compatibility,\n+ it does nothing.\n[...]\n+ <xref linkend=\"libpq-PQinitSSL\"/> and <xref linkend=\"libpq-PQinitOpenSSL\"/>\n+ are maintained for backwards compatibility, but are no longer required\n+ since <productname>PostgreSQL</productname> 18.\n\nLGTM, thanks for doing this!\n--\nMichael",
"msg_date": "Wed, 8 May 2024 09:45:24 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On 07.05.24 11:36, Daniel Gustafsson wrote:\n>> Yeah, that depends on how much version you expect your application to\n>> work on. Still it seems to me that there's value in mentioning that\n>> if your application does not care about anything older than OpenSSL\n>> 1.1.0, like PG 18 assuming that this patch is merged, then these calls\n>> are pointless for HEAD. The routine definitions would be around only\n>> for the .so compatibility.\n> \n> Fair enough. I've taken a stab at documenting that the functions are\n> deprecated, while at the same time documenting when and how they can be used\n> (and be useful). The attached also removes one additional comment in the\n> testcode which is now obsolete (since removing 1.0.1 support really), and fixes\n> the spurious whitespace you detected upthread.\n\nThe 0001 patch removes the functions pgtls_init_library() and \npgtls_init() but keeps the declarations in libpq-int.h. This should be \nfixed.\n\n\n\n",
"msg_date": "Thu, 11 Jul 2024 22:22:50 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 11 Jul 2024, at 23:22, Peter Eisentraut <[email protected]> wrote:\n\n> The 0001 patch removes the functions pgtls_init_library() and pgtls_init() but keeps the declarations in libpq-int.h. This should be fixed.\n\nAh, nice catch. Done in the attached rebase.\n\n--\nDaniel Gustafsson",
"msg_date": "Fri, 12 Jul 2024 22:42:25 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On 12.07.24 21:42, Daniel Gustafsson wrote:\n>> On 11 Jul 2024, at 23:22, Peter Eisentraut <[email protected]> wrote:\n> \n>> The 0001 patch removes the functions pgtls_init_library() and pgtls_init() but keeps the declarations in libpq-int.h. This should be fixed.\n> \n> Ah, nice catch. Done in the attached rebase.\n\nThis patch version looks good to me.\n\nSmall comments on the commit message of 0002: Typo \"checkig\". Also, \nmaybe the commit message title can be qualified a little, since we're \nnot really doing \"Remove pg_strong_random initialization.\" but something \nlike \"Remove unnecessary ...\"?\n\n\n\n",
"msg_date": "Sun, 14 Jul 2024 13:03:35 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 14 Jul 2024, at 14:03, Peter Eisentraut <[email protected]> wrote:\n> \n> On 12.07.24 21:42, Daniel Gustafsson wrote:\n>>> On 11 Jul 2024, at 23:22, Peter Eisentraut <[email protected]> wrote:\n>>> The 0001 patch removes the functions pgtls_init_library() and pgtls_init() but keeps the declarations in libpq-int.h. This should be fixed.\n>> Ah, nice catch. Done in the attached rebase.\n> \n> This patch version looks good to me.\n\nThanks for review, I will go ahead with this once back from vacation at the\ntail end of July when I can properly handle the BF.\n\n> Small comments on the commit message of 0002: Typo \"checkig\". Also, maybe the commit message title can be qualified a little, since we're not really doing \"Remove pg_strong_random initialization.\" but something like \"Remove unnecessary ...\"?\n\nGood points, will address before pushing.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 15 Jul 2024 23:42:05 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On 7/15/24 17:42, Daniel Gustafsson wrote:\n>> On 14 Jul 2024, at 14:03, Peter Eisentraut <[email protected]>\n>> wrote:\n>> \n>> On 12.07.24 21:42, Daniel Gustafsson wrote:\n>>>> On 11 Jul 2024, at 23:22, Peter Eisentraut\n>>>> <[email protected]> wrote: The 0001 patch removes the\n>>>> functions pgtls_init_library() and pgtls_init() but keeps the\n>>>> declarations in libpq-int.h. This should be fixed.\n>>> Ah, nice catch. Done in the attached rebase.\n>> \n>> This patch version looks good to me.\n> \n> Thanks for review, I will go ahead with this once back from vacation\n> at the tail end of July when I can properly handle the BF.\n> \n>> Small comments on the commit message of 0002: Typo \"checkig\".\n>> Also, maybe the commit message title can be qualified a little,\n>> since we're not really doing \"Remove pg_strong_random\n>> initialization.\" but something like \"Remove unnecessary ...\"?\n> \n> Good points, will address before pushing.\n\nI know I am way late to this thread, and I have only tried a cursory \nskim of it given the length, but have we made any kind of announcement \n(packagers at least?) that we intend to not support Postgres 18 with ssl \non RHEL 7.9 and derivatives? Yes, RHEL 7 just passed EOL, but there is \ncommercial extended support available until July 2028[1] which means \nmany people will continue to use it.\n\nJoe\n\n[1] \nhttps://www.redhat.com/en/blog/announcing-4-years-extended-life-cycle-support-els-red-hat-enterprise-linux-7\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 5 Aug 2024 08:38:00 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "Hi Joe,\n\nOn Mon, 2024-08-05 at 08:38 -0400, Joe Conway wrote:\n> I know I am way late to this thread, and I have only tried a cursory \n> skim of it given the length, but have we made any kind of announcement\n> (packagers at least?) that we intend to not support Postgres 18 with\n> ssl on RHEL 7.9 and derivatives?\n\nI dropped RHEL 7 support as of PostgreSQL 16:\nhttps://yum.postgresql.org/news/rhel7-postgresql-rpms-end-of-life/\n\nand also recently stopped providing updates except PostgreSQL packages:\n\nhttps://yum.postgresql.org/news/rhel7-end-of-life/\n\nMin OS version is RHEL 8 / SLES 15.\n\n-HTH\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, PostgreSQL Major Contributor\nTwitter: @DevrimGunduz , @DevrimGunduzTR",
"msg_date": "Mon, 05 Aug 2024 16:13:38 +0300",
"msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "Joe Conway <[email protected]> writes:\n> I know I am way late to this thread, and I have only tried a cursory \n> skim of it given the length, but have we made any kind of announcement \n> (packagers at least?) that we intend to not support Postgres 18 with ssl \n> on RHEL 7.9 and derivatives? Yes, RHEL 7 just passed EOL, but there is \n> commercial extended support available until July 2028[1] which means \n> many people will continue to use it.\n\nPG v16 will be in-support until November 2028, so it's not like\nwe are leaving RHEL 7 completely in the lurch. I doubt that the\nsort of people who are still running an EOL OS are looking to put\na bleeding-edge database on it, so this seems sufficient to me.\n\nAs for notifying packagers --- Red Hat themselves will certainly\nnot be trying to put new major versions of anything on RHEL 7,\nand Devrim has stopped packaging newer PG for RHEL 7 altogether,\nso who among them is going to care?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Aug 2024 09:14:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On 8/5/24 09:14, Tom Lane wrote:\n> Joe Conway <[email protected]> writes:\n>> I know I am way late to this thread, and I have only tried a cursory \n>> skim of it given the length, but have we made any kind of announcement \n>> (packagers at least?) that we intend to not support Postgres 18 with ssl \n>> on RHEL 7.9 and derivatives? Yes, RHEL 7 just passed EOL, but there is \n>> commercial extended support available until July 2028[1] which means \n>> many people will continue to use it.\n> \n> PG v16 will be in-support until November 2028, so it's not like\n> we are leaving RHEL 7 completely in the lurch. I doubt that the\n> sort of people who are still running an EOL OS are looking to put\n> a bleeding-edge database on it, so this seems sufficient to me.\n\nok\n\n> As for notifying packagers --- Red Hat themselves will certainly\n> not be trying to put new major versions of anything on RHEL 7,\n> and Devrim has stopped packaging newer PG for RHEL 7 altogether,\n> so who among them is going to care?\n\n\nPerhaps no one on packagers. It would not shock me to see complaints \nfrom others after we rip out support for 1.0.2, but maybe not ¯\\_(ツ)_/¯\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 5 Aug 2024 09:36:21 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 5 Aug 2024, at 15:36, Joe Conway <[email protected]> wrote:\n\n> It would not shock me to see complaints from others after we rip out support\n> for 1.0.2, but maybe not ¯\\_(ツ)_/¯\n\n\nI think it's highly likely that we will see complaints for any support we\ndeprecate. OpenSSL 1.0.2 will however still be supported for another 5 years\nwith v17 (which is ~9years past its EOL date) so I don't feel too bad about it.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 7 Aug 2024 15:49:37 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "\n> On Aug 7, 2024, at 22:49, Daniel Gustafsson <[email protected]> wrote:\n> I think it's highly likely that we will see complaints for any support we\n> deprecate. OpenSSL 1.0.2 will however still be supported for another 5 years\n> with v17 (which is ~9years past its EOL date) so I don't feel too bad about it.\n\nI’ve cut 1.0.1 at least by myself last year, and definitely not regret it while not hearing complains on the matter. 1.0.2 being a LTS matters more in the likeliness to see complaints, but I think it’s time to just let it go. So let’s do it and move on.\n\nI’d seriously consider 1.1.0 as well for this release cycle as something to drop but I’m ok to keep it as the code gains are not really here. The 1.0.2 cut removes a lot of code and concepts particularly with the libcrypto threading and its locks!\n--\nMichael\n\n\n\n",
"msg_date": "Thu, 8 Aug 2024 08:59:17 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On 07.08.24 15:49, Daniel Gustafsson wrote:\n>> On 5 Aug 2024, at 15:36, Joe Conway <[email protected]> wrote:\n> \n>> It would not shock me to see complaints from others after we rip out support\n>> for 1.0.2, but maybe not ¯\\_(ツ)_/¯\n> \n> \n> I think it's highly likely that we will see complaints for any support we\n> deprecate. OpenSSL 1.0.2 will however still be supported for another 5 years\n> with v17 (which is ~9years past its EOL date) so I don't feel too bad about it.\n\nIs anything -- other than this inquiry -- preventing this patch set from \ngetting committed?\n\n\n\n",
"msg_date": "Wed, 21 Aug 2024 15:01:08 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On 8/21/24 09:01, Peter Eisentraut wrote:\n> On 07.08.24 15:49, Daniel Gustafsson wrote:\n>>> On 5 Aug 2024, at 15:36, Joe Conway <[email protected]> wrote:\n>> \n>>> It would not shock me to see complaints from others after we rip out support\n>>> for 1.0.2, but maybe not ¯\\_(ツ)_/¯\n>> \n>> \n>> I think it's highly likely that we will see complaints for any support we\n>> deprecate. OpenSSL 1.0.2 will however still be supported for another 5 years\n>> with v17 (which is ~9years past its EOL date) so I don't feel too bad about it.\n> \n> Is anything -- other than this inquiry -- preventing this patch set from\n> getting committed?\n\nThe overwhelming consensus seemed to be \"just do it\", so FWIW consider \nmy reservations withdrawn ;-)\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 21 Aug 2024 10:48:38 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Wed, Aug 21, 2024 at 10:48:38AM -0400, Joe Conway wrote:\n> On 8/21/24 09:01, Peter Eisentraut wrote:\n>> Is anything -- other than this inquiry -- preventing this patch set from\n>> getting committed?\n> \n> The overwhelming consensus seemed to be \"just do it\", so FWIW consider my\n> reservations withdrawn ;-)\n\nJust do it :)\n\nI am pretty sure that Daniel is just on vacations and that this will\nhappen sooner than later during the next commit fest, so I'd rather\nwait for him for an update on this thread first.\n--\nMichael",
"msg_date": "Thu, 22 Aug 2024 09:31:21 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 22 Aug 2024, at 02:31, Michael Paquier <[email protected]> wrote:\n> \n> On Wed, Aug 21, 2024 at 10:48:38AM -0400, Joe Conway wrote:\n>> On 8/21/24 09:01, Peter Eisentraut wrote:\n>>> Is anything -- other than this inquiry -- preventing this patch set from\n>>> getting committed?\n\nThat, and available time.\n\n>> The overwhelming consensus seemed to be \"just do it\", so FWIW consider my\n>> reservations withdrawn ;-)\n> \n> Just do it :)\n\nThat's my plan, I wanted to wait a bit to see if anyone else chimed in with\nconcerns.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 22 Aug 2024 23:13:15 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On Thu, Aug 22, 2024 at 11:13:15PM +0200, Daniel Gustafsson wrote:\n> On 22 Aug 2024, at 02:31, Michael Paquier <[email protected]> wrote:\n>> Just do it :)\n> \n> That's my plan, I wanted to wait a bit to see if anyone else chimed in with\n> concerns.\n\nCool, thanks!\n--\nMichael",
"msg_date": "Fri, 23 Aug 2024 08:56:01 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 23 Aug 2024, at 01:56, Michael Paquier <[email protected]> wrote:\n> \n> On Thu, Aug 22, 2024 at 11:13:15PM +0200, Daniel Gustafsson wrote:\n>> On 22 Aug 2024, at 02:31, Michael Paquier <[email protected]> wrote:\n>>> Just do it :)\n>> \n>> That's my plan, I wanted to wait a bit to see if anyone else chimed in with\n>> concerns.\n> \n> Cool, thanks!\n\nAttached is a rebased v15 (only changes are commit-message changes noted by\nPeter upthread) for the sake of archives, and for a green-check run in the\nCFBot. Assuming this builds green I intend to push this.\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 2 Sep 2024 10:03:14 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 2 Sep 2024, at 10:03, Daniel Gustafsson <[email protected]> wrote:\n> \n>> On 23 Aug 2024, at 01:56, Michael Paquier <[email protected]> wrote:\n>> \n>> On Thu, Aug 22, 2024 at 11:13:15PM +0200, Daniel Gustafsson wrote:\n>>> On 22 Aug 2024, at 02:31, Michael Paquier <[email protected]> wrote:\n>>>> Just do it :)\n>>> \n>>> That's my plan, I wanted to wait a bit to see if anyone else chimed in with\n>>> concerns.\n>> \n>> Cool, thanks!\n> \n> Attached is a rebased v15 (only changes are commit-message changes noted by\n> Peter upthread) for the sake of archives, and for a green-check run in the\n> CFBot. Assuming this builds green I intend to push this.\n\nAnd pushed. All BF owners with animals using 1.0.2 have been notified but not\nall have been updated (or modified to skip SSL) so there will be some failing.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 2 Sep 2024 14:26:54 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "On 02.09.24 14:26, Daniel Gustafsson wrote:\n>> On 2 Sep 2024, at 10:03, Daniel Gustafsson <[email protected]> wrote:\n>>\n>>> On 23 Aug 2024, at 01:56, Michael Paquier <[email protected]> wrote:\n>>>\n>>> On Thu, Aug 22, 2024 at 11:13:15PM +0200, Daniel Gustafsson wrote:\n>>>> On 22 Aug 2024, at 02:31, Michael Paquier <[email protected]> wrote:\n>>>>> Just do it :)\n>>>>\n>>>> That's my plan, I wanted to wait a bit to see if anyone else chimed in with\n>>>> concerns.\n>>>\n>>> Cool, thanks!\n>>\n>> Attached is a rebased v15 (only changes are commit-message changes noted by\n>> Peter upthread) for the sake of archives, and for a green-check run in the\n>> CFBot. Assuming this builds green I intend to push this.\n> \n> And pushed. All BF owners with animals using 1.0.2 have been notified but not\n> all have been updated (or modified to skip SSL) so there will be some failing.\n\nA small follow-up for this: With the current minimum OpenSSL version \nbeing 1.1.0, we can remove an unconstify() call; see attached patch.\n\nSee this OpenSSL commit: \n<https://github.com/openssl/openssl/commit/8ab31975ba>. The analogous \nLibreSSL change is here: \n<https://cvsweb.openbsd.org/src/lib/libcrypto/bio/bss_mem.c?rev=1.17&content-type=text/x-cvsweb-markup>. \n I don't know if we have a concrete minimum LibreSSL version, but the \nchange is about as old as the OpenSSL change.",
"msg_date": "Tue, 10 Sep 2024 10:01:22 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
},
{
"msg_contents": "> On 10 Sep 2024, at 10:01, Peter Eisentraut <[email protected]> wrote:\n\n>> And pushed. All BF owners with animals using 1.0.2 have been notified but not\n>> all have been updated (or modified to skip SSL) so there will be some failing.\n> \n> A small follow-up for this: With the current minimum OpenSSL version being 1.1.0, we can remove an unconstify() call; see attached patch.\n\nNice catch.\n\n> See this OpenSSL commit: <https://github.com/openssl/openssl/commit/8ab31975ba>. The analogous LibreSSL change is here: <https://cvsweb.openbsd.org/src/lib/libcrypto/bio/bss_mem.c?rev=1.17&content-type=text/x-cvsweb-markup>. \n\n> I don't know if we have a concrete minimum LibreSSL version, but the change is about as old as the OpenSSL change.\n\nWe've never documented the minimum LibreSSL version we support, but given that\nwe regularly test LibreSSL and fix breakage in our support I think we should.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 10 Sep 2024 10:11:25 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cutting support for OpenSSL 1.0.1 and 1.0.2 in 17~?"
}
] |
[
{
"msg_contents": "Testing with SQLancer reports a wrong results issue on master and I\nreduced it to the repro query below.\n\ncreate table t (a int, b int);\n\nexplain (costs off)\nselect * from t t1 left join\n (t t2 left join t t3 full join t t4 on false on false)\n left join t t5 on t2.a = t5.a\non t2.b = 1;\n QUERY PLAN\n--------------------------------------------------\n Nested Loop Left Join\n -> Seq Scan on t t1\n -> Materialize\n -> Nested Loop Left Join\n -> Nested Loop Left Join\n Join Filter: false\n -> Seq Scan on t t2\n Filter: (b = 1)\n -> Result\n One-Time Filter: false\n -> Materialize\n -> Seq Scan on t t5\n(12 rows)\n\nSo the qual 't2.a = t5.a' is missing.\n\nI looked into it and found that both clones of this joinqual are\nrejected by clause_is_computable_at, because their required_relids do\nnot include the outer join of t2/(t3/t4), and meanwhile include nullable\nrels of this outer join.\n\nI think the root cause is that, as Tom pointed out in [1], we're not\nmaintaining required_relids very accurately. In b9c755a2, we make\nclause_is_computable_at test required_relids for clone clauses. I think\nthis is how this issue sneaks in.\n\nTo fix it, it seems to me that the ideal way would be to always compute\naccurate required_relids. But I'm not sure how difficult it is.\n\n[1] https://www.postgresql.org/message-id/395264.1684698283%40sss.pgh.pa.us\n\nThanks\nRichard\n\nTesting with SQLancer reports a wrong results issue on master and Ireduced it to the repro query below.create table t (a int, b int);explain (costs off)select * from t t1 left join (t t2 left join t t3 full join t t4 on false on false) left join t t5 on t2.a = t5.aon t2.b = 1; QUERY PLAN-------------------------------------------------- Nested Loop Left Join -> Seq Scan on t t1 -> Materialize -> Nested Loop Left Join -> Nested Loop Left Join Join Filter: false -> Seq Scan on t t2 Filter: (b = 1) -> Result One-Time Filter: false -> Materialize -> Seq Scan on t t5(12 rows)So the qual 't2.a = t5.a' is missing.I looked into it and found that both clones of this joinqual arerejected by clause_is_computable_at, because their required_relids donot include the outer join of t2/(t3/t4), and meanwhile include nullablerels of this outer join.I think the root cause is that, as Tom pointed out in [1], we're notmaintaining required_relids very accurately. In b9c755a2, we makeclause_is_computable_at test required_relids for clone clauses. I thinkthis is how this issue sneaks in.To fix it, it seems to me that the ideal way would be to always computeaccurate required_relids. But I'm not sure how difficult it is.[1] https://www.postgresql.org/message-id/395264.1684698283%40sss.pgh.pa.usThanksRichard",
"msg_date": "Wed, 24 May 2023 19:19:16 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Wrong results due to missing quals"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> So the qual 't2.a = t5.a' is missing.\n\nUgh.\n\n> I looked into it and found that both clones of this joinqual are\n> rejected by clause_is_computable_at, because their required_relids do\n> not include the outer join of t2/(t3/t4), and meanwhile include nullable\n> rels of this outer join.\n> I think the root cause is that, as Tom pointed out in [1], we're not\n> maintaining required_relids very accurately. In b9c755a2, we make\n> clause_is_computable_at test required_relids for clone clauses. I think\n> this is how this issue sneaks in.\n\nYeah. I'm starting to think that b9c755a2 took the wrong approach.\nReally, required_relids is about making sure that a qual isn't\nevaluated \"too low\", before all necessary joins have been formed. But\nclause_is_computable_at is charged with making sure we don't evaluate\nit \"too high\", after some incompatible join has been formed. There's\nno really good reason to suppose that required_relids can serve both\npurposes, even if it were computed perfectly accurately (and what is\nperfect, anyway?).\n\nSo right now I'm playing with the idea of reverting the change in\nclause_is_computable_at and seeing how else we can fix the previous\nbug. Don't have anything to show yet, but one thought is that maybe\ndeconstruct_distribute_oj_quals needs to set up clause_relids for\nclone clauses differently. Another idea is that maybe we need another\nRestrictInfo field that's directly a set of OJ relids that this clause\ncan't be applied above. That'd reduce clause_is_computable_at to\nbasically a bms_intersect test which would be nice speed-wise. The\nspace consumption could be annoying, but I'm thinking that we might\nonly have to populate the field in clone clauses, which would\nalleviate that issue.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 May 2023 11:10:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results due to missing quals"
},
{
"msg_contents": "I wrote:\n> ... Another idea is that maybe we need another\n> RestrictInfo field that's directly a set of OJ relids that this clause\n> can't be applied above. That'd reduce clause_is_computable_at to\n> basically a bms_intersect test which would be nice speed-wise. The\n> space consumption could be annoying, but I'm thinking that we might\n> only have to populate the field in clone clauses, which would\n> alleviate that issue.\n\nI tried this and it seems to work all right: it fixes the example\nyou showed while not causing any new failures. (Doesn't address\nthe broken join-removal logic you showed in the other thread,\nthough.)\n\nWhile at it, I also changed make_restrictinfo to treat has_clone\nand is_clone as first-class citizens, to fix the dubious coding in\nequivclass.c that I mentioned at [1].\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/395264.1684698283%40sss.pgh.pa.us",
"msg_date": "Wed, 24 May 2023 17:28:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results due to missing quals"
},
{
"msg_contents": "On Thu, May 25, 2023 at 5:28 AM Tom Lane <[email protected]> wrote:\n\n> I tried this and it seems to work all right: it fixes the example\n> you showed while not causing any new failures. (Doesn't address\n> the broken join-removal logic you showed in the other thread,\n> though.)\n>\n> While at it, I also changed make_restrictinfo to treat has_clone\n> and is_clone as first-class citizens, to fix the dubious coding in\n> equivclass.c that I mentioned at [1].\n\n\nThe \"incompatible_relids\" idea is a stroke of genius. I reviewed the\npatch and did not find any problem. So big +1 to the patch.\n\nThanks\nRichard\n\nOn Thu, May 25, 2023 at 5:28 AM Tom Lane <[email protected]> wrote:\nI tried this and it seems to work all right: it fixes the example\nyou showed while not causing any new failures. (Doesn't address\nthe broken join-removal logic you showed in the other thread,\nthough.)\n\nWhile at it, I also changed make_restrictinfo to treat has_clone\nand is_clone as first-class citizens, to fix the dubious coding in\nequivclass.c that I mentioned at [1].The \"incompatible_relids\" idea is a stroke of genius. I reviewed thepatch and did not find any problem. So big +1 to the patch.ThanksRichard",
"msg_date": "Thu, 25 May 2023 14:36:31 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Wrong results due to missing quals"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> The \"incompatible_relids\" idea is a stroke of genius. I reviewed the\n> patch and did not find any problem. So big +1 to the patch.\n\nPushed, thanks for the report and the review.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 May 2023 10:29:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results due to missing quals"
}
] |
[
{
"msg_contents": "I recently stumbled over the following Intel proposal for dropping 32bit support in x86 processors. [1]\n\n\nThis inspired me to propose dropping 32bit support for PostgreSQL starting with PG17.\n\n\nIt seems obvious that production systems mostly won't use newly installed 32 bit native code in late 2024 and beyond.\n\n\nA quick inspection of the buildfarm only showed a very limited number of 32bit systems.\n\n\nThis proposal follows the practice for Windows which already (practically) dropped 32bit support.\n\n\n32bit systems may get continuing support in the backbranches until PG16 retires in 2028.\n\n\nEven if I am not a postgres hacker I suppose this could simplify things quite a lot.\n\n\nAny thoughts for discussion are welcome!\n\n\nHans Buschmann\n\n\n[1] https://www.phoronix.com/news/Intel-X86-S-64-bit-Only\n\n\n\n\n\n\n\n\n\n\nI recently stumbled over the following Intel proposal for dropping 32bit support in x86 processors. [1]\n\n\nThis inspired me to propose dropping 32bit support for PostgreSQL starting with PG17.\n\n\nIt seems obvious that production systems mostly won't use newly installed 32 bit native code in late 2024 and beyond.\n\n\nA quick inspection of the buildfarm only showed a very limited number of 32bit systems.\n\n\nThis proposal follows the practice for Windows which already (practically) dropped 32bit support.\n\n\n32bit systems may get continuing support in the backbranches until PG16 retires in 2028.\n\n\nEven if I am not a postgres hacker I suppose this could simplify things quite a lot.\n\n\nAny thoughts for discussion are welcome!\n\n\nHans Buschmann\n\n\n[1] https://www.phoronix.com/news/Intel-X86-S-64-bit-Only",
"msg_date": "Wed, 24 May 2023 14:33:06 +0000",
"msg_from": "Hans Buschmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "Proposal: Removing 32 bit support starting from PG17++"
},
{
"msg_contents": "Hans Buschmann <[email protected]> writes:\n> This inspired me to propose dropping 32bit support for PostgreSQL starting with PG17.\n\nI don't think this is a great idea. Even if Intel isn't interested,\nthere'll be plenty of 32-bit left in the lower end of the market\n(think ARM, IoT, and so on).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 May 2023 10:44:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Removing 32 bit support starting from PG17++"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-24 14:33:06 +0000, Hans Buschmann wrote:\n> I recently stumbled over the following Intel proposal for dropping 32bit support in x86 processors. [1]\n\nIt's a proposal for something in the future. Which, even if implemented as is,\nwill affect future hardware, several years down the line. I don't think that's\na good reason for removing 32 bit support in postgres.\n\nAnd postgres is used on non-x86 architectures...\n\n\n> This inspired me to propose dropping 32bit support for PostgreSQL starting\n> with PG17.\n> ...\n> Even if I am not a postgres hacker I suppose this could simplify things quite a lot.\n\nThere's some simplification, but I don't think it'd be that much.\n\nI do think there are code removals and simplifications that would be bigger\nthan dropping 32bit support.\n\nDropping support for effectively-obsolete compilers like sun studio (requires\nrandom environment variables to be exported to not run out of memory) and\nAIX's xlc (requires a lot of extra compiler flags to be passed in for a sane\nbuild) would remove a fair bit of code.\n\nDropping CPUs without native atomic operations / without a way to do tear-free\n8 byte reads would make several substantial performance improvements easier,\nwhile not really dropping any relevant platform.\n\nEtc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 May 2023 14:41:37 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Removing 32 bit support starting from PG17++"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> Dropping CPUs without native atomic operations / without a way to do tear-free\n> 8 byte reads would make several substantial performance improvements easier,\n> while not really dropping any relevant platform.\n\nHmm, can we really expect atomic 8-byte reads on \"relevant\" 32-bit\nplatforms? I'd be on board with this if so, but it sounds a bit\noptimistic.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 May 2023 17:44:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Removing 32 bit support starting from PG17++"
},
{
"msg_contents": "On Wed, May 24, 2023 at 10:44:11AM -0400, Tom Lane wrote:\n> Hans Buschmann <[email protected]> writes:\n> > This inspired me to propose dropping 32bit support for PostgreSQL starting with PG17.\n> \n> I don't think this is a great idea. Even if Intel isn't interested,\n> there'll be plenty of 32-bit left in the lower end of the market\n> (think ARM, IoT, and so on).\n\nA few examples of that are the first models of the Raspberry PIs,\nwhich are still produced (until 2026 actually for the first model).\nThese rely on ARMv6 if I recall correctly, which are 32b.\n--\nMichael",
"msg_date": "Thu, 25 May 2023 07:25:03 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Removing 32 bit support starting from PG17++"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-24 17:44:36 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > Dropping CPUs without native atomic operations / without a way to do tear-free\n> > 8 byte reads would make several substantial performance improvements easier,\n> > while not really dropping any relevant platform.\n>\n> Hmm, can we really expect atomic 8-byte reads on \"relevant\" 32-bit\n> platforms? I'd be on board with this if so, but it sounds a bit\n> optimistic.\n\nCombining https://wiki.postgresql.org/wiki/Atomics with our docs:\n\n> In general, PostgreSQL can be expected to work on these CPU architectures:\n> x86, PowerPC, S/390, SPARC, ARM, MIPS, RISC-V, and PA-RISC, including\n> big-endian, little-endian, 32-bit, and 64-bit variants where applicable. It\n> is often possible to build on an unsupported CPU type by configuring with\n> --disable-spinlocks, but performance will be poor.\n\n\nOn x86 8 byte atomics are supported since the 586 - released in 1993. I don't\nthink we need to care about i386 / i486?\n\nPPC always had it from what I can tell (the docs don't mention a minimum\nversion).\n\nSparc has supported single copy atomicity for 8 byte values since sparc v9, so\n~1995 (according to wikipedia there was one V8 chip released in 1996, there's\nalso \"LEON\", a bit later, but that's \"V8E\", which includes the atomicity\nguarantees from v9).\n\nOn s390 sufficient instructions to make 64bit atomics work natively are\nsupported (just using cmpxchg).\n\nOn older arm it's supported via kernel emulation - which afaict is better than\nfalling back to a semaphore, our current fallback. I don't currently have\naccess to armv7*, so I can't run a benchmark.\n\n\nSo the only problematic platforms are 32 bit MIPS and PA-RISC.\n\n\nI'm not entirely sure whether my determination about 32 bit MIPS from back\nthen is actually true - I might have read the docs too narrowly back then or\nthey were updated since. In a newer version of the manual I see:\n\n> The paired instructions, Load Linked and Store Conditional, can be used to\n> perform an atomic read-modify-write of word or doubleword cached memory\n> locations.\n\nThe word width is referenced to be 32bit earlier on the same page. And it's\ndocumented to be true for mips32, not just mips64.\n\nWith that one can implement a 64bit cmpxchg, and with 64bit cmpxchg one can\nimplement a, not too efficient, tear-free read.\n\n\nWhat gave me pause for a moment is that both clang and gcc generate calls to\nexternal functions on 32 bit mips - but they do provide libatomic and looking\nat the disassembly, there does appear to be a some non-emulated execution. But\ntbh, there are so many ABIs and options that I am not sure what is what.\n\nReading the https://en.wikipedia.org/wiki/MIPS_architecture history part gives\nme a headache: \"During the mid-1990s, many new 32-bit MIPS processors for\nembedded systems were MIPS II implementations because the introduction of the\n64-bit MIPS III architecture in 1991 left MIPS II as the newest 32-bit MIPS\narchitecture until MIPS32 was introduced in 1999.[3]: 19 \"\n\nMy rough understanding is that it's always doable in 64 mode, and has been\navailable in 32bit mode for a long time, but that it depends on the the ABI\nused.\n\n\nSo it looks like the only certain problem is PA-RISC - which I personally\nwouldn't include in \"relevant\" :), with some evaluation needed for 32bit mips\nand old arms.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 May 2023 16:36:54 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Removing 32 bit support starting from PG17++"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-05-24 17:44:36 -0400, Tom Lane wrote:\n>> Hmm, can we really expect atomic 8-byte reads on \"relevant\" 32-bit\n>> platforms? I'd be on board with this if so, but it sounds a bit\n>> optimistic.\n\n> ...\n\n> So it looks like the only certain problem is PA-RISC - which I personally\n> wouldn't include in \"relevant\" :), with some evaluation needed for 32bit mips\n> and old arms.\n\nYou'll no doubt be glad to hear that I'll be retiring chickadee\nin the very near future. (I'm moving/downsizing, and that machine\nisn't making the cut.) So dropping PA-RISC altogether should probably\nhappen for v17, maybe even v16. Seems like we should poke into\nARM more closely, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 May 2023 19:51:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Removing 32 bit support starting from PG17++"
},
{
"msg_contents": "On Thu, May 25, 2023 at 11:51 AM Tom Lane <[email protected]> wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2023-05-24 17:44:36 -0400, Tom Lane wrote:\n> > So it looks like the only certain problem is PA-RISC - which I personally\n> > wouldn't include in \"relevant\" :), with some evaluation needed for 32bit mips\n> > and old arms.\n>\n> You'll no doubt be glad to hear that I'll be retiring chickadee\n> in the very near future.\n\n. o O { I guess chickadee might have been OK anyway, along with e.g.\nantique low-end SGI MIPS gear etc of \"workstation\"/\"desktop\" form that\nany collector is likely to have still running, because they only had\none CPU (unlike their Vogon-spaceship-sized siblings). As long as\nthey had 64 bit load/store instructions, those couldn't be 'divided'\nby an interrupt, so scheduler switches shouldn't be able to tear them,\nAFAIK? }\n\n\n",
"msg_date": "Thu, 25 May 2023 12:09:36 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Removing 32 bit support starting from PG17++"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> On Thu, May 25, 2023 at 11:51 AM Tom Lane <[email protected]> wrote:\n>> You'll no doubt be glad to hear that I'll be retiring chickadee\n>> in the very near future.\n\n> . o O { I guess chickadee might have been OK anyway, along with e.g.\n> antique low-end SGI MIPS gear etc of \"workstation\"/\"desktop\" form that\n> any collector is likely to have still running, because they only had\n> one CPU (unlike their Vogon-spaceship-sized siblings). As long as\n> they had 64 bit load/store instructions, those couldn't be 'divided'\n> by an interrupt, so scheduler switches shouldn't be able to tear them,\n> AFAIK? }\n\nPA-RISC can probably do tear-free 8-byte reads, but Andres also\nwanted to raise the bar enough to include 32-bit atomic instructions,\nwhich PA-RISC hasn't got; the one such instruction it has is\nlimited enough that you can't do much beyond building a spinlock.\n\nDunno about antique MIPS. I think there's still some interest in\nnot-antique 32-bit MIPS; I have some current-production routers\nwith such CPUs. (Sadly, they don't have enough storage to do\nanything useful with, or I'd think about repurposing one for\nbuildfarm.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 May 2023 20:34:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Removing 32 bit support starting from PG17++"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-24 19:51:22 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2023-05-24 17:44:36 -0400, Tom Lane wrote:\n> >> Hmm, can we really expect atomic 8-byte reads on \"relevant\" 32-bit\n> >> platforms? I'd be on board with this if so, but it sounds a bit\n> >> optimistic.\n> \n> > ...\n> \n> > So it looks like the only certain problem is PA-RISC - which I personally\n> > wouldn't include in \"relevant\" :), with some evaluation needed for 32bit mips\n> > and old arms.\n> \n> You'll no doubt be glad to hear that I'll be retiring chickadee\n> in the very near future.\n\nHeh, I have to admit, I am.\n\n\n> So dropping PA-RISC altogether should probably happen for v17, maybe even\n> v16.\n\nDefinitely for 17 - not sure if we have much to gain by doing it in 16. The\nlikelihood that somebody will find a PA-RISC specific bug, after we dropped\nsupport for 15, is pretty low, I think.\n\n\n> Seems like we should poke into ARM more closely, though.\n\nLooks like the necessary support was added in armv6k and armv7a:\n\nhttps://developer.arm.com/documentation/dui0489/i/arm-and-thumb-instructions/strex\n ARM STREXB, STREXD, and STREXH are available in ARMv6K and above.\n All these 32-bit Thumb instructions are available in ARMv6T2 and above, except that STREXD is not available in the ARMv7-M architecture.\n\nARMv7-M is for microcontrollers without an MMU, so it's not going to be used\nfor postgres.\n\nThe arm buildfarm machines (using the architecture names from there) we have\nare:\n\narmv6l: chipmunk\nARMv7: mereswine, gull, grison\narm64: dikkop, sifaka, indri\n\nturako says it's armv7l, but it seems actually to target aarch64.\n\nAt least for type of machine available on the buildfarm, it looks like we\nactually should be good. They all appear to already be supporting 64bit\natomics, without needing further compiler flags.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 May 2023 18:00:24 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Removing 32 bit support starting from PG17++"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-05-24 19:51:22 -0400, Tom Lane wrote:\n>> So dropping PA-RISC altogether should probably happen for v17, maybe even\n>> v16.\n\n> Definitely for 17 - not sure if we have much to gain by doing it in 16.\n\nI'm just thinking that we'll have no way to test it. I wouldn't advocate\nsuch a change in released branches; but I think it'd be within policy\nstill for v16, and that would give us one less year of claimed support\nfor an arch we can't test.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 May 2023 21:04:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Removing 32 bit support starting from PG17++"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-24 20:34:38 -0400, Tom Lane wrote:\n> Thomas Munro <[email protected]> writes:\n> > On Thu, May 25, 2023 at 11:51 AM Tom Lane <[email protected]> wrote:\n> >> You'll no doubt be glad to hear that I'll be retiring chickadee\n> >> in the very near future.\n>\n> > . o O { I guess chickadee might have been OK anyway, along with e.g.\n> > antique low-end SGI MIPS gear etc of \"workstation\"/\"desktop\" form that\n> > any collector is likely to have still running, because they only had\n> > one CPU (unlike their Vogon-spaceship-sized siblings). As long as\n> > they had 64 bit load/store instructions, those couldn't be 'divided'\n> > by an interrupt, so scheduler switches shouldn't be able to tear them,\n> > AFAIK? }\n>\n> PA-RISC can probably do tear-free 8-byte reads, but Andres also\n> wanted to raise the bar enough to include 32-bit atomic instructions,\n> which PA-RISC hasn't got; the one such instruction it has is\n> limited enough that you can't do much beyond building a spinlock.\n\nYes, I looked at some point, and it didn't seem viable to do more.\n\n\n> I think there's still some interest in not-antique 32-bit MIPS; I have some\n> current-production routers with such CPUs. (Sadly, they don't have enough\n> storage to do anything useful with, or I'd think about repurposing one for\n> buildfarm.)\n\nAfter spending a bunch more time staring at various reference manuals, it\nlooks to me like 32bit MIPS supports 64 bit atomics these days, via LLWP /\nSCWP.\n\nhttps://s3-eu-west-1.amazonaws.com/downloads-mips/documents/MD00086-2B-MIPS32BIS-AFP-6.06.pdf\ndocuments them as having been added to MIPS32 Release 6, from 2014.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 May 2023 18:28:40 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Removing 32 bit support starting from PG17++"
},
{
"msg_contents": "On Thu, May 25, 2023 at 12:34 PM Tom Lane <[email protected]> wrote:\n> Dunno about antique MIPS. I think there's still some interest in\n> not-antique 32-bit MIPS; I have some current-production routers\n> with such CPUs. (Sadly, they don't have enough storage to do\n> anything useful with, or I'd think about repurposing one for\n> buildfarm.)\n\nFWIW \"development of the MIPS architecture has ceased\"[1]. (Clearly\nthere are living ISAs that continue either its spirit or its ...\ninstructions, but they aren't calling themselves MIPS.)\n\n[1] https://en.wikipedia.org/wiki/MIPS_architecture\n\n\n",
"msg_date": "Thu, 25 May 2023 14:38:15 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: Removing 32 bit support starting from PG17++"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n>Hans Buschmann <[email protected]> writes:\n>> This inspired me to propose dropping 32bit support for PostgreSQL starting with PG17.\n\n>I don't think this is a great idea. Even if Intel isn't interested,\n>there'll be plenty of 32-bit left in the lower end of the market\n>(think ARM, IoT, and so on).\n\nHello Tom,\n\nCertainly there are many 32bit implementations out there in many markets.\nTherefore I spoke from inspiration by Intel's move, what does not implicitely indicate a preference of x86 architecture.\n\nConsidering ARM (worlds most used ISA) I think we should focus on the use cases.\nAll implementations in many various scenarios (keyboard controllers,SetTop-Boxes,TV sets, SSD/Hard-disk-controllers,BMC controllers and many more) are no real use cases for Postgres on ARM.\n\nThe only relevant usage scenario at this time is in the datacenter when AARCH64 based CPU designs are used in servers.\n\nThe most spreaded usage of ARM (nowadays almost 64bit), which is used dayly by biliion of people, are the Smartphones and tablets, which are completelely unsupported by Postgres!\n\nAn officially supported access module for remote access to server-based database would bring Postgres to a much broader knowledge and use for many people.\n(why not provide an official libpq on IOS or Android to enable specialized client applications?)\n\nOn the IoT side I have not much knowledge, perhaps you have a relevant example of a native 32bit server implementation in this area. But to my knowledge there is no special support in our code base yet (OS, File systems, storage, administration).\n\nFor me the central question is:\n\nAre there many use cases for new PG server installations on 32bit starting with late 2024/2025?\n\nand not\n\nDo we provide support for NEW VERSIONS for aging architectures which themselves aren't used for new installations any more and mostly never are updated even to currently supported versions?\n\nHans Buschmann\n\n\n\n\n\n\n\n\n\n\n\n\n\nTom Lane <[email protected]> writes:\n>Hans Buschmann <[email protected]>\n writes:\n\n\n\n>> This inspired me to propose dropping 32bit support for PostgreSQL starting with PG17.\n\n>I don't think this is a great idea. 
Even if Intel isn't interested,\n>there'll be plenty of 32-bit left in the lower end of the market\n>(think ARM, IoT, and so on).\n\nHello Tom,\n\n\n\nCertainly there are many 32bit implementations out there in many markets.\nTherefore I spoke from inspiration by Intel's move, what does not implicitely indicate a preference of x86 architecture.\n\n\nConsidering ARM (worlds most used ISA) I think we should focus on the use cases.\nAll implementations in many various scenarios (keyboard controllers,SetTop-Boxes,TV sets, SSD/Hard-disk-controllers,BMC controllers and many more) are no real use cases for Postgres on ARM.\n\n\nThe only relevant usage scenario at this time is in the datacenter when AARCH64 based CPU designs are used in servers.\n\n\nThe most spreaded usage of ARM (nowadays almost 64bit), which is used dayly by biliion of people, are the Smartphones and tablets, which are completelely unsupported by Postgres!\n\n\nAn officially supported access module for remote access to server-based database would bring Postgres to a much broader knowledge and use for many people.\n(why not provide an official libpq on IOS or Android to enable specialized client applications?)\n\n\nOn the IoT side I have not much knowledge, perhaps you have a relevant example of a native 32bit server implementation in this area. But to my knowledge there is no special support in our code base yet (OS, File systems, storage, administration).\n\n\nFor me the central question is:\n\n\nAre there many use cases for new PG server installations on 32bit starting with late 2024/2025?\n\n\nand not\n\n\nDo we provide support for NEW VERSIONS for aging architectures which themselves aren't used for new installations any more and mostly never are updated even to currently supported versions?\n\n\nHans Buschmann",
"msg_date": "Thu, 25 May 2023 08:55:57 +0000",
"msg_from": "Hans Buschmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "AW: Proposal: Removing 32 bit support starting from PG17++"
}
] |
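Note on the atomics discussion in the thread above: Andres points out that a 64-bit compare-and-swap is enough to build a (not very efficient) tear-free 64-bit read on 32-bit hardware. The following is only a minimal illustrative sketch of that trick using the GCC __atomic builtins; it is not PostgreSQL's port/atomics code, and the function name is made up for the example.

#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative only: emulate a tear-free 64-bit read with a 64-bit CAS
 * that tries to replace the value with itself.  Whether the CAS succeeds
 * or fails, 'value' ends up holding an untorn copy of *ptr.
 */
static inline uint64_t
read_u64_untorn(volatile uint64_t *ptr)
{
	uint64_t value = 0;

	/* On failure the builtin stores the current contents of *ptr in 'value'. */
	(void) __atomic_compare_exchange_n(ptr, &value, value, false,
									   __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
	return value;
}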
[
{
"msg_contents": "Hi everyone,\n\nI was wondering if waiting for an LSN in SyncRepWaitForLSN ensures that the\nXLOG has been flushed locally up to that location before the record is\nshipped off to standbys?\n\nRegards,\nTej\n\nHi everyone,I was wondering if waiting for an LSN in SyncRepWaitForLSN ensures that the XLOG has been flushed locally up to that location before the record is shipped off to standbys?Regards,Tej",
"msg_date": "Wed, 24 May 2023 10:53:51 -0400",
"msg_from": "Tejasvi Kashi <[email protected]>",
"msg_from_op": true,
"msg_subject": "SyncRepWaitForLSN waits for XLogFlush?"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-24 10:53:51 -0400, Tejasvi Kashi wrote:\n> I was wondering if waiting for an LSN in SyncRepWaitForLSN ensures that the\n> XLOG has been flushed locally up to that location before the record is\n> shipped off to standbys?\n\nNo, SyncRepWaitForLSN() doesn't directly ensure that. The callers have to (and\ndo) call XLogFlush() separately. See e.g. the XLogFlush() call in\nRecordTransactionCommit().\n\nNote that calling SyncRepWaitForLSN() for an LSN that is not yet flushed would\nnot lead for data to be prematurely sent out - walsender won't send data that\nhasn't yet been flushed. So a backend with such a spurious SyncRepWaitForLSN()\nwould just wait until the LSN is actually flushed and *then* replicated.\n\nAny specific reason for that question?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 May 2023 14:33:10 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SyncRepWaitForLSN waits for XLogFlush?"
},
{
"msg_contents": "Hi Andres,\n\nThanks for your reply. If I understand you correctly, you're saying that\nthe walsender *waits* for xlogflush, but does not *cause* it. This means\nthat for a sync_commit=off transaction, the xlog records won't get shipped\nout to standbys until the walwriter flushes in-memory xlog contents to disk.\n\nAnd furthermore, am I correct in assuming that this behaviour is different\nfrom the buffer manager and the slru which both *cause* xlog flush up to a\ncertain lsn before they proceed with flushing a page to disk?\n\nThe reason I'm asking this is that I'm working on modifying the transaction\nmanager for my thesis project, and the pg_walinspect test is failing when I\nrun make check-world. So, I'm just trying to understand things to identify\nthe cause of this issue.\n\nRegards,\nTej\n\nOn Wed, 24 May 2023 at 17:33, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2023-05-24 10:53:51 -0400, Tejasvi Kashi wrote:\n> > I was wondering if waiting for an LSN in SyncRepWaitForLSN ensures that\n> the\n> > XLOG has been flushed locally up to that location before the record is\n> > shipped off to standbys?\n>\n> No, SyncRepWaitForLSN() doesn't directly ensure that. The callers have to\n> (and\n> do) call XLogFlush() separately. See e.g. the XLogFlush() call in\n> RecordTransactionCommit().\n>\n> Note that calling SyncRepWaitForLSN() for an LSN that is not yet flushed\n> would\n> not lead for data to be prematurely sent out - walsender won't send data\n> that\n> hasn't yet been flushed. So a backend with such a spurious\n> SyncRepWaitForLSN()\n> would just wait until the LSN is actually flushed and *then* replicated.\n>\n> Any specific reason for that question?\n>\n> Greetings,\n>\n> Andres Freund\n>\n\nHi Andres,Thanks for your reply. If I understand you correctly, you're saying that the walsender *waits* for xlogflush, but does not *cause* it. This means that for a sync_commit=off transaction, the xlog records won't get shipped out to standbys until the walwriter flushes in-memory xlog contents to disk.And furthermore, am I correct in assuming that this behaviour is different from the buffer manager and the slru which both *cause* xlog flush up to a certain lsn before they proceed with flushing a page to disk?The reason I'm asking this is that I'm working on modifying the transaction manager for my thesis project, and the pg_walinspect test is failing when I run make check-world. So, I'm just trying to understand things to identify the cause of this issue.Regards,TejOn Wed, 24 May 2023 at 17:33, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2023-05-24 10:53:51 -0400, Tejasvi Kashi wrote:\n> I was wondering if waiting for an LSN in SyncRepWaitForLSN ensures that the\n> XLOG has been flushed locally up to that location before the record is\n> shipped off to standbys?\n\nNo, SyncRepWaitForLSN() doesn't directly ensure that. The callers have to (and\ndo) call XLogFlush() separately. See e.g. the XLogFlush() call in\nRecordTransactionCommit().\n\nNote that calling SyncRepWaitForLSN() for an LSN that is not yet flushed would\nnot lead for data to be prematurely sent out - walsender won't send data that\nhasn't yet been flushed. So a backend with such a spurious SyncRepWaitForLSN()\nwould just wait until the LSN is actually flushed and *then* replicated.\n\nAny specific reason for that question?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 24 May 2023 18:28:57 -0400",
"msg_from": "Tejasvi Kashi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SyncRepWaitForLSN waits for XLogFlush?"
}
] |
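As a side note on the thread above: the ordering Andres describes (flush the commit record locally, then wait for synchronous standbys) can be summarized with the rough sketch below. This is not the actual RecordTransactionCommit() code; only the call order of XLogFlush() and SyncRepWaitForLSN() is taken from the discussion, and the wrapper name and simplifications are assumptions added for illustration.

#include "postgres.h"
#include "access/xlog.h"
#include "replication/syncrep.h"

/*
 * Rough sketch of the commit ordering discussed above; not the real
 * RecordTransactionCommit().  The point is only that the backend flushes
 * its own WAL up to the commit record before waiting for standbys.
 */
static void
commit_ordering_sketch(XLogRecPtr commit_lsn, bool sync_commit)
{
	if (sync_commit)
	{
		/* make the commit record durable locally first ... */
		XLogFlush(commit_lsn);

		/* ... then wait for the configured synchronous standbys */
		SyncRepWaitForLSN(commit_lsn, true);
	}

	/*
	 * With synchronous_commit = off neither call happens here; the WAL
	 * writer flushes later, and walsender only ships WAL that has already
	 * been flushed.
	 */
}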
[
{
"msg_contents": "Greetings,\n\nIn\nhttps://github.com/postgres/postgres/blob/5c2c59ba0b5f723b067a6fa8bf8452d41fbb2125/src/backend/libpq/auth.c#L463\n\nThe last piece of information is the encryption state. However when an SSL\nconnection fails to authenticate the state is not encrypted.\n\nWhen would it ever be encrypted if authentication fails ?\n\nDave Cramer\n\nGreetings,In https://github.com/postgres/postgres/blob/5c2c59ba0b5f723b067a6fa8bf8452d41fbb2125/src/backend/libpq/auth.c#L463The last piece of information is the encryption state. However when an SSL connection fails to authenticate the state is not encrypted.When would it ever be encrypted if authentication fails ?Dave Cramer",
"msg_date": "Wed, 24 May 2023 14:12:23 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question about error message in auth.c"
},
{
"msg_contents": "On Wed, May 24, 2023 at 02:12:23PM -0400, Dave Cramer wrote:\n> The last piece of information is the encryption state. However when an SSL\n> connection fails to authenticate the state is not encrypted.\n>\n> When would it ever be encrypted if authentication fails ?\n\nI am not sure to follow. Under a SSL connection things should be\nencrypted. Or perhaps that's something related to hostssl and/or\nhostnossl?\n\nBack to the point, the SSL handshake happens before any authentication\nattempt and any HBA resolution, so a connection could be encrypted\nbefore authentication gets rejected. The error path you are pointing\nat would happen after the SSL handshake is done. For instance, take\nan HBA entry like that:\n# TYPE DATABASE USER ADDRESS METHOD\nhost all all 127.0.0.1/32 reject\n\nThen, attempting a connection with sslmode=prefer, one can see the\ndifference:\n$ psql -h 127.0.0.1 -d postgres -U postgres\npsql: error: connection to server at \"127.0.0.1\", port 5432 failed:\nFATAL: pg_hba.conf rejects connection for host \"127.0.0.1\", user\n\"postgres\", database \"postgres\", SSL encryption\nconnection to server at \"127.0.0.1\", port 5432 failed: FATAL:\npg_hba.conf rejects connection for host \"127.0.0.1\", user \"postgres\",\ndatabase \"postgres\", no encryption\n--\nMichael",
"msg_date": "Thu, 25 May 2023 09:15:34 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about error message in auth.c"
}
] |
[
{
"msg_contents": "Why does pgindent require that pg_bsd_indent be installed in the path? \nCouldn't pgindent call the pg_bsd_indent built inside the tree? That \nwould make the whole process much smoother.\n\n\n",
"msg_date": "Thu, 25 May 2023 10:32:20 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why does pg_bsd_indent need to be installed?"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Why does pgindent require that pg_bsd_indent be installed in the path? \n> Couldn't pgindent call the pg_bsd_indent built inside the tree? That \n> would make the whole process much smoother.\n\nWell, the current expectation is that you run it in a distclean'd\ntree, in which case it won't be there. VPATH builds would have\na problem finding it as well.\n\nI'm not sure if there's any problem in applying it in a built-out\ntree, but the VPATH scenario seems like a problem in any case,\nespecially since (IIUC) meson builds have to be done that way.\n\nI wouldn't object to adding more logic to the calling script\nto support multiple usage scenarios, of course.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 May 2023 09:09:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_bsd_indent need to be installed?"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-25 09:09:45 -0400, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n> > Why does pgindent require that pg_bsd_indent be installed in the path? \n> > Couldn't pgindent call the pg_bsd_indent built inside the tree? That \n> > would make the whole process much smoother.\n> \n> Well, the current expectation is that you run it in a distclean'd\n> tree, in which case it won't be there. VPATH builds would have\n> a problem finding it as well.\n>\n> I'm not sure if there's any problem in applying it in a built-out\n> tree, but the VPATH scenario seems like a problem in any case,\n> especially since (IIUC) meson builds have to be done that way.\n\nIsn't the situation actually *easier* in VPATH builds? There's no build\nartifacts in the source tree, so you can just invoke the pg_bsd_indent built\nin the build directory against the source tree, without any problems?\n\nI'd like to add a build system target for reindenting with the in-tree\npg_bsd_indent, obviously with the right dependencies to build pg_bsd_indent\nfirst.\n\nAnd yes, meson only supports building in a separate directory (which obviously\ncan be inside the source directory, although I don't do that, because the\nautoconf vpath build copies such directories, which isn't fun).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 25 May 2023 09:52:07 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_bsd_indent need to be installed?"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-05-25 09:09:45 -0400, Tom Lane wrote:\n>> Peter Eisentraut <[email protected]> writes:\n>>> Why does pgindent require that pg_bsd_indent be installed in the path? \n\n> Isn't the situation actually *easier* in VPATH builds? There's no build\n> artifacts in the source tree, so you can just invoke the pg_bsd_indent built\n> in the build directory against the source tree, without any problems?\n\nWell, if you know where the build directory is, sure. But any way you\nslice it there is an extra bit of knowledge required. Since pg_bsd_indent\nchanges so seldom, keeping it in your PATH is at least as easy as any\nother solution, IMO.\n\nAnother reason why I like to do it that way is that it supports running\npgindent on files that aren't in the source tree at all, which suits\nsome old habits of mine.\n\nBut, as I said before, I'm open to adding support for other scenarios\nas long as we don't remove that one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 May 2023 13:05:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_bsd_indent need to be installed?"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-25 13:05:57 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2023-05-25 09:09:45 -0400, Tom Lane wrote:\n> >> Peter Eisentraut <[email protected]> writes:\n> >>> Why does pgindent require that pg_bsd_indent be installed in the path?\n>\n> > Isn't the situation actually *easier* in VPATH builds? There's no build\n> > artifacts in the source tree, so you can just invoke the pg_bsd_indent built\n> > in the build directory against the source tree, without any problems?\n>\n> Well, if you know where the build directory is, sure.\n\nI'm imaginging adding make / meson targets 'indent-tree' and 'indent-head' or\nsuch. Obviously the buildsystem knows where the source dir is, and you'd\ninvoke it in the build dir, so it'd know all that'd need to be known.\n\n\nAttached is a prototype adding such meson targets. It's easier with a\nparameter telling pgindent where the source tree is, so I addeded that too\n(likely would need to be cleaned up some).\n\n\n> Since pg_bsd_indent\n> changes so seldom, keeping it in your PATH is at least as easy as any\n> other solution, IMO.\n>\n> Another reason why I like to do it that way is that it supports running\n> pgindent on files that aren't in the source tree at all, which suits\n> some old habits of mine.\n\n> But, as I said before, I'm open to adding support for other scenarios\n> as long as we don't remove that one.\n\nI can't imagine that we'd remove support for doing so...\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sat, 27 May 2023 11:42:01 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_bsd_indent need to be installed?"
},
{
"msg_contents": "On 25.05.23 13:05, Tom Lane wrote:\n> Well, if you know where the build directory is, sure. But any way you\n> slice it there is an extra bit of knowledge required. Since pg_bsd_indent\n> changes so seldom, keeping it in your PATH is at least as easy as any\n> other solution, IMO.\n\nThe reason I bumped into this is that 15 and 16 use different versions \nof pg_bsd_indent, so you can't just keep one copy in, like, ~/bin/.\n\n\n",
"msg_date": "Wed, 31 May 2023 06:55:00 -0400",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why does pg_bsd_indent need to be installed?"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 25.05.23 13:05, Tom Lane wrote:\n>> Well, if you know where the build directory is, sure. But any way you\n>> slice it there is an extra bit of knowledge required. Since pg_bsd_indent\n>> changes so seldom, keeping it in your PATH is at least as easy as any\n>> other solution, IMO.\n\n> The reason I bumped into this is that 15 and 16 use different versions \n> of pg_bsd_indent, so you can't just keep one copy in, like, ~/bin/.\n\nWell, personally, I never bother to adjust patches to the indentation\nrules of old versions, so using the latest pg_bsd_indent suffices.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 May 2023 13:21:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_bsd_indent need to be installed?"
},
{
"msg_contents": "On Wed, May 31, 2023 at 01:21:05PM -0400, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n> > On 25.05.23 13:05, Tom Lane wrote:\n> >> Well, if you know where the build directory is, sure. But any way you\n> >> slice it there is an extra bit of knowledge required. Since pg_bsd_indent\n> >> changes so seldom, keeping it in your PATH is at least as easy as any\n> >> other solution, IMO.\n> \n> > The reason I bumped into this is that 15 and 16 use different versions \n> > of pg_bsd_indent, so you can't just keep one copy in, like, ~/bin/.\n> \n> Well, personally, I never bother to adjust patches to the indentation\n> rules of old versions, so using the latest pg_bsd_indent suffices.\n\nI guess we could try looking for pg_bsd_indent-$MAJOR_VERSION first,\nthen pg_bsd_indent.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 31 May 2023 14:32:35 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_bsd_indent need to be installed?"
},
{
"msg_contents": "On 2023-May-31, Bruce Momjian wrote:\n\n> I guess we could try looking for pg_bsd_indent-$MAJOR_VERSION first,\n> then pg_bsd_indent.\n\nDo you mean with $MAJOR_VERSION being Postgres' version? That means we\nneed to install a new symlink every year, even when pg_bsd_indent\ndoesn't change. I'd rather have it call pg_bsd_indent-$INDENT_VERSION\nfirst, and pg_bsd_indent if that doesn't exist. I already have it that\nway.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 20 Jun 2023 18:54:56 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_bsd_indent need to be installed?"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Do you mean with $MAJOR_VERSION being Postgres' version? That means we\n> need to install a new symlink every year, even when pg_bsd_indent\n> doesn't change. I'd rather have it call pg_bsd_indent-$INDENT_VERSION\n> first, and pg_bsd_indent if that doesn't exist. I already have it that\n> way.\n\nSounds reasonable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Jun 2023 13:09:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_bsd_indent need to be installed?"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 06:54:56PM +0200, Álvaro Herrera wrote:\n> On 2023-May-31, Bruce Momjian wrote:\n> \n> > I guess we could try looking for pg_bsd_indent-$MAJOR_VERSION first,\n> > then pg_bsd_indent.\n> \n> Do you mean with $MAJOR_VERSION being Postgres' version? That means we\n> need to install a new symlink every year, even when pg_bsd_indent\n> doesn't change. I'd rather have it call pg_bsd_indent-$INDENT_VERSION\n> first, and pg_bsd_indent if that doesn't exist. I already have it that\n> way.\n\nYes, your idea makes more sense.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 21 Jun 2023 19:27:48 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_bsd_indent need to be installed?"
}
] |
[
{
"msg_contents": "Working with PostgreSQL Logical Replication is just great! It helps a lot\ndoing real time replication for analytical purposes without using any other\n3d party service. Although all these years working as a product architect\nof reporting I have noted a few requirements which are always a challenge\nand may help enhance logical replication even better.\n\nTo the point:\nPostgreSQL15 Logical Replication is going to stop while there is a conflict\non the destination database. On the other hand, PGLogical can handle the\nconflicts with more options like error, apply_remote,\nkeep_local, last_update_wins, first_update_wins while streaming.\n\nI was thinking that it would be great to add those capabilities into\nLogical Replication during streaming and even better on snapshot if it is\npossible. This enhancement method is going to solve a lot of issues while\nthere are hybrid solutions which are updating databases with SQL Scripts\nand Logical Replication. At the same time will make Logical Replication\n\"more reliable\" as it will not stop replicating lines in live\nenvironments when the decision of conflict is already decided and\nconfigured.\n\nIn my case I am consolidating data from multiple erp system databases to a\ndestination database for reporting purposes. All erps, have the same table\nschema as the destination database, source database has the tenant_id\nidentifier in non primary keys but has replica identity index. Now there\nare scenarios where we may need to update manually the destination database\nusing scripts which are having the ONCONFLICT statement, but during that\ntime if a new record is inserted into the database and the batch statement\nfinishes earlier than replication, the replication will find a conflict.\n\nWorking with PostgreSQL Logical Replication is just great! It helps a lot doing real time replication for analytical purposes without using any other 3d party service. Although all these years working as a product architect of reporting I have noted a few requirements which are always a challenge and may help enhance logical replication even better.To the point:PostgreSQL15 Logical Replication is going to stop while there is a conflict on the destination database. On the other hand, PGLogical can handle the conflicts with more options like error, apply_remote, keep_local, last_update_wins, first_update_wins while streaming.I was thinking that it would be great to add those capabilities into Logical Replication during streaming and even better on snapshot if it is possible. This enhancement method is going to solve a lot of issues while there are hybrid solutions which are updating databases with SQL Scripts and Logical Replication. At the same time will make Logical Replication \"more reliable\" as it will not stop replicating lines in live environments when the decision of conflict is already decided and configured.In my case I am consolidating data from multiple erp system databases to a destination database for reporting purposes. All erps, have the same table schema as the destination database, source database has the tenant_id identifier in non primary keys but has replica identity index. Now there are scenarios where we may need to update manually the destination database using scripts which are having the ONCONFLICT statement, but during that time if a new record is inserted into the database and the batch statement finishes earlier than replication, the replication will find a conflict.",
"msg_date": "Thu, 25 May 2023 12:10:06 +0300",
"msg_from": "Stavros Koureas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Logical Replication Conflict Resolution"
}
] |
[
{
"msg_contents": "Until PG15, calling pgindent without arguments would process the whole \ntree. Now you get\n\nNo files to process at ./src/tools/pgindent/pgindent line 372.\n\nIs that intentional?\n\n\nAlso, pgperltidy accepts no arguments and always processes the whole \ntree. It would be nice if there were a way to process individual files \nor directories, like pgindent can.\n\nAttached is a patch for this.\n\n(It seems that it works ok to pass regular files (not directories) to \n\"find\", but I'm not sure if it's portable.)",
"msg_date": "Thu, 25 May 2023 11:10:48 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgindent vs. pgperltidy command-line arguments"
},
{
"msg_contents": "> On 25 May 2023, at 11:10, Peter Eisentraut <[email protected]> wrote:\n\n> Also, pgperltidy accepts no arguments and always processes the whole tree. It would be nice if there were a way to process individual files or directories, like pgindent can.\n\n+1, thanks! I've wanted that several times but never gotten around to doing\nanything about it.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 25 May 2023 11:18:28 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgindent vs. pgperltidy command-line arguments"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Until PG15, calling pgindent without arguments would process the whole \n> tree. Now you get\n> No files to process at ./src/tools/pgindent/pgindent line 372.\n> Is that intentional?\n\nIt was intentional, cf b16259b3c and the linked discussion.\n\n> Also, pgperltidy accepts no arguments and always processes the whole \n> tree. It would be nice if there were a way to process individual files \n> or directories, like pgindent can.\n\n+1, although I wonder if we shouldn't follow pgindent's new lead\nand require some argument(s).\n\n> Attached is a patch for this.\n> (It seems that it works ok to pass regular files (not directories) to \n> \"find\", but I'm not sure if it's portable.)\n\nThe POSIX spec for find(1) gives an example of applying find to\nwhat they evidently intend to be a plain file:\n\n\tif [ -n \"$(find file1 -prune -newer file2)\" ]; then\n\t printf %s\\\\n \"file1 is newer than file2\"\n\tfi\n\nSo while I don't see it written in so many words, I think you\ncan assume it's portable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 May 2023 09:20:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgindent vs. pgperltidy command-line arguments"
},
{
"msg_contents": "On 25.05.23 15:20, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> Until PG15, calling pgindent without arguments would process the whole\n>> tree. Now you get\n>> No files to process at ./src/tools/pgindent/pgindent line 372.\n>> Is that intentional?\n> \n> It was intentional, cf b16259b3c and the linked discussion.\n> \n>> Also, pgperltidy accepts no arguments and always processes the whole\n>> tree. It would be nice if there were a way to process individual files\n>> or directories, like pgindent can.\n> \n> +1, although I wonder if we shouldn't follow pgindent's new lead\n> and require some argument(s).\n\nThat makes sense to me. Here is a small update with this behavior \nchange and associated documentation update.",
"msg_date": "Wed, 14 Jun 2023 09:37:45 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgindent vs. pgperltidy command-line arguments"
},
{
"msg_contents": "On 2023-06-14 We 03:37, Peter Eisentraut wrote:\n> On 25.05.23 15:20, Tom Lane wrote:\n>> Peter Eisentraut <[email protected]> writes:\n>>> Until PG15, calling pgindent without arguments would process the whole\n>>> tree. Now you get\n>>> No files to process at ./src/tools/pgindent/pgindent line 372.\n>>> Is that intentional?\n>>\n>> It was intentional, cf b16259b3c and the linked discussion.\n>>\n>>> Also, pgperltidy accepts no arguments and always processes the whole\n>>> tree. It would be nice if there were a way to process individual files\n>>> or directories, like pgindent can.\n>>\n>> +1, although I wonder if we shouldn't follow pgindent's new lead\n>> and require some argument(s).\n>\n> That makes sense to me. Here is a small update with this behavior \n> change and associated documentation update.\n\n\nI'm intending to add some of the new pgindent features to pgperltidy. \nPreparatory to that here's a rewrite of pgperltidy in perl - no new \nfeatures yet but it does remove the hardcoded path, and requires you to \npass in one or more files / directories as arguments.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Tue, 20 Jun 2023 11:38:10 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgindent vs. pgperltidy command-line arguments"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n\n> I'm intending to add some of the new pgindent features to\n> pgperltidy. Preparatory to that here's a rewrite of pgperltidy in perl -\n> no new features yet but it does remove the hardcoded path, and requires\n> you to pass in one or more files / directories as arguments.\n\nGood idea, here's some comments.\n\n> #!/usr/bin/perl\n>\n> # Copyright (c) 2023, PostgreSQL Global Development Group\n>\n> # src/tools/pgindent/pgperltidy\n>\n> use strict;\n> use warnings;\n>\n> use File::Find;\n>\n> my $perltidy = $ENV{PERLTIDY} || 'perltidy';\n>\n> my @files;\n>\n> die \"No directories or files specified\" unless @ARGV;\n\nIt's not really useful to have the file name and line in errors like\nthis, adding a \"\\n\" to the end of the message suppresses that.\n\n> sub is_perl_exec\n> {\n> \tmy $name = shift;\n> \tmy $out = `file $name 2>/dev/null`;\n> \treturn $out =~ /:.*perl[0-9]*\\b/i;\n> }\n\n> my $wanted = sub {\n>\n> \tmy $name = $File::Find::name;\n> \tmy ($dev, $ino, $mode, $nlink, $uid, $gid);\n>\n> \t# check it's a plain file and either it has a perl extension (.p[lm])\n> \t# or it's executable and `file` thinks it's a perl script.\n>\n> \t(($dev, $ino, $mode, $nlink, $uid, $gid) = lstat($_))\n> \t && -f _\n> \t && (/\\.p[lm]$/ || ((($mode & 0100) == 0100) && is_perl_exec($_)))\n> \t && push(@files, $name);\n> };\n\nThe core File::stat and Fcntl modules can make this neater:\n\nuse File::stat;\nuse Fcntl ':mode';\n \nmy $wanted = sub {\n\tmy $st;\n\tpush @files, $File::Find::name\n\t\tif $st = lstat($_) && -f $st\n\t\t\t&& (/\\.p[lm]$/ || (($st->mode & S_IXUSR) && is_perl_exec($_)));\n};\n\n> File::Find::find({ wanted => $wanted }, @ARGV);\n>\n> my $list = join(\" \", @files);\n>\n> system \"$perltidy --profile=src/tools/pgindent/perltidyrc $list\";\n\nIt's better to use the list form of system, to avoid shell escaping\nissues. Also, since this is the last thing in the script we might as\nwell exec it instead:\n\nexec $perltidy, '--profile=src/tools/pgindent/perltidyrc', @files;\n\n- ilmari\n\n\n",
"msg_date": "Tue, 20 Jun 2023 17:08:33 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgindent vs. pgperltidy command-line arguments"
},
{
"msg_contents": "On 20.06.23 17:38, Andrew Dunstan wrote:\n>>> +1, although I wonder if we shouldn't follow pgindent's new lead\n>>> and require some argument(s).\n>>\n>> That makes sense to me. Here is a small update with this behavior \n>> change and associated documentation update.\n> \n> I'm intending to add some of the new pgindent features to pgperltidy. \n> Preparatory to that here's a rewrite of pgperltidy in perl - no new \n> features yet but it does remove the hardcoded path, and requires you to \n> pass in one or more files / directories as arguments.\n\nAre you planning to touch pgperlcritic and pgperlsyncheck as well? If \nnot, part of my patch would still be useful. Maybe I should commit my \nposted patch for PG16, to keep consistency with pgindent, and then your \nwork would presumably be considered for PG17.\n\n\n",
"msg_date": "Wed, 21 Jun 2023 11:09:02 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgindent vs. pgperltidy command-line arguments"
},
{
"msg_contents": "On 2023-06-21 We 05:09, Peter Eisentraut wrote:\n> On 20.06.23 17:38, Andrew Dunstan wrote:\n>>>> +1, although I wonder if we shouldn't follow pgindent's new lead\n>>>> and require some argument(s).\n>>>\n>>> That makes sense to me. Here is a small update with this behavior \n>>> change and associated documentation update.\n>>\n>> I'm intending to add some of the new pgindent features to pgperltidy. \n>> Preparatory to that here's a rewrite of pgperltidy in perl - no new \n>> features yet but it does remove the hardcoded path, and requires you \n>> to pass in one or more files / directories as arguments.\n>\n> Are you planning to touch pgperlcritic and pgperlsyncheck as well? \n\n\nYeah, it would make sense to.\n\n\n> If not, part of my patch would still be useful. Maybe I should commit \n> my posted patch for PG16, to keep consistency with pgindent, and then \n> your work would presumably be considered for PG17.\n\n\nThat sounds like a good plan.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-21 We 05:09, Peter\n Eisentraut wrote:\n\nOn\n 20.06.23 17:38, Andrew Dunstan wrote:\n \n\n\n+1, although I wonder if we shouldn't\n follow pgindent's new lead\n \n and require some argument(s).\n \n\n\n That makes sense to me. Here is a small update with this\n behavior change and associated documentation update.\n \n\n\n I'm intending to add some of the new pgindent features to\n pgperltidy. Preparatory to that here's a rewrite of pgperltidy\n in perl - no new features yet but it does remove the hardcoded\n path, and requires you to pass in one or more files /\n directories as arguments.\n \n\n\n Are you planning to touch pgperlcritic and pgperlsyncheck as\n well? \n\n\nYeah, it would make sense to.\n\n\n\nIf\n not, part of my patch would still be useful. Maybe I should\n commit my posted patch for PG16, to keep consistency with\n pgindent, and then your work would presumably be considered for\n PG17.\n \n\n\n\nThat sounds like a good plan.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 21 Jun 2023 07:35:15 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgindent vs. pgperltidy command-line arguments"
},
{
"msg_contents": "On 21.06.23 13:35, Andrew Dunstan wrote:\n>> If not, part of my patch would still be useful. Maybe I should commit \n>> my posted patch for PG16, to keep consistency with pgindent, and then \n>> your work would presumably be considered for PG17.\n> \n> That sounds like a good plan.\n\ndone\n\n\n\n",
"msg_date": "Wed, 21 Jun 2023 16:36:06 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgindent vs. pgperltidy command-line arguments"
},
{
"msg_contents": "On 2023-06-21 We 07:35, Andrew Dunstan wrote:\n>\n>\n> On 2023-06-21 We 05:09, Peter Eisentraut wrote:\n>> On 20.06.23 17:38, Andrew Dunstan wrote:\n>>>>> +1, although I wonder if we shouldn't follow pgindent's new lead\n>>>>> and require some argument(s).\n>>>>\n>>>> That makes sense to me. Here is a small update with this behavior \n>>>> change and associated documentation update.\n>>>\n>>> I'm intending to add some of the new pgindent features to \n>>> pgperltidy. Preparatory to that here's a rewrite of pgperltidy in \n>>> perl - no new features yet but it does remove the hardcoded path, \n>>> and requires you to pass in one or more files / directories as \n>>> arguments.\n>>\n>> Are you planning to touch pgperlcritic and pgperlsyncheck as well? \n>\n>\n> Yeah, it would make sense to.\n>\n\n\nHere's a patch that turns all these into perl scripts.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Thu, 6 Jul 2023 11:47:33 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgindent vs. pgperltidy command-line arguments"
}
] |
[
{
"msg_contents": "hi.\nhttps://www.postgresql.org/docs/current/pgwalinspect.html\n\nlast query should be:\nSELECT * FROM pg_get_wal_stats('0/1E847D00', '0/1E84F500')\nWHERE count > 0 AND\n \"resource_manager/record_type\" = 'Transaction'\nLIMIT 1;\n\nhi.https://www.postgresql.org/docs/current/pgwalinspect.htmllast query should be:SELECT * FROM pg_get_wal_stats('0/1E847D00', '0/1E84F500')WHERE count > 0 AND \"resource_manager/record_type\" = 'Transaction'LIMIT 1;",
"msg_date": "Thu, 25 May 2023 17:13:06 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_walinspect last query typo"
},
{
"msg_contents": "> On 25 May 2023, at 11:13, jian he <[email protected]> wrote:\n> \n> hi.\n> https://www.postgresql.org/docs/current/pgwalinspect.html\n> \n> last query should be:\n> SELECT * FROM pg_get_wal_stats('0/1E847D00', '0/1E84F500')\n> WHERE count > 0 AND\n> \"resource_manager/record_type\" = 'Transaction'\n> LIMIT 1;\n\nNice catch, the LIMIT 1 has indeed landed in the wrong place. Will fix.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 25 May 2023 11:16:38 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_walinspect last query typo"
}
] |
[
{
"msg_contents": "The PostgreSQL Global Development Group announces that the first beta release of\nPostgreSQL 16 is now [available for download](https://www.postgresql.org/download/).\nThis release contains previews of all features that will be available when\nPostgreSQL 16 is made generally available, though some details of the release\ncan change during the beta period.\n\nYou can find information about all of the features and changes found in\nPostgreSQL 16 in the [release notes](https://www.postgresql.org/docs/16/release-16.html):\n\n [https://www.postgresql.org/docs/16/release-16.html](https://www.postgresql.org/docs/16/release-16.html)\n\nIn the spirit of the open source PostgreSQL community, we strongly encourage you\nto test the new features of PostgreSQL 16 on your systems to help us eliminate\nbugs or other issues that may exist. While we do not advise you to run\nPostgreSQL 16 Beta 1 in production environments, we encourage you to find ways\nto run your typical application workloads against this beta release.\n\nYour testing and feedback will help the community ensure that the PostgreSQL 16\nrelease upholds our standards of delivering a stable, reliable release of the\nworld's most advanced open source relational database. Please read more about\nour [beta testing process](https://www.postgresql.org/developer/beta/) and how\nyou can contribute:\n\n [https://www.postgresql.org/developer/beta/](https://www.postgresql.org/developer/beta/)\n\nPostgreSQL 16 Feature Highlights\n--------------------------------\n\n### Performance\n\nPostgreSQL 16 includes performance improvements in query execution. This release\nadds more query parallelism, including allowing `FULL` and `RIGHT` joins to\nexecute in parallel, and parallel execution of the `string_agg` and `array_agg`\naggregate functions. Additionally, PostgreSQL 16 can use incremental sorts in\n`SELECT DISTINCT` queries. There are also several optimizations for\n[window queries](https://www.postgresql.org/docs/16/sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS),\nimprovements in lookups for `RANGE` and `LIST` partitions, and support for\n\"anti-joins\" in `RIGHT` and `OUTER` queries.\n\nPostgreSQL 16 can also improve the performance of concurrent bulk loading of\ndata using [`COPY`](https://www.postgresql.org/docs/16/sql-copy.html) up to\n300%.\n\nThis release also introduces support for CPU acceleration using SIMD for both\nx86 and ARM architectures, including optimizations for processing ASCII and JSON\nstrings, and array and subtransaction searches. Additionally, PostgreSQL 16\nintroduces [load balancing](https://www.postgresql.org/docs/16/libpq-connect.html#LIBPQ-CONNECT-LOAD-BALANCE-HOSTS)\nto libpq, the client library for PostgreSQL.\n\n### Logical Replication Enhancements\n\nLogical replication lets PostgreSQL users stream data in real-time to other\nPostgreSQL or other external systems that implement the logical protocol. Until\nPostgreSQL 16, users could only create logical replication publishers on primary\ninstances. PostgreSQL 16 adds the ability to perform logical decoding on a\nstandby instance, giving users more options to distribute their workload, for\nexample, use a standby that's less busy than a primary to logically replicate\nchanges.\n\nPostgreSQL 16 also includes several performance improvements to logical\nreplication. 
This includes allowing the subscriber to apply large transactions\nin parallel, use indexes other than the `PRIMARY KEY` to perform lookups during\n`UPDATE` or `DELETE` operations, and allow for tables to be copied using binary\nformat during initialization.\n\n### Developer Experience\n\nPostgreSQL 16 continues to implement the [SQL/JSON](https://www.postgresql.org/docs/16/functions-json.html)\nstandard for manipulating [JSON](https://www.postgresql.org/docs/16/datatype-json.html)\ndata, including support for SQL/JSON constructors (e.g. `JSON_ARRAY()`,\n`JSON_ARRAYAGG()` et al), and identity functions (`IS JSON`). This release also\nadds the SQL standard [`ANY_VALUE`](https://www.postgresql.org/docs/16/functions-aggregate.html#id-1.5.8.27.5.2.4.1.1.1.1)\naggregate function, which returns any arbitrary value from the aggregate set.\nFor convenience, PostgreSQL 16 now lets you specify non-decimal integer\nliterals, such as `0xff`, `0o777`, and `0b101010`, and use underscores as\nthousands separators, such as `5_432`.\n\nThis release adds support for the extended query protocol to the [`psql`](https://www.postgresql.org/docs/16/app-psql.html)\nclient. Users can execute a query, e.g. `SELECT $1 + $2`, and use the\n[`\\bind`](https://www.postgresql.org/docs/16/app-psql.html#APP-PSQL-META-COMMAND-BIND)\ncommand to substitute the variables.\n\n### Security Features\n\nPostgreSQL 16 continues to give users the ability to grant privileged access to\nfeatures without requiring superuser with new\n[predefined roles](https://www.postgresql.org/docs/16/predefined-roles.html).\nThese include `pg_maintain`, which enables execution of operations such as\n`VACUUM`, `ANALYZE`, `REINDEX`, and others, and `pg_create_subscription`, which\nallows users to create a logical replication subscription. Additionally,\nstarting with this release, logical replication subscribers execute transactions\non a table as the table owner, not the superuser.\n\nPostgreSQL 16 now lets you use regular expressions in the [`pg_hba.conf`](https://www.postgresql.org/docs/16/auth-pg-hba-conf.html)\nand [`pg_ident.conf`](https://www.postgresql.org/docs/16/auth-username-maps.html)\nfiles for matching user and databases names. Additionally, PostgreSQL 16 adds\nthe ability to include other files in both `pg_hba.conf` and `pg_ident.conf`.\nPostgreSQL 16 also adds support for the SQL standard [`SYSTEM_USER`](https://www.postgresql.org/docs/16/functions-info.html#id-1.5.8.32.3.4.2.2.24.1.1.1)\nkeyword, which returns the username and authentication method used to establish\na session. \n\nPostgreSQL 16 also adds support for Kerberos credential delegation, which allows\nextensions such as `postgres_fdw` and `dblink` to use the authenticated\ncredentials to connect to other services. This release also adds several new\nsecurity-oriented connection parameters for clients. This includes [`require_auth`](https://www.postgresql.org/docs/16/libpq-connect.html#LIBPQ-CONNECT-REQUIRE-AUTH),\nwhere a client can specify which authentication methods it is willing to accept\nfrom the server. You can now set `sslrootcert` to `system` to instruct\nPostgreSQL to use the trusted certificate authority (CA) store provided by the\nclient's operating system.\n\n### Monitoring and Management\n\nPostgreSQL 16 adds several new monitoring features, including the new\n[`pg_stat_io`](https://www.postgresql.org/docs/16/monitoring-stats.html#MONITORING-PG-STAT-IO-VIEW)\nview that provides information on I/O statistics. 
This release also provides a\ntimestamp for the last time that a [table or index was scanned](https://www.postgresql.org/docs/16/monitoring-stats.html#MONITORING-PG-STAT-ALL-TABLES-VIEW).\nThere are also improvements to the normalization algorithm used for\n`pg_stat_activity`.\n\nThis release includes improvements to the page freezing strategy, which helps\nthe performance of vacuuming and other maintenance operations. PostgreSQL 16\nalso improves general support for text collations, which provide rules for how\ntext is sorted. PostgreSQL 16 sets ICU to be the default collation provider, and\nalso adds support for the predefined `unicode` and `ucs_basic` collations.\n\nPostgreSQL 16 adds additional compression options to `pg_dump`, including\nsupport for both `lz4` and `zstd` compression.\n\n### Other Notable Changes\n\nPostgreSQL 16 removes the `promote_trigger_file` option to enable the promotion\nof a standby. Users should use the `pg_ctl promote` command or `pg_promote()`\nfunction to promote a standby.\n\nPostgreSQL 16 introduced the Meson build system, which will ultimately replace\nAutoconf. This release also adds foundational support for developmental features\nthat will be improved upon in future releases. This includes a developer flag to\nenable DirectIO and the ability to use logical replication to bidirectionally\nreplicate between two tables when `origin=none` is specified in the subscriber.\n\nFor Windows installations, PostgreSQL 16 now supports a minimum version of\nWindows 10.\n\nAdditional Features\n-------------------\n\nMany other new features and improvements have been added to PostgreSQL 16. Many\nof these may also be helpful for your use cases. Please see the\n[release notes](https://www.postgresql.org/docs/16/release-16.html) for a\ncomplete list of new and changed features:\n\n [https://www.postgresql.org/docs/16/release-16.html](https://www.postgresql.org/docs/16/release-16.html)\n\nTesting for Bugs & Compatibility\n--------------------------------\n\nThe stability of each PostgreSQL release greatly depends on you, the community,\nto test the upcoming version with your workloads and testing tools in order to\nfind bugs and regressions before the general availability of PostgreSQL 16. As\nthis is a Beta, minor changes to database behaviors, feature details, and APIs\nare still possible. Your feedback and testing will help determine the final\ntweaks on the new features, so please test in the near future. The quality of\nuser testing helps determine when we can make a final release.\n\nA list of [open issues](https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items)\nis publicly available in the PostgreSQL wiki. You can\n[report bugs](https://www.postgresql.org/account/submitbug/) using this form on\nthe PostgreSQL website:\n\n [https://www.postgresql.org/account/submitbug/](https://www.postgresql.org/account/submitbug/)\n\nBeta Schedule\n-------------\n\nThis is the first beta release of version 16. The PostgreSQL Project will\nrelease additional betas as required for testing, followed by one or more\nrelease candidates, until the final release in late 2023. 
For further\ninformation please see the [Beta Testing](https://www.postgresql.org/developer/beta/)\npage.\n\nLinks\n-----\n\n* [Download](https://www.postgresql.org/download/)\n* [Beta Testing Information](https://www.postgresql.org/developer/beta/)\n* [PostgreSQL 16 Beta Release Notes](https://www.postgresql.org/docs/16/release-16.html)\n* [PostgreSQL 16 Open Issues](https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items)\n* [Feature Matrix](https://www.postgresql.org/about/featurematrix/#configuration-management)\n* [Submit a Bug](https://www.postgresql.org/account/submitbug/)",
"msg_date": "Thu, 25 May 2023 13:08:16 +0000",
"msg_from": "PostgreSQL Global Development Group <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 16 Beta 1 Released!"
},
{
"msg_contents": "On 25/05/2023 15:08, PostgreSQL Global Development Group wrote:\n> \n> PostgreSQL 16 Beta 1 Released!\n> \n\njust me ?\n\nmake -C backend/snowball install\nmake[2]: Entering directory \n'/pub/devel/postgresql/postgresql-16.0-0.1.x86_64/build/src/backend/snowball'\n/usr/bin/mkdir -p \n'/pub/devel/postgresql/postgresql-16.0-0.1.x86_64/inst/usr/lib/postgresql'\n/usr/bin/mkdir -p \n'/pub/devel/postgresql/postgresql-16.0-0.1.x86_64/inst/usr/share/postgresql' \n'/pub/devel/postgresql/postgresql-16.0-0.1.x86_64/inst/usr/share/postgresql/tsearch_data'\n/usr/bin/install -c -m 755 dict_snowball.dll \n'/pub/devel/postgresql/postgresql-16.0-0.1.x86_64/inst/usr/lib/postgresql/dict_snowball.dll'\n/usr/bin/install -c -m 644 snowball_create.sql \n'/pub/devel/postgresql/postgresql-16.0-0.1.x86_64/inst/usr/share/postgresql'\n/usr/bin/install: cannot stat 'snowball_create.sql': No such file or \ndirectory\nmake[2]: *** [Makefile:110: install] Error 1\n\nfor what I can see the file is in the source tree, not in the build tree\n\n$ tar -tf postgresql-16beta1.tar.bz2 | grep snowball_create.sql\npostgresql-16beta1/src/backend/snowball/snowball_create.sql\n\nRegards\nMarco\n\n\n\n",
"msg_date": "Sat, 3 Jun 2023 07:35:51 +0200",
"msg_from": "Marco Atzeri <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 16 Beta 1 Released!"
},
{
"msg_contents": "Marco Atzeri <[email protected]> writes:\n> just me ?\n\nNo.\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=5df5bea29070b420452bdb257c3dec1cf0419fca\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 03 Jun 2023 08:27:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 16 Beta 1 Released!"
}
] |
[
{
"msg_contents": "Hi,\n\nThe recovery tap test has 4 implementations of find_in_log sub routine\nfor various uses, I felt we can generalize these and have a single\nfunction for the same. The attached patch is an attempt to have a\ngeneralized sub routine find_in_log which can be used by all of them.\nThoughts?\n\nRegards,\nVIgnesh",
"msg_date": "Thu, 25 May 2023 19:53:51 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Implement generalized sub routine find_in_log for tap test"
},
{
"msg_contents": "vignesh C <[email protected]> writes:\n\n> Hi,\n>\n> The recovery tap test has 4 implementations of find_in_log sub routine\n> for various uses, I felt we can generalize these and have a single\n> function for the same. The attached patch is an attempt to have a\n> generalized sub routine find_in_log which can be used by all of them.\n> Thoughts?\n\n+1 on factoring out this common code. Just a few comments on the implementation.\n\n\n> diff --git a/src/test/perl/PostgreSQL/Test/Utils.pm b/src/test/perl/PostgreSQL/Test/Utils.pm\n> index a27fac83d2..5c9b2f6c03 100644\n> --- a/src/test/perl/PostgreSQL/Test/Utils.pm\n> +++ b/src/test/perl/PostgreSQL/Test/Utils.pm\n> @@ -67,6 +67,7 @@ our @EXPORT = qw(\n> slurp_file\n> append_to_file\n> string_replace_file\n> + find_in_log\n> check_mode_recursive\n> chmod_recursive\n> check_pg_config\n> @@ -579,6 +580,28 @@ sub string_replace_file\n> \n> =pod\n>\n> +\n> +=item find_in_log(node, pattern, offset)\n> +\n> +Find pattern in logfile of node after offset byte.\n> +\n> +=cut\n> +\n> +sub find_in_log\n> +{\n> +\tmy ($node, $pattern, $offset) = @_;\n> +\n> +\t$offset = 0 unless defined $offset;\n> +\tmy $log = PostgreSQL::Test::Utils::slurp_file($node->logfile);\n\nSince this function is in the same package, there's no need to qualify\nit with the full name. I know the callers you copied it from did, but\nthey wouldn't have had to either, since it's exported by default (in the\n@EXPORT array above), unless the use statement has an explicit argument\nlist that excludes it.\n\n> +\treturn 0 if (length($log) <= 0 || length($log) <= $offset);\n> +\n> +\t$log = substr($log, $offset);\n\nAlso, the existing callers don't seem to have got the memo that\nslurp_file() takes an optinal offset parameter, which will cause it to\nseek to that postion before slurping the file, which is more efficient\nthan reading the whole file in and substr-ing it. There's not much\npoint in the length checks either, since regex-matching against an empty\nstring is very cheap (and if the provide pattern can match the empty\nstring the whole function call is rather pointless).\n\n> +\treturn $log =~ m/$pattern/;\n> +}\n\nAll in all, it could be simplified to:\n\n sub find_in_log {\n my ($node, $pattern, $offset) = @_;\n\n return slurp_file($node->logfile, $offset) =~ $pattern;\n }\n\nHowever, none of the other functions in ::Utils know anything about node\nobjects, which makes me think it should be a method on the node itself\n(i.e. in PostgreSQL::Test::Cluster) instead. Also, I think log_contains\nwould be a better name, since it just returns a boolean. The name\nfind_in_log makes me think it would return the log lines matching the\npattern, or the position of the match in the file.\n\nIn that case, the slurp_file() call would have to be fully qualified,\nsince ::Cluster uses an empty import list to avoid polluting the method\nnamespace with imported functions.\n\n- ilmari\n\n\n",
"msg_date": "Thu, 25 May 2023 18:34:20 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Implement generalized sub routine find_in_log for tap test"
},
{
"msg_contents": "On Thu, May 25, 2023 at 06:34:20PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> However, none of the other functions in ::Utils know anything about node\n> objects, which makes me think it should be a method on the node itself\n> (i.e. in PostgreSQL::Test::Cluster) instead. Also, I think log_contains\n> would be a better name, since it just returns a boolean. The name\n> find_in_log makes me think it would return the log lines matching the\n> pattern, or the position of the match in the file.\n> \n> In that case, the slurp_file() call would have to be fully qualified,\n> since ::Cluster uses an empty import list to avoid polluting the method\n> namespace with imported functions.\n\nHmm. connect_ok() and connect_fails() in Cluster.pm have a similar\nlog comparison logic, feeding from the offset of a log file. Couldn't\nyou use the same code across the board for everything? Note that this\nstuff is parameterized so as it is possible to check if patterns match\nor do not match, for multiple patterns. It seems to me that we could\nuse the new log finding routine there as well, so how about extending\nit a bit more? You would need, at least:\n- One parameter for log entries matching.\n- One parameter for log entries not matching.\n--\nMichael",
"msg_date": "Fri, 26 May 2023 07:39:17 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Implement generalized sub routine find_in_log for tap test"
},
{
"msg_contents": "On Thu, 25 May 2023 at 23:04, Dagfinn Ilmari Mannsåker\n<[email protected]> wrote:\n>\n> vignesh C <[email protected]> writes:\n>\n> > Hi,\n> >\n> > The recovery tap test has 4 implementations of find_in_log sub routine\n> > for various uses, I felt we can generalize these and have a single\n> > function for the same. The attached patch is an attempt to have a\n> > generalized sub routine find_in_log which can be used by all of them.\n> > Thoughts?\n>\n> +1 on factoring out this common code. Just a few comments on the implementation.\n>\n>\n> > diff --git a/src/test/perl/PostgreSQL/Test/Utils.pm b/src/test/perl/PostgreSQL/Test/Utils.pm\n> > index a27fac83d2..5c9b2f6c03 100644\n> > --- a/src/test/perl/PostgreSQL/Test/Utils.pm\n> > +++ b/src/test/perl/PostgreSQL/Test/Utils.pm\n> > @@ -67,6 +67,7 @@ our @EXPORT = qw(\n> > slurp_file\n> > append_to_file\n> > string_replace_file\n> > + find_in_log\n> > check_mode_recursive\n> > chmod_recursive\n> > check_pg_config\n> > @@ -579,6 +580,28 @@ sub string_replace_file\n> >\n> > =pod\n> >\n> > +\n> > +=item find_in_log(node, pattern, offset)\n> > +\n> > +Find pattern in logfile of node after offset byte.\n> > +\n> > +=cut\n> > +\n> > +sub find_in_log\n> > +{\n> > + my ($node, $pattern, $offset) = @_;\n> > +\n> > + $offset = 0 unless defined $offset;\n> > + my $log = PostgreSQL::Test::Utils::slurp_file($node->logfile);\n>\n> Since this function is in the same package, there's no need to qualify\n> it with the full name. I know the callers you copied it from did, but\n> they wouldn't have had to either, since it's exported by default (in the\n> @EXPORT array above), unless the use statement has an explicit argument\n> list that excludes it.\n\nI have moved this function to Cluster.pm file now, since it is moved,\nI had to qualify the name with the full name.\n\n> > + return 0 if (length($log) <= 0 || length($log) <= $offset);\n> > +\n> > + $log = substr($log, $offset);\n>\n> Also, the existing callers don't seem to have got the memo that\n> slurp_file() takes an optinal offset parameter, which will cause it to\n> seek to that postion before slurping the file, which is more efficient\n> than reading the whole file in and substr-ing it. There's not much\n> point in the length checks either, since regex-matching against an empty\n> string is very cheap (and if the provide pattern can match the empty\n> string the whole function call is rather pointless).\n>\n> > + return $log =~ m/$pattern/;\n> > +}\n>\n> All in all, it could be simplified to:\n>\n> sub find_in_log {\n> my ($node, $pattern, $offset) = @_;\n>\n> return slurp_file($node->logfile, $offset) =~ $pattern;\n> }\n\nModified in similar lines\n\n> However, none of the other functions in ::Utils know anything about node\n> objects, which makes me think it should be a method on the node itself\n> (i.e. in PostgreSQL::Test::Cluster) instead. Also, I think log_contains\n> would be a better name, since it just returns a boolean. The name\n> find_in_log makes me think it would return the log lines matching the\n> pattern, or the position of the match in the file.\n\nModified\n\n> In that case, the slurp_file() call would have to be fully qualified,\n> since ::Cluster uses an empty import list to avoid polluting the method\n> namespace with imported functions.\n\nModified.\n\nThanks for the comments, the attached v2 version patch has the changes\nfor the same.\n\nRegards,\nVignesh",
"msg_date": "Fri, 26 May 2023 22:30:12 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Implement generalized sub routine find_in_log for tap test"
},
{
"msg_contents": "On Fri, 26 May 2023 at 04:09, Michael Paquier <[email protected]> wrote:\n>\n> On Thu, May 25, 2023 at 06:34:20PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> > However, none of the other functions in ::Utils know anything about node\n> > objects, which makes me think it should be a method on the node itself\n> > (i.e. in PostgreSQL::Test::Cluster) instead. Also, I think log_contains\n> > would be a better name, since it just returns a boolean. The name\n> > find_in_log makes me think it would return the log lines matching the\n> > pattern, or the position of the match in the file.\n> >\n> > In that case, the slurp_file() call would have to be fully qualified,\n> > since ::Cluster uses an empty import list to avoid polluting the method\n> > namespace with imported functions.\n>\n> Hmm. connect_ok() and connect_fails() in Cluster.pm have a similar\n> log comparison logic, feeding from the offset of a log file. Couldn't\n> you use the same code across the board for everything? Note that this\n> stuff is parameterized so as it is possible to check if patterns match\n> or do not match, for multiple patterns. It seems to me that we could\n> use the new log finding routine there as well, so how about extending\n> it a bit more? You would need, at least:\n> - One parameter for log entries matching.\n> - One parameter for log entries not matching.\n\nI felt adding these to log_contains was making the function slightly\ncomplex with multiple checks. I was not able to make it simple with\nthe approach I tried. How about having a common function\ncheck_connect_log_contents which has the common log contents check for\nconnect_ok and connect_fails function like the v2-0002 patch attached.\n\nRegards,\nVignesh",
"msg_date": "Sat, 27 May 2023 06:05:24 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Implement generalized sub routine find_in_log for tap test"
},
{
"msg_contents": "On 2023-05-26 Fr 20:35, vignesh C wrote:\n> On Fri, 26 May 2023 at 04:09, Michael Paquier<[email protected]> wrote:\n>> On Thu, May 25, 2023 at 06:34:20PM +0100, Dagfinn Ilmari Mannsåker wrote:\n>>> However, none of the other functions in ::Utils know anything about node\n>>> objects, which makes me think it should be a method on the node itself\n>>> (i.e. in PostgreSQL::Test::Cluster) instead. Also, I think log_contains\n>>> would be a better name, since it just returns a boolean. The name\n>>> find_in_log makes me think it would return the log lines matching the\n>>> pattern, or the position of the match in the file.\n>>>\n>>> In that case, the slurp_file() call would have to be fully qualified,\n>>> since ::Cluster uses an empty import list to avoid polluting the method\n>>> namespace with imported functions.\n>> Hmm. connect_ok() and connect_fails() in Cluster.pm have a similar\n>> log comparison logic, feeding from the offset of a log file. Couldn't\n>> you use the same code across the board for everything? Note that this\n>> stuff is parameterized so as it is possible to check if patterns match\n>> or do not match, for multiple patterns. It seems to me that we could\n>> use the new log finding routine there as well, so how about extending\n>> it a bit more? You would need, at least:\n>> - One parameter for log entries matching.\n>> - One parameter for log entries not matching.\n> I felt adding these to log_contains was making the function slightly\n> complex with multiple checks. I was not able to make it simple with\n> the approach I tried. How about having a common function\n> check_connect_log_contents which has the common log contents check for\n> connect_ok and connect_fails function like the v2-0002 patch attached.\n\n\n+ $offset = 0 unless defined $offset;\n\n\nThis is unnecessary, as slurp_file() handles it appropriately, and in \nfact doing this is slightly inefficient, as it will cause slurp_file to \ndo a redundant seek.\n\nFYI there's a simpler way to say it if we wanted to:\n\n $offset //= 0;\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-05-26 Fr 20:35, vignesh C\n wrote:\n\n\nOn Fri, 26 May 2023 at 04:09, Michael Paquier <[email protected]> wrote:\n\n\n\nOn Thu, May 25, 2023 at 06:34:20PM +0100, Dagfinn Ilmari Mannsåker wrote:\n\n\nHowever, none of the other functions in ::Utils know anything about node\nobjects, which makes me think it should be a method on the node itself\n(i.e. in PostgreSQL::Test::Cluster) instead. Also, I think log_contains\nwould be a better name, since it just returns a boolean. The name\nfind_in_log makes me think it would return the log lines matching the\npattern, or the position of the match in the file.\n\nIn that case, the slurp_file() call would have to be fully qualified,\nsince ::Cluster uses an empty import list to avoid polluting the method\nnamespace with imported functions.\n\n\n\nHmm. connect_ok() and connect_fails() in Cluster.pm have a similar\nlog comparison logic, feeding from the offset of a log file. Couldn't\nyou use the same code across the board for everything? Note that this\nstuff is parameterized so as it is possible to check if patterns match\nor do not match, for multiple patterns. It seems to me that we could\nuse the new log finding routine there as well, so how about extending\nit a bit more? 
You would need, at least:\n- One parameter for log entries matching.\n- One parameter for log entries not matching.\n\n\n\nI felt adding these to log_contains was making the function slightly\ncomplex with multiple checks. I was not able to make it simple with\nthe approach I tried. How about having a common function\ncheck_connect_log_contents which has the common log contents check for\nconnect_ok and connect_fails function like the v2-0002 patch attached.\n\n\n\n+ $offset = 0 unless defined $offset;\n\n\nThis is unnecessary, as slurp_file() handles it appropriately,\n and in fact doing this is slightly inefficient, as it will cause\n slurp_file to do a redundant seek.\nFYI there's a simpler way to say it if we wanted to:\n $offset //= 0;\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 27 May 2023 08:01:58 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Implement generalized sub routine find_in_log for tap test"
},
{
"msg_contents": "On Sat, 27 May 2023 at 17:32, Andrew Dunstan <[email protected]> wrote:\n>\n>\n> On 2023-05-26 Fr 20:35, vignesh C wrote:\n>\n> On Fri, 26 May 2023 at 04:09, Michael Paquier <[email protected]> wrote:\n>\n> On Thu, May 25, 2023 at 06:34:20PM +0100, Dagfinn Ilmari Mannsåker wrote:\n>\n> However, none of the other functions in ::Utils know anything about node\n> objects, which makes me think it should be a method on the node itself\n> (i.e. in PostgreSQL::Test::Cluster) instead. Also, I think log_contains\n> would be a better name, since it just returns a boolean. The name\n> find_in_log makes me think it would return the log lines matching the\n> pattern, or the position of the match in the file.\n>\n> In that case, the slurp_file() call would have to be fully qualified,\n> since ::Cluster uses an empty import list to avoid polluting the method\n> namespace with imported functions.\n>\n> Hmm. connect_ok() and connect_fails() in Cluster.pm have a similar\n> log comparison logic, feeding from the offset of a log file. Couldn't\n> you use the same code across the board for everything? Note that this\n> stuff is parameterized so as it is possible to check if patterns match\n> or do not match, for multiple patterns. It seems to me that we could\n> use the new log finding routine there as well, so how about extending\n> it a bit more? You would need, at least:\n> - One parameter for log entries matching.\n> - One parameter for log entries not matching.\n>\n> I felt adding these to log_contains was making the function slightly\n> complex with multiple checks. I was not able to make it simple with\n> the approach I tried. How about having a common function\n> check_connect_log_contents which has the common log contents check for\n> connect_ok and connect_fails function like the v2-0002 patch attached.\n>\n>\n> + $offset = 0 unless defined $offset;\n>\n>\n> This is unnecessary, as slurp_file() handles it appropriately, and in fact doing this is slightly inefficient, as it will cause slurp_file to do a redundant seek.\n>\n> FYI there's a simpler way to say it if we wanted to:\n>\n> $offset //= 0;\n\nThanks for the comment, the attached v3 version patch has the changes\nfor the same.\n\nRegards,\nVignesh",
"msg_date": "Mon, 29 May 2023 07:49:52 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Implement generalized sub routine find_in_log for tap test"
},
{
"msg_contents": "On Mon, May 29, 2023 at 07:49:52AM +0530, vignesh C wrote:\n> Thanks for the comment, the attached v3 version patch has the changes\n> for the same.\n\n-if (find_in_log(\n-\t\t$node, $log_offset,\n-\t\tqr/peer authentication is not supported on this platform/))\n+if ($node->log_contains(\n+\t\tqr/peer authentication is not supported on this platform/),\n+\t\t$log_offset,)\n\nThis looks like a typo to me, the log offset is eaten.\n\nExcept of that, I am on board with log_contains().\n\nThere are two things that bugged me with the refactoring related to\nconnect_ok and connect_fails:\n- check_connect_log_contents() is a name too complicated. While\nlooking at that I have settled to a simpler log_check(), as we could\nperfectly use that for something else than connections as long as we\nwant to check multiple patterns at once.\n- The refactoring of the documentation for the routines of Cluster.pm\nbecame incorrect. For example, the patch does not list anymore\nlog_like and log_unlike for connect_ok.\n\nWith all that in mind I have hacked a few adjustments in a 0003,\nthough I agree with the separation between 0001 and 0002.\n\n033_replay_tsp_drops and 019_replslot_limit are not new to v16, but\n003_peer.pl and 035_standby_logical_decoding.pl, making the number of\nplaces where find_in_log() exists twice as much. So I would be\ntempted to refactor these tests in v16. Perhaps anybody from the RMT\ncould comment? We've usually been quite flexible with the tests even\nin beta.\n\nThoughts?\n--\nMichael",
"msg_date": "Sat, 3 Jun 2023 18:21:27 -0400",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Implement generalized sub routine find_in_log for tap test"
},
{
"msg_contents": "On Sun, 4 Jun 2023 at 03:51, Michael Paquier <[email protected]> wrote:\n>\n> On Mon, May 29, 2023 at 07:49:52AM +0530, vignesh C wrote:\n> > Thanks for the comment, the attached v3 version patch has the changes\n> > for the same.\n>\n> -if (find_in_log(\n> - $node, $log_offset,\n> - qr/peer authentication is not supported on this platform/))\n> +if ($node->log_contains(\n> + qr/peer authentication is not supported on this platform/),\n> + $log_offset,)\n>\n> This looks like a typo to me, the log offset is eaten.\n>\n> Except of that, I am on board with log_contains().\n\nThanks for fixing this.\n\n> There are two things that bugged me with the refactoring related to\n> connect_ok and connect_fails:\n> - check_connect_log_contents() is a name too complicated. While\n> looking at that I have settled to a simpler log_check(), as we could\n> perfectly use that for something else than connections as long as we\n> want to check multiple patterns at once.\n> - The refactoring of the documentation for the routines of Cluster.pm\n> became incorrect. For example, the patch does not list anymore\n> log_like and log_unlike for connect_ok.\n\nThis new name suggested by you looks simpler, your documentation of\nhaving it in connect_ok and connect_fails and referring it to\nlog_check makes it more clearer.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 5 Jun 2023 21:39:22 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Implement generalized sub routine find_in_log for tap test"
},
{
"msg_contents": "On Sun, Jun 4, 2023 at 3:51 AM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, May 29, 2023 at 07:49:52AM +0530, vignesh C wrote:\n> > Thanks for the comment, the attached v3 version patch has the changes\n> > for the same.\n>\n> -if (find_in_log(\n> - $node, $log_offset,\n> - qr/peer authentication is not supported on this platform/))\n> +if ($node->log_contains(\n> + qr/peer authentication is not supported on this platform/),\n> + $log_offset,)\n>\n> This looks like a typo to me, the log offset is eaten.\n>\n> Except of that, I am on board with log_contains().\n>\n> There are two things that bugged me with the refactoring related to\n> connect_ok and connect_fails:\n> - check_connect_log_contents() is a name too complicated. While\n> looking at that I have settled to a simpler log_check(), as we could\n> perfectly use that for something else than connections as long as we\n> want to check multiple patterns at once.\n> - The refactoring of the documentation for the routines of Cluster.pm\n> became incorrect. For example, the patch does not list anymore\n> log_like and log_unlike for connect_ok.\n>\n> With all that in mind I have hacked a few adjustments in a 0003,\n> though I agree with the separation between 0001 and 0002.\n>\n> 033_replay_tsp_drops and 019_replslot_limit are not new to v16, but\n> 003_peer.pl and 035_standby_logical_decoding.pl, making the number of\n> places where find_in_log() exists twice as much. So I would be\n> tempted to refactor these tests in v16. Perhaps anybody from the RMT\n> could comment? We've usually been quite flexible with the tests even\n> in beta.\n>\n\nPersonally, I don't see any problem to do this refactoring for v16.\nHowever, sometimes, we do decide to backpatch refactoring in tests to\navoid backpatch effort. I am not completely sure if that is the case\nhere.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 6 Jun 2023 08:05:49 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Implement generalized sub routine find_in_log for tap test"
},
{
"msg_contents": "On Tue, Jun 06, 2023 at 08:05:49AM +0530, Amit Kapila wrote:\n> Personally, I don't see any problem to do this refactoring for v16.\n> However, sometimes, we do decide to backpatch refactoring in tests to\n> avoid backpatch effort. I am not completely sure if that is the case\n> here.\n\n033_replay_tsp_drops.pl has one find_in_log() down to 11, and\n019_replslot_limit.pl has four calls down to 14. Making things\nconsistent everywhere is a rather appealing argument to ease future\nbackpatching. So I am OK to spend a few extra cycles in adjusting\nthese routines all the way down where needed. I'll do that tomorrow\nonce I get back in front of my laptop.\n\nNote that connect_ok() and connect_fails() are new to 14, so this\npart has no need to go further down than that.\n--\nMichael",
"msg_date": "Tue, 6 Jun 2023 13:06:33 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Implement generalized sub routine find_in_log for tap test"
},
{
"msg_contents": "On Mon, Jun 5, 2023 at 9:39 PM vignesh C <[email protected]> wrote:\n>\n> On Sun, 4 Jun 2023 at 03:51, Michael Paquier <[email protected]> wrote:\n> >\n> > This looks like a typo to me, the log offset is eaten.\n> >\n> > Except of that, I am on board with log_contains().\n>\n> Thanks for fixing this.\n\n+1 for deduplicating find_in_log. How about deduplicating advance_wal\ntoo so that 019_replslot_limit.pl, 033_replay_tsp_drops.pl,\n035_standby_logical_decoding.pl and 001_stream_rep.pl can benefit\nimmediately?\n\nFWIW, a previous discussion related to this is here\nhttps://www.postgresql.org/message-id/flat/CALj2ACVUcXtLgHRPbx28ZQQyRM6j%2BeSH3jNUALr2pJ4%2Bf%3DHRGA%40mail.gmail.com.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 6 Jun 2023 10:00:00 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Implement generalized sub routine find_in_log for tap test"
},
{
"msg_contents": "On Tue, Jun 06, 2023 at 10:00:00AM +0530, Bharath Rupireddy wrote:\n> +1 for deduplicating find_in_log. How about deduplicating advance_wal\n> too so that 019_replslot_limit.pl, 033_replay_tsp_drops.pl,\n> 035_standby_logical_decoding.pl and 001_stream_rep.pl can benefit\n> immediately?\n\nAs in a small wrapper for pg_switch_wal() that generates N segments at\nwill? I don't see why we could not do that if it proves useful in the\nlong run.\n--\nMichael",
"msg_date": "Tue, 6 Jun 2023 19:41:18 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Implement generalized sub routine find_in_log for tap test"
},
{
"msg_contents": "On Tue, Jun 6, 2023 at 4:11 PM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Jun 06, 2023 at 10:00:00AM +0530, Bharath Rupireddy wrote:\n> > +1 for deduplicating find_in_log. How about deduplicating advance_wal\n> > too so that 019_replslot_limit.pl, 033_replay_tsp_drops.pl,\n> > 035_standby_logical_decoding.pl and 001_stream_rep.pl can benefit\n> > immediately?\n>\n> As in a small wrapper for pg_switch_wal() that generates N segments at\n> will?\n\nYes. A simpler way of doing it would be to move advance_wal() in\n019_replslot_limit.pl to Cluster.pm, something like the attached. CI\nmembers don't complain with it\nhttps://github.com/BRupireddy/postgres/tree/add_a_function_in_Cluster.pm_to_generate_WAL.\nPerhaps, we can name it better instead of advance_wal, say\ngenerate_wal or some other?\n\n> I don't see why we could not do that if it proves useful in the\n> long run.\n\nBesides the beneficiaries listed above, the test case added by\nhttps://commitfest.postgresql.org/43/3663/ can use it. And, the\ntest_table bits in 020_pg_receivewal.pl can use it (?).\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 6 Jun 2023 17:53:40 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Implement generalized sub routine find_in_log for tap test"
},
{
"msg_contents": "On Tue, 6 Jun 2023 at 09:36, Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Jun 06, 2023 at 08:05:49AM +0530, Amit Kapila wrote:\n> > Personally, I don't see any problem to do this refactoring for v16.\n> > However, sometimes, we do decide to backpatch refactoring in tests to\n> > avoid backpatch effort. I am not completely sure if that is the case\n> > here.\n>\n> 033_replay_tsp_drops.pl has one find_in_log() down to 11, and\n> 019_replslot_limit.pl has four calls down to 14. Making things\n> consistent everywhere is a rather appealing argument to ease future\n> backpatching. So I am OK to spend a few extra cycles in adjusting\n> these routines all the way down where needed. I'll do that tomorrow\n> once I get back in front of my laptop.\n>\n> Note that connect_ok() and connect_fails() are new to 14, so this\n> part has no need to go further down than that.\n\nPlease find the attached patches that can be applied on back branches\ntoo. v5*master.patch can be applied on master, v5*PG15.patch can be\napplied on PG15, v5*PG14.patch can be applied on PG14, v5*PG13.patch\ncan be applied on PG13, v5*PG12.patch can be applied on PG12, PG11 and\nPG10.\n\nRegards,\nVignesh",
"msg_date": "Tue, 6 Jun 2023 18:43:44 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Implement generalized sub routine find_in_log for tap test"
},
{
"msg_contents": "On Tue, Jun 06, 2023 at 05:53:40PM +0530, Bharath Rupireddy wrote:\n> Yes. A simpler way of doing it would be to move advance_wal() in\n> 019_replslot_limit.pl to Cluster.pm, something like the attached. CI\n> members don't complain with it\n> https://github.com/BRupireddy/postgres/tree/add_a_function_in_Cluster.pm_to_generate_WAL.\n> Perhaps, we can name it better instead of advance_wal, say\n> generate_wal or some other?\n\nWhy not discussing that on a separate thread? What you are proposing\nis independent of what Vignesh has proposed. Note that the patch\nformat is octet-stream, causing extra CRs to exist in the patch.\nSomething happened on your side when you sent your patch, I guess?\n--\nMichael",
"msg_date": "Fri, 9 Jun 2023 11:59:03 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Implement generalized sub routine find_in_log for tap test"
},
{
"msg_contents": "On Tue, Jun 06, 2023 at 06:43:44PM +0530, vignesh C wrote:\n> Please find the attached patches that can be applied on back branches\n> too. v5*master.patch can be applied on master, v5*PG15.patch can be\n> applied on PG15, v5*PG14.patch can be applied on PG14, v5*PG13.patch\n> can be applied on PG13, v5*PG12.patch can be applied on PG12, PG11 and\n> PG10.\n\nThanks. The amount of minimal conflicts across all these branches is\nalways fun to play with. I have finally got around and applied all\nthat, after doing a proper split, applying one part down to 14 and the\nsecond back to 11.\n--\nMichael",
"msg_date": "Fri, 9 Jun 2023 12:01:24 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Implement generalized sub routine find_in_log for tap test"
},
{
"msg_contents": "On Fri, Jun 9, 2023 at 8:29 AM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Jun 06, 2023 at 05:53:40PM +0530, Bharath Rupireddy wrote:\n> > Yes. A simpler way of doing it would be to move advance_wal() in\n> > 019_replslot_limit.pl to Cluster.pm, something like the attached. CI\n> > members don't complain with it\n> > https://github.com/BRupireddy/postgres/tree/add_a_function_in_Cluster.pm_to_generate_WAL.\n> > Perhaps, we can name it better instead of advance_wal, say\n> > generate_wal or some other?\n>\n> Why not discussing that on a separate thread? What you are proposing\n> is independent of what Vignesh has proposed.\n\nSure. Here it is -\nhttps://www.postgresql.org/message-id/CALj2ACU3R8QFCvDewHCMKjgb2w_-CMCyd6DAK%3DJb-af14da5eg%40mail.gmail.com.\n\n> Note that the patch\n> format is octet-stream, causing extra CRs to exist in the patch.\n> Something happened on your side when you sent your patch, I guess?\n\nHad to attach the patch in .txt format to not block Vignesh's patch\nfrom testing by CF Bot (if at all this thread was added there).\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 11 Jun 2023 07:17:55 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Implement generalized sub routine find_in_log for tap test"
},
{
"msg_contents": "On Fri, 9 Jun 2023 at 08:31, Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Jun 06, 2023 at 06:43:44PM +0530, vignesh C wrote:\n> > Please find the attached patches that can be applied on back branches\n> > too. v5*master.patch can be applied on master, v5*PG15.patch can be\n> > applied on PG15, v5*PG14.patch can be applied on PG14, v5*PG13.patch\n> > can be applied on PG13, v5*PG12.patch can be applied on PG12, PG11 and\n> > PG10.\n>\n> Thanks. The amount of minimal conflicts across all these branches is\n> always fun to play with. I have finally got around and applied all\n> that, after doing a proper split, applying one part down to 14 and the\n> second back to 11.\n\nThanks for pushing this.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 11 Jun 2023 08:42:04 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Implement generalized sub routine find_in_log for tap test"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 17946\nLogged by: Guido Brugnara\nEmail address: [email protected]\nPostgreSQL version: 12.15\nOperating system: Ubuntu 20.04\nDescription: \n\nAfter upgrading an application using Postgresql from version 10 to 12,\nfields of type \"money\" are no longer generated with the € symbol but with\n$.\r\nI identified the problem that occurs when making use of functions with\n\"LANGUAGE plperl,\" see with the following queries to be executed in order:\r\n# from shell ...\r\nsudo su -c psql\\ postgres postgres <<'__SQL__';\r\n SET lc_monetary TO 'C';\r\n SELECT 12.34::money AS price;\r\n SET lc_monetary TO 'it_IT.UTF-8';\r\n SELECT 12.34::money AS price;\r\n SET lc_monetary TO 'en_GB.UTF-8';\r\n SELECT 12.34::money AS price;\r\n CREATE EXTENSION plperl;\r\n SET lc_monetary TO 'C';\r\n SELECT 12.34::money AS price;\r\n DO LANGUAGE 'plperl' $$ my $rv = spi_exec_query(q{SELECT 12.34::money AS\nprice;}, 1);elog(NOTICE, $rv->{rows}[0]->{price});$$;\r\n SET lc_monetary TO 'it_IT.UTF-8';\r\n SELECT 12.34::money AS price;\r\n DO LANGUAGE 'plperl' $$ my $rv = spi_exec_query(q{SELECT 12.34::money AS\nprice;}, 1);elog(NOTICE, $rv->{rows}[0]->{price});$$;\r\n SET lc_monetary TO 'en_GB.UTF-8';\r\n SELECT 12.34::money AS price;\r\n DO LANGUAGE 'plperl' $$ my $rv = spi_exec_query(q{SELECT 12.34::money AS\nprice;}, 1);elog(NOTICE, $rv->{rows}[0]->{price});$$;\r\n__SQL__\r\n#end.\r\n\r\nThe first three SELECTs generate content with the currencies Dollar, Euro &\nPound, as expected, while the last three only with Dollar.\r\nIt would appear that after first DO LANGUAGE 'plper' call, LC_MONETARY even\nif it is varied, has no effect in subsequent queries.\r\nAny suggestions?",
"msg_date": "Thu, 25 May 2023 15:21:26 +0000",
"msg_from": "PG Bug reporting form <[email protected]>",
"msg_from_op": true,
"msg_subject": "BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "PG Bug reporting form <[email protected]> writes:\n> After upgrading an application using Postgresql from version 10 to 12,\n> fields of type \"money\" are no longer generated with the € symbol but with\n> $.\n\nHmm, seems to work for me:\n\n$ psql\npsql (12.15)\nType \"help\" for help.\n\npostgres=# SET lc_monetary TO 'en_GB.UTF-8';\nSET\npostgres=# SELECT 12.34::money AS price;\n price \n--------\n £12.34\n(1 row)\n\npostgres=# DO LANGUAGE 'plperl' $$ my $rv = spi_exec_query(q{SELECT 12.34::money AS\nprice;}, 1);elog(NOTICE, $rv->{rows}[0]->{price});$$;\nNOTICE: £12.34\nDO\npostgres=# SET lc_monetary TO 'it_IT.UTF-8';\nSET\npostgres=# SELECT 12.34::money AS price;\n price \n---------\n € 12,34\n(1 row)\n\npostgres=# DO LANGUAGE 'plperl' $$ my $rv = spi_exec_query(q{SELECT 12.34::money AS\nprice;}, 1);elog(NOTICE, $rv->{rows}[0]->{price});$$;\nNOTICE: € 12,34\nDO\n\nIIRC, we've seen trouble in the past with some versions of libperl\nclobbering the host application's locale settings. Maybe you\nhave a plperl.on_init or plperl.on_plperl_init action that is\ncausing that to happen? In any case, I'd call it a Perl bug not\na Postgres bug.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 May 2023 15:33:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 25/05/2023 15:33, Tom Lane wrote:\n> PG Bug reporting form <[email protected]> writes:\n>> After upgrading an application using Postgresql from version 10 to 12,\n>> fields of type \"money\" are no longer generated with the € symbol but with\n>> $.\n> \n> Hmm, seems to work for me:\n\nI can reproduce this:\n\npsql (16beta1)\nType \"help\" for help.\n\npostgres=# DO LANGUAGE 'plperl' $$ elog(NOTICE, 'foo') $$;\nNOTICE: foo\nDO\npostgres=# SET lc_monetary TO 'en_GB.UTF-8';\nSET\npostgres=# SELECT 12.34::money AS price;\n price\n--------\n $12.34\n(1 row)\n\n\nIf I don't call the plperl function, it works as expected:\n\nsql (16beta1)\nType \"help\" for help.\n\npostgres=# SET lc_monetary TO 'en_GB.UTF-8';\nSET\npostgres=# SELECT 12.34::money AS price;\n price\n--------\n £12.34\n(1 row)\n\nI should note that 'en_GB.UTF-8' is the default locale in my system, and \nthat's what I used in initdb. I don't know if it makes a difference.\n\n> IIRC, we've seen trouble in the past with some versions of libperl\n> clobbering the host application's locale settings. Maybe you\n> have a plperl.on_init or plperl.on_plperl_init action that is\n> causing that to happen? In any case, I'd call it a Perl bug not\n> a Postgres bug\nI did some debugging, initializing the perl interpreter calls uselocale():\n\n#0 __GI___uselocale (newloc=0x7f9f47ff0940 <_nl_C_locobj>) at \n./locale/uselocale.c:31\n#1 0x00007f9f373bd069 in ?? () from \n/usr/lib/x86_64-linux-gnu/libperl.so.5.36\n#2 0x00007f9f373bce74 in ?? () from \n/usr/lib/x86_64-linux-gnu/libperl.so.5.36\n#3 0x00007f9f373bfc15 in Perl_init_i18nl10n () from \n/usr/lib/x86_64-linux-gnu/libperl.so.5.36\n#4 0x00007f9f48b74cfb in plperl_init_interp () at plperl.c:809\n#5 0x00007f9f48b78adc in _PG_init () at plperl.c:483\n#6 0x000055c98b8e9b63 in internal_load_library (libname=0x55c98bebaf90 \n\"/home/heikki/pgsql.fsmfork/lib/plperl.so\") at dfmgr.c:289\n#7 0x000055c98b8ea1c2 in load_external_function \n(filename=filename@entry=0x55c98bebb1c0 \"$libdir/plperl\", \nfuncname=funcname@entry=0x55c98beba378 \"plperl_inline_handler\",\n signalNotFound=signalNotFound@entry=true, \nfilehandle=filehandle@entry=0x7ffd20942b48) at dfmgr.c:116\n#8 0x000055c98b8ea864 in fmgr_info_C_lang (functionId=129304, \nprocedureTuple=0x7f9f4778ccb8, finfo=0x7ffd20942bf0) at fmgr.c:386\n#9 fmgr_info_cxt_security (functionId=129304, finfo=0x7ffd20942bf0, \nmcxt=<optimized out>, ignore_security=<optimized out>) at fmgr.c:246\n#10 0x000055c98b8eba72 in fmgr_info (finfo=0x7ffd20942bf0, \nfunctionId=<optimized out>) at fmgr.c:129\n#11 OidFunctionCall1Coll (functionId=<optimized out>, \ncollation=collation@entry=0, arg1=94324124262840) at fmgr.c:1386\n#12 0x000055c98b5e1385 in ExecuteDoStmt \n(pstate=pstate@entry=0x55c98beba0b0, stmt=stmt@entry=0x55c98be90858, \natomic=atomic@entry=false) at functioncmds.c:2144\n#13 0x000055c98b7c24ce in standard_ProcessUtility (pstmt=0x55c98be908e0, \nqueryString=0x55c98be8fd50 \"DO LANGUAGE 'plperl' $$ elog(NOTICE, 'foo') \n$$;\", readOnlyTree=<optimized out>,\n context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, \ndest=0x55c98be90b80, qc=0x7ffd20942f30) at utility.c:714\n#14 0x000055c98b7c0d9f in PortalRunUtility \n(portal=portal@entry=0x55c98bf0b710, pstmt=pstmt@entry=0x55c98be908e0, \nisTopLevel=isTopLevel@entry=true,\n setHoldSnapshot=setHoldSnapshot@entry=false, dest=0x55c98be90b80, \nqc=0x7ffd20942f30) at pquery.c:1158\n#15 0x000055c98b7c0ecb in PortalRunMulti \n(portal=portal@entry=0x55c98bf0b710, 
isTopLevel=isTopLevel@entry=true, \nsetHoldSnapshot=setHoldSnapshot@entry=false, \ndest=dest@entry=0x55c98be90b80,\n altdest=altdest@entry=0x55c98be90b80, qc=qc@entry=0x7ffd20942f30) \nat pquery.c:1322\n#16 0x000055c98b7c139d in PortalRun (portal=portal@entry=0x55c98bf0b710, \ncount=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, \nrun_once=run_once@entry=true,\n dest=dest@entry=0x55c98be90b80, \naltdest=altdest@entry=0x55c98be90b80, qc=0x7ffd20942f30) at pquery.c:791\n#17 0x000055c98b7bd85d in exec_simple_query (query_string=0x55c98be8fd50 \n\"DO LANGUAGE 'plperl' $$ elog(NOTICE, 'foo') $$;\") at postgres.c:1274\n#18 0x000055c98b7bf978 in PostgresMain (dbname=<optimized out>, \nusername=<optimized out>) at postgres.c:4632\n#19 0x000055c98b73f743 in BackendRun (port=<optimized out>, \nport=<optimized out>) at postmaster.c:4461\n#20 BackendStartup (port=<optimized out>) at postmaster.c:4189\n#21 ServerLoop () at postmaster.c:1779\n#22 0x000055c98b74077a in PostmasterMain (argc=argc@entry=3, \nargv=argv@entry=0x55c98be88fc0) at postmaster.c:1463\n#23 0x000055c98b4a96be in main (argc=3, argv=0x55c98be88fc0) at main.c:198\n\nI think the uselocale() call renders ineffective the setlocale() calls \nthat we make later. Maybe we should replace our setlocale() calls with \nuselocale(), too.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
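To make the mechanism above concrete, here is a minimal libc-only sketch (an illustration assuming glibc semantics, standing in for what libperl's init does rather than reproducing it): once a thread-local locale has been installed with uselocale(), a later setlocale() still changes the global locale but no longer controls localeconv() in that thread.

8<------------------------
#include <locale.h>
#include <stdio.h>

int
main(void)
{
    /* stand-in for libperl's init: install a per-thread locale */
    locale_t perl_loc = newlocale(LC_ALL_MASK, "C", (locale_t) 0);

    if (perl_loc != (locale_t) 0)
        uselocale(perl_loc);

    /* what the backend does later: change the global locale */
    setlocale(LC_MONETARY, "en_GB.UTF-8");

    /* follows the thread-local "C" locale: prints an empty symbol, not £ */
    printf("currency symbol = \"%s\"\n", localeconv()->currency_symbol);

    return 0;
}
8<------------------------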
"msg_date": "Mon, 5 Jun 2023 19:00:34 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On Mon Jun 5, 2023 at 11:00 AM CDT, Heikki Linnakangas wrote:\n> On 25/05/2023 15:33, Tom Lane wrote:\n> > PG Bug reporting form <[email protected]> writes:\n> >> After upgrading an application using Postgresql from version 10 to 12,\n> >> fields of type \"money\" are no longer generated with the € symbol but with\n> >> $.\n> > \n> > Hmm, seems to work for me:\n>\n> I can reproduce this:\n>\n> psql (16beta1)\n> Type \"help\" for help.\n>\n> postgres=# DO LANGUAGE 'plperl' $$ elog(NOTICE, 'foo') $$;\n> NOTICE: foo\n> DO\n> postgres=# SET lc_monetary TO 'en_GB.UTF-8';\n> SET\n> postgres=# SELECT 12.34::money AS price;\n> price\n> --------\n> $12.34\n> (1 row)\n>\n>\n> If I don't call the plperl function, it works as expected:\n>\n> sql (16beta1)\n> Type \"help\" for help.\n>\n> postgres=# SET lc_monetary TO 'en_GB.UTF-8';\n> SET\n> postgres=# SELECT 12.34::money AS price;\n> price\n> --------\n> £12.34\n> (1 row)\n>\n> I should note that 'en_GB.UTF-8' is the default locale in my system, and \n> that's what I used in initdb. I don't know if it makes a difference.\n\nI am looking into this bug. I have also reproduced it.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 09 Jun 2023 10:31:14 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 6/9/23 11:31, Tristan Partin wrote:\n> On Mon Jun 5, 2023 at 11:00 AM CDT, Heikki Linnakangas wrote:\n>> On 25/05/2023 15:33, Tom Lane wrote:\n>> > PG Bug reporting form <[email protected]> writes:\n>> >> After upgrading an application using Postgresql from version 10 to 12,\n>> >> fields of type \"money\" are no longer generated with the € symbol but with\n>> >> $.\n>> > \n>> > Hmm, seems to work for me:\n>>\n>> I can reproduce this:\n>>\n>> psql (16beta1)\n>> Type \"help\" for help.\n>>\n>> postgres=# DO LANGUAGE 'plperl' $$ elog(NOTICE, 'foo') $$;\n>> NOTICE: foo\n>> DO\n>> postgres=# SET lc_monetary TO 'en_GB.UTF-8';\n>> SET\n>> postgres=# SELECT 12.34::money AS price;\n>> price\n>> --------\n>> $12.34\n>> (1 row)\n>>\n>>\n>> If I don't call the plperl function, it works as expected:\n>>\n>> sql (16beta1)\n>> Type \"help\" for help.\n>>\n>> postgres=# SET lc_monetary TO 'en_GB.UTF-8';\n>> SET\n>> postgres=# SELECT 12.34::money AS price;\n>> price\n>> --------\n>> £12.34\n>> (1 row)\n>>\n>> I should note that 'en_GB.UTF-8' is the default locale in my system, and \n>> that's what I used in initdb. I don't know if it makes a difference.\n> \n> I am looking into this bug. I have also reproduced it.\n\nIt reproduces for me on both pg16beta1 and pg10. I wonder if it isn't a \nbehavior change in libperl itself. It seems that merely doing \"load \n'plperl';\" is enough to cause the issue as long as it is done prior to \ndoing \"SET lc_monetary TO 'en_GB.UTF-8'; SELECT 12.34::money AS price;\". \nWhen done in the opposite order the problem does not occur.\n\n8<------------------------------\n# On pg10 with perl v5.34.0\n# note that on my system\n# LC_NUMERIC=\"\"\n# LC_ALL=\"\"\n# LANG=\"en_US.UTF-8\"\n#\n# this works correctly\npsql nmx << EOF\nSET lc_monetary TO 'en_GB.UTF-8';\nSELECT 12.34::money AS price;\nload 'plperl';\nSELECT 12.34::money AS price;\nEOF\nSET\n price\n--------\n £12.34\n(1 row)\n\nLOAD\n price\n--------\n £12.34\n(1 row)\n\n# this does not\npsql nmx << EOF\nSET lc_monetary TO 'en_GB.UTF-8';\nload 'plperl';\nSELECT 12.34::money AS price;\nEOF\nSET\nLOAD\n price\n--------\n $12.34\n(1 row)\n8<------------------------------\n\nSince I am also seeing this on pg10, I wonder if it is a change in \nperl.I found this[1]:\n\n \"What did change is that perl space code no\n longer pays attention to the LC_NUMERIC\n category outside 'use locale'. This is the way\n it has always worked, AFAIK, for LC_COLLATE\n and, mostly, LC_CTYPE, and for some uses of\n LC_NUMERIC.\"\n\n\n[1] \"locale changes in 5.19.1 break LC_NUMERIC\n handling\"\n https://github.com/Perl/perl5/issues/13089\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Fri, 9 Jun 2023 15:05:40 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 6/9/23 15:05, Joe Conway wrote:\n> I wonder if it isn't a behavior change in libperl itself. It seems\n> that merely doing \"load 'plperl';\" is enough to cause the issue\nI can reproduce with a simple test program by linking libperl:\n\n8<-------- test.c ----------------\n#include <locale.h>\n#include <stdio.h>\n\n#define off64_t __off64_t\n#include <EXTERN.h>\n#include <perl.h>\n\nint\nmain(int argc, char *argv[])\n{\n struct lconv *extlconv;\n#ifdef WITH_PERL\n PerlInterpreter *plperl;\n plperl = perl_alloc();\n perl_construct(plperl);\n#endif\n setlocale(LC_MONETARY, \"en_GB.UTF-8\");\n extlconv = localeconv();\n printf(\"currency symbol = \\\"%s\\\"\\n\",\n extlconv->currency_symbol);\n return 0;\n}\n8<-------- test.c ----------------\n\nAdjust the perl paths to suit:\n\n8<------------------------\ngcc -O0 -ggdb3 -o test \\\n -I /usr/lib64/perl5/CORE \\\n -lperl \\\n test.c\n\n./test\ncurrency symbol = \"£\"\n\ngcc -O0 -ggdb3 -o test \\\n -I /usr/lib64/perl5/CORE \\\n -lperl -DWITH_PERL \\\n test.c\n\n./test\ncurrency symbol = \"$\"\n8<------------------------\n\nIt happens because somehow loading libperl prevents localeconv() from \nreturning the correct values, even though libperl only seems to call \n\"setlocale(LC_ALL, NULL)\" which ought not change anything.\n\n8<------------------------\ngdb ./test\n\nReading symbols from ./test...\n(gdb) b setlocale\nBreakpoint 1 at 0x10f0\n(gdb) r\nStarting program: /opt/src/pgsql-\n\nBreakpoint 1, __GI_setlocale (category=6, locale=0x0) at \n./locale/setlocale.c:218\n218 ./locale/setlocale.c: No such file or directory.\n(gdb) bt\n#0 __GI_setlocale (category=6, locale=0x0) at ./locale/setlocale.c:218\n#1 0x00007ffff7d96b97 in Perl_init_i18nl10n () from \n/lib/x86_64-linux-gnu/libperl.so.5.34\n#2 0x0000555555555225 in main (argc=1, argv=0x7fffffffe1d8) at test.c:18\n(gdb) c\nContinuing.\n\nBreakpoint 1, __GI_setlocale (category=4, locale=0x55555555602e \n\"en_GB.UTF-8\") at ./locale/setlocale.c:218\n218 in ./locale/setlocale.c\n(gdb) bt\n#0 __GI_setlocale (category=4, locale=0x55555555602e \"en_GB.UTF-8\") at \n./locale/setlocale.c:218\n#1 0x0000555555555239 in main (argc=1, argv=0x7fffffffe1d8) at test.c:20\n\nmain (argc=1, argv=0x7fffffffe1d8) at test.c:21\n21 extlconv = localeconv();\n(gdb)\n22 printf(\"currency symbol = \\\"%s\\\"\\n\",\n(gdb)\ncurrency symbol = \"$\"\n24 return 0;\n(gdb)\n8<------------------------\n\nWill continue to dig in the morning.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Fri, 9 Jun 2023 22:10:20 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 6/9/23 22:10, Joe Conway wrote:\n> On 6/9/23 15:05, Joe Conway wrote:\n>> I wonder if it isn't a behavior change in libperl itself. It seems\n>> that merely doing \"load 'plperl';\" is enough to cause the issue\n> I can reproduce with a simple test program by linking libperl:\n> \n> 8<-------- test.c ----------------\n\nA bit more spelunking leads me to the following observations and \nconclusions:\n\n1/ On RHEL7 with perl v5.16.3 the problem does not occur\n\n2/ On RHEL9 with perl v5.32.1 the problem does occur\n\n3/ The difference in behavior is triggered by the newer perl doing a \nbunch of newlocale/uselocale calls not done by the older perl, combined \nwith a glibc behavior which seems surprising at best.\n\n From localeinfo.h in glibc source tree:\n8<------------------------\n/* This fetches the thread-local locale_t pointer, either one set with\n uselocale or &_nl_global_locale. */\n#define _NL_CURRENT_LOCALE (__libc_tsd_get (locale_t, LOCALE))\n8<------------------------\n\n4/ I successfully tested a fix in the simplified reproducer program sent \nearlier. It amounts to adding:\n\n8<------------------------\n uselocale(LC_GLOBAL_LOCALE);\n8<------------------------\n\nprior to calling\n\n8<------------------------\n extlconv = localeconv();\n8<------------------------\n\n5/ The attached fixes the issue for me on pg10 and passes check-world.\n\nComments?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 10 Jun 2023 12:12:36 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "Joe Conway <[email protected]> writes:\n> 5/ The attached fixes the issue for me on pg10 and passes check-world.\n> Comments?\n\nThe call in PGLC_localeconv seems *very* oddly placed. Why not\ndo that before it does any other locale calls? Otherwise you don't\nreally have reason to believe you're saving the appropriate\nvalues to restore later.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 10 Jun 2023 14:42:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 6/10/23 14:42, Tom Lane wrote:\n> Joe Conway <[email protected]> writes:\n>> 5/ The attached fixes the issue for me on pg10 and passes check-world.\n>> Comments?\n> \n> The call in PGLC_localeconv seems *very* oddly placed. Why not\n> do that before it does any other locale calls? Otherwise you don't\n> really have reason to believe you're saving the appropriate\n> values to restore later.\n\n\nAs far as I can tell it really only affects localeconv(), so I tried to \nplace it close to those. But I am fine with moving it up.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Sat, 10 Jun 2023 15:07:05 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 6/10/23 15:07, Joe Conway wrote:\n> On 6/10/23 14:42, Tom Lane wrote:\n>> Joe Conway <[email protected]> writes:\n>>> 5/ The attached fixes the issue for me on pg10 and passes check-world.\n>>> Comments?\n>> \n>> The call in PGLC_localeconv seems *very* oddly placed. Why not\n>> do that before it does any other locale calls? Otherwise you don't\n>> really have reason to believe you're saving the appropriate\n>> values to restore later.\n> \n> \n> As far as I can tell it really only affects localeconv(), so I tried to\n> place it close to those. But I am fine with moving it up.\n\nThis version is against pg16 (rather than pg10), moves up that hunk, \nmentions localeconv() in the comment as the reason for the call, and \nfixes some whitespace sloppiness. I will plan to apply to all supported \nbranches.\n\nBetter?\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 10 Jun 2023 15:28:47 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "Joe Conway <[email protected]> writes:\n> This version is against pg16 (rather than pg10), moves up that hunk, \n> mentions localeconv() in the comment as the reason for the call, and \n> fixes some whitespace sloppiness. I will plan to apply to all supported \n> branches.\n\n> Better?\n\nWFM.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 10 Jun 2023 15:32:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 10/06/2023 22:28, Joe Conway wrote:\n> On 6/10/23 15:07, Joe Conway wrote:\n>> On 6/10/23 14:42, Tom Lane wrote:\n>>> Joe Conway <[email protected]> writes:\n>>>> 5/ The attached fixes the issue for me on pg10 and passes check-world.\n>>>> Comments?\n>>>\n>>> The call in PGLC_localeconv seems *very* oddly placed. Why not\n>>> do that before it does any other locale calls? Otherwise you don't\n>>> really have reason to believe you're saving the appropriate\n>>> values to restore later.\n>>\n>>\n>> As far as I can tell it really only affects localeconv(), so I tried to\n>> place it close to those. But I am fine with moving it up.\n> \n> This version is against pg16 (rather than pg10), moves up that hunk,\n> mentions localeconv() in the comment as the reason for the call, and\n> fixes some whitespace sloppiness. I will plan to apply to all supported\n> branches.\n> \n> Better?\n\nThe man page for uselocale(LC_GLOBAL_LOCALE) says: \"The calling thread's \ncurrent locale is set to the global locale determined by setlocale(3).\" \nDoes that undo the effect of calling uselocale() previously, so if you \nlater call setlocale(), the new locale takes effect in the thread too? \nOr is it equivalent to \"uselocale(LC_ALL, setlocale(NULL))\", so that it \nsets the thread's locale to the current global locale, but later \nsetlocale() calls have no effect on it?\n\nIn any case, this still doesn't feel like the right place. We have many \nmore setlocale() calls. Shouldn't we sprinkle them all with \nuselocale(LC_GLOBAL_LOCALE)? cache_locale_time() for example. Or rather, \nall the places where we use any functions that depend on the current locale.\n\nHow about we replace all setlocale() calls with uselocale()?\n\nShouldn't we restore the old thread-specific locale after the calls? I'm \nnot sure why libperl calls uselocale(), but we are now overwriting the \nlocale that it sets. We have a few other uselocale() calls in \npg_locale.c, and we take care to restore the old locale in those.\n\nThere are a few uselocale() calls in ecpg, and they are protected by \nHAVE_USELOCALE. Interestingly, the calls in pg_locale.c are not, but \nthey are protected by HAVE_LOCALE_T. Seems a little inconsistent.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 12 Jun 2023 12:13:53 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "(moving to hackers)\n\nOn 6/12/23 05:13, Heikki Linnakangas wrote:\n> On 10/06/2023 22:28, Joe Conway wrote:\n>> On 6/10/23 15:07, Joe Conway wrote:\n>>> On 6/10/23 14:42, Tom Lane wrote:\n>>>> Joe Conway <[email protected]> writes:\n>>>>> 5/ The attached fixes the issue for me on pg10 and passes check-world.\n>>>>> Comments?\n>>>>\n>>>> The call in PGLC_localeconv seems *very* oddly placed. Why not\n>>>> do that before it does any other locale calls? Otherwise you don't\n>>>> really have reason to believe you're saving the appropriate\n>>>> values to restore later.\n>>>\n>>>\n>>> As far as I can tell it really only affects localeconv(), so I tried to\n>>> place it close to those. But I am fine with moving it up.\n>> \n>> This version is against pg16 (rather than pg10), moves up that hunk,\n>> mentions localeconv() in the comment as the reason for the call, and\n>> fixes some whitespace sloppiness. I will plan to apply to all supported\n>> branches.\n>> \n>> Better?\n> \n> The man page for uselocale(LC_GLOBAL_LOCALE) says: \"The calling thread's\n> current locale is set to the global locale determined by setlocale(3).\"\n> Does that undo the effect of calling uselocale() previously, so if you\n> later call setlocale(), the new locale takes effect in the thread too?\n> Or is it equivalent to \"uselocale(LC_ALL, setlocale(NULL))\", so that it\n> sets the thread's locale to the current global locale, but later\n> setlocale() calls have no effect on it?\n\nsetlocale() changes the global locale, but uselocale() changes the \nlocale that is currently active, as I understand it.\n\nAlso note that uselocale man page says \"Unlike setlocale(3), uselocale() \ndoes not allow selective replacement of individual locale categories. \nTo employ a locale that differs in only a few categories from the \ncurrent locale, use calls to duplocale(3) and newlocale(3) to obtain a \nlocale object equivalent to the current locale and modify the desired \ncategories in that object.\"\n\n> In any case, this still doesn't feel like the right place. We have many\n> more setlocale() calls. Shouldn't we sprinkle them all with\n> uselocale(LC_GLOBAL_LOCALE)? cache_locale_time() for example. Or rather,\n> all the places where we use any functions that depend on the current locale.\n> \n> How about we replace all setlocale() calls with uselocale()?\n\nI don't see us backpatching something that invasive. It might be the \nright thing to do for pg17, or even pg16, but I think that is a \ndifferent discussion\n\n> Shouldn't we restore the old thread-specific locale after the calls? I'm\n> not sure why libperl calls uselocale(), but we are now overwriting the\n> locale that it sets.\n\nThat is a good question. Though arguably perl is doing the wrong thing \nby not resetting the global locale when it is being used embedded.\n\n> We have a few other uselocale() calls in pg_locale.c, and we take\n> care to restore the old locale in those.\n\nI think as long as we are relying on setlocale rather than uselocale in \ngeneral (see above), the global locale is where we want things left.\n\n> There are a few uselocale() calls in ecpg, and they are protected by\n> HAVE_USELOCALE. Interestingly, the calls in pg_locale.c are not, but\n> they are protected by HAVE_LOCALE_T. 
Seems a little inconsistent.\n\nPossibly something we should clean up, but I think that is separate from \nthis fix.\n\nIn general I think we have 2 or possibly three distinct things here:\n\n1/ how do we fix the misbehavior reported due to libperl in existing \nstable branches\n\n2/ what makes most sense going forward (and does that mean pg16 or pg17)\n\n3/ misc code cleanups\n\nI was mostly trying to concentrate on #1, but 2 & 3 are worthy of \ndiscussion.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
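For reference, the duplocale(3)/newlocale(3) route mentioned above looks roughly like this when only LC_MONETARY is to be replaced for the calling thread (a sketch; error handling and cleanup of any previously installed locale object are omitted, and the function name and locale argument are just examples):

8<------------------------
#include <locale.h>

static void
set_thread_monetary(const char *name)
{
    locale_t cur = uselocale((locale_t) 0);   /* query, don't change */
    locale_t base = duplocale(cur);           /* also accepts LC_GLOBAL_LOCALE */
    locale_t newloc;

    /* replace just LC_MONETARY; newlocale() owns "base" on success */
    newloc = newlocale(LC_MONETARY_MASK, name, base);
    if (newloc != (locale_t) 0)
        uselocale(newloc);
}
8<------------------------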
"msg_date": "Mon, 12 Jun 2023 10:44:52 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 6/12/23 10:44, Joe Conway wrote:\n> 1/ how do we fix the misbehavior reported due to libperl in existing\n> stable branches\n\n<snip>\n\n> I was mostly trying to concentrate on #1, but 2 & 3 are worthy of\n> discussion.\n\nHmm, browsing through the perl source I came across a reference to this \n(from https://perldoc.perl.org/perllocale):\n\n---------------\nPERL_SKIP_LOCALE_INIT\n\n This environment variable, available starting in Perl v5.20, if set \n(to any value), tells Perl to not use the rest of the environment \nvariables to initialize with. Instead, Perl uses whatever the current \nlocale settings are. This is particularly useful in embedded \nenvironments, see \"Using embedded Perl with POSIX locales\" in perlembed.\n---------------\n\nSeems we ought to be using that.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Mon, 12 Jun 2023 17:28:52 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On Mon Jun 12, 2023 at 4:13 AM CDT, Heikki Linnakangas wrote:\n> There are a few uselocale() calls in ecpg, and they are protected by \n> HAVE_USELOCALE. Interestingly, the calls in pg_locale.c are not, but \n> they are protected by HAVE_LOCALE_T. Seems a little inconsistent.\n\nPatch is attached. CC-ing hackers.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Wed, 14 Jun 2023 11:42:03 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 6/12/23 17:28, Joe Conway wrote:\n> On 6/12/23 10:44, Joe Conway wrote:\n>> 1/ how do we fix the misbehavior reported due to libperl in existing\n>> stable branches\n> \n> <snip>\n> \n>> I was mostly trying to concentrate on #1, but 2 & 3 are worthy of\n>> discussion.\n> \n> Hmm, browsing through the perl source I came across a reference to this\n> (from https://perldoc.perl.org/perllocale):\n> \n> ---------------\n> PERL_SKIP_LOCALE_INIT\n> \n> This environment variable, available starting in Perl v5.20, if set\n> (to any value), tells Perl to not use the rest of the environment\n> variables to initialize with. Instead, Perl uses whatever the current\n> locale settings are. This is particularly useful in embedded\n> environments, see \"Using embedded Perl with POSIX locales\" in perlembed.\n> ---------------\n> \n> Seems we ought to be using that.\n\nTurns out that that does nothing useful as far as I can tell.\n\nSo I am back to proposing the attached against pg16beta1, to be \nbackpatched to pg11.\n\nSince much of the discussion happened on pgsql-bugs, the background \nsummary for hackers is this:\n\nWhen plperl is first loaded, the init function eventually works its way \nto calling Perl_init_i18nl10n(). In versions of perl >= 5.20, that ends \nup at S_emulate_setlocale() which does a series of uselocale() calls. \nFor reference, RHEL 7 is perl 5.16.3 while RHEL 9 is perl 5.32.1. Older \nversions of perl do not have this behavior.\n\nThe problem with uselocale() is that it changes the active locale away \nfrom the default global locale. Subsequent uses of setlocale() affect \nthe global locale, but if that is not the active locale, it does not \ncontrol the results of locale dependent functions such as localeconv(), \nwhich is what we depend on in PGLC_localeconv().\n\nThe result is illustrated in this example:\n8<------------\npsql test\npsql (16beta1)\nType \"help\" for help.\n\ntest=# show lc_monetary;\n lc_monetary\n-------------\n en_GB.UTF-8\n(1 row)\n\ntest=# SELECT 12.34::money AS price;\n price\n--------\n £12.34\n(1 row)\n\ntest=# \\q\n8<------------\npsql test\npsql (16beta1)\nType \"help\" for help.\n\ntest=# load 'plperl';\nLOAD\ntest=# show lc_monetary;\n lc_monetary\n-------------\n en_GB.UTF-8\n(1 row)\n\ntest=# SELECT 12.34::money AS price;\n price\n--------\n $12.34\n(1 row)\n8<------------\n\nNotice that merely loading plperl makes the currency symbol wrong.\n\nI have proposed a targeted fix that I believe is safe to backpatch -- \nattached.\n\nIIUC, Tom was +1, but Heikki was looking for a more general solution.\n\nMy issue with the more general solution is that it will likely be too \ninvasive to backpatch, and at the moment at least, there are no other \nconfirmed bugs related to all of this (even if the current code is more \nfragile than we would prefer).\n\nI would like to commit this to all supported branches in the next few \ndays, unless there are other suggestions or objections.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 18 Jun 2023 14:27:13 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 18/06/2023 21:27, Joe Conway wrote:\n> I have proposed a targeted fix that I believe is safe to backpatch --\n> attached.\n> \n> IIUC, Tom was +1, but Heikki was looking for a more general solution.\n> \n> My issue with the more general solution is that it will likely be too\n> invasive to backpatch, and at the moment at least, there are no other\n> confirmed bugs related to all of this (even if the current code is more\n> fragile than we would prefer).\n\nOk, I agree switching to uselocale() everywhere is too much to \nbackpatch. We should consider it for master though.\n\nWith the patch you're proposing, do we now have a coding rule that you \nmust call \"uselocale(LC_GLOBAL_LOCALE)\" before every and any call to \nsetlocale()? If so, you missed a few spots: pg_perm_setlocale, \npg_bind_textdomain_codeset, and cache_locale_time.\n\nThe current locale affects a lot of other things than localeconv() \ncalls. For example, LC_MESSAGES affects all strerror() calls. Do we need \nto call \"uselocale(LC_GLOBAL_LOCALE)\" before all possible strerror() \ncalls too?\n\nI think we should call \"uselocale(LC_GLOBAL_LOCALE)\" immediately after \nreturning from the perl interpreter, instead of before setlocale() \ncalls, if we want all Postgres code to run with the global locale. Not \nsure how much performance overhead that would have.\n\nI just found out about perl's \"switch_to_global_locale\" function \n(https://perldoc.perl.org/perlapi#switch_to_global_locale). Should we \nuse that?\n\nTesting the patch, I bumped into this:\n\npostgres=# create or replace function finnish_to_number() returns \nnumeric as $$ select to_number('1,23', '9D99'); $$ language sql set \nlc_numeric to 'fi_FI.utf8';\nCREATE FUNCTION\npostgres=# DO LANGUAGE 'plperlu' $$\nuse POSIX qw(setlocale LC_NUMERIC);\nuse locale;\n\nsetlocale LC_NUMERIC, \"fi_FI.utf8\";\n\n$n = 5/2; # Assign numeric 2.5 to $n\n\nspi_exec_query('SELECT finnish_to_number()');\n\n$a = \" $n\"; # Locale-dependent conversion to string\nelog(NOTICE, \"half five is $n\"); # Locale-dependent output\n$$;\nNOTICE: half five is 2,5\nDO\npostgres=# select to_char(now(), 'Day');\nWARNING: could not determine encoding for locale \"en_GB.UTF-8\": codeset \nis \"ANSI_X3.4-1968\"\n to_char\n-----------\n Tuesday\n(1 row)\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 20 Jun 2023 02:30:50 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 6/19/23 19:30, Heikki Linnakangas wrote:\n> On 18/06/2023 21:27, Joe Conway wrote:\n>> I have proposed a targeted fix that I believe is safe to backpatch --\n>> attached.\n>> \n>> IIUC, Tom was +1, but Heikki was looking for a more general solution.\n>> \n>> My issue with the more general solution is that it will likely be too\n>> invasive to backpatch, and at the moment at least, there are no other\n>> confirmed bugs related to all of this (even if the current code is more\n>> fragile than we would prefer).\n> \n> Ok, I agree switching to uselocale() everywhere is too much to\n> backpatch. We should consider it for master though.\n\nMakes sense\n\n> With the patch you're proposing, do we now have a coding rule that you\n> must call \"uselocale(LC_GLOBAL_LOCALE)\" before every and any call to\n> setlocale()? If so, you missed a few spots: pg_perm_setlocale,\n> pg_bind_textdomain_codeset, and cache_locale_time.\n\nWell I was not proposing such a rule (trying to stay narrowly focused on \nthe demonstrated issue) but I suppose it might make sense. Anywhere we \nuse setlocale() we are depending on subsequent locale operations to use \nthe global locale. And uselocale(LC_GLOBAL_LOCALE) itself looks like it \nought to be pretty cheap.\n\n> The current locale affects a lot of other things than localeconv()\n> calls. For example, LC_MESSAGES affects all strerror() calls. Do we need\n> to call \"uselocale(LC_GLOBAL_LOCALE)\" before all possible strerror()\n> calls too?\n\nThat seems heavy handed\n\n> I think we should call \"uselocale(LC_GLOBAL_LOCALE)\" immediately after\n> returning from the perl interpreter, instead of before setlocale()\n> calls, if we want all Postgres code to run with the global locale. Not\n> sure how much performance overhead that would have.\n\nI don't see how that is practical, or at least it does not really \naddress the issue. I think any loaded shared library could cause the \nsame problem by running newlocale() + uselocale() on init. Perhaps I \nshould go test that theory though.\n\n> I just found out about perl's \"switch_to_global_locale\" function\n> (https://perldoc.perl.org/perlapi#switch_to_global_locale). Should we\n> use that?\n\nMaybe, although it does not seem to exist on the older perl version on \nRHEL7. And same comment as above -- while it might solve the problem \nwith libperl, it doesn't address similar problems with other loaded \nshared libraries.\n\n> Testing the patch, I bumped into this:\n> \n> postgres=# create or replace function finnish_to_number() returns\n> numeric as $$ select to_number('1,23', '9D99'); $$ language sql set\n> lc_numeric to 'fi_FI.utf8';\n> CREATE FUNCTION\n> postgres=# DO LANGUAGE 'plperlu' $$\n> use POSIX qw(setlocale LC_NUMERIC);\n> use locale;\n> \n> setlocale LC_NUMERIC, \"fi_FI.utf8\";\n> \n> $n = 5/2; # Assign numeric 2.5 to $n\n> \n> spi_exec_query('SELECT finnish_to_number()');\n> \n> $a = \" $n\"; # Locale-dependent conversion to string\n> elog(NOTICE, \"half five is $n\"); # Locale-dependent output\n> $$;\n> NOTICE: half five is 2,5\n> DO\n> postgres=# select to_char(now(), 'Day');\n> WARNING: could not determine encoding for locale \"en_GB.UTF-8\": codeset\n> is \"ANSI_X3.4-1968\"\n> to_char\n> -----------\n> Tuesday\n> (1 row)\n\nDo you think that is because uselocale(LC_GLOBAL_LOCALE) pulls out the \nrug from under perl?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Tue, 20 Jun 2023 18:02:48 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 21/06/2023 01:02, Joe Conway wrote:\n> On 6/19/23 19:30, Heikki Linnakangas wrote:\n>> On 18/06/2023 21:27, Joe Conway wrote:\n>> With the patch you're proposing, do we now have a coding rule that you\n>> must call \"uselocale(LC_GLOBAL_LOCALE)\" before every and any call to\n>> setlocale()? If so, you missed a few spots: pg_perm_setlocale,\n>> pg_bind_textdomain_codeset, and cache_locale_time.\n> \n> Well I was not proposing such a rule (trying to stay narrowly focused on\n> the demonstrated issue) but I suppose it might make sense. Anywhere we\n> use setlocale() we are depending on subsequent locale operations to use\n> the global locale. And uselocale(LC_GLOBAL_LOCALE) itself looks like it\n> ought to be pretty cheap.\n> \n>> The current locale affects a lot of other things than localeconv()\n>> calls. For example, LC_MESSAGES affects all strerror() calls. Do we need\n>> to call \"uselocale(LC_GLOBAL_LOCALE)\" before all possible strerror()\n>> calls too?\n> \n> That seems heavy handed\n\nYet I think that's exactly where this is heading. See this case (for \ngettext() rather than strerror()):\n\npostgres=# set lc_messages ='sv_SE.UTF8';\nSET\npostgres=# this prints syntax error in Swedish;\nFEL: syntaxfel vid eller nära \"this\"\nLINE 1: this prints syntax error in Swedish;\n ^\npostgres=# load 'plperl';\nLOAD\npostgres=# set lc_messages ='en_GB.utf8';\nSET\npostgres=# this *should* print syntax error in English;\nFEL: syntaxfel vid eller nära \"this\"\nLINE 1: this *should* print syntax error in English;\n ^\n\n>> I think we should call \"uselocale(LC_GLOBAL_LOCALE)\" immediately after\n>> returning from the perl interpreter, instead of before setlocale()\n>> calls, if we want all Postgres code to run with the global locale. Not\n>> sure how much performance overhead that would have.\n> \n> I don't see how that is practical, or at least it does not really\n> address the issue. I think any loaded shared library could cause the\n> same problem by running newlocale() + uselocale() on init. Perhaps I\n> should go test that theory though.\n\nAny shared library could do that, that's true. Any shared library could \nalso call 'chdir'. But most shared libraries don't. I think it's the \nresponsibility of the extension that loads the shared library, plperl in \nthis case, to make sure it doesn't mess up the environment for the \npostgres backend.\n\n>> Testing the patch, I bumped into this:\n>>\n>> postgres=# create or replace function finnish_to_number() returns\n>> numeric as $$ select to_number('1,23', '9D99'); $$ language sql set\n>> lc_numeric to 'fi_FI.utf8';\n>> CREATE FUNCTION\n>> postgres=# DO LANGUAGE 'plperlu' $$\n>> use POSIX qw(setlocale LC_NUMERIC);\n>> use locale;\n>>\n>> setlocale LC_NUMERIC, \"fi_FI.utf8\";\n>>\n>> $n = 5/2; # Assign numeric 2.5 to $n\n>>\n>> spi_exec_query('SELECT finnish_to_number()');\n>>\n>> $a = \" $n\"; # Locale-dependent conversion to string\n>> elog(NOTICE, \"half five is $n\"); # Locale-dependent output\n>> $$;\n>> NOTICE: half five is 2,5\n>> DO\n>> postgres=# select to_char(now(), 'Day');\n>> WARNING: could not determine encoding for locale \"en_GB.UTF-8\": codeset\n>> is \"ANSI_X3.4-1968\"\n>> to_char\n>> -----------\n>> Tuesday\n>> (1 row)\n> \n> Do you think that is because uselocale(LC_GLOBAL_LOCALE) pulls out the\n> rug from under perl?\n\nlibperl is fine in this case. 
But cache_locale_time() also calls \nsetlocale(), and your patch didn't add the \"uselocale(LC_GLOBAL_LOCALE)\" \nthere.\n\nIt's a valid concern that \"uselocale(LC_GLOBAL_LOCALE)\" could pull the \nrug from under perl. I tried to find issues like that, by calling \nlocale-dependent functions in plperl, with SQL functions that call \n\"uselocale(LC_GLOBAL_LOCALE)\" via PGLC_localeconv() in between. But I \ncouldn't find any case where the perl code would misbehave. I guess \nlibperl calls uselocale() before any locale-dependent function, but I \ndidn't look very closely.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 22 Jun 2023 10:26:27 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 6/22/23 03:26, Heikki Linnakangas wrote:\n> On 21/06/2023 01:02, Joe Conway wrote:\n>> On 6/19/23 19:30, Heikki Linnakangas wrote:\n>>> I think we should call \"uselocale(LC_GLOBAL_LOCALE)\" immediately after\n>>> returning from the perl interpreter, instead of before setlocale()\n>>> calls, if we want all Postgres code to run with the global locale. Not\n>>> sure how much performance overhead that would have.\n>> \n>> I don't see how that is practical, or at least it does not really\n>> address the issue. I think any loaded shared library could cause the\n>> same problem by running newlocale() + uselocale() on init. Perhaps I\n>> should go test that theory though.\n> \n> Any shared library could do that, that's true. Any shared library could\n> also call 'chdir'. But most shared libraries don't. I think it's the\n> responsibility of the extension that loads the shared library, plperl in\n> this case, to make sure it doesn't mess up the environment for the\n> postgres backend.\nOk, fair enough.\n\nThe attached fixes all of the issues raised on this thread by \nspecifically patching plperl.\n\n8<------------\ncreate or replace function finnish_to_number()\nreturns numeric as\n$$\n select to_number('1,23', '9D99')\n$$ language sql set lc_numeric to 'fi_FI.utf8';\n\npl_regression=# show lc_monetary;\n lc_monetary\n-------------\n C\n(1 row)\n\nDO LANGUAGE 'plperlu'\n$$\n use POSIX qw(setlocale LC_NUMERIC);\n use locale;\n setlocale LC_NUMERIC, \"fi_FI.utf8\";\n $n = 5/2; # Assign numeric 2.5 to $n\n spi_exec_query('SELECT finnish_to_number()');\n # Locale-dependent conversion to string\n $a = \" $n\";\n # Locale-dependent output\n elog(NOTICE, \"half five is $n\");\n$$;\nNOTICE: half five is 2,5\nDO\n\nset lc_messages ='sv_SE.UTF8';\nthis prints syntax error in Swedish;\nFEL: syntaxfel vid eller nära \"this\"\nLINE 1: this prints syntax error in Swedish;\n ^\n\nset lc_messages ='en_GB.utf8';\nthis *should* print syntax error in English;\nERROR: syntax error at or near \"this\"\nLINE 1: this *should* print syntax error in English;\n ^\nset lc_monetary ='sv_SE.UTF8';\nSELECT 12.34::money AS price;\n price\n----------\n 12,34 kr\n(1 row)\n\nset lc_monetary ='en_GB.UTF8';\nSELECT 12.34::money AS price;\n price\n--------\n £12.34\n(1 row)\n\nset lc_monetary ='en_US.UTF8';\nSELECT 12.34::money AS price;\n price\n--------\n $12.34\n(1 row)\n8<------------\n\nThis works correctly from what I can see -- tested against pg16beta1 on \nLinux Mint with perl v5.34.0 as well as against pg15.2 on RHEL 7 with \nperl v5.16.3.\n\nAlthough I have not looked yet, presumably we could have similar \nproblems with plpython. I would like to get agreement on this approach \nagainst plperl before diving into that though.\n\nThoughts?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 24 Jun 2023 09:09:44 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On Mon Jun 5, 2023 at 11:00 AM CDT, Heikki Linnakangas wrote:\n> On 25/05/2023 15:33, Tom Lane wrote:\n> > PG Bug reporting form <[email protected]> writes:\n> >> After upgrading an application using Postgresql from version 10 to 12,\n> >> fields of type \"money\" are no longer generated with the € symbol but with\n> >> $.\n> > \n> > Hmm, seems to work for me:\n>\n> I can reproduce this:\n>\n> psql (16beta1)\n> Type \"help\" for help.\n>\n> postgres=# DO LANGUAGE 'plperl' $$ elog(NOTICE, 'foo') $$;\n> NOTICE: foo\n> DO\n> postgres=# SET lc_monetary TO 'en_GB.UTF-8';\n> SET\n> postgres=# SELECT 12.34::money AS price;\n> price\n> --------\n> $12.34\n> (1 row)\n>\n>\n> If I don't call the plperl function, it works as expected:\n>\n> sql (16beta1)\n> Type \"help\" for help.\n>\n> postgres=# SET lc_monetary TO 'en_GB.UTF-8';\n> SET\n> postgres=# SELECT 12.34::money AS price;\n> price\n> --------\n> £12.34\n> (1 row)\n>\n> I should note that 'en_GB.UTF-8' is the default locale in my system, and \n> that's what I used in initdb. I don't know if it makes a difference.\n>\n> > IIRC, we've seen trouble in the past with some versions of libperl\n> > clobbering the host application's locale settings. Maybe you\n> > have a plperl.on_init or plperl.on_plperl_init action that is\n> > causing that to happen? In any case, I'd call it a Perl bug not\n> > a Postgres bug\n> I did some debugging, initializing the perl interpreter calls uselocale():\n>\n> #0 __GI___uselocale (newloc=0x7f9f47ff0940 <_nl_C_locobj>) at \n> ./locale/uselocale.c:31\n> #1 0x00007f9f373bd069 in ?? () from \n> /usr/lib/x86_64-linux-gnu/libperl.so.5.36\n> #2 0x00007f9f373bce74 in ?? () from \n> /usr/lib/x86_64-linux-gnu/libperl.so.5.36\n> #3 0x00007f9f373bfc15 in Perl_init_i18nl10n () from \n> /usr/lib/x86_64-linux-gnu/libperl.so.5.36\n> #4 0x00007f9f48b74cfb in plperl_init_interp () at plperl.c:809\n> #5 0x00007f9f48b78adc in _PG_init () at plperl.c:483\n> #6 0x000055c98b8e9b63 in internal_load_library (libname=0x55c98bebaf90 \n> \"/home/heikki/pgsql.fsmfork/lib/plperl.so\") at dfmgr.c:289\n> #7 0x000055c98b8ea1c2 in load_external_function \n> (filename=filename@entry=0x55c98bebb1c0 \"$libdir/plperl\", \n> funcname=funcname@entry=0x55c98beba378 \"plperl_inline_handler\",\n> signalNotFound=signalNotFound@entry=true, \n> filehandle=filehandle@entry=0x7ffd20942b48) at dfmgr.c:116\n> #8 0x000055c98b8ea864 in fmgr_info_C_lang (functionId=129304, \n> procedureTuple=0x7f9f4778ccb8, finfo=0x7ffd20942bf0) at fmgr.c:386\n> #9 fmgr_info_cxt_security (functionId=129304, finfo=0x7ffd20942bf0, \n> mcxt=<optimized out>, ignore_security=<optimized out>) at fmgr.c:246\n> #10 0x000055c98b8eba72 in fmgr_info (finfo=0x7ffd20942bf0, \n> functionId=<optimized out>) at fmgr.c:129\n> #11 OidFunctionCall1Coll (functionId=<optimized out>, \n> collation=collation@entry=0, arg1=94324124262840) at fmgr.c:1386\n> #12 0x000055c98b5e1385 in ExecuteDoStmt \n> (pstate=pstate@entry=0x55c98beba0b0, stmt=stmt@entry=0x55c98be90858, \n> atomic=atomic@entry=false) at functioncmds.c:2144\n> #13 0x000055c98b7c24ce in standard_ProcessUtility (pstmt=0x55c98be908e0, \n> queryString=0x55c98be8fd50 \"DO LANGUAGE 'plperl' $$ elog(NOTICE, 'foo') \n> $$;\", readOnlyTree=<optimized out>,\n> context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, \n> dest=0x55c98be90b80, qc=0x7ffd20942f30) at utility.c:714\n> #14 0x000055c98b7c0d9f in PortalRunUtility \n> (portal=portal@entry=0x55c98bf0b710, pstmt=pstmt@entry=0x55c98be908e0, \n> isTopLevel=isTopLevel@entry=true,\n> 
setHoldSnapshot=setHoldSnapshot@entry=false, dest=0x55c98be90b80, \n> qc=0x7ffd20942f30) at pquery.c:1158\n> #15 0x000055c98b7c0ecb in PortalRunMulti \n> (portal=portal@entry=0x55c98bf0b710, isTopLevel=isTopLevel@entry=true, \n> setHoldSnapshot=setHoldSnapshot@entry=false, \n> dest=dest@entry=0x55c98be90b80,\n> altdest=altdest@entry=0x55c98be90b80, qc=qc@entry=0x7ffd20942f30) \n> at pquery.c:1322\n> #16 0x000055c98b7c139d in PortalRun (portal=portal@entry=0x55c98bf0b710, \n> count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, \n> run_once=run_once@entry=true,\n> dest=dest@entry=0x55c98be90b80, \n> altdest=altdest@entry=0x55c98be90b80, qc=0x7ffd20942f30) at pquery.c:791\n> #17 0x000055c98b7bd85d in exec_simple_query (query_string=0x55c98be8fd50 \n> \"DO LANGUAGE 'plperl' $$ elog(NOTICE, 'foo') $$;\") at postgres.c:1274\n> #18 0x000055c98b7bf978 in PostgresMain (dbname=<optimized out>, \n> username=<optimized out>) at postgres.c:4632\n> #19 0x000055c98b73f743 in BackendRun (port=<optimized out>, \n> port=<optimized out>) at postmaster.c:4461\n> #20 BackendStartup (port=<optimized out>) at postmaster.c:4189\n> #21 ServerLoop () at postmaster.c:1779\n> #22 0x000055c98b74077a in PostmasterMain (argc=argc@entry=3, \n> argv=argv@entry=0x55c98be88fc0) at postmaster.c:1463\n> #23 0x000055c98b4a96be in main (argc=3, argv=0x55c98be88fc0) at main.c:198\n>\n> I think the uselocale() call renders ineffective the setlocale() calls \n> that we make later. Maybe we should replace our setlocale() calls with \n> uselocale(), too.\n\nFor what it's worth to everyone else in the thread (especially Joe), I\nhave a patch locally that fixes the mentioned bug using uselocale(). I\nam not sure that it is worth committing for v16 given how _large_ (the\npatch is actually quite small, +216 -235) of a change it is. I am going\nto spend tomorrow combing over it a bit more and evaluating other\nsetlocale uses in the codebase.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 29 Jun 2023 21:13:26 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 6/29/23 22:13, Tristan Partin wrote:\r\n> On Mon Jun 5, 2023 at 11:00 AM CDT, Heikki Linnakangas wrote:\r\n>> I think the uselocale() call renders ineffective the setlocale() calls \r\n>> that we make later. Maybe we should replace our setlocale() calls with \r\n>> uselocale(), too.\r\n> \r\n> For what it's worth to everyone else in the thread (especially Joe), I\r\n> have a patch locally that fixes the mentioned bug using uselocale(). I\r\n> am not sure that it is worth committing for v16 given how _large_ (the\r\n> patch is actually quite small, +216 -235) of a change it is. I am going\r\n> to spend tomorrow combing over it a bit more and evaluating other\r\n> setlocale uses in the codebase.\r\n\r\n(moving thread to hackers)\r\n\r\nI don't see a patch attached -- how is it different than what I posted a \r\nweek ago and added to the commitfest here?\r\n\r\n https://commitfest.postgresql.org/43/4413/\r\n\r\nFWIW, if you are proposing replacing all uses of setlocale() with \r\nuselocale() as Heikki suggested:\r\n\r\n1/ I don't think that is pg16 material, and almost certainly not \r\nback-patchable to earlier.\r\n\r\n2/ It probably does not solve all of the identified issues caused by the \r\nnewer perl libraries by itself, i.e. I believe the patch posted to the \r\nCF is still needed.\r\n\r\n3/ I believe it is probably the right way to go for pg17+, but I would \r\nlove to hear opinions from Jeff Davis, Peter Eisentraut, and/or Thomas \r\nMunroe (the locale code \"usual suspects\" ;-)), and others, about that.\r\n\r\n-- \r\nJoe Conway\r\nPostgreSQL Contributors Team\r\nRDS Open Source Databases\r\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 30 Jun 2023 08:13:10 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On Fri Jun 30, 2023 at 7:13 AM CDT, Joe Conway wrote:\n> On 6/29/23 22:13, Tristan Partin wrote:\n> > On Mon Jun 5, 2023 at 11:00 AM CDT, Heikki Linnakangas wrote:\n> >> I think the uselocale() call renders ineffective the setlocale() calls \n> >> that we make later. Maybe we should replace our setlocale() calls with \n> >> uselocale(), too.\n> > \n> > For what it's worth to everyone else in the thread (especially Joe), I\n> > have a patch locally that fixes the mentioned bug using uselocale(). I\n> > am not sure that it is worth committing for v16 given how _large_ (the\n> > patch is actually quite small, +216 -235) of a change it is. I am going\n> > to spend tomorrow combing over it a bit more and evaluating other\n> > setlocale uses in the codebase.\n>\n> (moving thread to hackers)\n>\n> I don't see a patch attached -- how is it different than what I posted a \n> week ago and added to the commitfest here?\n>\n> https://commitfest.postgresql.org/43/4413/\n>\n> FWIW, if you are proposing replacing all uses of setlocale() with \n> uselocale() as Heikki suggested:\n>\n> 1/ I don't think that is pg16 material, and almost certainly not \n> back-patchable to earlier.\n\nI am in agreement.\n\n> 2/ It probably does not solve all of the identified issues caused by the \n> newer perl libraries by itself, i.e. I believe the patch posted to the \n> CF is still needed.\n\nPerhaps. I do think your patch is still valuable regardless. Works for\nbackpatching and is just good defensive programming. I have added myself\nas a reviewer.\n\n> 3/ I believe it is probably the right way to go for pg17+, but I would \n> love to hear opinions from Jeff Davis, Peter Eisentraut, and/or Thomas \n> Munroe (the locale code \"usual suspects\" ;-)), and others, about that.\n\nThanks for your patience. Attached is a patch that should cover all the\nproblematic use cases of setlocale(). There are some setlocale() calls in\ntests, initdb, and ecpg left. I plan to get to ecpglib before the final\nversion of this patch after I abstract over Windows not having\nuselocale(). I think leaving initdb and tests as is would be fine, but I\nam also happy to just permanently purge setlocale() from the codebase\nif people see value in that. We could also poison[0] setlocale() at that\npoint.\n\n[0]: https://gcc.gnu.org/onlinedocs/cpp/Pragmas.html\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Mon, 03 Jul 2023 09:42:58 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "Joe,\n\nThe Reply-To header in your email is pointing at joe@cd, fyi. Pretty\nstrange.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 03 Jul 2023 11:17:21 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On Sat Jun 24, 2023 at 8:09 AM CDT, Joe Conway wrote:\n> Although I have not looked yet, presumably we could have similar \n> problems with plpython. I would like to get agreement on this approach \n> against plperl before diving into that though.\n>\n> Thoughts?\n\nI don't see anything immediately wrong with this. I think doing a\nsimilar thing for plpython would make sense. Might make sense to CC any\nother pl* maintainers too.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 03 Jul 2023 11:25:31 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 7/3/23 12:17, Tristan Partin wrote:\n> The Reply-To header in your email is pointing at joe@cd, fyi. Pretty\n> strange.\n\n\nI noticed that -- it happened only the one time, and I am not sure why. \nSeems fine now though.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 12:28:01 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 7/3/23 12:25, Tristan Partin wrote:\n> On Sat Jun 24, 2023 at 8:09 AM CDT, Joe Conway wrote:\n>> Although I have not looked yet, presumably we could have similar \n>> problems with plpython. I would like to get agreement on this approach \n>> against plperl before diving into that though.\n>>\n>> Thoughts?\n> \n> I don't see anything immediately wrong with this. I think doing a\n> similar thing for plpython would make sense. Might make sense to CC any\n> other pl* maintainers too.\n\nIn our tree there are only plperl and plpython to worry about.\n\n\"other pl* maintainers\" is a fuzzy concept since other pl's are \nscattered far and wide.\n\nI think it is reasonable to expect such maintainers to be paying \nattention to hackers and pick up on it themselves (I say that as a pl \nmaintainer myself -- plr)\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 12:31:09 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On Mon Jun 5, 2023 at 11:00 AM CDT, Heikki Linnakangas wrote:\n>\n> I think the uselocale() call renders ineffective the setlocale() calls \n> that we make later. Maybe we should replace our setlocale() calls with \n> uselocale(), too.\n\nShould we just stop supporting systems without uselocale() that aren't\nWindows, where you can get thread-safe localization using another\nmethod? I am not aware of other systems that might have their own\nnon-POSIX APIs.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 05 Jul 2023 15:45:11 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On Mon Jul 3, 2023 at 9:42 AM CDT, Tristan Partin wrote:\n> Thanks for your patience. Attached is a patch that should cover all the\n> problematic use cases of setlocale(). There are some setlocale() calls in\n> tests, initdb, and ecpg left. I plan to get to ecpglib before the final\n> version of this patch after I abstract over Windows not having\n> uselocale(). I think leaving initdb and tests as is would be fine, but I\n> am also happy to just permanently purge setlocale() from the codebase\n> if people see value in that. We could also poison[0] setlocale() at that\n> point.\n>\n> [0]: https://gcc.gnu.org/onlinedocs/cpp/Pragmas.html\n\nHere is a v2 with best effort Windows support. My patch currently\nassumes that you either have uselocale() or are Windows. I dropped the\nenvironment variable hacks, but could bring them back if we didn't like\nthis requirement.\n\nI tried to add an email[0] to discuss this with hackers, but failed to add\nthe CC. Let's discuss here instead given my complete inability to manage\nmailing lists :).\n\n[0]: https://www.postgresql.org/message-id/CTUJ604ZWHI1.3PFZK152XCWLX%40gonk\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 05 Jul 2023 15:53:16 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "Someday I will learn...\n\nAttached is the v2.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Wed, 05 Jul 2023 15:55:30 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "Here is an up to date patch given some churn on the master branch.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Mon, 10 Jul 2023 19:52:32 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 7/3/23 12:25, Tristan Partin wrote:\n> On Sat Jun 24, 2023 at 8:09 AM CDT, Joe Conway wrote:\n>> Although I have not looked yet, presumably we could have similar \n>> problems with plpython. I would like to get agreement on this approach \n>> against plperl before diving into that though.\n>>\n>> Thoughts?\n> \n> I don't see anything immediately wrong with this.\n\nAny further comments on the posted patch[1]? I would like to apply/push \nthis prior to the beta and minor releases next week.\n\nJoe\n\n[1] \nhttps://www.postgresql.org/message-id/ec6fa20d-e691-198a-4a13-e761771b9dec%40joeconway.com\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Tue, 1 Aug 2023 09:48:43 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On Tue Aug 1, 2023 at 8:48 AM CDT, Joe Conway wrote:\n> On 7/3/23 12:25, Tristan Partin wrote:\n> > On Sat Jun 24, 2023 at 8:09 AM CDT, Joe Conway wrote:\n> >> Although I have not looked yet, presumably we could have similar \n> >> problems with plpython. I would like to get agreement on this approach \n> >> against plperl before diving into that though.\n> >>\n> >> Thoughts?\n> > \n> > I don't see anything immediately wrong with this.\n>\n> Any further comments on the posted patch[1]? I would like to apply/push \n> this prior to the beta and minor releases next week.\n>\n> Joe\n>\n> [1] \n> https://www.postgresql.org/message-id/ec6fa20d-e691-198a-4a13-e761771b9dec%40joeconway.com\n\nNone from my end.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 01 Aug 2023 09:02:17 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 01/08/2023 16:48, Joe Conway wrote:\n> Any further comments on the posted patch[1]? I would like to apply/push\n> this prior to the beta and minor releases next week.\n\nI'm not sure about the placement of the uselocale() calls. In \nplperl_spi_exec(), for example, I think we should switch to the global \nlocale right after the check_spi_usage_allowed() call. Otherwise, if an \nerror happens in BeginInternalSubTransaction() or in pg_verifymbstr(), \nthe error would be processed with the perl locale. Maybe that's \nharmless, error processing hardly cares about LC_MONETARY, but seems \nwrong in principle.\n\nHmm, come to think of it, if BeginInternalSubTransaction() throws an \nerror, we just jump out of the perl interpreter? That doesn't seem cool. \nBut that's not new with this patch.\n\nIf I'm reading correctly, compile_plperl_function() calls \nselect_perl_context(), which calls plperl_trusted_init(), which calls \nuselocale(). So it leaves locale set to the perl locale. Who sets it back?\n\nHow about adding a small wrapper around eval_pl() that sets and unsets \nthe locale(), just when we enter the interpreter? It's easier to see \nthat we are doing the calls in right places, if we make them as close as \npossible to entering/exiting the interpreter. Are there other functions \nin addition to eval_pl() that need to be called with the perl locale?\n\n> /*\n> * plperl_xact_callback --- cleanup at main-transaction end.\n> */\n> static void\n> plperl_xact_callback(XactEvent event, void *arg)\n> {\n> \t/* ensure global locale is the current locale */\n> \tif (uselocale((locale_t) 0) != LC_GLOBAL_LOCALE)\n> \t\tperl_locale_obj = uselocale(LC_GLOBAL_LOCALE);\n> }\n\nSo the assumption is that the if current locale is not LC_GLOBAL_LOCALE, \nthen it was the perl locale. Seems true today, but this could confusion \nif anything else calls uselocale(). In particular, if another PL \nimplementation copies this, and you use plperl and the other PL at the \nsame time, they would get mixed up. I think we need another \"bool \nperl_locale_obj_in_use\" variable to track explicitly whether the perl \nlocale is currently active.\n\nIf we are careful to put the uselocale() calls in the right places so \nthat we never ereport() while in perl locale, this callback isn't \nneeded. Maybe it's still a good idea, though, to be extra sure that \nthings get reset to a sane state if something unexpected happens.\n\nIf multiple interpreters are used, is the single perl_locale_obj \nvariable still enough? Each interpreter can have their own locale I believe.\n\nPS. please pgindent\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 15 Aug 2023 17:40:53 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 8/15/23 10:40, Heikki Linnakangas wrote:\n> On 01/08/2023 16:48, Joe Conway wrote:\n>> Any further comments on the posted patch[1]? I would like to apply/push\n>> this prior to the beta and minor releases next week.\n> \n> I'm not sure about the placement of the uselocale() calls. In\n> plperl_spi_exec(), for example, I think we should switch to the global\n> locale right after the check_spi_usage_allowed() call. Otherwise, if an\n> error happens in BeginInternalSubTransaction() or in pg_verifymbstr(),\n> the error would be processed with the perl locale. Maybe that's\n> harmless, error processing hardly cares about LC_MONETARY, but seems\n> wrong in principle.\n\nI guess you could probably argue that we should flip this around, and \nonly enter the perl locale when calling into libperl, and exit the perl \nlocale every time we reemerge under plperl.c control. That seems pretty \ndrastic and potentially messy though.\n\n> Hmm, come to think of it, if BeginInternalSubTransaction() throws an\n> error, we just jump out of the perl interpreter? That doesn't seem cool.\n> But that's not new with this patch.\n\nHmm, true enough I guess.\n\n> If I'm reading correctly, compile_plperl_function() calls\n> select_perl_context(), which calls plperl_trusted_init(), which calls\n> uselocale(). So it leaves locale set to the perl locale. Who sets it back?\n\nNo one does it seems, at least not currently\n\n> How about adding a small wrapper around eval_pl() that sets and unsets\n> the locale(), just when we enter the interpreter? It's easier to see\n> that we are doing the calls in right places, if we make them as close as\n> possible to entering/exiting the interpreter. Are there other functions\n> in addition to eval_pl() that need to be called with the perl locale?\n\nI can see that as a better strategy, but \"other functions in addition to \neval_pv()\" (I assume you mean eval_pv rather than eval_pl) is a tricky \none to answer.\n\nI ran the attached script like so (from cwd src/pl/plperl) like so:\n```\nsymbols-used.sh /lib/x86_64-linux-gnu/libperl.so.5.34 plperl.so\n```\nand get a fairly long list of exported libperl functions that get linked \ninto plperl.so:\n\n```\nMatched symbols:\nboot_DynaLoader\nperl_alloc\nPerl_av_extend\nPerl_av_fetch\nPerl_av_len\nPerl_av_push\n*Perl_call_list\n*Perl_call_pv\n*Perl_call_sv\nperl_construct\nPerl_croak\nPerl_croak_nocontext\nPerl_croak_sv\nPerl_croak_xs_usage\nPerl_die\n*Perl_eval_pv\nPerl_free_tmps\nPerl_get_sv\nPerl_gv_add_by_type\nPerl_gv_stashpv\nPerl_hv_clear\nPerl_hv_common\nPerl_hv_common_key_len\nPerl_hv_iterinit\nPerl_hv_iternext\nPerl_hv_iternext_flags\nPerl_hv_iternextsv\nPerl_hv_ksplit\nPerl_looks_like_number\nPerl_markstack_grow\nPerl_mg_get\nPerl_newRV\nPerl_newRV_noinc\nPerl_newSV\nPerl_newSViv\nPerl_newSVpv\nPerl_newSVpvn\nPerl_newSVpvn_flags\nPerl_newSVsv\nPerl_newSVsv_flags\nPerl_newSV_type\nPerl_newSVuv\nPerl_newXS\nPerl_newXS_flags\n*perl_parse\nPerl_pop_scope\nPerl_push_scope\n*perl_run\nPerl_save_item\nPerl_savetmps\nPerl_stack_grow\nPerl_sv_2bool\nPerl_sv_2bool_flags\nPerl_sv_2iv\nPerl_sv_2iv_flags\nPerl_sv_2mortal\nPerl_sv_2pv\nPerl_sv_2pvbyte\nPerl_sv_2pvbyte_flags\nPerl_sv_2pv_flags\nPerl_sv_2pvutf8\nPerl_sv_2pvutf8_flags\nPerl_sv_bless\nPerl_sv_free\nPerl_sv_free2\nPerl_sv_isa\nPerl_sv_newmortal\nPerl_sv_setiv\nPerl_sv_setiv_mg\nPerl_sv_setsv\nPerl_sv_setsv_flags\nPerl_sys_init\nPerl_sys_init3\nPerl_xs_boot_epilog\nPerl_xs_handshake\n```\n\nI marked the ones that look like perhaps we should care about in the 
\nabove list with an asterisk:\n\n*Perl_call_list\n*Perl_call_pv\n*Perl_call_sv\n*Perl_eval_pv\n*perl_run\n\nbut perhaps there are others?\n\n>> /*\n>> * plperl_xact_callback --- cleanup at main-transaction end.\n>> */\n>> static void\n>> plperl_xact_callback(XactEvent event, void *arg)\n>> {\n>> \t/* ensure global locale is the current locale */\n>> \tif (uselocale((locale_t) 0) != LC_GLOBAL_LOCALE)\n>> \t\tperl_locale_obj = uselocale(LC_GLOBAL_LOCALE);\n>> }\n> \n> So the assumption is that the if current locale is not LC_GLOBAL_LOCALE,\n> then it was the perl locale. Seems true today, but this could confusion\n> if anything else calls uselocale(). In particular, if another PL\n> implementation copies this, and you use plperl and the other PL at the\n> same time, they would get mixed up. I think we need another \"bool\n> perl_locale_obj_in_use\" variable to track explicitly whether the perl\n> locale is currently active.\n\nOr perhaps don't assume that we want the global locale and swap between \npg_locale_obj (whatever it is) and perl_locale_obj?\n\n> If we are careful to put the uselocale() calls in the right places so\n> that we never ereport() while in perl locale, this callback isn't\n> needed. Maybe it's still a good idea, though, to be extra sure that\n> things get reset to a sane state if something unexpected happens.\n\nI feel more comfortable that we have a \"belt and suspenders\" method to \nrestore the locale that was in use by Postgres before entering perl.\n\n> If multiple interpreters are used, is the single perl_locale_obj\n> variable still enough? Each interpreter can have their own locale I believe.\n\nSo in other words plperl and plperlu both used in the same query? I \ndon't see how we could get from one to the other without going through \nthe outer \"postgres\" locale first. Or are you thinking something else?\n\n> PS. please pgindent\n\nok\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 27 Aug 2023 09:41:01 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 27/08/2023 16:41, Joe Conway wrote:\n> On 8/15/23 10:40, Heikki Linnakangas wrote:\n>> If multiple interpreters are used, is the single perl_locale_obj\n>> variable still enough? Each interpreter can have their own locale I believe.\n> \n> So in other words plperl and plperlu both used in the same query? I\n> don't see how we could get from one to the other without going through\n> the outer \"postgres\" locale first. Or are you thinking something else?\n\nI think you got that it backwards. 'perl_locale_obj' is set to the perl \ninterpreter's locale, whenever we are *outside* the interpreter.\n\nThis crashes with the patch:\n\npostgres=# DO LANGUAGE plperlu\n$function$\n use POSIX qw(setlocale LC_NUMERIC);\n use locale;\n\n setlocale LC_NUMERIC, \"sv_SE.utf8\";\n$function$;\nDO\npostgres=# do language plperl $$ $$;\nserver closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\nI was going to test using plperl and plperl in the same session and \nexpected the interpreters to mix up the locales they use. Maybe the \ncrash is because of something like that, although I didn't expect a \ncrash, just weird confusion on which locale is used.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Sun, 27 Aug 2023 23:24:59 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On Sun, Aug 27, 2023 at 4:25 PM Heikki Linnakangas <[email protected]> wrote:\n> I think you got that it backwards. 'perl_locale_obj' is set to the perl\n> interpreter's locale, whenever we are *outside* the interpreter.\n\nThis thread has had no update for more than 4 months, so I'm marking\nthe CF entry RwF for now.\n\nIt can always be reopened, if Joe or Tristan or Heikki or someone else\npicks it up again.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Jan 2024 12:56:49 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
},
{
"msg_contents": "On 1/5/24 12:56, Robert Haas wrote:\n> On Sun, Aug 27, 2023 at 4:25 PM Heikki Linnakangas <[email protected]> wrote:\n>> I think you got that it backwards. 'perl_locale_obj' is set to the perl\n>> interpreter's locale, whenever we are *outside* the interpreter.\n> \n> This thread has had no update for more than 4 months, so I'm marking\n> the CF entry RwF for now.\n> \n> It can always be reopened, if Joe or Tristan or Heikki or someone else\n> picks it up again.\n\n\nIt is definitely a bug, so I do plan to get back to it at some point, \nhopefully sooner rather than later...\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Fri, 5 Jan 2024 13:19:37 -0500",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17946: LC_MONETARY & DO LANGUAGE plperl - BUG"
}
] |
[
{
"msg_contents": "Commit 61b313e4, which prepared index access method code for the\nlogical decoding on standbys work, made quite a few mechanical\nchanges. Many routines were revised in order to make sure that heaprel\nwas available in _bt_getbuf()'s P_NEW (new page allocation) path. But\nthis went further than it really had to. Many of the changes to nbtree\nseem excessive.\n\nWe only need a heaprel in those code paths that might end up calling\n_bt_getbuf() with blkno = P_NEW. This includes most callers that pass\naccess = BT_WRITE, and all callers that pass access = BT_READ. This\ndoesn't have to be haphazard -- there just aren't that many places\nthat can allocate new nbtree pages. It's just page splits, and new\nroot page allocations (which are actually a slightly different kind of\npage split). The rule doesn't need to be complicated (to be fair it\nlooks more complicated than it really is).\n\nAttached patch completely removes the changes to _bt_getbuf()'s\nsignature from 61b313e4. This is possible without any loss of\nfunctionality. The patch splits _bt_getbuf () in two: the code that\nhandles BT_READ/BT_WRITE stays in _bt_getbuf(), which is now far\nshorter. Handling of new page allocation is moved to a new routine\nI've called _bt_alloc(). This is clearer in general, and makes it\nclear what the rules really are. Any code that might need to call\n_bt_alloc() must be prepared for that, by having a heaprel to pass to\nit (the slightly complicated case is interrupted page splits).\n\nIt's possible that Bertand would have done it this way to begin with\nwere it not for the admittedly pretty bad nbtree convention around\nP_NEW. It would be nice to get rid of P_NEW in the near future, too --\nI gather that there was discussion of that in the context of recent\nwork in this area.\n\n-- \nPeter Geoghegan",
"msg_date": "Thu, 25 May 2023 18:50:31 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "On Thu, May 25, 2023 at 06:50:31PM -0700, Peter Geoghegan wrote:\n> It's possible that Bertand would have done it this way to begin with\n> were it not for the admittedly pretty bad nbtree convention around\n> P_NEW. It would be nice to get rid of P_NEW in the near future, too --\n> I gather that there was discussion of that in the context of recent\n> work in this area.\n\nNice cleanup overall.\n\n+ if (XLogStandbyInfoActive() && RelationNeedsWAL(rel))\n {\n- /* Okay to use page. Initialize and return it. */\n- _bt_pageinit(page, BufferGetPageSize(buf));\n- return buf;\n+ safexid = BTPageGetDeleteXid(page);\n+ isCatalogRel = RelationIsAccessibleInLogicalDecoding(heaprel);\n+ _bt_log_reuse_page(rel, blkno, safexid, isCatalogRel);\n\nThere is only one caller of _bt_log_reuse_page(), so assigning a\nboolean rather than the heap relation is a bit strange to me. I think\nthat calling RelationIsAccessibleInLogicalDecoding() within\n_bt_log_reuse_page() where xlrec_reuse is filled with its data is much\nmore natural, like HEAD. One argument in favor of HEAD is that it is\nnot possible to pass down a wrong value for isCatalogRel, but your\npatch would make that possible.\n--\nMichael",
"msg_date": "Fri, 26 May 2023 16:46:31 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "On 2023-May-25, Peter Geoghegan wrote:\n\n> Attached patch completely removes the changes to _bt_getbuf()'s\n> signature from 61b313e4.\n\nI suppose you're not thinking of applying this to current master, but\ninstead just leave it for when pg17 opens, right? I mean, clearly it\nseems far too invasive to put it in after beta1. On the other hand,\nit's painful to know that we're going to have code that exists only on\n16 and not any other release, in an area that's likely to have bugs here\nand there, so we're going to need to heavily adjust backpatches for 16\nespecially.\n\nI can't make up my mind about this. What do others think?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La primera ley de las demostraciones en vivo es: no trate de usar el sistema.\nEscriba un guión que no toque nada para no causar daños.\" (Jakob Nielsen)\n\n\n",
"msg_date": "Fri, 26 May 2023 10:56:53 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "On Fri, May 26, 2023 at 12:46 AM Michael Paquier <[email protected]> wrote:\n> Nice cleanup overall.\n\nThanks.\n\nTo be clear, when I said \"it would be nice to get rid of P_NEW\", what\nI meant was that it would be nice to go much further than what I've\ndone in the patch by getting rid of the general idea of P_NEW. So the\nhandling of P_NEW at the top of ReadBuffer_common() would be removed,\nfor example. (Note that nbtree doesn't actually rely on that code,\neven now; while its use of the P_NEW constant creates the impression\nthat it needs that bufmgr.c code, it actually doesn't, even now.)\n\n> + if (XLogStandbyInfoActive() && RelationNeedsWAL(rel))\n> {\n> - /* Okay to use page. Initialize and return it. */\n> - _bt_pageinit(page, BufferGetPageSize(buf));\n> - return buf;\n> + safexid = BTPageGetDeleteXid(page);\n> + isCatalogRel = RelationIsAccessibleInLogicalDecoding(heaprel);\n> + _bt_log_reuse_page(rel, blkno, safexid, isCatalogRel);\n>\n> There is only one caller of _bt_log_reuse_page(), so assigning a\n> boolean rather than the heap relation is a bit strange to me. I think\n> that calling RelationIsAccessibleInLogicalDecoding() within\n> _bt_log_reuse_page() where xlrec_reuse is filled with its data is much\n> more natural, like HEAD.\n\nAttached is v2, which deals with this by moving the code from\n_bt_log_reuse_page() into _bt_allocbuf() itself -- there is no need\nfor a separate logging function. This structure seems like a clear\nimprovement, since such logging is largely the point of having a\nseparate _bt_allocbuf() function that deals with new page allocations\nand requires a valid heapRel in all cases.\n\nv2 also renames \"heaprel\" to \"heapRel\" in function signatures, for\nconsistency with older code that always used that convention.\n\n-- \nPeter Geoghegan",
"msg_date": "Fri, 26 May 2023 09:56:50 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "On Fri, May 26, 2023 at 1:56 AM Alvaro Herrera <[email protected]> wrote:\n> I suppose you're not thinking of applying this to current master, but\n> instead just leave it for when pg17 opens, right? I mean, clearly it\n> seems far too invasive to put it in after beta1.\n\nI was planning on targeting 16 with this. Although I only posted a\npatch recently, I complained about the problems in this area shortly\nafter the code first went in. It's fairly obvious to me that the\nchanges made to nbtree went quite a bit further than they needed to.\nAdmittedly that's partly because I'm an expert on this particular\ncode.\n\n> On the other hand,\n> it's painful to know that we're going to have code that exists only on\n> 16 and not any other release, in an area that's likely to have bugs here\n> and there, so we're going to need to heavily adjust backpatches for 16\n> especially.\n\nRight -- it's important to keep things reasonably consistent to ease\nbackpatching. Though I doubt that changes to nbtree itself will turn\nout to be buggy -- with or without my patch. The changes to nbtree\nwere all pretty mechanical. A little too mechanical, in fact.\n\nAs I said already, there just aren't that many ways that new nbtree\npages can come into existence -- it's naturally limited to page splits\n(including root page splits), and the case where we need to add a new\nroot page that's also a leaf page at the point that the first ever\ntuple is inserted into the index (before that we just have a metapage)\n-- so I only have three _bt_allocbuf() callers to worry about. It's\ncompletely self-evident (even to people that know little about nbtree)\nthat the only type of page access that could possibly need a heapRel\nin the first place is P_NEW (i.e., a new page allocation). Once you\nknow all that, this situation begins to look much more\nstraightforward.\n\nNow, to be fair to Bertrand, it *looks* more complicated than it\nreally is, in large part due to the obscure case where VACUUM finishes\nan interrupted page split (during page deletion), which itself goes on\nto cause a page split one level up. So it's possible (barely) that\nVACUUM will enlarge an index by one page, which requires a heapRel,\njust like any other place where an index is enlarged by a page split\n(I documented all this in commit 35bc0ec7).\n\nI've added several defensive assertions that make it hard to get the\ndetails wrong. These will catch the issue much earlier than the main\n\"heapRel != NULL\" assertion in _bt_allocbuf(). So, the rules are\nreasonably straightforward and enforceable.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 26 May 2023 10:28:58 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "On Fri, May 26, 2023 at 10:28 AM Peter Geoghegan <[email protected]> wrote:\n> I've added several defensive assertions that make it hard to get the\n> details wrong. These will catch the issue much earlier than the main\n> \"heapRel != NULL\" assertion in _bt_allocbuf(). So, the rules are\n> reasonably straightforward and enforceable.\n\nThough it's not an issue new to 16, or a problem that anybody is\nobligated to deal with on this thread, I wonder: Why is it okay that\nwe're using \"rel\" (the index rel) for our TestForOldSnapshot() calls\nin nbtree, rather than using heapRel? Why wouldn't the rules be the\nsame as they are for the new code paths needed for logical decoding on\nstandbys? (Namely, the \"use heapRel, not rel\" rule.)\n\nMore concretely, I'm pretty sure that RelationIsUsedAsCatalogTable()\n(which TestForOldSnapshot() relies on) gives an answer that would\nchange if we decided to pass heapRel to the main TestForOldSnapshot()\ncall within _bt_moveright(), instead of doing what we actually do,\nwhich is to just pass it the index rel. I suppose that that\ninteraction might have been overlooked when bugfix commit bf9a60ee33\nfirst added RelationIsUsedAsCatalogTable() -- since that happened a\ncouple of months after the initial \"snapshot too old\" commit went in,\na fix that happened under time pressure.\n\nMore generally, the high level rules/invariants that govern when\nTestForOldSnapshot() should be called (and with what rel/snapshot)\nfeel less than worked out. I find it suspicious that there isn't any\nattempt to relate TestForOldSnapshot() behaviors to the conceptually\nsimilar PredicateLockPage() behavior. We don't need predicate locks on\ninternal pages, but TestForOldSnapshot() *does* get called for\ninternal pages. Many PredicateLockPage() calls happen very close to\nTestForOldSnapshot() calls, each of which use the same snapshot -- not\naddressing that seems like a glaring omission to me.\n\nBasically it seems like there should be one standard set of rules for\nall this stuff. Though it's not the fault of Bertrand or Andres, all\nthat we have now is two poorly documented sets of rules that partially\noverlap. This has long bothered me.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 26 May 2023 12:23:44 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-25 18:50:31 -0700, Peter Geoghegan wrote:\n> Commit 61b313e4, which prepared index access method code for the\n> logical decoding on standbys work, made quite a few mechanical\n> changes. Many routines were revised in order to make sure that heaprel\n> was available in _bt_getbuf()'s P_NEW (new page allocation) path. But\n> this went further than it really had to. Many of the changes to nbtree\n> seem excessive.\n> \n> We only need a heaprel in those code paths that might end up calling\n> _bt_getbuf() with blkno = P_NEW. This includes most callers that pass\n> access = BT_WRITE, and all callers that pass access = BT_READ. This\n> doesn't have to be haphazard -- there just aren't that many places\n> that can allocate new nbtree pages.\n\nWhat do we gain by not passing down the heap relation to those places?\nIf you're concerned about the efficiency of passing down the parameters,\nI doubt it will make a meaningful difference, because the parameter can\njust stay in the register to be passed down further.\n\nNote that I do agree with some aspects of the change for other reasons,\nsee below...\n\n> It's just page splits, and new\n> root page allocations (which are actually a slightly different kind of\n> page split). The rule doesn't need to be complicated (to be fair it\n> looks more complicated than it really is).\n> \n> Attached patch completely removes the changes to _bt_getbuf()'s\n> signature from 61b313e4. This is possible without any loss of\n> functionality. The patch splits _bt_getbuf () in two: the code that\n> handles BT_READ/BT_WRITE stays in _bt_getbuf(), which is now far\n> shorter. Handling of new page allocation is moved to a new routine\n> I've called _bt_alloc(). This is clearer in general, and makes it\n> clear what the rules really are. Any code that might need to call\n> _bt_alloc() must be prepared for that, by having a heaprel to pass to\n> it (the slightly complicated case is interrupted page splits).\n\nI think it's a very good idea to split the \"new page\" case off\n_bt_getbuf(). We probably should evolve the design in the area, and\nthat will be easier with such a change.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 26 May 2023 14:49:09 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "On Fri, May 26, 2023 at 2:49 PM Andres Freund <[email protected]> wrote:\n> What do we gain by not passing down the heap relation to those places?\n\nJust clearer code, free from noisey changes. Easier when backpatching, too.\n\n> If you're concerned about the efficiency of passing down the parameters,\n> I doubt it will make a meaningful difference, because the parameter can\n> just stay in the register to be passed down further.\n\nI'm not concerned about the efficiency of passing down heapRel in so\nmany places.\n\nAs things stand, there is no suggestion that any _bt_getbuf() call is\nexempt from the requirement to pass down a heaprel -- every caller\n(internal and external) goes to the trouble of making sure that they\ncomply with the apparent requirement to supply a heapRel. In some\ncases callers do so just to be able to read the metapage. Even code as\nfar removed from nbtree as heapam_relation_copy_for_cluster() will now\ngo to the trouble of passing its own heap rel, just to perform a\nCLUSTER-based tuplesort. The relevant tuplesort call site even has\ncomments that try to justify this approach, with a reference to\n_bt_log_reuse_page(). So heapam_handler.c now references a static\nhelper function private to nbtpage.c -- an obvious modularity\nviolation.\n\nIt's not even the modularity violation itself that bothers me. It's\njust 100% unnecessary for heapam_relation_copy_for_cluster() to do any\nof this, because there simply isn't going to be a call to\n_bt_log_reuse_page() during its cluster tuplesort, no matter what.\nThis has nothing to do with any underlying implementation detail from\nnbtree, or from any other index AM.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 26 May 2023 15:19:55 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "On Fri, May 26, 2023 at 10:56:53AM +0200, Alvaro Herrera wrote:\n> I suppose you're not thinking of applying this to current master, but\n> instead just leave it for when pg17 opens, right? I mean, clearly it\n> seems far too invasive to put it in after beta1. On the other hand,\n> it's painful to know that we're going to have code that exists only on\n> 16 and not any other release, in an area that's likely to have bugs here\n> and there, so we're going to need to heavily adjust backpatches for 16\n> especially.\n> \n> I can't make up my mind about this. What do others think?\n\nWhen I looked at the patch yesterday, my impression was that this\nwould be material for v17 as it is refactoring work, not v16.\n--\nMichael",
"msg_date": "Sat, 27 May 2023 08:05:01 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "On Fri, May 26, 2023 at 4:05 PM Michael Paquier <[email protected]> wrote:\n> On Fri, May 26, 2023 at 10:56:53AM +0200, Alvaro Herrera wrote:\n> > I can't make up my mind about this. What do others think?\n>\n> When I looked at the patch yesterday, my impression was that this\n> would be material for v17 as it is refactoring work, not v16.\n\nI'd have thought the subject line \"Cleaning up nbtree after logical\ndecoding on standby work\" made it quite clear that this patch was\ntargeting 16.\n\nIt's not refactoring work -- not really. The whole idea of outright\nremoving the use of P_NEW in nbtree was where I landed with this after\na couple of hours of work. In fact I almost posted a version without\nthat, though that was worse in every way to my final approach.\n\nI first voiced concerns about this whole area way back on April 4,\nwhich is only 3 days after commit 61b313e4 went in:\n\nhttps://postgr.es/m/CAH2-Wz=jGryxWm74G1khSt0zNPUNhezYJnvSjNo2t3Jswtb8ww@mail.gmail.com\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 26 May 2023 16:48:37 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "On Fri, May 26, 2023 at 04:48:37PM -0700, Peter Geoghegan wrote:\n> I'd have thought the subject line \"Cleaning up nbtree after logical\n> decoding on standby work\" made it quite clear that this patch was\n> targeting 16.\n\nHmm, okay. I was understanding that as something for v17, honestly.\n\n> It's not refactoring work -- not really. The whole idea of outright\n> removing the use of P_NEW in nbtree was where I landed with this after\n> a couple of hours of work. In fact I almost posted a version without\n> that, though that was worse in every way to my final approach.\n> \n> I first voiced concerns about this whole area way back on April 4,\n> which is only 3 days after commit 61b313e4 went in:\n> \n> https://postgr.es/m/CAH2-Wz=jGryxWm74G1khSt0zNPUNhezYJnvSjNo2t3Jswtb8ww@mail.gmail.com\n\nSure. My take is that if this patch were to be sent at the beginning\nof April, it could have been considered in v16. However, deciding\nsuch a matter at the end of May after beta1 has been released is a\ndifferent call. You may want to make sure that the RMT is OK with\nthat, at the end.\n--\nMichael",
"msg_date": "Mon, 29 May 2023 09:31:02 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "On 29/05/2023 03:31, Michael Paquier wrote:\n> On Fri, May 26, 2023 at 04:48:37PM -0700, Peter Geoghegan wrote:\n>> I'd have thought the subject line \"Cleaning up nbtree after logical\n>> decoding on standby work\" made it quite clear that this patch was\n>> targeting 16.\n> \n> Hmm, okay. I was understanding that as something for v17, honestly.\n\nIMO this makes sense for v16. These new arguments were introduced in \nv16, so if we have second thoughts, now is the right time to change \nthem, before v16 is released. It will reduce the backpatching effort in \nthe future; if we apply this in v17, then v16 will be the odd one out.\n\nFor the patch itself:\n\n> @@ -75,6 +74,10\n> *\t_bt_search() -- Search the tree for a particular scankey,\n> *\t\tor more precisely for the first leaf page it could be on.\n> *\n> + * rel must always be provided. heaprel must be provided by callers that pass\n> + * access = BT_WRITE, since we may need to allocate a new root page for caller\n> + * in passing (or when finishing a page split results in a parent page split).\n> + *\n> * The passed scankey is an insertion-type scankey (see nbtree/README),\n> * but it can omit the rightmost column(s) of the index.\n> *\n\nMaybe add an assertion for that in _bt_search(), too. I know you added \none in _bt_getroot(), and _bt_search() calls that as the very first \nthing. But I think it would be useful as documentation in _bt_search(), too.\n\nMaybe it would be more straightforward to always require heapRel in \n_bt_search() and _bt_getroot(), regardless of whether it's BT_READ or \nBT_WRITE, even if the functions don't use it with BT_READ. It would be \nless mental effort in the callers to just always pass in 'heapRel'.\n\nOverall, +1 on this patch, and +1 for committing it to v16.\n\n--\nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 29 May 2023 05:49:52 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "Hi,\n\nOn 5/26/23 7:28 PM, Peter Geoghegan wrote:\n> On Fri, May 26, 2023 at 1:56 AM Alvaro Herrera <[email protected]> wrote:\n>> I suppose you're not thinking of applying this to current master, but\n>> instead just leave it for when pg17 opens, right? I mean, clearly it\n>> seems far too invasive to put it in after beta1.\n> \n> I was planning on targeting 16 with this. Although I only posted a\n> patch recently, I complained about the problems in this area shortly\n> after the code first went in. It's fairly obvious to me that the\n> changes made to nbtree went quite a bit further than they needed to.\n\nThanks Peter for the call out and the follow up on this!\n\nAs you already mentioned in this thread, all the changes I've done in\n61b313e47e were purely \"mechanical\" as the main goal was to move forward the\nlogical decoding on standby thread and..\n\n> Admittedly that's partly because I'm an expert on this particular\n> code.\n> \n\nit was not obvious to me (as I'm not an expert as you are in this area) that\nmany of those changes were \"excessive\".\n\n> Now, to be fair to Bertrand, it *looks* more complicated than it\n> really is, in large part due to the obscure case where VACUUM finishes\n> an interrupted page split (during page deletion), which itself goes on\n> to cause a page split one level up.\n\nThanks ;-)\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 29 May 2023 10:19:00 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "On Sun, May 28, 2023 at 7:49 PM Heikki Linnakangas <[email protected]> wrote:\n> IMO this makes sense for v16. These new arguments were introduced in\n> v16, so if we have second thoughts, now is the right time to change\n> them, before v16 is released. It will reduce the backpatching effort in\n> the future; if we apply this in v17, then v16 will be the odd one out.\n\nMy current plan is to commit this later in the week, unless there are\nfurther objections. Wednesday or possibly Thursday.\n\n> Maybe add an assertion for that in _bt_search(), too. I know you added\n> one in _bt_getroot(), and _bt_search() calls that as the very first\n> thing. But I think it would be useful as documentation in _bt_search(), too.\n\nAttached revision v3 does it that way.\n\n> Maybe it would be more straightforward to always require heapRel in\n> _bt_search() and _bt_getroot(), regardless of whether it's BT_READ or\n> BT_WRITE, even if the functions don't use it with BT_READ. It would be\n> less mental effort in the callers to just always pass in 'heapRel'.\n\nPerhaps, but it would also necessitate keeping heapRel in\n_bt_get_endpoint()'s signature. That would mean that\n_bt_get_endpoint() would needlessly pass its own superfluous heapRel\narg to _bt_search(), while presumably never passing the same heapRel\nto _bt_gettrueroot() (not to be confused with _bt_getroot) in the\n\"level == 0\" case. These inconsistencies seem kind of jarring.\n\nBesides, every _bt_search() caller must already understand that\n_bt_search does non-obvious extra work for BT_WRITE callers -- that's\nnothing new. The requirement that BT_WRITE callers pass a heapRel\nexists precisely because code that is used only during BT_WRITE calls\nmight ultimately reach _bt_allocbuf() indirectly. The \"no heapRel\nrequired in BT_READ case\" seems directly relevant to callers --\navoiding _bt_allocbuf() during _bt_search() calls during Hot Standby\n(or during VACUUM) is a basic requirement that callers more or less\nask for and expect already. (Bear in mind that the new rule going\nforward is that _bt_allocbuf() is the one and only choke point where\nnew pages/buffers can be allocated by nbtree, and the only possible\nsource of recovery conflicts during REDO besides opportunistic\ndeletion record conflicts -- so it really isn't strange for _bt_search\ncallers to be thinking about whether _bt_allocbuf is safe to call.)\n\n-- \nPeter Geoghegan",
"msg_date": "Mon, 5 Jun 2023 12:04:29 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "On 2023-Jun-05, Peter Geoghegan wrote:\n\n> My current plan is to commit this later in the week, unless there are\n> further objections. Wednesday or possibly Thursday.\n\nI've added this as an open item for 16, with Peter and Andres as owners.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Las cosas son buenas o malas segun las hace nuestra opinión\" (Lisias)\n\n\n",
"msg_date": "Tue, 6 Jun 2023 22:00:09 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-05 12:04:29 -0700, Peter Geoghegan wrote:\n> On Sun, May 28, 2023 at 7:49 PM Heikki Linnakangas <[email protected]> wrote:\n> > IMO this makes sense for v16. These new arguments were introduced in\n> > v16, so if we have second thoughts, now is the right time to change\n> > them, before v16 is released. It will reduce the backpatching effort in\n> > the future; if we apply this in v17, then v16 will be the odd one out.\n>\n> My current plan is to commit this later in the week, unless there are\n> further objections. Wednesday or possibly Thursday.\n\n-1. For me separating the P_NEW path makes a lot of sense, but isn't 16\nmaterial. I don't agree that it's a problem to have heaprel as a parameter in\na bunch of places that don't strictly need it today.\n\nI don't really understand why the patch does s/heaprel/heapRel/. Most of these\nfunctions aren't actually using camelcase parameters? This part of the change\njust blows up the size, making it harder to review.\n\n\n 12 files changed, 317 insertions(+), 297 deletions(-)\n...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Jun 2023 17:12:44 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "On Wed, Jun 7, 2023 at 5:12 PM Andres Freund <[email protected]> wrote:\n> -1. For me separating the P_NEW path makes a lot of sense, but isn't 16\n> material. I don't agree that it's a problem to have heaprel as a parameter in\n> a bunch of places that don't strictly need it today.\n\nAs I've said, this is primarily about keeping all of the branches\nconsistent. I agree that there is no particular known consequence to\nnot doing this, and have said as much several times.\n\n> I don't really understand why the patch does s/heaprel/heapRel/.\n\nThat has been the style used within nbtree for many years now.\n\n> Most of these\n> functions aren't actually using camelcase parameters? This part of the change\n> just blows up the size, making it harder to review.\n\nI wonder what made it impossibly hard to review the first time around.\nThe nbtree aspects of this work were pretty much written on\nauto-pilot. I had no intention of making a fuss about it, but then I\nnever expected this push back.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 7 Jun 2023 18:10:00 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "On 2023-Jun-07, Peter Geoghegan wrote:\n\n> On Wed, Jun 7, 2023 at 5:12 PM Andres Freund <[email protected]> wrote:\n\n> > I don't really understand why the patch does s/heaprel/heapRel/.\n> \n> That has been the style used within nbtree for many years now.\n\nIMO this kind of change definitely does not have place in a post-beta1\nrestructuring patch. We rarely indulge in case-fixing exercises at any\nother time, and I don't see any good argument why post-beta1 is a better\ntime for it. I suggest that you should strive to keep the patch as\nsmall as possible.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Cómo ponemos nuestros dedos en la arcilla del otro. Eso es la amistad; jugar\nal alfarero y ver qué formas se pueden sacar del otro\" (C. Halloway en\nLa Feria de las Tinieblas, R. Bradbury)\n\n\n",
"msg_date": "Thu, 8 Jun 2023 16:21:55 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 7:22 AM Alvaro Herrera <[email protected]> wrote:\n> IMO this kind of change definitely does not have place in a post-beta1\n> restructuring patch. We rarely indulge in case-fixing exercises at any\n> other time, and I don't see any good argument why post-beta1 is a better\n> time for it.\n\nThere is a glaring inconsistency. Now about half of the relevant\nfunctions in nbtree.h use \"heaprel\", while the other half use\n\"heapRel\". Obviously that's not the end of the world, but it's\nannoying. It's exactly the kind of case-fixing exercise that does tend\nto happen.\n\nI'm not going to argue this point any further, though. I will make\nthis change at a later date. That will introduce an inconsistency\nbetween branches, of course, but apparently there isn't any\nalternative.\n\n> I suggest that you should strive to keep the patch as\n> small as possible.\n\nAttached is v4, which goes back to using \"heaprel\" in new-to-16 code.\nAs a result, it is slightly smaller than v3.\n\nMy new plan is to commit this tomorrow, since the clear consensus is\nthat we should go ahead with this for 16.\n\n-- \nPeter Geoghegan",
"msg_date": "Thu, 8 Jun 2023 08:50:31 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-08 08:50:31 -0700, Peter Geoghegan wrote:\n> On Thu, Jun 8, 2023 at 7:22 AM Alvaro Herrera <[email protected]> wrote:\n> > IMO this kind of change definitely does not have place in a post-beta1\n> > restructuring patch. We rarely indulge in case-fixing exercises at any\n> > other time, and I don't see any good argument why post-beta1 is a better\n> > time for it.\n>\n> There is a glaring inconsistency. Now about half of the relevant\n> functions in nbtree.h use \"heaprel\", while the other half use\n> \"heapRel\". Obviously that's not the end of the world, but it's\n> annoying. It's exactly the kind of case-fixing exercise that does tend\n> to happen.\n\n From what I can tell it's largely consistent with other parameters of the\nrespective function. E.g. btinsert(), _bt_doinsert() use camelCase for most\nparameters, so heapRel fits in. There are a few cases where it's not obvious\nwhat the pattern is intended to be :/.\n\n\n\n> My new plan is to commit this tomorrow, since the clear consensus is\n> that we should go ahead with this for 16.\n\nI'm not sure there is that concensus (for me half the changes shouldn't be\ndone, the rest should be in 17), but in the end it doesn't matter that much.\n\n\n\n> --- a/src/include/utils/tuplesort.h\n> +++ b/src/include/utils/tuplesort.h\n> @@ -399,9 +399,7 @@ extern Tuplesortstate *tuplesort_begin_heap(TupleDesc tupDesc,\n> \t\t\t\t\t\t\t\t\t\t\tint workMem, SortCoordinate coordinate,\n> \t\t\t\t\t\t\t\t\t\t\tint sortopt);\n> extern Tuplesortstate *tuplesort_begin_cluster(TupleDesc tupDesc,\n> -\t\t\t\t\t\t\t\t\t\t\t Relation indexRel,\n> -\t\t\t\t\t\t\t\t\t\t\t Relation heaprel,\n> -\t\t\t\t\t\t\t\t\t\t\t int workMem,\n> +\t\t\t\t\t\t\t\t\t\t\t Relation indexRel, int workMem,\n> \t\t\t\t\t\t\t\t\t\t\t SortCoordinate coordinate,\n> \t\t\t\t\t\t\t\t\t\t\t int sortopt);\n\nI think we should continue to provide the table here, even if we don't need it\ntoday.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 9 Jun 2023 12:03:44 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "On Fri, Jun 9, 2023 at 12:03 PM Andres Freund <[email protected]> wrote:\n> From what I can tell it's largely consistent with other parameters of the\n> respective function. E.g. btinsert(), _bt_doinsert() use camelCase for most\n> parameters, so heapRel fits in. There are a few cases where it's not obvious\n> what the pattern is intended to be :/.\n\nIt's not 100% clear what the underlying principle is, but we mix\ncamelCase and underscore styles all the time, so that's always kinda\ntrue.\n\n> > My new plan is to commit this tomorrow, since the clear consensus is\n> > that we should go ahead with this for 16.\n>\n> I'm not sure there is that concensus (for me half the changes shouldn't be\n> done, the rest should be in 17), but in the end it doesn't matter that much.\n\nReally? What parts are you opposed to in principle? I don't see why\nyou wouldn't support everything or nothing for 17 (questions of style\naside). I don't see what's ambiguous about what we should do here,\nbarring the 16-or-17 question.\n\nIt's not like nbtree ever really \"used P_NEW\". It doesn't actually\ndepend on any of the P_NEW handling inside bufmgr.c. It looks a little\nlike it might, but that's just an accident.\n\n> > --- a/src/include/utils/tuplesort.h\n> > +++ b/src/include/utils/tuplesort.h\n> > @@ -399,9 +399,7 @@ extern Tuplesortstate *tuplesort_begin_heap(TupleDesc tupDesc,\n> > int workMem, SortCoordinate coordinate,\n> > int sortopt);\n> > extern Tuplesortstate *tuplesort_begin_cluster(TupleDesc tupDesc,\n> > - Relation indexRel,\n> > - Relation heaprel,\n> > - int workMem,\n> > + Relation indexRel, int workMem,\n> > SortCoordinate coordinate,\n> > int sortopt);\n>\n> I think we should continue to provide the table here, even if we don't need it\n> today.\n\nI don't see why, but okay. I'll do it that way.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 9 Jun 2023 12:23:36 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-09 12:23:36 -0700, Peter Geoghegan wrote:\n> On Fri, Jun 9, 2023 at 12:03 PM Andres Freund <[email protected]> wrote:\n> > > My new plan is to commit this tomorrow, since the clear consensus is\n> > > that we should go ahead with this for 16.\n> >\n> > I'm not sure there is that concensus (for me half the changes shouldn't be\n> > done, the rest should be in 17), but in the end it doesn't matter that much.\n> \n> Really? What parts are you opposed to in principle? I don't see why\n> you wouldn't support everything or nothing for 17 (questions of style\n> aside). I don't see what's ambiguous about what we should do here,\n> barring the 16-or-17 question.\n\nI don't think minimizing heaprel being passed around is a worthwhile goal, the\ncontrary actually: It just makes it painful to use heaprel anywhere, because\nit causes precisely these cascading changes of adding/removing the parameter\nto a bunch of functions. If anything we should do the opposite.\n\n\n> It's not like nbtree ever really \"used P_NEW\". It doesn't actually\n> depend on any of the P_NEW handling inside bufmgr.c. It looks a little\n> like it might, but that's just an accident.\n\nThat part I am entirely on board with, as mentioned earlier. It doesn't seem\nlike 16 material though.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 9 Jun 2023 21:40:15 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "On Fri, Jun 9, 2023 at 9:40 PM Andres Freund <[email protected]> wrote:\n> I don't think minimizing heaprel being passed around is a worthwhile goal, the\n> contrary actually: It just makes it painful to use heaprel anywhere, because\n> it causes precisely these cascading changes of adding/removing the parameter\n> to a bunch of functions. If anything we should do the opposite.\n>\n>\n> > It's not like nbtree ever really \"used P_NEW\". It doesn't actually\n> > depend on any of the P_NEW handling inside bufmgr.c. It looks a little\n> > like it might, but that's just an accident.\n>\n> That part I am entirely on board with, as mentioned earlier. It doesn't seem\n> like 16 material though.\n\nObviously you shouldn't need a heaprel to lock a page. As it happened\nGiST already worked without this sort of P_NEW idiom, which is why\ncommit 61b313e4 hardly made any changes at all to GiST, even though\nthe relevant parts of GiST are heavily based on nbtree. Did you just\nforget to plaster similar heaprel arguments all over GiST and SP-GiST?\n\nI'm really disappointed that you're still pushing back here, even\nafter I got a +1 on backpatching from Heikki. This should have been\nstraightforward.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 9 Jun 2023 21:59:10 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
},
{
"msg_contents": "On Fri, Jun 9, 2023 at 12:23 PM Peter Geoghegan <[email protected]> wrote:\n> > I'm not sure there is that concensus (for me half the changes shouldn't be\n> > done, the rest should be in 17), but in the end it doesn't matter that much.\n\nI pushed this just now. I have also closed out the open item.\n\n> > > --- a/src/include/utils/tuplesort.h\n> > > +++ b/src/include/utils/tuplesort.h\n> > > @@ -399,9 +399,7 @@ extern Tuplesortstate *tuplesort_begin_heap(TupleDesc tupDesc,\n> > > int workMem, SortCoordinate coordinate,\n> > > int sortopt);\n> > > extern Tuplesortstate *tuplesort_begin_cluster(TupleDesc tupDesc,\n> > > - Relation indexRel,\n> > > - Relation heaprel,\n> > > - int workMem,\n> > > + Relation indexRel, int workMem,\n> > > SortCoordinate coordinate,\n> > > int sortopt);\n> >\n> > I think we should continue to provide the table here, even if we don't need it\n> > today.\n>\n> I don't see why, but okay. I'll do it that way.\n\nI didn't end up doing that in the version that I pushed (heaprel was\nremoved from tuplesort_begin_cluster in the final version after all),\nsince I couldn't justify the use of NewHeap over OldHeap at the call\nsite in heapam_handler.c. If you're interested in following up with\nthis yourself, I have no objections.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 10 Jun 2023 14:10:26 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up nbtree after logical decoding on standby work"
}
] |
[
{
"msg_contents": "Hi hackers,\r\n\r\nI saw a buildfarm failure on \"dikkop\"[1]. It failed in\r\n035_standby_logical_decoding.pl, because the slots row_removal_inactiveslot and\r\nrow_removal_activeslot are not invalidated after vacuum.\r\n\r\nregress_log_035_standby_logical_decoding:\r\n```\r\n[12:15:05.943](4.442s) not ok 22 - inactiveslot slot invalidation is logged with vacuum on pg_class\r\n[12:15:05.945](0.003s) \r\n[12:15:05.946](0.000s) # Failed test 'inactiveslot slot invalidation is logged with vacuum on pg_class'\r\n# at t/035_standby_logical_decoding.pl line 238.\r\n[12:15:05.948](0.002s) not ok 23 - activeslot slot invalidation is logged with vacuum on pg_class\r\n[12:15:05.949](0.001s) \r\n[12:15:05.950](0.000s) # Failed test 'activeslot slot invalidation is logged with vacuum on pg_class'\r\n# at t/035_standby_logical_decoding.pl line 244.\r\n[13:38:26.977](5001.028s) # poll_query_until timed out executing this query:\r\n# select (confl_active_logicalslot = 1) from pg_stat_database_conflicts where datname = 'testdb'\r\n# expecting this output:\r\n# t\r\n# last actual query output:\r\n# f\r\n# with stderr:\r\n[13:38:26.980](0.003s) not ok 24 - confl_active_logicalslot updated\r\n[13:38:26.982](0.002s) \r\n[13:38:26.982](0.000s) # Failed test 'confl_active_logicalslot updated'\r\n# at t/035_standby_logical_decoding.pl line 251.\r\nTimed out waiting confl_active_logicalslot to be updated at t/035_standby_logical_decoding.pl line 251.\r\n```\r\n\r\n035_standby_logical_decoding.pl:\r\n```\r\n# This should trigger the conflict\r\n$node_primary->safe_psql(\r\n\t'testdb', qq[\r\n CREATE TABLE conflict_test(x integer, y text);\r\n DROP TABLE conflict_test;\r\n VACUUM pg_class;\r\n INSERT INTO flush_wal DEFAULT VALUES; -- see create table flush_wal\r\n]);\r\n\r\n$node_primary->wait_for_replay_catchup($node_standby);\r\n\r\n# Check invalidation in the logfile and in pg_stat_database_conflicts\r\ncheck_for_invalidation('row_removal_', $logstart, 'with vacuum on pg_class');\r\n```\r\n\r\nIs it possible that the vacuum command didn't remove tuples and then the\r\nconflict was not triggered? It seems we can't confirm this because there is not\r\nenough information. Maybe \"vacuum verbose\" can be used to provide more\r\ninformation.\r\n\r\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dikkop&dt=2023-05-24%2006%3A16%3A18\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Fri, 26 May 2023 07:27:01 +0000",
"msg_from": "\"Yu Shi (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "BF animal dikkop reported a failure in 035_standby_logical_decoding"
},
{
"msg_contents": "Hi,\n\nOn 5/26/23 9:27 AM, Yu Shi (Fujitsu) wrote:\n> Hi hackers,\n> \n> I saw a buildfarm failure on \"dikkop\"[1]. It failed in\n> 035_standby_logical_decoding.pl, because the slots row_removal_inactiveslot and\n> row_removal_activeslot are not invalidated after vacuum.\n\nThanks for sharing!\n\n> \n> regress_log_035_standby_logical_decoding:\n> ```\n> [12:15:05.943](4.442s) not ok 22 - inactiveslot slot invalidation is logged with vacuum on pg_class\n> [12:15:05.945](0.003s)\n> [12:15:05.946](0.000s) # Failed test 'inactiveslot slot invalidation is logged with vacuum on pg_class'\n> # at t/035_standby_logical_decoding.pl line 238.\n> [12:15:05.948](0.002s) not ok 23 - activeslot slot invalidation is logged with vacuum on pg_class\n> [12:15:05.949](0.001s)\n> [12:15:05.950](0.000s) # Failed test 'activeslot slot invalidation is logged with vacuum on pg_class'\n> # at t/035_standby_logical_decoding.pl line 244.\n> [13:38:26.977](5001.028s) # poll_query_until timed out executing this query:\n> # select (confl_active_logicalslot = 1) from pg_stat_database_conflicts where datname = 'testdb'\n> # expecting this output:\n> # t\n> # last actual query output:\n> # f\n> # with stderr:\n> [13:38:26.980](0.003s) not ok 24 - confl_active_logicalslot updated\n> [13:38:26.982](0.002s)\n> [13:38:26.982](0.000s) # Failed test 'confl_active_logicalslot updated'\n> # at t/035_standby_logical_decoding.pl line 251.\n> Timed out waiting confl_active_logicalslot to be updated at t/035_standby_logical_decoding.pl line 251.\n> ```\n> \n> 035_standby_logical_decoding.pl:\n> ```\n> # This should trigger the conflict\n> $node_primary->safe_psql(\n> \t'testdb', qq[\n> CREATE TABLE conflict_test(x integer, y text);\n> DROP TABLE conflict_test;\n> VACUUM pg_class;\n> INSERT INTO flush_wal DEFAULT VALUES; -- see create table flush_wal\n> ]);\n> \n> $node_primary->wait_for_replay_catchup($node_standby);\n> \n> # Check invalidation in the logfile and in pg_stat_database_conflicts\n> check_for_invalidation('row_removal_', $logstart, 'with vacuum on pg_class');\n> ```\n> \n> Is it possible that the vacuum command didn't remove tuples and then the\n> conflict was not triggered? \n\nThe flush_wal table added by Andres should guarantee that the WAL is flushed, so\nthe only reason I can think about is indeed that the vacuum did not remove tuples (\nbut I don't get why/how that could be the case).\n\n> It seems we can't confirm this because there is not\n> enough information. \n\nRight, and looking at its status history most of the time the test is green (making it\neven more difficult to diagnose).\n\n> Maybe \"vacuum verbose\" can be used to provide more\n> information.\n\nI can see that dikkop \"belongs\" to Tomas (adding Tomas to this thread).\nTomas, do you think it would be possible to run some 035_standby_logical_decoding.pl\nmanually with \"vacuum verbose\" in the test mentioned above? (or any other way you can think\nabout that would help diagnose this random failure?).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 29 May 2023 11:22:01 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BF animal dikkop reported a failure in\n 035_standby_logical_decoding"
},
{
"msg_contents": "On Monday, May 29, 2023 5:22 PM Drouvot, Bertrand <[email protected]> wrote:\r\n> \r\n> Hi,\r\n> \r\n> On 5/26/23 9:27 AM, Yu Shi (Fujitsu) wrote:\r\n> > Hi hackers,\r\n> >\r\n> > I saw a buildfarm failure on \"dikkop\"[1]. It failed in\r\n> > 035_standby_logical_decoding.pl, because the slots row_removal_inactiveslot\r\n> and\r\n> > row_removal_activeslot are not invalidated after vacuum.\r\n> \r\n> Thanks for sharing!\r\n> \r\n> >\r\n> > regress_log_035_standby_logical_decoding:\r\n> > ```\r\n> > [12:15:05.943](4.442s) not ok 22 - inactiveslot slot invalidation is logged with\r\n> vacuum on pg_class\r\n> > [12:15:05.945](0.003s)\r\n> > [12:15:05.946](0.000s) # Failed test 'inactiveslot slot invalidation is logged\r\n> with vacuum on pg_class'\r\n> > # at t/035_standby_logical_decoding.pl line 238.\r\n> > [12:15:05.948](0.002s) not ok 23 - activeslot slot invalidation is logged with\r\n> vacuum on pg_class\r\n> > [12:15:05.949](0.001s)\r\n> > [12:15:05.950](0.000s) # Failed test 'activeslot slot invalidation is logged with\r\n> vacuum on pg_class'\r\n> > # at t/035_standby_logical_decoding.pl line 244.\r\n> > [13:38:26.977](5001.028s) # poll_query_until timed out executing this query:\r\n> > # select (confl_active_logicalslot = 1) from pg_stat_database_conflicts where\r\n> datname = 'testdb'\r\n> > # expecting this output:\r\n> > # t\r\n> > # last actual query output:\r\n> > # f\r\n> > # with stderr:\r\n> > [13:38:26.980](0.003s) not ok 24 - confl_active_logicalslot updated\r\n> > [13:38:26.982](0.002s)\r\n> > [13:38:26.982](0.000s) # Failed test 'confl_active_logicalslot updated'\r\n> > # at t/035_standby_logical_decoding.pl line 251.\r\n> > Timed out waiting confl_active_logicalslot to be updated at\r\n> t/035_standby_logical_decoding.pl line 251.\r\n> > ```\r\n> >\r\n> > 035_standby_logical_decoding.pl:\r\n> > ```\r\n> > # This should trigger the conflict\r\n> > $node_primary->safe_psql(\r\n> > \t'testdb', qq[\r\n> > CREATE TABLE conflict_test(x integer, y text);\r\n> > DROP TABLE conflict_test;\r\n> > VACUUM pg_class;\r\n> > INSERT INTO flush_wal DEFAULT VALUES; -- see create table flush_wal\r\n> > ]);\r\n> >\r\n> > $node_primary->wait_for_replay_catchup($node_standby);\r\n> >\r\n> > # Check invalidation in the logfile and in pg_stat_database_conflicts\r\n> > check_for_invalidation('row_removal_', $logstart, 'with vacuum on pg_class');\r\n> > ```\r\n> >\r\n> > Is it possible that the vacuum command didn't remove tuples and then the\r\n> > conflict was not triggered?\r\n> \r\n> The flush_wal table added by Andres should guarantee that the WAL is flushed,\r\n> so\r\n> the only reason I can think about is indeed that the vacuum did not remove\r\n> tuples (\r\n> but I don't get why/how that could be the case).\r\n> \r\n> > It seems we can't confirm this because there is not\r\n> > enough information.\r\n> \r\n> Right, and looking at its status history most of the time the test is green (making\r\n> it\r\n> even more difficult to diagnose).\r\n> \r\n> > Maybe \"vacuum verbose\" can be used to provide more\r\n> > information.\r\n> \r\n> I can see that dikkop \"belongs\" to Tomas (adding Tomas to this thread).\r\n> Tomas, do you think it would be possible to run some\r\n> 035_standby_logical_decoding.pl\r\n> manually with \"vacuum verbose\" in the test mentioned above? 
(or any other\r\n> way you can think\r\n> about that would help diagnose this random failure?).\r\n> \r\n\r\nThanks for your reply.\r\n\r\nI saw another failure on \"drongo\" [1], which looks like a similar problem. \r\n\r\nMaybe a temporary patch can be committed to dump the result of \"vacuum verbose\".\r\nAnd we can check this when the test fails.\r\n\r\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2023-05-26%2018%3A05%3A57\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Mon, 29 May 2023 09:58:13 +0000",
"msg_from": "\"Yu Shi (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: BF animal dikkop reported a failure in\n 035_standby_logical_decoding"
},
{
"msg_contents": "\"Drouvot, Bertrand\" <[email protected]> writes:\n> On 5/26/23 9:27 AM, Yu Shi (Fujitsu) wrote:\n>> Is it possible that the vacuum command didn't remove tuples and then the\n>> conflict was not triggered? \n\n> The flush_wal table added by Andres should guarantee that the WAL is flushed, so\n> the only reason I can think about is indeed that the vacuum did not remove tuples (\n> but I don't get why/how that could be the case).\n\nThis test is broken on its face:\n\n CREATE TABLE conflict_test(x integer, y text);\n DROP TABLE conflict_test;\n VACUUM full pg_class;\n\nThere will be something VACUUM can remove only if there were no other\ntransactions holding back global xmin --- and there's not even a delay\nhere to give any such transaction a chance to finish.\n\nBackground autovacuum is the most likely suspect for breaking that,\nbut I wouldn't be surprised if something in the logical replication\nmechanism itself could be running a transaction at the wrong instant.\n\nSome of the other recovery tests set\nautovacuum = off\nto try to control such problems, but I'm not sure how much of\na solution that really is.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 May 2023 07:03:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BF animal dikkop reported a failure in\n 035_standby_logical_decoding"
},
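A minimal psql-level sketch (not part of the thread; the table and session layout are illustrative) of the failure mode Tom describes: any concurrently open snapshot holds back the global xmin, so VACUUM cannot remove the dead pg_class rows and no snapshot-conflict record reaches the standby.

```sql
-- Session 1: hold an old snapshot, pinning the global xmin.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM pg_class;          -- acquires the snapshot

-- Session 2: create dead catalog rows, then try to clean them up.
CREATE TABLE conflict_test(x integer, y text);
DROP TABLE conflict_test;
VACUUM (VERBOSE) pg_class;              -- reports dead row versions that
                                        -- cannot be removed yet

-- Session 1:
COMMIT;                                 -- xmin advances; a later VACUUM of
                                        -- pg_class can now remove the rows
```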
{
"msg_contents": "Hi,\n\nOn 5/29/23 1:03 PM, Tom Lane wrote:\n> \"Drouvot, Bertrand\" <[email protected]> writes:\n>> On 5/26/23 9:27 AM, Yu Shi (Fujitsu) wrote:\n>>> Is it possible that the vacuum command didn't remove tuples and then the\n>>> conflict was not triggered?\n> \n>> The flush_wal table added by Andres should guarantee that the WAL is flushed, so\n>> the only reason I can think about is indeed that the vacuum did not remove tuples (\n>> but I don't get why/how that could be the case).\n> \n> This test is broken on its face:\n> \n> CREATE TABLE conflict_test(x integer, y text);\n> DROP TABLE conflict_test;\n> VACUUM full pg_class;\n> \n> There will be something VACUUM can remove only if there were no other\n> transactions holding back global xmin --- and there's not even a delay\n> here to give any such transaction a chance to finish.\n> \n> Background autovacuum is the most likely suspect for breaking that,\n\nOh right, I did not think autovacuum could start during this test, but yeah there\nis no reasons it could not.\n\n> but I wouldn't be surprised if something in the logical replication\n> mechanism itself could be running a transaction at the wrong instant.\n> \n> Some of the other recovery tests set\n> autovacuum = off\n> to try to control such problems, but I'm not sure how much of\n> a solution that really is.\n\nOne option I can think of is to:\n\n1) set autovacuum = off (as it looks like the usual suspect).\n2) trigger the vacuum in verbose mode (as suggested by Shi-san) and\ndepending of its output run the \"invalidation\" test or: re-launch the vacuum, re-check the output\nand so on.. (n times max). If n is reached, then skip this test.\n\nAs this test is currently failing randomly (and it looks like there is more success that failures, even without\nautovacuum = off), then the test should still validate that the invalidation works as expected for the large\nmajority of runs (and skipping the test should be pretty rare then).\n\nWould that make sense?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 29 May 2023 14:31:24 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BF animal dikkop reported a failure in\n 035_standby_logical_decoding"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-29 14:31:24 +0200, Drouvot, Bertrand wrote:\n> On 5/29/23 1:03 PM, Tom Lane wrote:\n> > but I wouldn't be surprised if something in the logical replication\n> > mechanism itself could be running a transaction at the wrong instant.\n> > \n> > Some of the other recovery tests set\n> > autovacuum = off\n> > to try to control such problems, but I'm not sure how much of\n> > a solution that really is.\n> \n> One option I can think of is to:\n> \n> 1) set autovacuum = off (as it looks like the usual suspect).\n> 2) trigger the vacuum in verbose mode (as suggested by Shi-san) and\n> depending of its output run the \"invalidation\" test or: re-launch the vacuum, re-check the output\n> and so on.. (n times max). If n is reached, then skip this test.\n\nI think the best fix would be to wait for a new snapshot that has a newer\nhorizon, before doing the vacuum full.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 May 2023 08:24:26 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BF animal dikkop reported a failure in\n 035_standby_logical_decoding"
},
{
"msg_contents": "Hi,\n\nOn 5/30/23 5:24 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2023-05-29 14:31:24 +0200, Drouvot, Bertrand wrote:\n>> On 5/29/23 1:03 PM, Tom Lane wrote:\n>>> but I wouldn't be surprised if something in the logical replication\n>>> mechanism itself could be running a transaction at the wrong instant.\n>>>\n>>> Some of the other recovery tests set\n>>> autovacuum = off\n>>> to try to control such problems, but I'm not sure how much of\n>>> a solution that really is.\n>>\n>> One option I can think of is to:\n>>\n>> 1) set autovacuum = off (as it looks like the usual suspect).\n>> 2) trigger the vacuum in verbose mode (as suggested by Shi-san) and\n>> depending of its output run the \"invalidation\" test or: re-launch the vacuum, re-check the output\n>> and so on.. (n times max). If n is reached, then skip this test.\n> \n> I think the best fix would be to wait for a new snapshot that has a newer\n> horizon, before doing the vacuum full.\n> \n\nThanks for the proposal! I think that's a great idea, I'll look at it\nand update the patch I've submitted in [1] accordingly.\n\n\n[1]: https://www.postgresql.org/message-id/bf67e076-b163-9ba3-4ade-b9fc51a3a8f6%40gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 31 May 2023 07:52:05 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BF animal dikkop reported a failure in\n 035_standby_logical_decoding"
}
] |
[
{
"msg_contents": "I have attempted to convert pg_get_indexdef() to use\nsystable_beginscan() based on transaction-snapshot rather than using\nSearchSysCache(). The latter does not have any old info and thus\nprovides only the latest info as per the committed txns, which could\nresult in errors in some scenarios. One such case is mentioned atop\npg_dump.c. The patch is an attempt to fix the same pg_dump's issue.\nAny feedback is welcome.\n\nThere is a long list of pg_get_* functions which use SearchSysCache()\nand thus may expose similar issues. I can give it a try to review the\npossibility of converting all of them. Thoughts?\n\nthanks\nShveta",
"msg_date": "Fri, 26 May 2023 15:24:42 +0530",
"msg_from": "shveta malik <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_get_indexdef() modification to use TxnSnapshot"
},
{
"msg_contents": "On Fri, 26 May 2023 at 15:25, shveta malik <[email protected]> wrote:\n>\n> I have attempted to convert pg_get_indexdef() to use\n> systable_beginscan() based on transaction-snapshot rather than using\n> SearchSysCache(). The latter does not have any old info and thus\n> provides only the latest info as per the committed txns, which could\n> result in errors in some scenarios. One such case is mentioned atop\n> pg_dump.c. The patch is an attempt to fix the same pg_dump's issue.\n> Any feedback is welcome.\n>\n> There is a long list of pg_get_* functions which use SearchSysCache()\n> and thus may expose similar issues. I can give it a try to review the\n> possibility of converting all of them. Thoughts?\n\nI could reproduce this issue in HEAD with pg_dump dumping a database\nhaving a table and an index like:\ncreate table t1(c1 int);\ncreate index idx1 on t1(c1);\n\nSteps to reproduce:\na) ./pg_dump -d postgres -f dump.txt -- Debug this statement and hold\na breakpoint at getTables function just before it takes lock on the\ntable t1 b) when the breakpoint is hit, drop the index idx1 c)\nContinue the pg_dump execution after dropping the index d) pg_dump\ncalls pg_get_indexdef to get the index definition e) since\npg_get_indexdef->pg_get_indexdef uses SearchSysCache1 which uses the\nlatest transaction, it will not get the index information as it sees\nthe latest catalog snapshot(it will not get the index information). e)\npg_dump will get an empty index statement in this case like:\n---------------------------------------------------------------------------------------------\n--\n-- Name: idx; Type: INDEX; Schema: public; Owner: vignesh\n--\n\n;\n---------------------------------------------------------------------------------------------\n\nThis issue is solved using shveta's patch as it registers the\ntransaction snapshot and calls systable_beginscan which will not see\nthe transactions that were started after pg_dump's transaction(the\ndrop index will not be seen) and gets the index definition as expected\nlike:\n---------------------------------------------------------------------------------------------\n--\n-- Name: idx; Type: INDEX; Schema: public; Owner: vignesh\n--\n\nCREATE INDEX idx ON public.t1 USING btree (c1);\n---------------------------------------------------------------------------------------------\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 26 May 2023 15:44:29 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_indexdef() modification to use TxnSnapshot"
},
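A compressed, psql-level sketch of the same race (object names are illustrative; the exact output depends on the server version and on whether the proposed patch is applied):

```sql
-- Setup, as in the report above:
CREATE TABLE t1 (c1 int);
CREATE INDEX idx1 ON t1 (c1);

-- Session 1: pg_dump-style repeatable-read snapshot.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT oid FROM pg_class WHERE relname = 'idx1';   -- index row is visible

-- Session 2: concurrently drop the index.
DROP INDEX idx1;

-- Session 1: the pg_class row is still visible to the snapshot, but the
-- syscache lookup inside pg_get_indexdef() sees the latest committed
-- catalog state, so the definition comes back empty instead of the DDL.
SELECT pg_get_indexdef(oid) FROM pg_class WHERE relname = 'idx1';
COMMIT;
```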
{
"msg_contents": "On Fri, May 26, 2023 at 6:55 PM shveta malik <[email protected]> wrote:\n>\n> I have attempted to convert pg_get_indexdef() to use\n> systable_beginscan() based on transaction-snapshot rather than using\n> SearchSysCache(). The latter does not have any old info and thus\n> provides only the latest info as per the committed txns, which could\n> result in errors in some scenarios. One such case is mentioned atop\n> pg_dump.c. The patch is an attempt to fix the same pg_dump's issue.\n> Any feedback is welcome.\n\nEven only in pg_get_indexdef_worker(), there are still many places\nwhere we use syscache lookup: generate_relation_name(),\nget_relation_name(), get_attname(), get_atttypetypmodcoll(),\nget_opclass_name() etc.\n\n>\n> There is a long list of pg_get_* functions which use SearchSysCache()\n> and thus may expose similar issues. I can give it a try to review the\n> possibility of converting all of them. Thoughts?\n>\n\nit would be going to be a large refactoring and potentially make the\nfuture implementations such as pg_get_tabledef() etc hard. Have you\nconsidered changes to the SearchSysCache() family so that they\ninternally use a transaction snapshot that is registered in advance.\nSince we are already doing similar things for catalog lookup in\nlogical decoding, it might be feasible. That way, the changes to\npg_get_XXXdef() functions would be much easier.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 13 Jun 2023 15:18:30 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_indexdef() modification to use TxnSnapshot"
},
{
"msg_contents": "On Tue, 13 Jun 2023 at 11:49, Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, May 26, 2023 at 6:55 PM shveta malik <[email protected]> wrote:\n> >\n> > I have attempted to convert pg_get_indexdef() to use\n> > systable_beginscan() based on transaction-snapshot rather than using\n> > SearchSysCache(). The latter does not have any old info and thus\n> > provides only the latest info as per the committed txns, which could\n> > result in errors in some scenarios. One such case is mentioned atop\n> > pg_dump.c. The patch is an attempt to fix the same pg_dump's issue.\n> > Any feedback is welcome.\n>\n> Even only in pg_get_indexdef_worker(), there are still many places\n> where we use syscache lookup: generate_relation_name(),\n> get_relation_name(), get_attname(), get_atttypetypmodcoll(),\n> get_opclass_name() etc.\n>\n> >\n> > There is a long list of pg_get_* functions which use SearchSysCache()\n> > and thus may expose similar issues. I can give it a try to review the\n> > possibility of converting all of them. Thoughts?\n> >\n>\n> it would be going to be a large refactoring and potentially make the\n> future implementations such as pg_get_tabledef() etc hard. Have you\n> considered changes to the SearchSysCache() family so that they\n> internally use a transaction snapshot that is registered in advance.\n> Since we are already doing similar things for catalog lookup in\n> logical decoding, it might be feasible. That way, the changes to\n> pg_get_XXXdef() functions would be much easier.\n\nI feel registering an active snapshot before accessing the system\ncatalog as suggested by Sawada-san will be a better fix to solve the\nproblem. If this approach is fine, we will have to similarly fix the\nother pg_get_*** functions like pg_get_constraintdef,\npg_get_function_arguments, pg_get_function_result,\npg_get_function_identity_arguments, pg_get_function_sqlbody,\npg_get_expr, pg_get_partkeydef, pg_get_statisticsobjdef and\npg_get_triggerdef.\nThe Attached patch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Wed, 14 Jun 2023 12:01:02 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_indexdef() modification to use TxnSnapshot"
},
{
"msg_contents": "Hi hackers,\n\nWith '0001-pg_get_indexdef-modification-to-access-catalog-based.patch' \npatch,\nI confirmed that definition information\ncan be collected even if the index is droped during pg_dump.\nThe regression test (make check-world) has passed.\n\nI also tested the view definition for a similar problem.\nAs per the attached patch and test case,\nBy using SetupHistoricSnapshot for pg_get_viewdef() as well,\nSimilarly, definition information can be collected\nfor VIEW definitions even if the view droped during pg_dump.\nThe regression test (make check-world) has passed,\nand pg_dump's SERIALIZABLE results are improved.\nHowever, in a SERIALIZABLE transaction,\nIf you actually try to access it, it will cause ERROR,\nso it seems to cause confusion.\nI think the scope of this improvement should be limited\nto the functions listed as pg_get_*** functions at the moment.\n\n---\n\n# test2 pg_get_indexdef,pg_get_viewdef\n\n## create table,index,view\ndrop table test1;\ncreate table test1(id int);\ncreate index idx1_test1 on test1(id);\ncreate view view1_test1 as select * from test1;\n\n## begin Transaction-A\nbegin transaction isolation level serializable;\nselect pg_current_xact_id();\n\n## begin Transaction-B\nbegin transaction isolation level serializable;\nselect pg_current_xact_id();\n\n## drop index,view in Transaction-A\ndrop index idx1_test1;\ndrop view view1_test1;\ncommit;\n\n## SELECT pg_get_indexdef,pg_get_viewdef in Transaction-B\nSELECT pg_get_indexdef(oid) FROM pg_class WHERE relname = 'idx1_test1';\nSELECT pg_get_viewdef(oid) FROM pg_class WHERE relname = 'view1_test1';\n\n## correct info from index and view\npostgres=*# SELECT pg_get_indexdef(oid) FROM pg_class WHERE relname = \n'idx1_test1';\n pg_get_indexdef\n----------------------------------------------------------\n CREATE INDEX idx1_test1 ON public.test1 USING btree (id)\n(1 row)\n\npostgres=*# SELECT pg_get_viewdef(oid) FROM pg_class WHERE relname = \n'view1_test1';\n pg_get_viewdef\n----------------\n SELECT id +\n FROM test1;\n(1 row)\n\n## However, SELECT * FROM view1_test1 cause ERROR because view does not \nexist\npostgres=*# SELECT * FROM view1_test1;\nERROR: relation \"view1_test1\" does not exist\nLINE 1: SELECT * FROM view1_test1;\n\nBest Regards,\nKeisuke Kuroda\nNTT COMWARE\n\nOn 2023-06-14 15:31, vignesh C wrote:\n> On Tue, 13 Jun 2023 at 11:49, Masahiko Sawada <[email protected]> \n> wrote:\n>> \n>> On Fri, May 26, 2023 at 6:55 PM shveta malik <[email protected]> \n>> wrote:\n>> >\n>> > I have attempted to convert pg_get_indexdef() to use\n>> > systable_beginscan() based on transaction-snapshot rather than using\n>> > SearchSysCache(). The latter does not have any old info and thus\n>> > provides only the latest info as per the committed txns, which could\n>> > result in errors in some scenarios. One such case is mentioned atop\n>> > pg_dump.c. The patch is an attempt to fix the same pg_dump's issue.\n>> > Any feedback is welcome.\n>> \n>> Even only in pg_get_indexdef_worker(), there are still many places\n>> where we use syscache lookup: generate_relation_name(),\n>> get_relation_name(), get_attname(), get_atttypetypmodcoll(),\n>> get_opclass_name() etc.\n>> \n>> >\n>> > There is a long list of pg_get_* functions which use SearchSysCache()\n>> > and thus may expose similar issues. I can give it a try to review the\n>> > possibility of converting all of them. 
Thoughts?\n>> >\n>> \n>> it would be going to be a large refactoring and potentially make the\n>> future implementations such as pg_get_tabledef() etc hard. Have you\n>> considered changes to the SearchSysCache() family so that they\n>> internally use a transaction snapshot that is registered in advance.\n>> Since we are already doing similar things for catalog lookup in\n>> logical decoding, it might be feasible. That way, the changes to\n>> pg_get_XXXdef() functions would be much easier.\n> \n> I feel registering an active snapshot before accessing the system\n> catalog as suggested by Sawada-san will be a better fix to solve the\n> problem. If this approach is fine, we will have to similarly fix the\n> other pg_get_*** functions like pg_get_constraintdef,\n> pg_get_function_arguments, pg_get_function_result,\n> pg_get_function_identity_arguments, pg_get_function_sqlbody,\n> pg_get_expr, pg_get_partkeydef, pg_get_statisticsobjdef and\n> pg_get_triggerdef.\n> The Attached patch has the changes for the same.\n> \n> Regards,\n> Vignesh",
"msg_date": "Fri, 06 Oct 2023 12:01:13 +0900",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: pg_get_indexdef() modification to use TxnSnapshot"
},
{
"msg_contents": "[email protected] writes:\n> On 2023-06-14 15:31, vignesh C wrote:\n>> I have attempted to convert pg_get_indexdef() to use\n>> systable_beginscan() based on transaction-snapshot rather than using\n>> SearchSysCache().\n\nHas anybody thought about the fact that ALTER TABLE ALTER TYPE\n(specifically RememberIndexForRebuilding) absolutely requires seeing\nthe latest version of the index's definition?\n\n>>> it would be going to be a large refactoring and potentially make the\n>>> future implementations such as pg_get_tabledef() etc hard. Have you\n>>> considered changes to the SearchSysCache() family so that they\n>>> internally use a transaction snapshot that is registered in advance.\n\nA very significant fraction of other SearchSysCache callers likewise\ncannot afford to see stale data. We might be able to fix things so\nthat the SQL-accessible ruleutils functionality works differently, but\nwe can't just up and change the behavior of cache lookups everywhere.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 05 Oct 2023 23:11:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_indexdef() modification to use TxnSnapshot"
},
{
"msg_contents": "On Thu, Oct 5, 2023 at 11:15 PM Tom Lane <[email protected]> wrote:\n> [email protected] writes:\n> > On 2023-06-14 15:31, vignesh C wrote:\n> >> I have attempted to convert pg_get_indexdef() to use\n> >> systable_beginscan() based on transaction-snapshot rather than using\n> >> SearchSysCache().\n>\n> Has anybody thought about the fact that ALTER TABLE ALTER TYPE\n> (specifically RememberIndexForRebuilding) absolutely requires seeing\n> the latest version of the index's definition?\n>\n> >>> it would be going to be a large refactoring and potentially make the\n> >>> future implementations such as pg_get_tabledef() etc hard. Have you\n> >>> considered changes to the SearchSysCache() family so that they\n> >>> internally use a transaction snapshot that is registered in advance.\n>\n> A very significant fraction of other SearchSysCache callers likewise\n> cannot afford to see stale data. We might be able to fix things so\n> that the SQL-accessible ruleutils functionality works differently, but\n> we can't just up and change the behavior of cache lookups everywhere.\n\nThis patch was registered in the CommitFest as a bug fix, but I think\nit's a much more significant change than that label applies, and it\nseems like we have no consensus on what the right design is.\n\nSince there's been no response to these (entirely valid) comments from\nTom in the past 3 months, I've marked this CF entry RwF for now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Jan 2024 12:49:29 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_indexdef() modification to use TxnSnapshot"
}
] |
[
{
"msg_contents": "I need to implement a trigger that will behave similarly to a foreign key\nconstraint. The trigger itself will be created with:\n\n CREATE CONSTRAINT TRIGGER ... AFTER INSERT OR UPDATE OF ... ON foo\n\nI'd like to skip execution of the trigger logic if, by the time that the\ntrigger\nis executed, the NEW row is no longer valid. For a normal FOREIGN KEY\ntrigger,\nthis is handled in ri_triggers.c by:\n\n /*\n * We should not even consider checking the row if it is no longer valid,\n * since it was either deleted (so the deferred check should be skipped)\n * or updated (in which case only the latest version of the row should be\n * checked). Test its liveness according to SnapshotSelf. We need pin\n * and lock on the buffer to call HeapTupleSatisfiesVisibility. Caller\n * should be holding pin, but not lock.\n */\n if (!table_tuple_satisfies_snapshot(trigdata->tg_relation, newslot,\nSnapshotSelf))\n return PointerGetDatum(NULL);\n\nThe table_tuple_satisfies_snapshot() function is obviously unavailable from\nPL/pgSQL. Is this a reliable substitute?\n\n IF NOT EXISTS (SELECT FROM foo WHERE ctid = NEW.ctid) THEN\n RETURN NULL;\n END IF;\n\nSpecifically:\n\n1. Is there any possibility that, by the time the trigger function is\ncalled,\n the NEW row's ctid no longer refers to the row version in NEW, but to an\n entirely different row? For example, is it possible for VACUUM to\nreclaim the\n space at that page number and offset in between the INSERT/UPDATE and\nwhen\n the trigger function is called?\n\n2. If I lookup the row by its ctid, will the visibility map be consulted.\nAnd if\n so, is there any material difference between what that would do vs what\n table_tuple_satisfies_snapshot() does?\n\nThanks!\n\nI need to implement a trigger that will behave similarly to a foreign keyconstraint. The trigger itself will be created with: CREATE CONSTRAINT TRIGGER ... AFTER INSERT OR UPDATE OF ... ON fooI'd like to skip execution of the trigger logic if, by the time that the triggeris executed, the NEW row is no longer valid. For a normal FOREIGN KEY trigger,this is handled in ri_triggers.c by: /* * We should not even consider checking the row if it is no longer valid, * since it was either deleted (so the deferred check should be skipped) * or updated (in which case only the latest version of the row should be * checked). Test its liveness according to SnapshotSelf. We need pin * and lock on the buffer to call HeapTupleSatisfiesVisibility. Caller * should be holding pin, but not lock. */ if (!table_tuple_satisfies_snapshot(trigdata->tg_relation, newslot, SnapshotSelf)) return PointerGetDatum(NULL);The table_tuple_satisfies_snapshot() function is obviously unavailable fromPL/pgSQL. Is this a reliable substitute? IF NOT EXISTS (SELECT FROM foo WHERE ctid = NEW.ctid) THEN RETURN NULL; END IF;Specifically:1. Is there any possibility that, by the time the trigger function is called, the NEW row's ctid no longer refers to the row version in NEW, but to an entirely different row? For example, is it possible for VACUUM to reclaim the space at that page number and offset in between the INSERT/UPDATE and when the trigger function is called?2. If I lookup the row by its ctid, will the visibility map be consulted. And if so, is there any material difference between what that would do vs what table_tuple_satisfies_snapshot() does?Thanks!",
"msg_date": "Fri, 26 May 2023 11:04:08 -0400",
"msg_from": "Kaiting Chen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is NEW.ctid usable as table_tuple_satisfies_snapshot?"
},
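A hedged sketch of what such a trigger could look like in PL/pgSQL (the column name ref_id and the validation body are placeholders, not taken from the message above):

```sql
CREATE FUNCTION foo_check() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- Skip the check if the row version that queued this event is no
    -- longer visible: it was deleted, or superseded by a later update
    -- whose own trigger event will perform the check instead.
    IF NOT EXISTS (SELECT FROM foo WHERE ctid = NEW.ctid) THEN
        RETURN NULL;
    END IF;

    -- ... FK-style validation of NEW.ref_id would go here ...

    RETURN NULL;   -- return value is ignored for AFTER row triggers
END;
$$;

CREATE CONSTRAINT TRIGGER foo_ref_check
    AFTER INSERT OR UPDATE OF ref_id ON foo
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW EXECUTE FUNCTION foo_check();
```

Whether that EXISTS test is a sound stand-in for the SnapshotSelf check in ri_triggers.c is exactly the question discussed in the replies below.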
{
"msg_contents": "On Fri, May 26, 2023 at 8:04 AM Kaiting Chen <[email protected]> wrote:\n\n> I need to implement a trigger that will behave similarly to a foreign key\n> constraint. The trigger itself will be created with:\n>\n> CREATE CONSTRAINT TRIGGER ... AFTER INSERT OR UPDATE OF ... ON foo\n>\n> I'd like to skip execution of the trigger logic if, by the time that the\n> trigger\n> is executed, the NEW row is no longer valid.\n>\n\nTo be clear, when using deferrable constraints and insert then subsequently\ndelete a row you wish to return-early in the insert trigger body as if the\nrow had never been inserted in the first place?\n\nThe table_tuple_satisfies_snapshot() function is obviously unavailable from\n> PL/pgSQL. Is this a reliable substitute?\n>\n\nIf a row is not visible at the time the trigger fires the SELECT will not\nreturn it. When the deferred trigger eventually fires the inserted row\nwill no longer exist and SELECT will not return it.\n\nThe above is not tested; assuming it does indeed behave that way I would\nexpect the behavior to be deterministic given that there are no concurrency\nissues involved.\n\n\n>\n> IF NOT EXISTS (SELECT FROM foo WHERE ctid = NEW.ctid) THEN\n> RETURN NULL;\n> END IF;\n>\n> Specifically:\n>\n> 1. Is there any possibility that, by the time the trigger function is\n> called,\n> the NEW row's ctid no longer refers to the row version in NEW, but to an\n> entirely different row? For example, is it possible for VACUUM to\n> reclaim the\n> space at that page number and offset in between the INSERT/UPDATE and\n> when\n> the trigger function is called?\n>\n\nNo. Transaction and MVCC semantics prevent that from happening.\n\n\n> 2. If I lookup the row by its ctid, will the visibility map be consulted.\n>\n\nNo, but that doesn't seem to be material anyway. Your user-space pl/pgsql\nfunction shouldn't care about such a purely performance optimization.\n\nDavid J.\n\nOn Fri, May 26, 2023 at 8:04 AM Kaiting Chen <[email protected]> wrote:I need to implement a trigger that will behave similarly to a foreign keyconstraint. The trigger itself will be created with: CREATE CONSTRAINT TRIGGER ... AFTER INSERT OR UPDATE OF ... ON fooI'd like to skip execution of the trigger logic if, by the time that the triggeris executed, the NEW row is no longer valid.To be clear, when using deferrable constraints and insert then subsequently delete a row you wish to return-early in the insert trigger body as if the row had never been inserted in the first place?The table_tuple_satisfies_snapshot() function is obviously unavailable fromPL/pgSQL. Is this a reliable substitute?If a row is not visible at the time the trigger fires the SELECT will not return it. When the deferred trigger eventually fires the inserted row will no longer exist and SELECT will not return it.The above is not tested; assuming it does indeed behave that way I would expect the behavior to be deterministic given that there are no concurrency issues involved. IF NOT EXISTS (SELECT FROM foo WHERE ctid = NEW.ctid) THEN RETURN NULL; END IF;Specifically:1. Is there any possibility that, by the time the trigger function is called, the NEW row's ctid no longer refers to the row version in NEW, but to an entirely different row? For example, is it possible for VACUUM to reclaim the space at that page number and offset in between the INSERT/UPDATE and when the trigger function is called?No. Transaction and MVCC semantics prevent that from happening.2. 
If I lookup the row by its ctid, will the visibility map be consulted.No, but that doesn't seem to be material anyway. Your user-space pl/pgsql function shouldn't care about such a purely performance optimization.David J.",
"msg_date": "Fri, 26 May 2023 08:33:43 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is NEW.ctid usable as table_tuple_satisfies_snapshot?"
},
{
"msg_contents": "On Fri, May 26, 2023 at 11:34 AM David G. Johnston <\[email protected]> wrote:\n\n> On Fri, May 26, 2023 at 8:04 AM Kaiting Chen <[email protected]> wrote:\n>\n>> I need to implement a trigger that will behave similarly to a foreign key\n>> constraint. The trigger itself will be created with:\n>>\n>> CREATE CONSTRAINT TRIGGER ... AFTER INSERT OR UPDATE OF ... ON foo\n>>\n>> I'd like to skip execution of the trigger logic if, by the time that the\n>> trigger\n>> is executed, the NEW row is no longer valid.\n>>\n>\n> To be clear, when using deferrable constraints and insert then\n> subsequently delete a row you wish to return-early in the insert trigger\n> body as if the row had never been inserted in the first place?\n>\n\nYes this is exactly the behavior I'm looking for. Furthermore, if the row\nis updated more than once in the same transaction and the constraint has\nbeen deferred, or even if the constraint hasn't been deferred but the row\nhas been updated since the trigger is queued (for example, if there are\nmultiple writeable CTEs), then I'd like to skip the trigger body as if that\nupdate didn't occur. Essentially I'm looking for the same behavior as the\nbuiltin triggers that enforce referential integrity.\n\nSpecifically:\n>\n> 1. Is there any possibility that, by the time the trigger function is\n> called,\n> the NEW row's ctid no longer refers to the row version in NEW, but to an\n> entirely different row? For example, is it possible for VACUUM to\n> reclaim the\n> space at that page number and offset in between the INSERT/UPDATE and\n> when\n> the trigger function is called?\n>\n> No. Transaction and MVCC semantics prevent that from happening.\n>\n\nOkay I think this is exactly what I'm looking for.\n\n\n>> 2. If I lookup the row by its ctid, will the visibility map be consulted.\n>>\n>\n> No, but that doesn't seem to be material anyway. Your user-space pl/pgsql\n> function shouldn't care about such a purely performance optimization.\n>\n\nJust to clarify, there's no way for SELECT FROM foo WHERE ctid = NEW.ctid\nto return a row that ordinary wouldn't be visible right? There's no magic\ngoing on with the qual on ctid that skips a visibility check right?\n\nOn Fri, May 26, 2023 at 11:34 AM David G. Johnston <[email protected]> wrote:On Fri, May 26, 2023 at 8:04 AM Kaiting Chen <[email protected]> wrote:I need to implement a trigger that will behave similarly to a foreign keyconstraint. The trigger itself will be created with: CREATE CONSTRAINT TRIGGER ... AFTER INSERT OR UPDATE OF ... ON fooI'd like to skip execution of the trigger logic if, by the time that the triggeris executed, the NEW row is no longer valid.To be clear, when using deferrable constraints and insert then subsequently delete a row you wish to return-early in the insert trigger body as if the row had never been inserted in the first place?Yes this is exactly the behavior I'm looking for. Furthermore, if the row is updated more than once in the same transaction and the constraint has been deferred, or even if the constraint hasn't been deferred but the row has been updated since the trigger is queued (for example, if there are multiple writeable CTEs), then I'd like to skip the trigger body as if that update didn't occur. Essentially I'm looking for the same behavior as the builtin triggers that enforce referential integrity.Specifically:1. 
Is there any possibility that, by the time the trigger function is called, the NEW row's ctid no longer refers to the row version in NEW, but to an entirely different row? For example, is it possible for VACUUM to reclaim the space at that page number and offset in between the INSERT/UPDATE and when the trigger function is called?No. Transaction and MVCC semantics prevent that from happening.Okay I think this is exactly what I'm looking for. 2. If I lookup the row by its ctid, will the visibility map be consulted.No, but that doesn't seem to be material anyway. Your user-space pl/pgsql function shouldn't care about such a purely performance optimization.Just to clarify, there's no way for SELECT FROM foo WHERE ctid = NEW.ctid to return a row that ordinary wouldn't be visible right? There's no magic going on with the qual on ctid that skips a visibility check right?",
"msg_date": "Fri, 26 May 2023 12:23:40 -0400",
"msg_from": "Kaiting Chen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is NEW.ctid usable as table_tuple_satisfies_snapshot?"
},
{
"msg_contents": "Kaiting Chen <[email protected]> writes:\n> On Fri, May 26, 2023 at 11:34 AM David G. Johnston <\n> [email protected]> wrote:\n>> On Fri, May 26, 2023 at 8:04 AM Kaiting Chen <[email protected]> wrote:\n>>> 2. If I lookup the row by its ctid, will the visibility map be consulted.\n\n>> No, but that doesn't seem to be material anyway. Your user-space pl/pgsql\n>> function shouldn't care about such a purely performance optimization.\n\nIt'd be a waste of cycles to consult the map in this usage, since the\ntuple of interest is surely not all-visible and thus the page couldn't\nbe either.\n\n> Just to clarify, there's no way for SELECT FROM foo WHERE ctid = NEW.ctid\n> to return a row that ordinary wouldn't be visible right? There's no magic\n> going on with the qual on ctid that skips a visibility check right?\n\nNo, a ctid test isn't magic in that way; nodeTidscan.c applies the\nsame snapshot check as any other relation scan.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 May 2023 12:49:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is NEW.ctid usable as table_tuple_satisfies_snapshot?"
},
{
"msg_contents": "On Fri, May 26, 2023 at 12:49 PM Tom Lane <[email protected]> wrote:\n\n> > Just to clarify, there's no way for SELECT FROM foo WHERE ctid = NEW.ctid\n> > to return a row that ordinary wouldn't be visible right? There's no magic\n> > going on with the qual on ctid that skips a visibility check right?\n>\n> No, a ctid test isn't magic in that way; nodeTidscan.c applies the\n> same snapshot check as any other relation scan.\n>\n\nOkay thanks!\n\nOn Fri, May 26, 2023 at 12:49 PM Tom Lane <[email protected]> wrote:\n> Just to clarify, there's no way for SELECT FROM foo WHERE ctid = NEW.ctid\n> to return a row that ordinary wouldn't be visible right? There's no magic\n> going on with the qual on ctid that skips a visibility check right?\n\nNo, a ctid test isn't magic in that way; nodeTidscan.c applies the\nsame snapshot check as any other relation scan.Okay thanks!",
"msg_date": "Fri, 26 May 2023 14:41:23 -0400",
"msg_from": "Kaiting Chen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is NEW.ctid usable as table_tuple_satisfies_snapshot?"
}
] |
[
{
"msg_contents": "Maintenance commands (ANALYZE, CLUSTER, REFRESH MATERIALIZED VIEW,\nREINDEX, and VACUUM) currently run as the table owner, and as a\nSECURITY_RESTRICTED_OPERATION.\n\nI propose that we also fix the search_path to \"pg_catalog, pg_temp\"\nwhen running maintenance commands, for two reasons:\n\n1. Make the behavior of maintenance commands more consistent because\nthey'd always have the same search_path.\n\n2. Now that we have the MAINTAIN privilege in 16, privileged non-\nsuperusers can execute maintenance commands on other users' tables.\nThat raises the possibility that a user with MAINTAIN privilege may be\nable to use search_path tricks to escalate privileges to the table\nowner. The MAINTAIN privilege is only given to highly-privileged users,\nbut there's still some risk. For this reason I also propose that it\ngoes in v16.\n\n\nThere's one interesting part: in the code path for creating a\nmaterialized view, ExecCreateTableAs() has this comment:\n\n/* \n * For materialized views, lock down security-restricted operations and\n * arrange to make GUC variable changes local to this command. This is\n * not necessary for security, but this keeps the behavior similar to \n * REFRESH MATERIALIZED VIEW. Otherwise, one could create a\nmaterialized \n * view not possible to refresh. \n */\n\nMy patch doesn't address this ExecCreateTableAs() check. To do so, we\nwould need to set the search path after DefineRelation(), otherwise it\nwill try to create the object in pg_catalog. But DefineRelation() is\nhappening at execution time, well after we entered the\nSECURITY_RESTRICTED_OPERATION, and it doesn't seem good to separate the\nSECURITY_RESTRICTED_OPERATION from setting search_path.\n\nThis ExecCreateTableAs() check doesn't seem terribly important, so I\ndon't think it's necessary to improve it as a part of this patch (it\nwon't be perfect anyway: functions can behave inconsistently for all\nkinds of reasons). But I'm open to suggestion if someone knows a good\nway to do it.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Fri, 26 May 2023 16:21:50 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Fri, 26 May 2023 at 19:22, Jeff Davis <[email protected]> wrote:\n>\n> Maintenance commands (ANALYZE, CLUSTER, REFRESH MATERIALIZED VIEW,\n> REINDEX, and VACUUM) currently run as the table owner, and as a\n> SECURITY_RESTRICTED_OPERATION.\n>\n> I propose that we also fix the search_path to \"pg_catalog, pg_temp\"\n> when running maintenance commands, for two reasons:\n>\n> 1. Make the behavior of maintenance commands more consistent because\n> they'd always have the same search_path.\n\nWhat exactly would this impact? Offhand... expression indexes where\nthe functions in the expression (which would already be schema\nqualified) themselves reference other objects without schema\nqualification?\n\nSo this would negatively affect someone who was using such a dangerous\nfunction definition but was careful to always use the same search_path\non it. Perhaps someone who had created an expression index on their\nown table in their own schema calling their own functions in their own\nschema. As long as nobody else ever calls it that would work but this\nwould cause superuser to no longer be able to reindex it even if\nsuperuser set the same search_path?\n\nI guess that's pretty narrow and a reasonable thing to desupport.\nUsers could just mark those functions with search_path or schema\nqualify the object references in them. Perhaps we should also be\npicking up cases like that sooner so users realize they've created a\nfootgun for themselves?\n\n-- \ngreg\n\n\n",
"msg_date": "Thu, 8 Jun 2023 18:08:08 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
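A hypothetical example of the kind of index Greg describes (schema, table, and function names are made up): the index expression itself is schema-qualified, but the function body resolves another function only through search_path.

```sql
CREATE SCHEMA app;
SET search_path = app, public;

CREATE TABLE app.t (v text);

CREATE FUNCTION app.strip(text) RETURNS text
    LANGUAGE sql IMMUTABLE
    AS $$ SELECT btrim($1) $$;

CREATE FUNCTION app.norm(text) RETURNS text
    LANGUAGE sql IMMUTABLE
    AS $$ SELECT strip(lower($1)) $$;   -- "strip" is found only via search_path

CREATE INDEX t_norm_idx ON app.t (app.norm(v));

-- Under a forced search_path of pg_catalog, pg_temp, the call to strip()
-- no longer resolves, so maintenance of this index fails.  The usual fix
-- is to pin the function's search_path (or schema-qualify the call):
ALTER FUNCTION app.norm(text) SET search_path = app, pg_catalog;
```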
{
"msg_contents": "On Thu, Jun 08, 2023 at 06:08:08PM -0400, Greg Stark wrote:\n> I guess that's pretty narrow and a reasonable thing to desupport.\n> Users could just mark those functions with search_path or schema\n> qualify the object references in them. Perhaps we should also be\n> picking up cases like that sooner so users realize they've created a\n> footgun for themselves?\n\nI'm inclined to agree that this is reasonable to desupport. Relying on the\nsearch_path for the cases Greg describes already seems rather fragile, so\nI'm skeptical that forcing a safe one for maintenance commands would make\nthings significantly worse. At least, it sounds like the right trade-off\nbased on Jeff's note about privilege escalation risks.\n\nI bet we could skip forcing the search_path for maintenance commands run as\nthe table owner, but such a discrepancy seems likely to cause far more\nconfusion than anything else.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 8 Jun 2023 21:55:56 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Thu, 2023-06-08 at 21:55 -0700, Nathan Bossart wrote:\n> On Thu, Jun 08, 2023 at 06:08:08PM -0400, Greg Stark wrote:\n> > I guess that's pretty narrow and a reasonable thing to desupport.\n> > Users could just mark those functions with search_path or schema\n> > qualify the object references in them. Perhaps we should also be\n> > picking up cases like that sooner so users realize they've created\n> > a\n> > footgun for themselves?\n\nMany cases will be picked up, for instance CREATE INDEX will error if\nthe safe search path is not good enough.\n\n> I'm inclined to agree that this is reasonable to desupport.\n\nCommitted.\n\n> I bet we could skip forcing the search_path for maintenance commands\n> run as\n> the table owner, but such a discrepancy seems likely to cause far\n> more\n> confusion than anything else.\n\nAgreed.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 09 Jun 2023 14:00:31 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Fri, Jun 9, 2023 at 2:00 PM Jeff Davis <[email protected]> wrote:\n>\n> On Thu, 2023-06-08 at 21:55 -0700, Nathan Bossart wrote:\n> > On Thu, Jun 08, 2023 at 06:08:08PM -0400, Greg Stark wrote:\n> > > I guess that's pretty narrow and a reasonable thing to desupport.\n> > > Users could just mark those functions with search_path or schema\n> > > qualify the object references in them. Perhaps we should also be\n> > > picking up cases like that sooner so users realize they've created\n> > > a\n> > > footgun for themselves?\n>\n> Many cases will be picked up, for instance CREATE INDEX will error if\n> the safe search path is not good enough.\n>\n> > I'm inclined to agree that this is reasonable to desupport.\n>\n> Committed.\n\nI'm not sure if mine is a valid concern, and it has been a long time\nsince I looked at the search_path's and switching Role's implications\n(CVE-2009-4136) so pardon my ignorance.\n\nIt feels a bit late in release cycle to introduce this breaking\nchange. If they depended on search_path, people and utilities that use\nthese maintenance commands may see failures. Although I can't think of\na scenario where such a failure may cause an outage, sometimes these\nmaintenance operations are performed while the users are staring down\nthe barrel of a gun (imminent danger of running out of space, bad\nstatistics causing absurd query plans, etc.). So, if not directly, a\nfailure of these operations may indirectly cause an outage.\n\nI feel more thought needs to be given to the impact of this change,\nand we should to give others more time for feedback.\n\nShort of that, it'd be prudent to allow the user to somehow fall back\nto old behaviour; a command-line option, or GUC, etc. That way we can\nmark the old behaviour \"deprecated\", with a workaround for those who\nmay desperately need it, and in another release or so, finally pull\nthe plug on old behaviour.\n\nMy 2bits.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Fri, 9 Jun 2023 16:29:52 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Fri, 2023-06-09 at 16:29 -0700, Gurjeet Singh wrote:\n> I'm not sure if mine is a valid concern, and it has been a long time\n> since I looked at the search_path's and switching Role's implications\n> (CVE-2009-4136) so pardon my ignorance.\n> \n> It feels a bit late in release cycle to introduce this breaking\n> change.\n\nThat's a valid concern. It just needs to be weighed against the\npotential problems of running maintenance code with different search\npaths, and the interaction with the new MAINTAIN privilege.\n\n> I feel more thought needs to be given to the impact of this change,\n> and we should to give others more time for feedback.\n\nFor context, I initially posted to the -security list in case it needed\nto be addressed there, and got some feedback on the patch before\nposting to -hackers two weeks ago. So it has been seen by at least 4\npeople.\n\nBut I'm happy to hear more input and I'll backtrack if necessary.\n\nHere are my thoughts:\n\nLazy VACUUM is by far the most important for the overall system. It's\nunaffected by this change; see comment in vacuum_rel():\n\n /* \n * Switch to the table owner's userid, so that any index functions\nare run\n * as that user. Also lock down security-restricted operations and\n * arrange to make GUC variable changes local to this command. (This\nis\n * unnecessary, but harmless, for lazy VACUUM.)\n */\n\nREINDEX, CLUSTER, and VACUUM FULL are potentially affected because of\nindex functions, but only if the index functions are quite broken (or\nat least very fragile) already.\n\nREFRESH MATERIALIZED VIEW is the most likely to be affected because it\nis more likely to call \"interesting\" functions and the author may not\nanticipate a different search path.\n\nA normal dump/reload cycle for upgrade testing will catch these\nproblems because it will create indexes after loading the data\n(DefineIndex sets the search path), and it will also call REFRESH\nMATERIALIZED VIEW. If using pg_upgrade instead, a post-upgrade ANALYZE\nwill catch index function problems, but I suppose not MV problems.\n\nSo there is some risk to this change. It feels fairly narrow to me, but\nnon-zero. Perhaps we can do more?\n\n> Short of that, it'd be prudent to allow the user to somehow fall back\n> to old behaviour; a command-line option, or GUC, etc. That way we can\n> mark the old behaviour \"deprecated\", with a workaround for those who\n> may desperately need it, and in another release or so, finally pull\n> the plug on old behaviour.\n\nThat sounds wise, though others may not like the idea of a GUC just for\nthis change.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 09 Jun 2023 18:45:50 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Fri, 2023-05-26 at 16:21 -0700, Jeff Davis wrote:\n> Maintenance commands (ANALYZE, CLUSTER, REFRESH MATERIALIZED VIEW,\n> REINDEX, and VACUUM) currently run as the table owner, and as a\n> SECURITY_RESTRICTED_OPERATION.\n> \n> I propose that we also fix the search_path to \"pg_catalog, pg_temp\"\n> when running maintenance commands\n\nNew patch attached.\n\nWe need this patch for several reasons:\n\n* If you have a functional index, and the function depends on the\nsearch_path, then it's easy to corrupt your index if you (or a\nsuperuser) run a REINDE/CLUSTER with the wrong search_path.\n\n* The MAINTAIN privilege needs a safe search_path, and was reverted\nfrom 16 because the search_path in 16 is not restricted.\n\n* In general, it's a good idea for things like functional indexes and\nmaterialized views to be independent of the search_path.\n\n* The search_path is already restricted in some other contexts, like\nlogical replication and autoanalyze.\n\nOthers have raised some concerns though:\n\n* It might break for users who have a functional index where the\nfunction implicitly depends on a search_path containing a namespace\nother than pg_catalog. My opinion is that such functional indexes are\nconceptually broken and we need to desupport them, and there will be\nsome breakage, but I'm open to suggestion about how we minimize that (a\ncompatibility GUC or something?).\n\n* The fix might not go far enough or might be in the wrong place. I'm\nopen to suggestion here, too. Maybe we can make it part of the general\nfunction call mechanism, and can be overridden by explicitly setting\nthe function search path? Or maybe we need new syntax where the\nfunction can acquire the search path from the session explicitly, but\nuses a safe search path by default?\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Thu, 06 Jul 2023 18:39:27 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Thu, 6 Jul 2023 at 21:39, Jeff Davis <[email protected]> wrote:\n\nI apologize in advance if anything I’ve written below is either too obvious\nor too crazy or misinformed to belong here. I hope I have something to say\nthat is on point, but feel unsure what makes sense to say.\n\n* It might break for users who have a functional index where the\n> function implicitly depends on a search_path containing a namespace\n> other than pg_catalog. My opinion is that such functional indexes are\n> conceptually broken and we need to desupport them, and there will be\n> some breakage, but I'm open to suggestion about how we minimize that (a\n> compatibility GUC or something?).\n>\n\nI agree this is OK. If somebody has an index whole meaning depends on the\nsearch_path, then the best that can be said is that their database hasn't\nbeen corrupted yet. At the same time, I can see that somebody would get\nupset if they couldn't upgrade their database because of this. Maybe\npg_upgrade could apply \"SET search_path TO pg_catalog, pg_temp\" to any\nfunction used in a functional index that doesn't have a search_path setting\nof its own? (BEGIN ATOMIC functions count, if I understand correctly, as\nhaving a search_path setting, because the lookups happen at definition time)\n\nNow I'm doing more reading and I'm worried about SET TIME ZONE (or more\nprecisely, its absence) and maybe some other ones.\n\n* The fix might not go far enough or might be in the wrong place. I'm\n> open to suggestion here, too. Maybe we can make it part of the general\n> function call mechanism, and can be overridden by explicitly setting\n> the function search path? Or maybe we need new syntax where the\n> function can acquire the search path from the session explicitly, but\n> uses a safe search path by default?\n>\n\nChange it so by default each function gets handled as if \"SET search_path\nFROM CURRENT\" was applied to it? That's what I do for all my functions\n(maybe hurting performance?). Expand on my pg_upgrade idea above by\napplying it to all functions?\n\nI feel that this may tie into other behaviour issues where to me it is\nobvious that the expected behaviour should be different from the actual\nbehaviour. If a view calls a function, shouldn't it be called in the\ncontext of the view's definer/owner? It's weird that I can write a view\nthat filters a table for users of the view, but as soon as the view calls\nfunctions they run in the security context of the user of the view. Are\nviews security definers or not? Similar comment for triggers. Also as far\nas I can tell there is no way for a security definer function to determine\nwho (which user) invoked it. So I can grant/deny access to run a particular\nfunction using permissions, but I can't have the supposed security definer\ndefine security for different callers.\n\nIs the fundamental problem that we now find ourselves wanting to do things\nthat require different defaults to work smoothly? On some level I suspect\nwe want lexical scoping, which is what most of us have in our programming\nlanguages, in the database; but the database has many elements of dynamic\nscoping, and changing that is both a compatibility break and requires\nsignificant changes in the way the database is designed.\n\nOn Thu, 6 Jul 2023 at 21:39, Jeff Davis <[email protected]> wrote:I apologize in advance if anything I’ve written below is either too obvious or too crazy or misinformed to belong here. 
I hope I have something to say that is on point, but feel unsure what makes sense to say.\n* It might break for users who have a functional index where the\nfunction implicitly depends on a search_path containing a namespace\nother than pg_catalog. My opinion is that such functional indexes are\nconceptually broken and we need to desupport them, and there will be\nsome breakage, but I'm open to suggestion about how we minimize that (a\ncompatibility GUC or something?).I agree this is OK. If somebody has an index whole meaning depends on the search_path, then the best that can be said is that their database hasn't been corrupted yet. At the same time, I can see that somebody would get upset if they couldn't upgrade their database because of this. Maybe pg_upgrade could apply \"SET search_path TO pg_catalog, pg_temp\" to any function used in a functional index that doesn't have a search_path setting of its own? (BEGIN ATOMIC functions count, if I understand correctly, as having a search_path setting, because the lookups happen at definition time)Now I'm doing more reading and I'm worried about SET TIME ZONE (or more precisely, its absence) and maybe some other ones.\n* The fix might not go far enough or might be in the wrong place. I'm\nopen to suggestion here, too. Maybe we can make it part of the general\nfunction call mechanism, and can be overridden by explicitly setting\nthe function search path? Or maybe we need new syntax where the\nfunction can acquire the search path from the session explicitly, but\nuses a safe search path by default? Change it so by default each function gets handled as if \"SET search_path FROM CURRENT\" was applied to it? That's what I do for all my functions (maybe hurting performance?). Expand on my pg_upgrade idea above by applying it to all functions?I feel that this may tie into other behaviour issues where to me it is obvious that the expected behaviour should be different from the actual behaviour. If a view calls a function, shouldn't it be called in the context of the view's definer/owner? It's weird that I can write a view that filters a table for users of the view, but as soon as the view calls functions they run in the security context of the user of the view. Are views security definers or not? Similar comment for triggers. Also as far as I can tell there is no way for a security definer function to determine who (which user) invoked it. So I can grant/deny access to run a particular function using permissions, but I can't have the supposed security definer define security for different callers.Is the fundamental problem that we now find ourselves wanting to do things that require different defaults to work smoothly? On some level I suspect we want lexical scoping, which is what most of us have in our programming languages, in the database; but the database has many elements of dynamic scoping, and changing that is both a compatibility break and requires significant changes in the way the database is designed.",
"msg_date": "Thu, 6 Jul 2023 23:22:13 -0400",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "Hi,\n\nOn Thu, 2023-07-06 at 23:22 -0400, Isaac Morland wrote:\n> Maybe pg_upgrade could apply \"SET search_path TO pg_catalog, pg_temp\"\n> to any function used in a functional index that doesn't have a\n> search_path setting of its own?\n\nI don't think we want to go down the road of trying to solve this at\nupgrade time.\n\n> Now I'm doing more reading and I'm worried about SET TIME ZONE (or\n> more precisely, its absence) and maybe some other ones.\n\nThat's a good point that it's not limited to search_path, but\nsearch_path is by far the biggest problem.\n\nFor one thing, functions affected by TimeZone or other GUCs are\ntypically marked STABLE, and can't be used in index expressions. Also,\nsearch_path affects a lot more functions.\n \n> Change it so by default each function gets handled as if \"SET\n> search_path FROM CURRENT\" was applied to it?\n\nYes, that's one idea, along with some syntax to get the old behavior\n(inherit search_path at runtime) if you want.\n\nIt feels weird to make search_path too special in the syntax though. If\nwe want a general solution, we could do something like:\n\n CREATE FUNCTION ...\n [DEPENDS ON CONFIGURATION {NONE|{some_guc}[, ...]}]\n [CONFIGURATION IS {STATIC|DYNAMIC}]\n\nwhere STATIC means \"all of the GUC dependencies are SET FROM CURRENT\nunless specified otherwise\" and DYNAMIC means \"all of the GUC\ndependencies come from the session at runtime unless specified\notherwise\".\n\nThe default would be \"DEPENDS CONFIGURATION search_path CONFIGURATION\nIS STATIC\".\n\nThat would make search_path special only because, by default, every\nfunction would depend on it. Which I think summarizes the reason\nsearch_path really is special.\n\nThat also opens up opportunities to do other things we might want to\ndo:\n\n * have a compatibility GUC to set the default back to DYNAMIC\n * track other dependencies of functions better (\"DEPENDS ON TABLE\n...\")\n * provide better error messages, like \"can't use function xyz in\nindex expression because it depends on configuration parameter foo\"\n * be more consistent about using STABLE to mean that the function\ndepends on a snapshot, rather than overloading it for GUC dependencies\n\nThe question is, given that the acute problem is search_path, do we\nwant to invent all of the syntax above? Are there other use cases for\nit, or should we just focus on search_path?\n\n> That's what I do for all my functions (maybe hurting performance?).\n\nIt doesn't look cheap, although I think we could optimize it.\n\n> If a view calls a function, shouldn't it be called in the context of\n> the view's definer/owner?\n\nYeah, there are a bunch of problems along those lines. I don't know if\nwe can solve them all in one release.\n\n> Is the fundamental problem that we now find ourselves wanting to do\n> things that require different defaults to work smoothly? On some\n> level I suspect we want lexical scoping, which is what most of us\n> have in our programming languages, in the database; but the database\n> has many elements of dynamic scoping, and changing that is both a\n> compatibility break and requires significant changes in the way the\n> database is designed.\n\nDoes that suggest another approach?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 07 Jul 2023 08:42:01 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Thu, 2023-07-06 at 18:39 -0700, Jeff Davis wrote:\n\n> * The fix might not go far enough or might be in the wrong place. I'm\n> open to suggestion here, too. Maybe we can make it part of the\n> general\n> function call mechanism, and can be overridden by explicitly setting\n> the function search path? Or maybe we need new syntax where the\n> function can acquire the search path from the session explicitly, but\n> uses a safe search path by default?\n\nI'm inclined to proceed with the current approach here, which is to\njust fix search_path for maintenance commands. Other approaches may be\npossible, but:\n\n (a) We already special-case the way functions are executed for\nmaintenance commands in other ways -- we run the functions as the table\nowner (even SECURITY INVOKER functions) and run them as a\nSECURITY_RESTRICTED_OPERATION. Setting the search_path to a safe value\nseems like a natural extension of that; and\n\n (b) The lack of a safe search path is blocking other useful features,\nsuch as the MAINTAIN privilege; and\n\n (c) I don't see other proposals, aside from a few ideas I put forward\nhere[1], which didn't get any responses.\n\nThe current approach seemed to get support from Noah, Nathan, and Greg\nStark.\n\nConcerns were raised by Gurjeet, Tom, and Robert in the 16 cycle; but\nI'm not sure whether those objections were specific to the 16 cycle or\nwhether they are objections to the approach entirely. Comments?\n\nRegards,\n\tJeff Davis\n\n\n[1]\nhttps://www.postgresql.org/message-id/6781cc79580c464a63fc0a1343637ed2b2b0cf09.camel%40j-davis.com\n\n\n",
"msg_date": "Thu, 13 Jul 2023 11:56:00 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 11:56 AM Jeff Davis <[email protected]> wrote:\n>\n> The current approach seemed to get support from Noah, Nathan, and Greg\n> Stark.\n>\n> Concerns were raised by Gurjeet, Tom, and Robert in the 16 cycle; but\n\nI didn't see Tom's or Robert's concerns raised in this thread. I see\nnow that for some reason there are two threads with slightly different\nsubjects. I'll catch-up on that, as well.\n\nThe other thread's subject: \"pgsql: Fix search_path to a safe value\nduring maintenance operations\"\n\n> I'm not sure whether those objections were specific to the 16 cycle or\n> whether they are objections to the approach entirely. Comments?\n\nThe approach seems good to me. My concern is with this change's\npotential to cause an extended database outage. Hence sending it out\nas part of v16, without any escape hatch for the DBA, is my objection.\n\nIt it were some commercial database, we would have simply introduced a\nhidden event, or a GUC, with default off. But a GUC for this feels too\nheavy handed.\n\nPerhaps we can provide an escape hatch as follows (warning, pseudocode ahead).\n\n> if (first_element(search_path) != \"dont_override_search_patch_for_maintenance\")\n> SetConfigOption(\"search_path\", GUC_SAFE_SEARCH_PATH, ...);\n\nSo, if someone desperately wants to just plow through the maintenance\ncommand, and are not ready or able to fix their dependence on their\nsearch_path, the community can show them this escape-hatch.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Thu, 13 Jul 2023 12:54:26 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 12:54 PM Gurjeet Singh <[email protected]> wrote:\n\n>\n> The approach seems good to me. My concern is with this change's\n> potential to cause an extended database outage. Hence sending it out\n> as part of v16, without any escape hatch for the DBA, is my objection.\n>\n>\nIf this is limited to MAINT, which I'm in support of, there is no need for\nan \"escape hatch\". A prerequisite for leveraging the new feature is that\nyou fix the code so it conforms to the new way of doing things.\n\nTom's opinion was a general dislike for differing behavior in different\nsituations. I dislike it too, on purist grounds, but would rather do this\nthan not make any forward progress because we made a poor decision in the\npast. And I'm against simply breaking the past without any recourse as what\nwe did for pg_dump/pg_restore still bothers me.\n\nDavid J.\n\nOn Thu, Jul 13, 2023 at 12:54 PM Gurjeet Singh <[email protected]> wrote:\nThe approach seems good to me. My concern is with this change's\npotential to cause an extended database outage. Hence sending it out\nas part of v16, without any escape hatch for the DBA, is my objection.If this is limited to MAINT, which I'm in support of, there is no need for an \"escape hatch\". A prerequisite for leveraging the new feature is that you fix the code so it conforms to the new way of doing things.Tom's opinion was a general dislike for differing behavior in different situations. I dislike it too, on purist grounds, but would rather do this than not make any forward progress because we made a poor decision in the past. And I'm against simply breaking the past without any recourse as what we did for pg_dump/pg_restore still bothers me.David J.",
"msg_date": "Thu, 13 Jul 2023 13:37:24 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 1:37 PM David G. Johnston\n<[email protected]> wrote:\n>\n> I'm against simply breaking the past without any recourse as what we did for pg_dump/pg_restore still bothers me.\n\nI'm sure this is tangential, but can you please provide some\ncontext/links to the change you're referring to here.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Thu, 13 Jul 2023 14:00:29 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 2:00 PM Gurjeet Singh <[email protected]> wrote:\n\n> On Thu, Jul 13, 2023 at 1:37 PM David G. Johnston\n> <[email protected]> wrote:\n> >\n> > I'm against simply breaking the past without any recourse as what we\n> did for pg_dump/pg_restore still bothers me.\n>\n> I'm sure this is tangential, but can you please provide some\n> context/links to the change you're referring to here.\n>\n>\nHere is the instigating issue and a discussion thread on the aftermath:\n\nhttps://wiki.postgresql.org/wiki/A_Guide_to_CVE-2018-1058%3A_Protect_Your_Search_Path\n\nhttps://www.postgresql.org/message-id/flat/13033.1531517020%40sss.pgh.pa.us#2aa2e25816d899d62f168926e3ff17b1\n\nDavid J.\n\nOn Thu, Jul 13, 2023 at 2:00 PM Gurjeet Singh <[email protected]> wrote:On Thu, Jul 13, 2023 at 1:37 PM David G. Johnston\n<[email protected]> wrote:\n>\n> I'm against simply breaking the past without any recourse as what we did for pg_dump/pg_restore still bothers me.\n\nI'm sure this is tangential, but can you please provide some\ncontext/links to the change you're referring to here.Here is the instigating issue and a discussion thread on the aftermath:https://wiki.postgresql.org/wiki/A_Guide_to_CVE-2018-1058%3A_Protect_Your_Search_Pathhttps://www.postgresql.org/message-id/flat/13033.1531517020%40sss.pgh.pa.us#2aa2e25816d899d62f168926e3ff17b1David J.",
"msg_date": "Thu, 13 Jul 2023 14:07:27 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Thu, 2023-07-13 at 13:37 -0700, David G. Johnston wrote:\n> If this is limited to MAINT, which I'm in support of, there is no\n> need for an \"escape hatch\". A prerequisite for leveraging the new\n> feature is that you fix the code so it conforms to the new way of\n> doing things.\n\nThe current patch is not limited to exercise of the MAINTAIN privilege.\n\n> Tom's opinion was a general dislike for differing behavior in\n> different situations. I dislike it too, on purist grounds, but would\n> rather do this than not make any forward progress because we made a\n> poor decision in the past.\n\nI believe the opinion you're referring to is here:\n\nhttps://www.postgresql.org/message-id/[email protected]\n\nWhich was a reaction to another version of my patch which implemented\nyour idea to limit the changes to the MAINTAIN privilege.\n\nI agree with you that we should be practical here. The end goal is to\nmove users away from using functions that both (a) implicitly depend on\nthe search_path; and (b) are called implicitly as a side-effect of\naccessing a table. Whatever is the fastest and smoothest way to get\nthere is fine with me.\n\n> And I'm against simply breaking the past without any recourse as what\n> we did for pg_dump/pg_restore still bothers me.\n\nIs a GUC the solution here? Gurjeet called it heavy-handed, and I see\nthe point that carrying such a GUC forever would be unpleasant. But if\nit reduces the risk of breakage (or at least offers an escape hatch)\nthen it may be wise, and hopefully we can remove it later.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 13 Jul 2023 15:19:42 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 02:07:27PM -0700, David G. Johnston wrote:\n> On Thu, Jul 13, 2023 at 2:00 PM Gurjeet Singh <[email protected]> wrote:\n> > On Thu, Jul 13, 2023 at 1:37 PM David G. Johnston <[email protected]> wrote:\n> > > I'm against simply breaking the past without any recourse as what we\n> > did for pg_dump/pg_restore still bothers me.\n> >\n> > I'm sure this is tangential, but can you please provide some\n> > context/links to the change you're referring to here.\n>\n> Here is the instigating issue and a discussion thread on the aftermath:\n> https://wiki.postgresql.org/wiki/A_Guide_to_CVE-2018-1058%3A_Protect_Your_Search_Path\n> https://www.postgresql.org/message-id/flat/13033.1531517020%40sss.pgh.pa.us#2aa2e25816d899d62f168926e3ff17b1\n\nI don't blame you for feeling bothered about it. A benefit of having done it\nis that we gained insight into the level of pain it caused. If it had been\nsufficiently painful, someone would have quickly added an escape hatch. Five\nyears later, nobody has added one.\n\nThe 2018 security fixes instigated many function repairs that $SUBJECT would\notherwise instigate. That wasn't too painful. The net new pain of $SUBJECT\nwill be less, since the 2018 security fixes prepared the path. Hence, I\nremain +1 for the latest Davis proposal.\n\n\n",
"msg_date": "Sat, 15 Jul 2023 14:13:33 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Sat, Jul 15, 2023 at 02:13:33PM -0700, Noah Misch wrote:\n> The 2018 security fixes instigated many function repairs that $SUBJECT would\n> otherwise instigate. That wasn't too painful. The net new pain of $SUBJECT\n> will be less, since the 2018 security fixes prepared the path. Hence, I\n> remain +1 for the latest Davis proposal.\n\nI concur.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 17 Jul 2023 10:58:13 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Mon, 2023-07-17 at 10:58 -0700, Nathan Bossart wrote:\n> On Sat, Jul 15, 2023 at 02:13:33PM -0700, Noah Misch wrote:\n> > The 2018 security fixes instigated many function repairs that\n> > $SUBJECT would\n> > otherwise instigate. That wasn't too painful. The net new pain of\n> > $SUBJECT\n> > will be less, since the 2018 security fixes prepared the path. \n> > Hence, I\n> > remain +1 for the latest Davis proposal.\n> \n> I concur.\n\nBased on feedback, I plan to commit soon.\n\nTom's objection seemed specific to v16, and Robert's concern seemed to\nbe about having the MAINTAIN privilege without this patch. If I missed\nany objections to this patch, please let me know.\n\nIf we hear about breakage that suggests we need an escape hatch GUC, we\nhave time to add one later.\n\nI remain open to considering more complete fixes for the search_path\nproblems.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 17 Jul 2023 12:16:25 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Mon, 2023-07-17 at 12:16 -0700, Jeff Davis wrote:\n> Based on feedback, I plan to commit soon.\n\nAttached is a new version.\n\nChanges:\n\n* Also switch the search_path during CREATE MATERIALIZED VIEW, so that\nit's consistent with REFRESH. As a part of this change, I slightly\nreordered things in ExecCreateTableAs() so that the skipData path\nreturns early without entering the SECURITY_RESTRICTED_OPERATION. I\ndon't think that's a problem because (a) that is one place where\nSECURITY_RESTRICTED_OPERATION is not used for security, but rather for\nconsistency; and (b) that path doesn't go through rewriter, planner, or\nexecutor anyway so I don't see why it would matter.\n\n* Use GUC_ACTION_SAVE rather than GUC_ACTION_SET. That was a problem\nwith the previous patch for index functions executed in parallel\nworkers, which can happen calling SQL functions from pg_amcheck.\n\n* I used a wrapper function RestrictSearchPath() rather than calling\nset_config_option() directly. That provides a nice place in case we\nneed to add a compatibility GUC to disable it.\n\nQuestion:\n\nWhy do we switch to the table owner and use\nSECURITY_RESTRICTED_OPERATION in DefineIndex(), when we will switch in\nindex_build (etc.) anyway? Similarly, why do we switch in vacuum_rel(),\nwhen it doesn't matter for lazy vacuum and we will switch in\ncluster_rel() and do_analyze_rel() anyway?\n\nFor now, I left the extra calls to RestrictSearchPath() in for\nconsistency with the switches to the table owner.\n\nRegards,\n\tJeff Davis",
"msg_date": "Fri, 21 Jul 2023 15:32:43 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Fri, Jul 21, 2023 at 03:32:43PM -0700, Jeff Davis wrote:\n> Why do we switch to the table owner and use\n> SECURITY_RESTRICTED_OPERATION in DefineIndex(), when we will switch in\n> index_build (etc.) anyway?\n\nCommit a117ceb added that, and it added some test cases that behaved\ndifferently without that.\n\n> Similarly, why do we switch in vacuum_rel(),\n> when it doesn't matter for lazy vacuum and we will switch in\n> cluster_rel() and do_analyze_rel() anyway?\n\nIt conforms to the \"as soon as possible after locking the relation\" coding\nrule that commit a117ceb wrote into miscinit.c. That provides future\nproofing.\n\n\n",
"msg_date": "Sat, 22 Jul 2023 07:04:38 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Sat, 2023-07-22 at 07:04 -0700, Noah Misch wrote:\n> Commit a117ceb added that, and it added some test cases that behaved\n> differently without that.\n\nThank you. The reasoning there seems to apply to search_path\nrestriction as well, so I will leave it as-is.\n\nI'll wait a few more days for comment since I made some changes (also\nit's the weekend), but I plan to commit something like the latest\nversion soon.\n\nI might adjust the CREATE MATERIALIZED VIEW changes to be more minimal,\nbut that path is not important for security (see pre-existing comment)\nso it's probably fine either way.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sat, 22 Jul 2023 11:52:15 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Fri, 2023-07-21 at 15:32 -0700, Jeff Davis wrote:\n> Attached is a new version.\n\nDo we still want to do this?\n\nRight now, the MAINTAIN privilege is blocking on some way to prevent\nmalicious users from abusing the MAINTAIN privilege and search_path to\nacquire the table owner's privileges.\n\nThe approach of locking down search_path during maintenance commands\nwould solve the problem, but it means that we are enforcing search_path\nin some contexts and not others. That's not great, but it's similar to\nwhat we are doing when we ignore SECURITY INVOKER and run the function\nas the table owner during a maintenance command, or (by default) for\nsubscriptions.\n\nMy attempts to more generally try to lock down search_path for\nfunctions attached to tables didn't seem to get much consensus, so if\nwe do make an exception to lock down search_path for maintenance\ncommands only, it would stay an exception for the foreseeable future.\n\nThoughts?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 27 Oct 2023 16:04:26 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Fri, Oct 27, 2023 at 04:04:26PM -0700, Jeff Davis wrote:\n> Do we still want to do this?\n> \n> Right now, the MAINTAIN privilege is blocking on some way to prevent\n> malicious users from abusing the MAINTAIN privilege and search_path to\n> acquire the table owner's privileges.\n\nI vote +1 for proceeding with this. You've been threatening to commit this\nsince July, and from a quick skim, I don't sense any sustained objections.\nGiven one of the main objections for v16 was the timing, I would rather\ncommit this relatively early in the v17 cycle so we have ample time to deal\nwith any breakage it reveals or to further discuss any nuances.\n\nOf course, I am a bit biased because I would love to un-revert MAINTAIN,\nbut I believe others would like to see that un-reverted, too.\n\n> The approach of locking down search_path during maintenance commands\n> would solve the problem, but it means that we are enforcing search_path\n> in some contexts and not others. That's not great, but it's similar to\n> what we are doing when we ignore SECURITY INVOKER and run the function\n> as the table owner during a maintenance command, or (by default) for\n> subscriptions.\n\nGiven the experience gained from the 2018 security fixes [0], I think this\nis okay.\n\n[0] https://postgr.es/m/20230715211333.GB3675150%40rfd.leadboat.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 31 Oct 2023 11:58:02 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Fri, 27 Oct 2023 at 19:04, Jeff Davis <[email protected]> wrote:\n\nThe approach of locking down search_path during maintenance commands\n> would solve the problem, but it means that we are enforcing search_path\n> in some contexts and not others. That's not great, but it's similar to\n> what we are doing when we ignore SECURITY INVOKER and run the function\n> as the table owner during a maintenance command, or (by default) for\n> subscriptions.\n>\n\nI don't agree that this is ignoring SECURITY INVOKER. Instead, I see it as\nrunning the maintenance command as the owner of the table, which is\ntherefore the invoker of the function. As far as I can tell we need to do\nthis for security anyway - otherwise as soon as superuser runs a\nmaintenance command (which it can already do), the owner of any function\ncalled in the course of the maintenance operation has an opportunity to get\nsuperuser.\n\nFor that matter, what would it even mean to ignore SECURITY INVOKER? Run\nthe function as its owner if it were SECURITY DEFINER?\n\nI understand what ignoring SECURITY DEFINER would mean: obviously, don't\nadjust the current user on entry and exit.\n\nThe privilege boundary should be at the point where the maintenance command\nstarts: the role with MAINTAIN privilege gets to kick off maintenance, but\ndoesn't get to specify anything after that, including the search_path (of\ncourse, function execution search paths should not normally depend on the\ncaller's search path anyway, but that's a bigger discussion with an\nunfortunate backward compatibility problem).\n\nPerhaps the search_path for running a maintenance command should be the\nsearch_path set for the table owner (ALTER ROLE … SET search_path …)? If\nnone set, the default \"$user\", public. After that, change search_path on\nfunction invocation as usual rather than having special rules for what\nhappens when a function is invoked during a maintenance command.\n\nMy attempts to more generally try to lock down search_path for\n> functions attached to tables didn't seem to get much consensus, so if\n> we do make an exception to lock down search_path for maintenance\n> commands only, it would stay an exception for the foreseeable future.\n>\n\nOn Fri, 27 Oct 2023 at 19:04, Jeff Davis <[email protected]> wrote:\nThe approach of locking down search_path during maintenance commands\nwould solve the problem, but it means that we are enforcing search_path\nin some contexts and not others. That's not great, but it's similar to\nwhat we are doing when we ignore SECURITY INVOKER and run the function\nas the table owner during a maintenance command, or (by default) for\nsubscriptions.I don't agree that this is ignoring SECURITY INVOKER. Instead, I see it as running the maintenance command as the owner of the table, which is therefore the invoker of the function. As far as I can tell we need to do this for security anyway - otherwise as soon as superuser runs a maintenance command (which it can already do), the owner of any function called in the course of the maintenance operation has an opportunity to get superuser.For that matter, what would it even mean to ignore SECURITY INVOKER? Run the function as its owner if it were SECURITY DEFINER? 
I understand what ignoring SECURITY DEFINER would mean: obviously, don't adjust the current user on entry and exit.The privilege boundary should be at the point where the maintenance command starts: the role with MAINTAIN privilege gets to kick off maintenance, but doesn't get to specify anything after that, including the search_path (of course, function execution search paths should not normally depend on the caller's search path anyway, but that's a bigger discussion with an unfortunate backward compatibility problem).Perhaps the search_path for running a maintenance command should be the search_path set for the table owner (ALTER ROLE … SET search_path …)? If none set, the default \"$user\", public. After that, change search_path on function invocation as usual rather than having special rules for what happens when a function is invoked during a maintenance command.\nMy attempts to more generally try to lock down search_path for\nfunctions attached to tables didn't seem to get much consensus, so if\nwe do make an exception to lock down search_path for maintenance\ncommands only, it would stay an exception for the foreseeable future.",
"msg_date": "Tue, 31 Oct 2023 13:16:07 -0400",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Tue, 2023-10-31 at 13:16 -0400, Isaac Morland wrote:\n\n> Perhaps the search_path for running a maintenance command should be\n> the search_path set for the table owner (ALTER ROLE … SET search_path\n> …)?\n\nThat's an interesting idea; I hadn't considered that, or at least not\nvery deeply. I feel like it came up before but I can't remember what\n(if anything) was wrong with it.\n\nIf we expanded this idea a bit, I could imagine it applying to SECURITY\nDEFINER functions as well, and that would make writing SECURITY DEFINER\nfunctions a lot safer.\n\nI was earlier pushing for search_path to be tied to the function (in my\n\"CREATE FUNCTION ... SEARCH\" proposal) on the grounds that the author\n(usually) doesn't want the behavior to depend on the caller's\nsearch_path. That proposal didn't go very well because it required\nextra DDL.\n\nBut if we tie the search_path to the user-switching behavior rather\nthan the function, that still protects the function author from many\nsorts of search_path attacks, because it's either running as the\nfunction author with the function author's search_path; or running as\nthe invoking user with their search_path. And there aren't a lot of\ncases where a function author would want it to run with their\nprivileges and the caller's search_path.\n\n[ It would still leave open the problem of a SECURITY INVOKER function\nin an index expression returning inconsistent results due to a changing\nsearch_path, which would compromise the index structure and any\nconstraints using that index. But that problem is more bounded, at\nleast. ]\n\n> After that, change search_path on function invocation as usual\n> rather than having special rules for what happens when a function is\n> invoked during a maintenance command.\n\nI don't follow what you mean here.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 02 Nov 2023 11:21:59 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Thu, 2 Nov 2023 at 14:22, Jeff Davis <[email protected]> wrote:\n\n> On Tue, 2023-10-31 at 13:16 -0400, Isaac Morland wrote:\n>\n> > Perhaps the search_path for running a maintenance command should be\n> > the search_path set for the table owner (ALTER ROLE … SET search_path\n> > …)?\n>\n> That's an interesting idea; I hadn't considered that, or at least not\n> very deeply. I feel like it came up before but I can't remember what\n> (if anything) was wrong with it.\n>\n> If we expanded this idea a bit, I could imagine it applying to SECURITY\n> DEFINER functions as well, and that would make writing SECURITY DEFINER\n> functions a lot safer.\n>\n\nI still think the right default is that CREATE FUNCTION stores the\nsearch_path in effect when it runs with the function, and that is the\nsearch_path used to run the function (and don't \"BEGIN ATOMIC\" functions\npartially work this way already?). But I suggest the owner search_path as\nan option which is clearly better than using the caller's search_path in\nmost cases.\n\nI think the problems are essentially the same for security invoker vs.\nsecurity definer. The difference is that the problems are security problems\nonly for security definers.\n\n> After that, change search_path on function invocation as usual\n> > rather than having special rules for what happens when a function is\n> > invoked during a maintenance command.\n>\n> I don't follow what you mean here.\n>\n\nI’m referring to the idea that the search_path during function execution\nshould be determined at function creation time (or, at least, not at\nfunction execution time). While this is a security requirement for security\ndefiner functions, I think it’s what is wanted about 99.9% of the time\nfor security invoker functions as well. So when the maintenance command\nends up running a function, the search_path in effect during the function\nexecution will be the one established at function definition time; or if we\ngo with this \"search_path associated with function owner\" idea, then again\nthe search_path is determined by the usual rule (function owner), rather\nthan by any special rules associated with maintenance commands.\n\nOn Thu, 2 Nov 2023 at 14:22, Jeff Davis <[email protected]> wrote:On Tue, 2023-10-31 at 13:16 -0400, Isaac Morland wrote:\n\n> Perhaps the search_path for running a maintenance command should be\n> the search_path set for the table owner (ALTER ROLE … SET search_path\n> …)?\n\nThat's an interesting idea; I hadn't considered that, or at least not\nvery deeply. I feel like it came up before but I can't remember what\n(if anything) was wrong with it.\n\nIf we expanded this idea a bit, I could imagine it applying to SECURITY\nDEFINER functions as well, and that would make writing SECURITY DEFINER\nfunctions a lot safer.I still think the right default is that CREATE FUNCTION stores the search_path in effect when it runs with the function, and that is the search_path used to run the function (and don't \"BEGIN ATOMIC\" functions partially work this way already?). But I suggest the owner search_path as an option which is clearly better than using the caller's search_path in most cases.I think the problems are essentially the same for security invoker vs. security definer. 
The difference is that the problems are security problems only for security definers.\n> After that, change search_path on function invocation as usual\n> rather than having special rules for what happens when a function is\n> invoked during a maintenance command.\n\nI don't follow what you mean here.I’m referring to the idea that the search_path during function execution should be determined at function creation time (or, at least, not at function execution time). While this is a security requirement for security definer functions, I think it’s what is wanted about 99.9% of the time for security invoker functions as well. So when the maintenance command ends up running a function, the search_path in effect during the function execution will be the one established at function definition time; or if we go with this \"search_path associated with function owner\" idea, then again the search_path is determined by the usual rule (function owner), rather than by any special rules associated with maintenance commands.",
"msg_date": "Mon, 6 Nov 2023 15:31:58 -0500",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "Isaac Morland <[email protected]> writes:\n> I still think the right default is that CREATE FUNCTION stores the\n> search_path in effect when it runs with the function, and that is the\n> search_path used to run the function (and don't \"BEGIN ATOMIC\" functions\n> partially work this way already?).\n\nI don't see how that would possibly fly. Yeah, that behavior is\noften what you want, but not always; we would break some peoples'\napplications with that rule.\n\nAlso, one place where it's clearly NOT what you want is while\nrestoring a pg_dump script. And we don't have any way that we could\nbootstrap ourselves out of breaking everything for everybody during\ntheir next upgrade --- even if you insist that people use a newer\npg_dump, where is it going to find the info in an existing database?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Nov 2023 15:53:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Mon, 6 Nov 2023 at 15:54, Tom Lane <[email protected]> wrote:\n\n> Isaac Morland <[email protected]> writes:\n> > I still think the right default is that CREATE FUNCTION stores the\n> > search_path in effect when it runs with the function, and that is the\n> > search_path used to run the function (and don't \"BEGIN ATOMIC\" functions\n> > partially work this way already?).\n>\n> I don't see how that would possibly fly. Yeah, that behavior is\n> often what you want, but not always; we would break some peoples'\n> applications with that rule.\n>\n\nThe behaviour I want is just “SET search_path FROM CURRENT\".\n\nI agree there is a backward compatibility issue; if somebody has a schema\ncreation/update script with function definitions with no \"SET search_path\"\nthey would suddenly start getting the search_path from definition time\nrather than the caller's search_path.\n\nI don't like adding GUCs but a single one specifying whether no search_path\nspecification means \"FROM CURRENT\" or the current behaviour (new explicit\nsyntax \"FROM CALLER\"?) would I think address the backward compatibility\nissue. This would allow a script to specify at the top which convention it\nis using; a typical old script could be adapted to a new database by adding\na single line at the top to get the old behaviour.\n\nAlso, one place where it's clearly NOT what you want is while\n> restoring a pg_dump script. And we don't have any way that we could\n> bootstrap ourselves out of breaking everything for everybody during\n> their next upgrade --- even if you insist that people use a newer\n> pg_dump, where is it going to find the info in an existing database?\n>\n\nA function with a stored search_path will have a \"SET search_path\" clause\nin the pg_dump output, so for these functions pg_dump would be unaffected\nby my preferred way of doing things. Already I don't believe pg_dump ever\nputs \"SET search_path FROM CURRENT\" in its output; it puts the actual\nsearch_path. A bigger problem is with existing functions that use the\ncaller's search_path; these would need to specify \"FROM CALLER\" explicitly;\nbut the new GUC could come into this. In effect a pg_dump created by an old\nversion is an old script which would need the appropriate setting at the\ntop.\n\nBut all this is premature if there is still disagreement on the proper\ndefault behaviour. To me it is absolutely clear that the right default, in\nthe absence of an installed base with backward compatibility concerns, is\n\"SET search_path FROM CURRENT\". This is how substantially all programming\nlanguages work: it is quite unusual in modern programming languages to have\nthe meaning of a procedure definition depend on which modules the caller\nhas imported. The tricky bit is dealing smoothly with the installed base.\nBut some of the discussion here makes me think that people have a different\nattitude about stored procedures.\n\nOn Mon, 6 Nov 2023 at 15:54, Tom Lane <[email protected]> wrote:Isaac Morland <[email protected]> writes:\n> I still think the right default is that CREATE FUNCTION stores the\n> search_path in effect when it runs with the function, and that is the\n> search_path used to run the function (and don't \"BEGIN ATOMIC\" functions\n> partially work this way already?).\n\nI don't see how that would possibly fly. 
Yeah, that behavior is\noften what you want, but not always; we would break some peoples'\napplications with that rule.The behaviour I want is just “SET search_path FROM CURRENT\".I agree there is a backward compatibility issue; if somebody has a schema creation/update script with function definitions with no \"SET search_path\" they would suddenly start getting the search_path from definition time rather than the caller's search_path.I don't like adding GUCs but a single one specifying whether no search_path specification means \"FROM CURRENT\" or the current behaviour (new explicit syntax \"FROM CALLER\"?) would I think address the backward compatibility issue. This would allow a script to specify at the top which convention it is using; a typical old script could be adapted to a new database by adding a single line at the top to get the old behaviour.\nAlso, one place where it's clearly NOT what you want is while\nrestoring a pg_dump script. And we don't have any way that we could\nbootstrap ourselves out of breaking everything for everybody during\ntheir next upgrade --- even if you insist that people use a newer\npg_dump, where is it going to find the info in an existing database?A function with a stored search_path will have a \"SET search_path\" clause in the pg_dump output, so for these functions pg_dump would be unaffected by my preferred way of doing things. Already I don't believe pg_dump ever puts \"SET search_path FROM CURRENT\" in its output; it puts the actual search_path. A bigger problem is with existing functions that use the caller's search_path; these would need to specify \"FROM CALLER\" explicitly; but the new GUC could come into this. In effect a pg_dump created by an old version is an old script which would need the appropriate setting at the top.But all this is premature if there is still disagreement on the proper default behaviour. To me it is absolutely clear that the right default, in the absence of an installed base with backward compatibility concerns, is \"SET search_path FROM CURRENT\". This is how substantially all programming languages work: it is quite unusual in modern programming languages to have the meaning of a procedure definition depend on which modules the caller has imported. The tricky bit is dealing smoothly with the installed base. But some of the discussion here makes me think that people have a different attitude about stored procedures.",
"msg_date": "Mon, 6 Nov 2023 17:31:30 -0500",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Tue, 2023-10-31 at 13:16 -0400, Isaac Morland wrote:\n> Perhaps the search_path for running a maintenance command should be\n> the search_path set for the table owner (ALTER ROLE … SET search_path\n> …)?\n\nAfter some thought, I don't think that's the right approach. It adds\nanother way search path can be changed, which adds to the complexity.\n\nAlso, by default it's \"$user\", public; and given that \"public\" was\nworld-writable until recently, that doesn't seem like a good idea for a\nchange intended to prevent search_path manipulation.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 07 Nov 2023 13:47:05 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Thu, Jan 18, 2024 at 9:19 AM Jeff Davis <[email protected]> wrote:\n>\n> On Mon, 2023-07-17 at 12:16 -0700, Jeff Davis wrote:\n> > Based on feedback, I plan to commit soon.\n>\n> Attached is a new version.\n>\n> Changes:\n>\n> * Also switch the search_path during CREATE MATERIALIZED VIEW, so that\n> it's consistent with REFRESH. As a part of this change, I slightly\n> reordered things in ExecCreateTableAs() so that the skipData path\n> returns early without entering the SECURITY_RESTRICTED_OPERATION. I\n> don't think that's a problem because (a) that is one place where\n> SECURITY_RESTRICTED_OPERATION is not used for security, but rather for\n> consistency; and (b) that path doesn't go through rewriter, planner, or\n> executor anyway so I don't see why it would matter.\n>\n> * Use GUC_ACTION_SAVE rather than GUC_ACTION_SET. That was a problem\n> with the previous patch for index functions executed in parallel\n> workers, which can happen calling SQL functions from pg_amcheck.\n>\n> * I used a wrapper function RestrictSearchPath() rather than calling\n> set_config_option() directly. That provides a nice place in case we\n> need to add a compatibility GUC to disable it.\n>\n> Question:\n>\n> Why do we switch to the table owner and use\n> SECURITY_RESTRICTED_OPERATION in DefineIndex(), when we will switch in\n> index_build (etc.) anyway? Similarly, why do we switch in vacuum_rel(),\n> when it doesn't matter for lazy vacuum and we will switch in\n> cluster_rel() and do_analyze_rel() anyway?\n\nI tried to apply the patch but it is failing at the Head. It is giving\nthe following error:\nHunk #7 succeeded at 3772 (offset -12 lines).\npatching file src/backend/commands/matview.c\npatching file src/backend/commands/vacuum.c\nHunk #2 succeeded at 2169 (offset -19 lines).\npatching file src/backend/utils/init/usercontext.c\npatching file src/bin/scripts/t/100_vacuumdb.pl\nHunk #1 FAILED at 109.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/bin/scripts/t/100_vacuumdb.pl.rej\npatching file src/include/utils/usercontext.h\npatching file src/test/modules/test_oat_hooks/expected/test_oat_hooks.out\npatching file src/test/regress/expected/matview.out\npatching file src/test/regress/expected/privileges.out\npatching file src/test/regress/expected/vacuum.out\npatching file src/test/regress/sql/matview.sql\npatching file src/test/regress/sql/privileges.sql\npatching file src/test/regress/sql/vacuum.sql\n\nPlease send the Re-base version of the patch.\n\nThanks and Regards,\nShubham Khanna.\n\n\n",
"msg_date": "Thu, 18 Jan 2024 09:24:36 +0530",
"msg_from": "Shubham Khanna <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix search_path for all maintenance commands"
},
{
"msg_contents": "On Thu, 2024-01-18 at 09:24 +0530, Shubham Khanna wrote:\n> I tried to apply the patch but it is failing at the Head. It is\n> giving\n> the following error:\n\nI am withdrawing this patch from the CF until it's more clear that this\nis what we really want to do.\n\nThank you for looking into it.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 17 Jan 2024 21:11:00 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix search_path for all maintenance commands"
}
] |
[
{
"msg_contents": "I don't recall if this has come up before.\n\nI'm sometimes mildly annoyed when I get on a new system and find the \nusername missing in my psql prompt. Or if a customer shows me a screen \nand I have to ask \"which user is this\". If we're dealing with several \nroles it can get confusing. My usual .psqlrc has\n\n \\set PROMPT1 '%n@%~%R%x%# '\n\nSo my suggestion is that we prepend '%n@' to the default psql PROMPT1 \n(and maybe PROMPT2).\n\nI realize it's not exactly earth-shattering, but I think it's a bit more \nuser-friendly.\n\n(Would be a good beginner project, the code change would be tiny but \nthere are lots of docs / examples that we might want to change if we did \nit.)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\nI don't recall if this has come up before.\n\nI'm sometimes mildly annoyed when I get on\n a new system and find the username missing in my psql prompt. Or\n if a customer shows me a screen and I have to ask \"which user is\n this\". If we're dealing with several roles it can get confusing.\n My usual .psqlrc has\n \\set PROMPT1 '%n@%~%R%x%# '\nSo my suggestion is that we prepend '%n@'\n to the default psql PROMPT1 (and maybe PROMPT2).\nI realize it's not exactly\n earth-shattering, but I think it's a bit more user-friendly.\n(Would be a good beginner project, the\n code change would be tiny but there are lots of docs / examples\n that we might want to change if we did it.)\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 27 May 2023 08:52:24 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "session username in default psql prompt?"
},
{
"msg_contents": "I'd like to take this if this is approved ;)\n\nOn Sat, May 27, 2023 at 8:52 PM Andrew Dunstan <[email protected]> wrote:\n>\n> I don't recall if this has come up before.\n>\n> I'm sometimes mildly annoyed when I get on a new system and find the username missing in my psql prompt. Or if a customer shows me a screen and I have to ask \"which user is this\". If we're dealing with several roles it can get confusing. My usual .psqlrc has\n>\n> \\set PROMPT1 '%n@%~%R%x%# '\n>\n> So my suggestion is that we prepend '%n@' to the default psql PROMPT1 (and maybe PROMPT2).\n>\n> I realize it's not exactly earth-shattering, but I think it's a bit more user-friendly.\n>\n> (Would be a good beginner project, the code change would be tiny but there are lots of docs / examples that we might want to change if we did it.)\n>\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Sun, 28 May 2023 13:56:57 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: session username in default psql prompt?"
},
{
"msg_contents": "Here’s a patch for this.\n\n- Kori\n\n\n\n\n> On May 27, 2023, at 8:52 AM, Andrew Dunstan <[email protected]> wrote:\n> \n> I don't recall if this has come up before.\n> \n> I'm sometimes mildly annoyed when I get on a new system and find the username missing in my psql prompt. Or if a customer shows me a screen and I have to ask \"which user is this\". If we're dealing with several roles it can get confusing. My usual .psqlrc has\n> \n> \\set PROMPT1 '%n@%~%R%x%# '\n> \n> So my suggestion is that we prepend '%n@' to the default psql PROMPT1 (and maybe PROMPT2).\n> \n> I realize it's not exactly earth-shattering, but I think it's a bit more user-friendly.\n> \n> (Would be a good beginner project, the code change would be tiny but there are lots of docs / examples that we might want to change if we did it.)\n> \n> \n> \n> cheers\n> \n> \n> \n> andrew\n> \n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com <https://www.enterprisedb.com/>",
"msg_date": "Tue, 27 Feb 2024 19:19:48 -0500",
"msg_from": "Kori Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: session username in default psql prompt?"
},
{
"msg_contents": "On 2024-02-27 Tu 19:19, Kori Lane wrote:\n> Here’s a patch for this.\n>\n>\n\nReposting as the archive mail processor doesn't seem to like the Apple \nmail attachment.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 13 Mar 2024 04:56:00 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: session username in default psql prompt?"
},
{
"msg_contents": "On Wed, Mar 13, 2024 at 4:56 AM Andrew Dunstan <[email protected]> wrote:\n> Reposting as the archive mail processor doesn't seem to like the Apple\n> mail attachment.\n\nI'm not really a fan of this. Right now my prompt looks like this:\n\nrobert.haas=#\n\nIf we did this, it would say:\n\[email protected]=#\n\nI have yet to meet anyone who doesn't think that one Robert Haas is\nquite enough already, and perhaps too much by half.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 22 Mar 2024 16:04:47 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: session username in default psql prompt?"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 4:04 PM Robert Haas <[email protected]> wrote:\n\n> On Wed, Mar 13, 2024 at 4:56 AM Andrew Dunstan <[email protected]>\n> wrote:\n> > Reposting as the archive mail processor doesn't seem to like the Apple\n> > mail attachment.\n>\n> I'm not really a fan of this. Right now my prompt looks like this:\n>\n> robert.haas=#\n>\n> If we did this, it would say:\n>\n> [email protected]=#\n>\n\n\n\nHmm. Perhaps we should change the default to \"%n@%~%R%x%# \"\n\nThen when connected to your eponymous database you would see\n\nrobert.haas@~=#\n\nOf course, people can put this in their .psqlrc, and I do. The suggestion\ncame about because I had a couple of instances where people using the\ndefault prompt showed me stuff and the problem arose from confusion about\nwhich user they were connected as.\n\n\n\n> I have yet to meet anyone who doesn't think that one Robert Haas is\n> quite enough already, and perhaps too much by half.\n>\n\n\n\nperish the thought.\n\ncheers\n\nandrew\n\nOn Fri, Mar 22, 2024 at 4:04 PM Robert Haas <[email protected]> wrote:On Wed, Mar 13, 2024 at 4:56 AM Andrew Dunstan <[email protected]> wrote:\n> Reposting as the archive mail processor doesn't seem to like the Apple\n> mail attachment.\n\nI'm not really a fan of this. Right now my prompt looks like this:\n\nrobert.haas=#\n\nIf we did this, it would say:\n\[email protected]=#Hmm. Perhaps we should change the default to \"%n@%~%R%x%# \"Then when connected to your eponymous database you would seerobert.haas@~=#Of course, people can put this in their .psqlrc, and I do. The suggestion came about because I had a couple of instances where people using the default prompt showed me stuff and the problem arose from confusion about which user they were connected as.\n\nI have yet to meet anyone who doesn't think that one Robert Haas is\nquite enough already, and perhaps too much by half.perish the thought. cheersandrew",
"msg_date": "Fri, 22 Mar 2024 17:34:18 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: session username in default psql prompt?"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On Fri, Mar 22, 2024 at 4:04 PM Robert Haas <[email protected]> wrote:\n>> I'm not really a fan of this. Right now my prompt looks like this:\n>> robert.haas=#\n>> If we did this, it would say:\n>> [email protected]=#\n\nThere would be similar duplication for, eg, the postgres user\nconnected to the postgres database. However, I'm more worried\nabout the case where they don't match, because then the %~\nsuggestion doesn't help shorten it.\n\n> Of course, people can put this in their .psqlrc, and I do. The suggestion\n> came about because I had a couple of instances where people using the\n> default prompt showed me stuff and the problem arose from confusion about\n> which user they were connected as.\n\nOn the whole, I think we'd get more complaints about the default\nprompt having more-or-less doubled in length than we'd get kudos\nabout this being a great usability improvement. Prompt space is\nexpensive and precious, at least for people who aren't in the\nhabit of working in very wide windows.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Mar 2024 19:34:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: session username in default psql prompt?"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 7:34 PM Tom Lane <[email protected]> wrote:\n\n>\n> On the whole, I think we'd get more complaints about the default\n> prompt having more-or-less doubled in length than we'd get kudos\n> about this being a great usability improvement. Prompt space is\n> expensive and precious, at least for people who aren't in the\n> habit of working in very wide windows.\n>\n>\n>\n\n\nI'm not sure you're right, but in view of the opposition I won't press it.\nThanks to Kori for the patch.\n\ncheers\n\nandrew\n\nOn Fri, Mar 22, 2024 at 7:34 PM Tom Lane <[email protected]> wrote:\nOn the whole, I think we'd get more complaints about the default\nprompt having more-or-less doubled in length than we'd get kudos\nabout this being a great usability improvement. Prompt space is\nexpensive and precious, at least for people who aren't in the\nhabit of working in very wide windows.\n\n I'm not sure you're right, but in view of the opposition I won't press it. Thanks to Kori for the patch.cheersandrew",
"msg_date": "Fri, 22 Mar 2024 22:27:16 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: session username in default psql prompt?"
},
{
"msg_contents": "On Sat, 23 Mar 2024 at 00:34, Tom Lane <[email protected]> wrote:\n> Prompt space is\n> expensive and precious, at least for people who aren't in the\n> habit of working in very wide windows.\n\nThat problem seems easy to address by adding a newline into the\ndefault prompt. Something like this:\n\n\\set PROMPT1 '%n@%~%R%\\n# '\n\nI myself use:\n\\set PROMPT1 '%[%033[1m%]%M %n@%/:%>-%p%R%[%033[0m%]%\\n> '\n\n\n",
"msg_date": "Mon, 25 Mar 2024 09:30:14 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: session username in default psql prompt?"
},
{
"msg_contents": "On Mon, 25 Mar 2024 at 09:30, Jelte Fennema-Nio <[email protected]> wrote:\n> \\set PROMPT1 '%n@%~%R%\\n# '\n\nObviously I meant to put the \\n before the %:\n\\set PROMPT1 '%n@%~%R\\n%# '\n\n\n",
"msg_date": "Mon, 25 Mar 2024 11:32:29 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: session username in default psql prompt?"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 6:32 PM Jelte Fennema-Nio <[email protected]> wrote:\n>\n> Obviously I meant to put the \\n before the %:\n> \\set PROMPT1 '%n@%~%R\\n%# '\n>\n\ntransaction related information lost.\n\nfor example:\njian@src6=\n# begin;\nBEGIN\njian@src6=\n# select 1/0;\n2024-03-25 18:37:59.313 CST [15252] ERROR: division by zero\n2024-03-25 18:37:59.313 CST [15252] STATEMENT: select 1/0;\nERROR: division by zero\njian@src6=\n# rollback ;\nROLLBACK\n\n\nmaster behavior:\nsrc6=# begin ;\nBEGIN\nsrc6=*# select 1/0;\n2024-03-25 18:38:31.997 CST [24688] ERROR: division by zero\n2024-03-25 18:38:31.997 CST [24688] STATEMENT: select 1/0;\nERROR: division by zero\nsrc6=!# rollback ;\nROLLBACK\n\n\n",
"msg_date": "Mon, 25 Mar 2024 18:40:30 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: session username in default psql prompt?"
},
{
"msg_contents": "On Mon, 25 Mar 2024 at 11:40, jian he <[email protected]> wrote:\n> transaction related information lost.\n\nAh yeah, it seems I somehow lost the %x\nHow about:\n\\set PROMPT1 '%n@%~\\n%R%x%# '\n\nOr maybe even this more verbose one, which closely matches the\npostgresql:// connection string format:\n\n\\set PROMPT1 '%n@%M:%>/%/\\n%R%x%# '\n\nor even add in some bold/colors like this, to make the prompt stand\nout from a query:\n\n\\set PROMPT1 '%[%033[1m%]%n@%M:%>/%/\\n%R%x%#%[%033[0m%] '\n\n\n",
"msg_date": "Mon, 25 Mar 2024 12:03:31 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: session username in default psql prompt?"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 4:30 AM Jelte Fennema-Nio <[email protected]> wrote:\n> That problem seems easy to address by adding a newline into the\n> default prompt.\n\nUgh. Please, no!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 25 Mar 2024 09:06:26 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: session username in default psql prompt?"
},
{
"msg_contents": "On Mon, 25 Mar 2024 at 14:06, Robert Haas <[email protected]> wrote:\n> On Mon, Mar 25, 2024 at 4:30 AM Jelte Fennema-Nio <[email protected]> wrote:\n> > That problem seems easy to address by adding a newline into the\n> > default prompt.\n>\n> Ugh. Please, no!\n\nI guess it's partially a matter of taste, but personally I'm never\ngoing back to a single line prompt. It's so nice for zoomed-in demos\nthat your SQL queries don't get broken up.\n\n\n",
"msg_date": "Mon, 25 Mar 2024 14:14:34 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: session username in default psql prompt?"
},
{
"msg_contents": "On Mon, Mar 25, 2024 at 9:14 AM Jelte Fennema-Nio <[email protected]>\nwrote:\n\n> On Mon, 25 Mar 2024 at 14:06, Robert Haas <[email protected]> wrote:\n> > On Mon, Mar 25, 2024 at 4:30 AM Jelte Fennema-Nio <[email protected]>\n> wrote:\n> > > That problem seems easy to address by adding a newline into the\n> > > default prompt.\n> >\n> > Ugh. Please, no!\n>\n> I guess it's partially a matter of taste, but personally I'm never\n> going back to a single line prompt. It's so nice for zoomed-in demos\n> that your SQL queries don't get broken up.\n>\n\n\nVery much a matter of taste. I knew when I saw your suggestion there would\nbe some kickback. If horizontal space is at a premium vertical space is\ndoubly so, I suspect.\n\ncheers\n\nandrew\n\nOn Mon, Mar 25, 2024 at 9:14 AM Jelte Fennema-Nio <[email protected]> wrote:On Mon, 25 Mar 2024 at 14:06, Robert Haas <[email protected]> wrote:\n> On Mon, Mar 25, 2024 at 4:30 AM Jelte Fennema-Nio <[email protected]> wrote:\n> > That problem seems easy to address by adding a newline into the\n> > default prompt.\n>\n> Ugh. Please, no!\n\nI guess it's partially a matter of taste, but personally I'm never\ngoing back to a single line prompt. It's so nice for zoomed-in demos\nthat your SQL queries don't get broken up.Very much a matter of taste. I knew when I saw your suggestion there would be some kickback. If horizontal space is at a premium vertical space is doubly so, I suspect.cheersandrew",
"msg_date": "Mon, 25 Mar 2024 19:14:18 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: session username in default psql prompt?"
},
{
"msg_contents": "On Tue, Mar 26, 2024 at 7:14 AM Andrew Dunstan <[email protected]> wrote:\n>\n>\n>\n> On Mon, Mar 25, 2024 at 9:14 AM Jelte Fennema-Nio <[email protected]> wrote:\n>>\n>> On Mon, 25 Mar 2024 at 14:06, Robert Haas <[email protected]> wrote:\n>> > On Mon, Mar 25, 2024 at 4:30 AM Jelte Fennema-Nio <[email protected]> wrote:\n>> > > That problem seems easy to address by adding a newline into the\n>> > > default prompt.\n>> >\n>> > Ugh. Please, no!\n>>\n>> I guess it's partially a matter of taste, but personally I'm never\n>> going back to a single line prompt. It's so nice for zoomed-in demos\n>> that your SQL queries don't get broken up.\n>\n>\n>\n> Very much a matter of taste. I knew when I saw your suggestion there would be some kickback. If horizontal space is at a premium vertical space is doubly so, I suspect.\n>\n\nthe change (session username in default psql prompt) is quite visible,\nmaybe this time we can conduct a poll,\nbut in a way the poll can reach more people?\n\n\n",
"msg_date": "Fri, 12 Apr 2024 08:54:26 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: session username in default psql prompt?"
}
] |
[
{
"msg_contents": "The attached patch clarifies that the server will start even if it is\nunable to open the port on some of the TCP/IP addresses listed (or\nimplied by a value of '*' or localhost) in listen_addresses parameter.\n\nI think it is important to call this out, because I was surprised to\nsee that the server started even though the port was occupied by\nanother process. Upon close inspection, I noticed that the other\nprocess was using that port on 127.0.0.1, so Postgres complained about\nthat interface (a warning in server log), but it was able to open the\nport on IPv6 ::1, so it started up as normal.\n\nUpon further testing, I saw that server will not start only if it is\nunable to open the port on _all_ the interfaces/addresses. It it's\nable to open the port on at least one, the server will start.\n\nIf listen_addresses is empty, then server won't try to open any TCP/IP\nports. The patch does not change any language related to that.\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Sat, 27 May 2023 15:17:21 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Document that server will start even if it's unable to open some\n TCP/IP ports"
},
{
"msg_contents": "On Sat, May 27, 2023 at 03:17:21PM -0700, Gurjeet Singh wrote:\n> Upon further testing, I saw that server will not start only if it is\n> unable to open the port on _all_ the interfaces/addresses. It it's\n> able to open the port on at least one, the server will start.\n\nThis surprised me. I would've expected the server to fail to start if it\nfailed for anything in listen_addresses. After some digging, I found what\nI believe is the original justification [0] as well as a follow-up thread\n[1] that seems to call out kernel support for IPv6 as the main objection.\nPerhaps it is time to reevaluate this decision.\n\n> If listen_addresses is empty, then server won't try to open any TCP/IP\n> ports. The patch does not change any language related to that.\n\nYour proposed change notes that the server only starts if it can listen on\nat least one TCP/IP address, which I worry might lead folks to think that\nthe server won't start if listen_addresses is empty.\n\n[0] https://postgr.es/m/6739.1079384078%40sss.pgh.pa.us\n[1] https://postgr.es/m/200506281149.51696.peter_e%40gmx.net\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 12 Jun 2023 22:59:27 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document that server will start even if it's unable to open some\n TCP/IP ports"
},
{
"msg_contents": "On Mon, Jun 12, 2023 at 10:59 PM Nathan Bossart\n<[email protected]> wrote:\n> On Sat, May 27, 2023 at 03:17:21PM -0700, Gurjeet Singh wrote:\n> > If listen_addresses is empty, then server won't try to open any TCP/IP\n> > ports. The patch does not change any language related to that.\n>\n> Your proposed change notes that the server only starts if it can listen on\n> at least one TCP/IP address, which I worry might lead folks to think that\n> the server won't start if listen_addresses is empty.\n\nPerhaps we can prefix that statement with \"If listen_addresses is not\nempty\", like so:\n\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -661,3 +661,9 @@ include_dir 'conf.d'\n which allows only local TCP/IP <quote>loopback</quote>\nconnections to be\n- made. While client authentication (<xref\n+ made. If <varname>listen_addresses</varname> is not empty, the server\n+ will start only if it can open the <varname>port</varname>\n+ on at least one TCP/IP address. If server is unable to open\n+ <varname>port</varname> on a TCP/IP address, it emits a warning.\n+ <para>\n+ </para>\n+ While client authentication (<xref\n linkend=\"client-authentication\"/>) allows fine-grained control\n\n\n\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Mon, 12 Jun 2023 23:57:45 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Document that server will start even if it's unable to open some\n TCP/IP ports"
},
{
"msg_contents": "On Mon, Jun 12, 2023 at 11:57:45PM -0700, Gurjeet Singh wrote:\n> Perhaps we can prefix that statement with \"If listen_addresses is not\n> empty\", like so:\n\nBefore we spend too much time trying to document the current behavior, I\nthink we should see if we can change it to something less surprising (i.e.,\nfailing to start if the server fails for any address). The original\nobjections around kernel support for IPv6 might no longer stand.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 13 Jun 2023 12:44:33 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document that server will start even if it's unable to open some\n TCP/IP ports"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> Before we spend too much time trying to document the current behavior, I\n> think we should see if we can change it to something less surprising (i.e.,\n> failing to start if the server fails for any address). The original\n> objections around kernel support for IPv6 might no longer stand.\n\nI think that'd be more surprising not less.\n\nThe systemd guys certainly believe that daemons ought to auto-adapt\nto changes in the machine's internet connectivity. We aren't there\nyet, but I can imagine somebody trying to fix that someday soon.\nIf the postmaster is able to dynamically acquire and drop ports then\nit would certainly not make sense to behave as you suggest.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Jun 2023 16:28:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document that server will start even if it's unable to open some\n TCP/IP ports"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 04:28:31PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> Before we spend too much time trying to document the current behavior, I\n>> think we should see if we can change it to something less surprising (i.e.,\n>> failing to start if the server fails for any address). The original\n>> objections around kernel support for IPv6 might no longer stand.\n> \n> I think that'd be more surprising not less.\n\nThe reason it surprises me is because it creates uncertainty about the\nserver configuration. Granted, I could look in the logs for any warnings,\nbut I'm not sure that's the best experience. I would expect this to work\nmore like huge_pages. If I set huge_pages to \"on\", I know that the server\nis using huge pages if it starts up.\n\n> The systemd guys certainly believe that daemons ought to auto-adapt\n> to changes in the machine's internet connectivity. We aren't there\n> yet, but I can imagine somebody trying to fix that someday soon.\n> If the postmaster is able to dynamically acquire and drop ports then\n> it would certainly not make sense to behave as you suggest.\n\nAgreed, if listen_addresses became a PGC_SIGHUP parameter, it would make\nsense to avoid shutting down the server if it was dynamically\nmisconfigured, as is done for the configuration files. I think that\nargument applies for changes in connectivity, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 13 Jun 2023 14:38:14 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document that server will start even if it's unable to open some\n TCP/IP ports"
},
{
"msg_contents": "At Tue, 13 Jun 2023 14:38:14 -0700, Nathan Bossart <[email protected]> wrote in \n> On Tue, Jun 13, 2023 at 04:28:31PM -0400, Tom Lane wrote:\n> > Nathan Bossart <[email protected]> writes:\n> >> Before we spend too much time trying to document the current behavior, I\n> >> think we should see if we can change it to something less surprising (i.e.,\n> >> failing to start if the server fails for any address). The original\n> >> objections around kernel support for IPv6 might no longer stand.\n> > \n> > I think that'd be more surprising not less.\n> \n> The reason it surprises me is because it creates uncertainty about the\n> server configuration. Granted, I could look in the logs for any warnings,\n> but I'm not sure that's the best experience. I would expect this to work\n> more like huge_pages. If I set huge_pages to \"on\", I know that the server\n> is using huge pages if it starts up.\n> \n> > The systemd guys certainly believe that daemons ought to auto-adapt\n> > to changes in the machine's internet connectivity. We aren't there\n> > yet, but I can imagine somebody trying to fix that someday soon.\n> > If the postmaster is able to dynamically acquire and drop ports then\n> > it would certainly not make sense to behave as you suggest.\n> \n> Agreed, if listen_addresses became a PGC_SIGHUP parameter, it would make\n> sense to avoid shutting down the server if it was dynamically\n> misconfigured, as is done for the configuration files. I think that\n> argument applies for changes in connectivity, too.\n\nIf I had to say, I would feel it rather surprising if server\nsuccessfully starts even when any explicitly-specified port can't be\nopened (which is the current case). The current auto-adaption is fine\niff I use '*' for listen_addresses. IMHO, for \"reliable\"\nauto-adaption, we might want '[+-]?xxx.xxx.xxx.xxx/nn' (and the same\nfor v6), or '[+-]?interface-name' notation to require, allow, or\ndisallow to use specific networks.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 14 Jun 2023 11:56:35 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document that server will start even if it's unable to open\n some TCP/IP ports"
},
{
"msg_contents": "Kyotaro Horiguchi <[email protected]> writes:\n> If I had to say, I would feel it rather surprising if server\n> successfully starts even when any explicitly-specified port can't be\n> opened (which is the current case).\n\nThere is certainly an argument that such a condition indicates that\nsomething's very broken in our configuration and we should complain.\nBut I'm not sure how exciting the case is in practice. The systemd\nguys would really like us to be willing to come up before any network\ninterfaces are up, and then auto-listen to those interfaces when they\ndo come up. On the other hand, the situation with Unix sockets is\nmuch more static: if you can't make a socket in /tmp or /var/run at\nthe instant of postmaster start, it's unlikely you will be able to do\nso later.\n\nMaybe we need different rules for TCP versus Unix-domain sockets?\nI'm not sure what exactly, but lumping those cases together for\na discussion like this feels wrong.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Jun 2023 23:11:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document that server will start even if it's unable to open some\n TCP/IP ports"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 11:11:04PM -0400, Tom Lane wrote:\n> Kyotaro Horiguchi <[email protected]> writes:\n> > If I had to say, I would feel it rather surprising if server\n> > successfully starts even when any explicitly-specified port can't be\n> > opened (which is the current case).\n> \n> There is certainly an argument that such a condition indicates that\n> something's very broken in our configuration and we should complain.\n> But I'm not sure how exciting the case is in practice. The systemd\n> guys would really like us to be willing to come up before any network\n> interfaces are up, and then auto-listen to those interfaces when they\n> do come up. On the other hand, the situation with Unix sockets is\n> much more static: if you can't make a socket in /tmp or /var/run at\n> the instant of postmaster start, it's unlikely you will be able to do\n> so later.\n> \n> Maybe we need different rules for TCP versus Unix-domain sockets?\n> I'm not sure what exactly, but lumping those cases together for\n> a discussion like this feels wrong.\n\nIf we are going to retry for network configuration changes, it seems we\nwould also retry Unix domain sockets for cases like when the permissions\nare wrong, and then fixed.\n\nHowever, it seem hard to figure out exactly what _is_ working if we take\nthe approach of dynamically retrying listen methods. Do we report\nanything helpful in the server logs when we start and can't listen on\nanything?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 19 Jun 2023 20:48:00 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document that server will start even if it's unable to open some\n TCP/IP ports"
},
{
"msg_contents": "On Mon, Jun 19, 2023 at 5:48 PM Bruce Momjian <[email protected]> wrote:\n>\n> On Tue, Jun 13, 2023 at 11:11:04PM -0400, Tom Lane wrote:\n> >\n> > There is certainly an argument that such a condition indicates that\n> > something's very broken in our configuration and we should complain.\n> > But I'm not sure how exciting the case is in practice. The systemd\n> > guys would really like us to be willing to come up before any network\n> > interfaces are up, and then auto-listen to those interfaces when they\n> > do come up.\n\nThat sounds like a reasonable expectation, as the network conditions\ncan change without any explicit changes made by someone.\n\n> > On the other hand, the situation with Unix sockets is\n> > much more static: if you can't make a socket in /tmp or /var/run at\n> > the instant of postmaster start, it's unlikely you will be able to do\n> > so later.\n\nI think you're describing a setup where Postgres startup is automatic,\nas part of server/OS startup. That is the most common case.\n\nIn cases where someone is performing a Postgres startup manually, they\nare very likely to have the permissions to fix the problem preventing\nthe startup.\n\n> > Maybe we need different rules for TCP versus Unix-domain sockets?\n> > I'm not sure what exactly, but lumping those cases together for\n> > a discussion like this feels wrong.\n\n+1.\n\n> If we are going to retry for network configuration changes, it seems we\n> would also retry Unix domain sockets for cases like when the permissions\n> are wrong, and then fixed.\n\nThe network managers (systemd, etc.) are expected to respond to\ndynamic conditions, and hence they may perform network config changes\nin response to things like network outages, and hardware failures,\ntime of day, etc.\n\nOn the other hand, the permissions required to create files for Unix\ndomain sockets are only expected to change if someone decides to make\nthat change. I wouldn't expect these permissions to be changed\ndynamically.\n\nOn those grounds, keeping the treatment of Unix domain sockets out of\nthis discussion for this patch seems reasonable.\n\n> However, it seem hard to figure out exactly what _is_ working if we take\n> the approach of dynamically retrying listen methods. Do we report\n> anything helpful in the server logs when we start and can't listen on\n> anything?\n\nYes. For every host listed in listen_addresses, if Postgres fails to\nopen the port on that address, we get a WARNING message in the server\nlog. After the end of processing of a non-empty listen_addresses, if\nthere are zero open TCP/IP connections, the server exits (with a FATAL\nmessage, IIRC).\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Tue, 11 Jul 2023 12:01:10 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Document that server will start even if it's unable to open some\n TCP/IP ports"
},
{
"msg_contents": "On Mon, Jun 12, 2023 at 11:57:45PM -0700, Gurjeet Singh wrote:\n> On Mon, Jun 12, 2023 at 10:59 PM Nathan Bossart\n> <[email protected]> wrote:\n> > On Sat, May 27, 2023 at 03:17:21PM -0700, Gurjeet Singh wrote:\n> > > If listen_addresses is empty, then server won't try to open any TCP/IP\n> > > ports. The patch does not change any language related to that.\n> >\n> > Your proposed change notes that the server only starts if it can listen on\n> > at least one TCP/IP address, which I worry might lead folks to think that\n> > the server won't start if listen_addresses is empty.\n> \n> Perhaps we can prefix that statement with \"If listen_addresses is not\n> empty\", like so:\n\nI came up with a slightly modified doc patch, attached.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Thu, 7 Sep 2023 15:33:57 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document that server will start even if it's unable to open some\n TCP/IP ports"
},
{
"msg_contents": "Thanks for picking this up.\n\nOn Thu, Sep 07, 2023 at 03:33:57PM -0400, Bruce Momjian wrote:\n> The default value is <systemitem class=\"systemname\">localhost</systemitem>,\n> which allows only local TCP/IP <quote>loopback</quote> connections to be\n> - made. While client authentication (<xref\n> + made. If <varname>listen_addresses</varname> is not empty,\n> + the server will start if it can open a <varname>port</varname>\n> + on at least one TCP/IP address. A warning will be emitted for\n> + any TCP/IP address which cannot be opened.\n\nI think we should move this sentence to before the ѕentence about the\ndefault value. That way, \"If the list is empty, ...\" is immediately\nfollowed by \"If the list is not empty, ...\"\n\nIMO the phrase \"open a port\" is kind of nonstandard. I think we should say\nsomething along the lines of\n\n\tIf listen_addresses is not empty, the server will start only if it can\n\tlisten on at least one of the specified addresses. A warning will be\n\temitted for any addresses that the server cannot listen on.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 7 Sep 2023 14:54:13 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document that server will start even if it's unable to open some\n TCP/IP ports"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 02:54:13PM -0700, Nathan Bossart wrote:\n> Thanks for picking this up.\n> \n> On Thu, Sep 07, 2023 at 03:33:57PM -0400, Bruce Momjian wrote:\n> > The default value is <systemitem class=\"systemname\">localhost</systemitem>,\n> > which allows only local TCP/IP <quote>loopback</quote> connections to be\n> > - made. While client authentication (<xref\n> > + made. If <varname>listen_addresses</varname> is not empty,\n> > + the server will start if it can open a <varname>port</varname>\n> > + on at least one TCP/IP address. A warning will be emitted for\n> > + any TCP/IP address which cannot be opened.\n> \n> I think we should move this sentence to before the ѕentence about the\n> default value. That way, \"If the list is empty, ...\" is immediately\n> followed by \"If the list is not empty, ...\"\n> \n> IMO the phrase \"open a port\" is kind of nonstandard. I think we should say\n> something along the lines of\n> \n> \tIf listen_addresses is not empty, the server will start only if it can\n> \tlisten on at least one of the specified addresses. A warning will be\n> \temitted for any addresses that the server cannot listen on.\n\nGood idea, updated patch attached.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Thu, 7 Sep 2023 19:13:44 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document that server will start even if it's unable to open some\n TCP/IP ports"
},
{
"msg_contents": "On Thu, Sep 07, 2023 at 07:13:44PM -0400, Bruce Momjian wrote:\n> On Thu, Sep 7, 2023 at 02:54:13PM -0700, Nathan Bossart wrote:\n>> IMO the phrase \"open a port\" is kind of nonstandard. I think we should say\n>> something along the lines of\n>> \n>> \tIf listen_addresses is not empty, the server will start only if it can\n>> \tlisten on at least one of the specified addresses. A warning will be\n>> \temitted for any addresses that the server cannot listen on.\n> \n> Good idea, updated patch attached.\n\nI still think we should say \"listen on an address\" instead of \"open a\nport,\" but otherwise it LGTM.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 7 Sep 2023 21:21:07 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document that server will start even if it's unable to open some\n TCP/IP ports"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 09:21:07PM -0700, Nathan Bossart wrote:\n> On Thu, Sep 07, 2023 at 07:13:44PM -0400, Bruce Momjian wrote:\n> > On Thu, Sep 7, 2023 at 02:54:13PM -0700, Nathan Bossart wrote:\n> >> IMO the phrase \"open a port\" is kind of nonstandard. I think we should say\n> >> something along the lines of\n> >> \n> >> \tIf listen_addresses is not empty, the server will start only if it can\n> >> \tlisten on at least one of the specified addresses. A warning will be\n> >> \temitted for any addresses that the server cannot listen on.\n> > \n> > Good idea, updated patch attached.\n> \n> I still think we should say \"listen on an address\" instead of \"open a\n> port,\" but otherwise it LGTM.\n\nAgreed, I never liked the \"port\" mention. I couldn't figure how to get\n\"open\" out of the warning sentence though. Updated patch attached.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Fri, 8 Sep 2023 10:52:10 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document that server will start even if it's unable to open some\n TCP/IP ports"
},
{
"msg_contents": "On Fri, Sep 08, 2023 at 10:52:10AM -0400, Bruce Momjian wrote:\n> On Thu, Sep 7, 2023 at 09:21:07PM -0700, Nathan Bossart wrote:\n>> On Thu, Sep 07, 2023 at 07:13:44PM -0400, Bruce Momjian wrote:\n>> > On Thu, Sep 7, 2023 at 02:54:13PM -0700, Nathan Bossart wrote:\n>> >> IMO the phrase \"open a port\" is kind of nonstandard. I think we should say\n>> >> something along the lines of\n>> >> \n>> >> \tIf listen_addresses is not empty, the server will start only if it can\n>> >> \tlisten on at least one of the specified addresses. A warning will be\n>> >> \temitted for any addresses that the server cannot listen on.\n>> > \n>> > Good idea, updated patch attached.\n>> \n>> I still think we should say \"listen on an address\" instead of \"open a\n>> port,\" but otherwise it LGTM.\n> \n> Agreed, I never liked the \"port\" mention. I couldn't figure how to get\n> \"open\" out of the warning sentence though. Updated patch attached.\n\nWFM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 8 Sep 2023 10:54:32 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document that server will start even if it's unable to open some\n TCP/IP ports"
},
{
"msg_contents": "On Fri, Sep 8, 2023 at 7:52 AM Bruce Momjian <[email protected]> wrote:\n>\n> On Thu, Sep 7, 2023 at 09:21:07PM -0700, Nathan Bossart wrote:\n> > On Thu, Sep 07, 2023 at 07:13:44PM -0400, Bruce Momjian wrote:\n> > > On Thu, Sep 7, 2023 at 02:54:13PM -0700, Nathan Bossart wrote:\n> > >> IMO the phrase \"open a port\" is kind of nonstandard. I think we should say\n> > >> something along the lines of\n> > >>\n> > >> If listen_addresses is not empty, the server will start only if it can\n> > >> listen on at least one of the specified addresses. A warning will be\n> > >> emitted for any addresses that the server cannot listen on.\n> > >\n> > > Good idea, updated patch attached.\n> >\n> > I still think we should say \"listen on an address\" instead of \"open a\n> > port,\" but otherwise it LGTM.\n>\n> Agreed, I never liked the \"port\" mention. I couldn't figure how to get\n> \"open\" out of the warning sentence though. Updated patch attached.\n\nLGTM.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Tue, 12 Sep 2023 17:25:44 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Document that server will start even if it's unable to open some\n TCP/IP ports"
},
{
"msg_contents": "On Tue, Sep 12, 2023 at 05:25:44PM -0700, Gurjeet Singh wrote:\n> On Fri, Sep 8, 2023 at 7:52 AM Bruce Momjian <[email protected]> wrote:\n> >\n> > On Thu, Sep 7, 2023 at 09:21:07PM -0700, Nathan Bossart wrote:\n> > > On Thu, Sep 07, 2023 at 07:13:44PM -0400, Bruce Momjian wrote:\n> > > > On Thu, Sep 7, 2023 at 02:54:13PM -0700, Nathan Bossart wrote:\n> > > >> IMO the phrase \"open a port\" is kind of nonstandard. I think we should say\n> > > >> something along the lines of\n> > > >>\n> > > >> If listen_addresses is not empty, the server will start only if it can\n> > > >> listen on at least one of the specified addresses. A warning will be\n> > > >> emitted for any addresses that the server cannot listen on.\n> > > >\n> > > > Good idea, updated patch attached.\n> > >\n> > > I still think we should say \"listen on an address\" instead of \"open a\n> > > port,\" but otherwise it LGTM.\n> >\n> > Agreed, I never liked the \"port\" mention. I couldn't figure how to get\n> > \"open\" out of the warning sentence though. Updated patch attached.\n> \n> LGTM.\n\nPatch applied back to PG 11.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 26 Sep 2023 19:02:37 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Document that server will start even if it's unable to open some\n TCP/IP ports"
},
{
"msg_contents": "On Tue, Sep 26, 2023 at 4:02 PM Bruce Momjian <[email protected]> wrote:\n>\n> Patch applied back to PG 11.\n\n+Peter E. since I received the following automated note:\n\n> Closed in commitfest 2023-09 with status: Moved to next CF (petere)\n\nJust a note that this patch has been committed (3fea854691), so I have\nmarked the CF item [1] as 'Committed', and specified Bruce as the\ncommitter.\n\n[1]: https://commitfest.postgresql.org/45/4333/\n\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Sun, 1 Oct 2023 20:20:50 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Document that server will start even if it's unable to open some\n TCP/IP ports"
}
] |
[
{
"msg_contents": "Attached is a html report that was generated by a tool called\nabi-compliance-checker/abi-dumper [1][2] (by using\n\"abi-compliance-checker -l libTest ... \") . I deliberately introduced\n2 ABI compatibility issues affecting postgres, just to see what the\ntool had to say about it.\n\nThe first ABI issue I mocked up involved a breaking change to the\nsignature of a function with external linkage. Sure enough, this issue\n(in CheckForSerializableConflictIn(), as it happens) appears in the\nreport as a medium severity item.\n\nThe second ABI issue I mocked up involved \"filling-in\" a hole in a\nstruct (a struct that appears in a header that could be included by an\nextension) with a new field. In other words, the \"put new field in\nexisting alignment padding\" trick. This kind of difference is\ngenerally believed to be non-breaking, and so is acceptable in point\nreleases. But the issue still appears as a low severity item in the\nreport. The report points out (quite reasonably) that my newly added\nfield won't be initialized by old code. In most cases this will be\nfine, of course. It's just not something that should be taken for\ngranted.\n\nOverall, I like the report format -- especially its severity scale. So\nit seems like abi-compliance-checker has the potential to become a\nstandard release management tool for Postgres point releases. I can\nimagine a community resource along the lines of\nhttps://coverage.postgresql.org; an automatically generated archive of\ntheoretical/actual x86_64 ABI breaks in each point release. I'd\nappreciate having greater visibility into these issues.\n\n[1] https://github.com/lvc/abi-dumper\n[2] https://manpages.debian.org/unstable/abi-dumper/abi-dumper.1.en.html\n-- \nPeter Geoghegan",
"msg_date": "Sat, 27 May 2023 17:52:01 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "abi-compliance-checker"
},
{
"msg_contents": "Peter Geoghegan <[email protected]> writes:\n> Attached is a html report that was generated by a tool called\n> abi-compliance-checker/abi-dumper [1][2] (by using\n> \"abi-compliance-checker -l libTest ... \") . I deliberately introduced\n> 2 ABI compatibility issues affecting postgres, just to see what the\n> tool had to say about it.\n\nThis seems pretty cool. I agree that we're in dire need of some\ntool of this sort for checking back-branch patches. I wonder\nthough if it'll have false-positive problems. Have you tried it\non live rather than mocked-up cases, for instance 13.0 vs 13.11?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 28 May 2023 09:48:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "On Sun, May 28, 2023 at 6:48 AM Tom Lane <[email protected]> wrote:\n> This seems pretty cool. I agree that we're in dire need of some\n> tool of this sort for checking back-branch patches. I wonder\n> though if it'll have false-positive problems. Have you tried it\n> on live rather than mocked-up cases, for instance 13.0 vs 13.11?\n\nI tried comparing REL_11_0 to REL_11_20. Attached is the report for that.\n\nI don't have time to study this in detail today, but the report seems\nto have a plausible variety of issues. I noticed that it warns about\nthe breaking signature change to _bt_pagedel(). This is the\ntheoretical ABI break that I mentioned in the commit message of commit\nb0229f26. This is arguably a false positive, since the tool doesn't\nunderstand my reasoning about why it's okay in this particular\ninstance (namely \"any extension that called that function was already\nseverely broken\"). Obviously the tool couldn't possibly be expected to\nknow better in these kinds of situations, though, so whether or not it\ncounts as a false positive is just semantics.\n\nFortunately, there aren't very many issues in the report. Certainly\nnot enough for false positives (however you define them) to be of\ngreat concern. This is nearly 5 years worth of ABI issues.\n\n-- \nPeter Geoghegan",
"msg_date": "Sun, 28 May 2023 08:15:32 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "Peter Geoghegan <[email protected]> writes:\n> I tried comparing REL_11_0 to REL_11_20. Attached is the report for that.\n\nNice!\n\n> I don't have time to study this in detail today, but the report seems\n> to have a plausible variety of issues. I noticed that it warns about\n> the breaking signature change to _bt_pagedel(). This is the\n> theoretical ABI break that I mentioned in the commit message of commit\n> b0229f26. This is arguably a false positive, since the tool doesn't\n> understand my reasoning about why it's okay in this particular\n> instance (namely \"any extension that called that function was already\n> severely broken\"). Obviously the tool couldn't possibly be expected to\n> know better in these kinds of situations, though, so whether or not it\n> counts as a false positive is just semantics.\n\nAgreed. The point of such a tool is to make sure that we notice\nany ABI breaks; it can't be expected to make engineering judgments\nabout whether the alternatives are worse. For instance, I see that\nit noticed commit 1f28ec6be (Rename rbtree.c functions to use \"rbt\"\nprefix not \"rb\" prefix), which is not something we would have done\nof our own choosing, but on balance it seemed the best solution.\n\nI gather it'd catch things like NodeTag enum assignments changing,\nwhich is something we really need to have a check for.\n\n(Which reminds me that I forgot to turn on the ad-hoc check for\nthat in gen_node_support.pl. I'll go do that, but it'd be better\nto have a more general-purpose solution.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 28 May 2023 11:37:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "I wrote:\n> (Which reminds me that I forgot to turn on the ad-hoc check for\n> that in gen_node_support.pl. I'll go do that, but it'd be better\n> to have a more general-purpose solution.)\n\nOh, scratch that, it's not supposed to happen until we make the\nv16 branch. It'd still be better to not need it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 28 May 2023 11:39:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "On Sun, May 28, 2023 at 8:37 AM Tom Lane <[email protected]> wrote:\n> I gather it'd catch things like NodeTag enum assignments changing,\n> which is something we really need to have a check for.\n\nRight. Any ABI break that involves machine-generated translation units\nseems particularly prone to being overlooked. Just eyeballing code\n(and perhaps double-checking struct layout using pahole) seems\ninadequate.\n\nI'll try to come up with a standard abi-compliance-checker Postgres\nworkflow once I'm back from pgCon. It already looks like\nabi-compliance-checker is capable of taking most of the guesswork out\nof ABI compatibility in stable releases, without any real downside,\nwhich is encouraging. I have spent very little time on this, so it's\nquite possible that some detail or other was overlooked.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 28 May 2023 09:34:23 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "On Sun, May 28, 2023 at 9:34 AM Peter Geoghegan <[email protected]> wrote:\n> I'll try to come up with a standard abi-compliance-checker Postgres\n> workflow once I'm back from pgCon.\n\nIdeally, we'd be able to produce reports that cover an entire stable\nrelease branch at once, including details about how things changed\nover time. It turns out that there is a tool from the same family of\ntools as abi-compliance-checker, that can do just that:\n\nhttps://github.com/lvc/abi-tracker\n\nThere is an abi-tracker example report, here:\n\nhttps://abi-laboratory.pro/?view=timeline&l=glib\n\nIt's exactly the same presentation as the report I posted recently,\nonce you drill down. That seems ideal.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 29 May 2023 10:01:20 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "On 27.05.23 02:52, Peter Geoghegan wrote:\n> Attached is a html report that was generated by a tool called\n> abi-compliance-checker/abi-dumper [1][2] (by using\n> \"abi-compliance-checker -l libTest ... \") .\n\nI have been using the libabigail library/set of tools (abidiff, abidw) \nfor this. I was not familiar with the tool you used. The nice thing \nabout abidiff is that it gives you text output and a meaningful exit \nstatus, so you can make it part of the build or test process. You can \nalso write suppression files to silence specific warnings.\n\nI think the way to use this would be to compute the ABI for the .0 \nrelease (or rc1 or something like that) and commit it into the tree. \nAnd then compute the current ABI and compare that against the recorded \nbase ABI.\n\nHere is the workflow:\n\n# build REL_11_0\nabidw src/backend/postgres > src/tools/postgres-abi-REL_11_0.xml\n# build REL_11_20\nabidw src/backend/postgres > src/tools/postgres-abi.xml\nabidiff --no-added-syms src/tools/postgres-abi-REL_11_0.xml \nsrc/tools/postgres-abi.xml\n\nThis prints\n\nFunctions changes summary: 0 Removed, 0 Changed, 0 Added function\nVariables changes summary: 0 Removed, 0 Changed, 0 Added variable\nFunction symbols changes summary: 14 Removed, 0 Added (85 filtered out) \nfunction symbols not referenced by debug info\nVariable symbols changes summary: 1 Removed, 0 Added (3 filtered out) \nvariable symbols not referenced by debug info\n\n14 Removed function symbols not referenced by debug info:\n\n [D] RelationHasUnloggedIndex\n [D] assign_nestloop_param_placeholdervar\n [D] assign_nestloop_param_var\n [D] logicalrep_typmap_gettypname\n [D] logicalrep_typmap_update\n [D] pqsignal_no_restart\n [D] rb_begin_iterate\n [D] rb_create\n [D] rb_delete\n [D] rb_find\n [D] rb_insert\n [D] rb_iterate\n [D] rb_leftmost\n [D] transformCreateSchemaStmt\n\n1 Removed variable symbol not referenced by debug info:\n\n [D] wrconn\n\nThis appears to be similar to what your tool produced, but I haven't \nchecked it in detail.\n\n\n\n",
"msg_date": "Tue, 30 May 2023 00:32:20 -0400",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "On 30.05.23 06:32, Peter Eisentraut wrote:\n> I think the way to use this would be to compute the ABI for the .0 \n> release (or rc1 or something like that) and commit it into the tree. And \n> then compute the current ABI and compare that against the recorded base \n> ABI.\n> \n> Here is the workflow:\n> \n> # build REL_11_0\n> abidw src/backend/postgres > src/tools/postgres-abi-REL_11_0.xml\n> # build REL_11_20\n> abidw src/backend/postgres > src/tools/postgres-abi.xml\n> abidiff --no-added-syms src/tools/postgres-abi-REL_11_0.xml \n> src/tools/postgres-abi.xml\n\nHere is a demo patch that implements this.\n\nRight now, I have only added support for libpq and postgres. For \ncompleteness, the ecpg libraries should be covered as well.\n\n(Unlike the above example, I did not use src/tools/ but each component's \nown subdirectory.)\n\nThe patch as currently written will actually fail the tests because it \ncontains only one base ABI file to compare against, but the build_32 \ntask on cirrus will of course produce a different ABI. But I left it \nfor now to able to see the results. Eventually, the base ABI file names \nshould include something from host_system.cpu_family().\n\nAlso, I commented out the abidiff test for postgres, because the base \nfile is 8 MB and I don't want to send that around.\n\n\nVarious findings while playing with these tools:\n\n* Different Linux distributions produce slightly different ABI reports. \nIn some cases, symbols like 'optarg@GLIBC_2.17' leak out.\n\n* PostgreSQL compilation options affect the exposed ABI. This is \nperhaps expected to some degree, but there are some curious details.\n\n* For example, --enable-cassert exposes additional symbols, and it's \nmaybe not impossible for those to leak into an extension.\n\n* Also, --with-openssl actually *removes* symbols from the ABI (such as \npg_md5_init).\n\nSo it's probably not sensible to try to get some universal ABI \ndefinition that works everywhere. Instead, I think it would be better \nto get one specific case working, which would be the one tested on the \ncirrus linux tasks and/or some equivalent buildfarm machine.",
"msg_date": "Tue, 6 Jun 2023 18:30:38 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "+abidiff = find_program('abidiff', native: false, required: false)\n+abidw = find_program('abidw', native: false, required: false)\n+\n+abidw_flags = [\n+ '--drop-undefined-syms',\n+ '--no-architecture',\n+ '--no-comp-dir-path',\n+ '--no-elf-needed',\n+ '--no-show-locs',\n+ '--type-id-style', 'hash',\n+]\n+abidw_cmd = [abidw, abidw_flags, '--out-file', '@OUTPUT@', '@INPUT@']\n\nIt would make sense to me to mark abidiff and abidw as disabler: true.\n\n+if abidw.found()\n+ libpq_abi = custom_target('libpq.abi.xml',\n+ input: libpq_so,\n+ output: 'libpq.abi.xml',\n+ command: abidw_cmd,\n+ build_by_default: true)\n+endif\n+\n+if abidiff.found()\n+ test('libpq.abidiff',\n+ abidiff,\n+ args: [files('libpq.base.abi.xml'), libpq_abi],\n+ suite: 'abidiff',\n+ verbose: true)\n+endif\n\nWith disabler: true, you can drop the conditionals. Disablers tell Meson\nto disable parts of the build[0].\n\nI also don't think it makes sense to mark the custom_targets as\nbuild_by_default: true, unless you see value in that. I would just have\nthem built when the test is ran.\n\nHowever, it might make sense to create an alias_target of all the ABI\nXML files for people that want to interact with the files outside of the\ntests for whatever reason.\n\n[0]: https://mesonbuild.com/Reference-manual_returned_disabler.html\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 06 Jun 2023 11:52:25 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "On 06.06.23 18:52, Tristan Partin wrote:\n> It would make sense to me to mark abidiff and abidw as disabler: true.\n\nok\n\n> +if abidiff.found()\n> + test('libpq.abidiff',\n> + abidiff,\n> + args: [files('libpq.base.abi.xml'), libpq_abi],\n> + suite: 'abidiff',\n> + verbose: true)\n> +endif\n> \n> With disabler: true, you can drop the conditionals. Disablers tell Meson\n> to disable parts of the build[0].\n\nok\n\n> I also don't think it makes sense to mark the custom_targets as\n> build_by_default: true, unless you see value in that. I would just have\n> them built when the test is ran.\n> \n> However, it might make sense to create an alias_target of all the ABI\n> XML files for people that want to interact with the files outside of the\n> tests for whatever reason.\n\nThanks for the feedback. Attached is a more complete patch.\n\nI have rearranged this a bit. There are now two build options, called \nabidw and abidiff. The abidw option produces the xml file, that you \nwould then at appropriate times commit into the tree as the base. The \nabidiff option enables the abidiff tests. This doesn't actually require \nabidw, since abidiff can compare the binary directly against the \nrecorded XML file. So these options are distinct and nonoverlapping.\n\nNote that in this setup, you still need a conditional around the abidiff \ntest() call, because otherwise meson setup will fail if the base file \ndoesn't exist (yet), so it would be impossible to bootstrap this system.\n\nThe updated patch also includes the base files for all the ecpg \nlibraries and the files all have OS and architecture specific names. \nThe keep the patch small, I just added a dummy base file for the \npostgres binary and a suppression file that suppresses everything.\n\nThere is something weird going on where the cirrus linux/meson job \ndoesn't upload the produced abidw artifacts, even though they are \napparently built, and the equivalent works for the freebsd job. Maybe \nsomeone can see something that I'm not seeing there.",
"msg_date": "Sat, 10 Jun 2023 16:17:27 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-06 18:30:38 +0200, Peter Eisentraut wrote:\n> On 30.05.23 06:32, Peter Eisentraut wrote:\n> > I think the way to use this would be to compute the ABI for the .0\n> > release (or rc1 or something like that) and commit it into the tree. And\n> > then compute the current ABI and compare that against the recorded base\n> > ABI.\n> > \n> > Here is the workflow:\n> > \n> > # build REL_11_0\n> > abidw src/backend/postgres > src/tools/postgres-abi-REL_11_0.xml\n> > # build REL_11_20\n> > abidw src/backend/postgres > src/tools/postgres-abi.xml\n> > abidiff --no-added-syms src/tools/postgres-abi-REL_11_0.xml\n> > src/tools/postgres-abi.xml\n> \n> Here is a demo patch that implements this.\n> \n> Right now, I have only added support for libpq and postgres. For\n> completeness, the ecpg libraries should be covered as well.\n\nI think plpgsql would also be good to include, due to things like plpgsql\ndebuggers.\n\n\n> * Different Linux distributions produce slightly different ABI reports. In\n> some cases, symbols like 'optarg@GLIBC_2.17' leak out.\n\nHm, that's somewhat annoying.\n\n\n> * PostgreSQL compilation options affect the exposed ABI. This is perhaps\n> expected to some degree, but there are some curious details.\n> \n> * For example, --enable-cassert exposes additional symbols, and it's maybe\n> not impossible for those to leak into an extension.\n\nThey *definitely* leak into extensions. A single Assert() in an extension or\nuse of an inline function or macro with an Assertion suffices to end up with a\nreference to ExceptionalCondition.\n\n\n\n> diff --git a/src/interfaces/libpq/libpq.base.abi.xml b/src/interfaces/libpq/libpq.base.abi.xml\n> new file mode 100644\n> index 0000000000..691bf192af\n> --- /dev/null\n> +++ b/src/interfaces/libpq/libpq.base.abi.xml\n> @@ -0,0 +1,2634 @@\n> +<abi-corpus path='src/interfaces/libpq/libpq.so.5.16' soname='libpq.so.5'>\n> + <elf-function-symbols>\n> + <elf-symbol name='PQbackendPID' type='func-type' binding='global-binding' visibility='default-visibility' is-defined='yes'/>\n> + <elf-symbol name='PQbinaryTuples' type='func-type' binding='global-binding' visibility='default-visibility' is-defined='yes'/>\n> + <elf-symbol name='PQcancel' type='func-type' binding='global-binding' visibility='default-visibility' is-defined='yes'/>\n> [...]\n> + <elf-symbol name='termPQExpBuffer' type='func-type' binding='global-binding' visibility='default-visibility' is-defined='yes'/>\n> + </elf-function-symbols>\n\nThis seems somewhat painful in its verbosity. We also effectively already have\nit in the tree, in src/interfaces/libpq/exports.txt. But I guess that's\nsomewhat inevitable :/\n\nIt sounds we are planning to mostly rely on CI for this, perhaps we should\nrely on an artifact from a prior build for a major version + specific task,\ninstead of committing this to source? That'd automatically take care of\ndifferences in ABI across different platforms etc.\n\nIf we want to commit something to the tree, I think we need a fairly\ncomplicated \"fingerprint\" to avoid false positives. 
OS, OS version, configure\noptions, compiler, compiler version at least?\n\n\n> + <abi-instr version='1.0' address-size='64' path='../src/common/encnames.c' language='LANG_C99'>\n> + <array-type-def dimensions='1' type-id='c8dedbef' size-in-bits='5376' id='752c85d9'>\n> + <subrange length='42' type-id='7359adad' id='cb7c937f'/>\n> + </array-type-def>\n> + <array-type-def dimensions='1' type-id='c8dedbef' size-in-bits='infinite' id='ac835593'>\n> + <subrange length='infinite' id='031f2035'/>\n> + </array-type-def>\n> + <array-type-def dimensions='1' type-id='56ef96d7' size-in-bits='5376' id='728d2ee1'>\n> + <subrange length='42' type-id='7359adad' id='cb7c937f'/>\n> + </array-type-def>\n> + <array-type-def dimensions='1' type-id='56ef96d7' size-in-bits='infinite' id='a01b33bb'>\n> + <subrange length='infinite' id='031f2035'/>\n> + </array-type-def>\n> + <typedef-decl name='pg_enc2name' type-id='79f06fd8' id='7a4268c7'/>\n> + <class-decl name='pg_enc2name' size-in-bits='128' is-struct='yes' visibility='default' id='79f06fd8'>\n> + <data-member access='public' layout-offset-in-bits='0'>\n> + <var-decl name='name' type-id='80f4b756' visibility='default'/>\n> + </data-member>\n> + <data-member access='public' layout-offset-in-bits='64'>\n> + <var-decl name='encoding' type-id='66325df6' visibility='default'/>\n> + </data-member>\n> + </class-decl>\n> + <typedef-decl name='pg_enc' type-id='ea65169a' id='66325df6'/>\n> + <enum-decl name='pg_enc' id='ea65169a'>\n> + <underlying-type type-id='9cac1fee'/>\n\nHm - why is all of this stuff even ending up in the external ABI? It should\nall be internal, unless I am missing something?\n\nI might be looking the wrong way, but to me it sure looks like none of that\nends up being externally visible?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 10 Jun 2023 12:48:46 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-10 12:48:46 -0700, Andres Freund wrote:\n> > + <typedef-decl name='pg_enc' type-id='ea65169a' id='66325df6'/>\n> > + <enum-decl name='pg_enc' id='ea65169a'>\n> > + <underlying-type type-id='9cac1fee'/>\n> \n> Hm - why is all of this stuff even ending up in the external ABI? It should\n> all be internal, unless I am missing something?\n> \n> I might be looking the wrong way, but to me it sure looks like none of that\n> ends up being externally visible?\n\nLooks like we ought to add --exported-interfaces-only?\n\nThat still seems to include things that shouldn't be there, but much\nless. E.g.:\n\n <class-decl name='AddrInfo' size-in-bits='1152' is-struct='yes' naming-typedef-id='79c324ab' visibility='default' id='0b3a01e2'>\n <data-member access='public' layout-offset-in-bits='0'>\n <var-decl name='family' type-id='95e97e5e' visibility='default'/>\n </data-member>\n <data-member access='public' layout-offset-in-bits='64'>\n <var-decl name='addr' type-id='8c37a12f' visibility='default'/>\n </data-member>\n </class-decl>\n\nand things outside of our control:\n\n <class-decl name='_IO_FILE' size-in-bits='1728' is-struct='yes' visibility='default' id='ec1ed955'>\n <data-member access='public' layout-offset-in-bits='0'>\n <var-decl name='_flags' type-id='95e97e5e' visibility='default'/>\n </data-member>\n\nI guess the latter would have to be suppressed via suppression file. But I\ndon't understand why things like AddrInfo ends up being included...\n\n\nI tried using --header-file with --drop-private-types. But that ends up\ndropping all enum definitions for some reason.\n\n\nIndependently, I'm a bit confused as to why we export pgresStatus in\nexports.txt - I don't see any reason for that. Looks like it might be leftover\nfrom before fa0f24165c0?\n\nWe're also a bit schizophrenic about where we install pqexpbuffer.h -\nincludedir_internal. But at the same time we export all the symbols?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 10 Jun 2023 13:24:24 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> Independently, I'm a bit confused as to why we export pgresStatus in\n> exports.txt - I don't see any reason for that. Looks like it might be leftover\n> from before fa0f24165c0?\n\nIt looks like before fa0f24165, the *only* way to convert ExecStatusType\nto text was to access that array directly. That commit invented the\nwrapper function PQresStatus(), but at that point our docs were so poor\nthat there wasn't any good way to mark use of the array as deprecated.\nA bit later, 9ceb5d8a7 moved the array declaration to libpq-int.h\n(without any discussion in the commit message, but maybe there was\nsome on-list).\n\nMaybe there's still application code out there using it, I dunno.\nWhat I do know is that removing the exports.txt entry will provoke\nsquawks from distros' ABI checkers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 10 Jun 2023 17:00:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "On Sat Jun 10, 2023 at 9:17 AM CDT, Peter Eisentraut wrote:\n> I have rearranged this a bit. There are now two build options, called \n> abidw and abidiff. The abidw option produces the xml file, that you \n> would then at appropriate times commit into the tree as the base. The \n> abidiff option enables the abidiff tests. This doesn't actually require \n> abidw, since abidiff can compare the binary directly against the \n> recorded XML file. So these options are distinct and nonoverlapping.\n>\n> Note that in this setup, you still need a conditional around the abidiff \n> test() call, because otherwise meson setup will fail if the base file \n> doesn't exist (yet), so it would be impossible to bootstrap this system.\n\nCould you speak more to the workflow you see with managing the checked\nin diff files?\n\nAt my previous job, I had tried to do something similar with regard to\nmaking sure we didn't break ABI[0], but I took a different approach\nwhere instead of hooking into the Meson test infrastructure, I used a CI\njob where I checked out the previous major version of the code and the\ncurrent version of the code, built both, and checked the built binaries.\nThe benefit of this workflow is that you don't check anything into the\nsource repo.\n\nI think the same approach might be better here, but instead of writing\nit all into the CI file like I did, use a perl script. Then once you\nhave the perl script, it could be possible to then hook into the Meson\ntest infrastructure.\n\n> There is something weird going on where the cirrus linux/meson job \n> doesn't upload the produced abidw artifacts, even though they are \n> apparently built, and the equivalent works for the freebsd job. Maybe \n> someone can see something that I'm not seeing there.\n\nNothing obvious is wrong to me. Was the failure maybe just a fluke?\n\n[0]: https://github.com/hse-project/hse/blob/master/.github/workflows/abicheck.yaml\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 12 Jun 2023 10:10:23 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "On Mon Jun 12, 2023 at 10:10 AM CDT, Tristan Partin wrote:\n> On Sat Jun 10, 2023 at 9:17 AM CDT, Peter Eisentraut wrote:\n> > I have rearranged this a bit. There are now two build options, called \n> > abidw and abidiff. The abidw option produces the xml file, that you \n> > would then at appropriate times commit into the tree as the base. The \n> > abidiff option enables the abidiff tests. This doesn't actually require \n> > abidw, since abidiff can compare the binary directly against the \n> > recorded XML file. So these options are distinct and nonoverlapping.\n> >\n> > Note that in this setup, you still need a conditional around the abidiff \n> > test() call, because otherwise meson setup will fail if the base file \n> > doesn't exist (yet), so it would be impossible to bootstrap this system.\n>\n> Could you speak more to the workflow you see with managing the checked\n> in diff files?\n\nJust saw your other email which talks about the workflow.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 12 Jun 2023 10:23:55 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "On 10.06.23 22:24, Andres Freund wrote:\n> Looks like we ought to add --exported-interfaces-only?\n\nBtw., this option requires libabigail 2.1, which isn't available \neverywhere yet. For example, Debian oldstable (used on Cirrus) doesn't \nhave it yet. So I'll leave this patch set as is for now. If it turns \nout that this is the right option and we want to proceed with this patch \nset, we might need to think about a version check or something.\n\n\n\n",
"msg_date": "Mon, 28 Aug 2023 14:31:19 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "Here is an updated version of this patch. It doesn't have any new \nfunctionality, just a rebase and some minor adjustments.\n\nI have split up the one patch into several ones, which could be \nconsidered incrementally, namely:\n\nv3-0001-abidw-option.patch\n\nThis adds the abidw meson option, which produces the xml files with the \nABI description. With that, you can then implement a variety of \nworkflows, such as the abidiff proposed in the later patches, or \nsomething rigged up via CI, or you can just build various versions \nlocally and compare them. With this patch, you get the files to compare \nbuilt automatically and don't have to remember to cover all the \nlibraries or which options to use.\n\nI think this patch is mostly pretty straightforward and agreeable, \nsubject to technical review in detail.\n\nTODO: documentation\nTODO: Do we want a configure/make variant of this?\n\nv3-0002-Enable-abidw-option-on-Cirrus-CI.patch\n\nThis adds the abidw option to some CI tasks. This was mostly used by me \nduring development to get feedback from other machines and to produce \nbase files for the subsequent abidiff patch. I'm not sure whether we \nneed it in isolation (other than for general testing that the option \nworks at all).\n\nv3-0003-abidiff-option.patch\n\nThis adds the abidiff test suite that compares base files previously \nproduced with the abidw option to the currently built libraries. There \nis clearly some uncertainty here whether the produced files are stable \nenough, whether we want this particular workflow, what additional \nburdens this would create, etc., so I'm not hung up on this right now, \nit's mostly a demonstration.\n\nv3-0004-abidiff-support-files.patch\n\nThis contains the support files for patch 0003, just split out because \nthey are bulky and boring.\n\n\n\nOn 10.06.23 16:17, Peter Eisentraut wrote:\n> On 06.06.23 18:52, Tristan Partin wrote:\n>> It would make sense to me to mark abidiff and abidw as disabler: true.\n> \n> ok\n> \n>> +if abidiff.found()\n>> + test('libpq.abidiff',\n>> + abidiff,\n>> + args: [files('libpq.base.abi.xml'), libpq_abi],\n>> + suite: 'abidiff',\n>> + verbose: true)\n>> +endif\n>>\n>> With disabler: true, you can drop the conditionals. Disablers tell Meson\n>> to disable parts of the build[0].\n> \n> ok\n> \n>> I also don't think it makes sense to mark the custom_targets as\n>> build_by_default: true, unless you see value in that. I would just have\n>> them built when the test is ran.\n>>\n>> However, it might make sense to create an alias_target of all the ABI\n>> XML files for people that want to interact with the files outside of the\n>> tests for whatever reason.\n> \n> Thanks for the feedback. Attached is a more complete patch.\n> \n> I have rearranged this a bit. There are now two build options, called \n> abidw and abidiff. The abidw option produces the xml file, that you \n> would then at appropriate times commit into the tree as the base. The \n> abidiff option enables the abidiff tests. This doesn't actually require \n> abidw, since abidiff can compare the binary directly against the \n> recorded XML file. 
So these options are distinct and nonoverlapping.\n> \n> Note that in this setup, you still need a conditional around the abidiff \n> test() call, because otherwise meson setup will fail if the base file \n> doesn't exist (yet), so it would be impossible to bootstrap this system.\n> \n> The updated patch also includes the base files for all the ecpg \n> libraries and the files all have OS and architecture specific names. The \n> keep the patch small, I just added a dummy base file for the postgres \n> binary and a suppression file that suppresses everything.\n> \n> There is something weird going on where the cirrus linux/meson job \n> doesn't upload the produced abidw artifacts, even though they are \n> apparently built, and the equivalent works for the freebsd job. Maybe \n> someone can see something that I'm not seeing there.",
"msg_date": "Wed, 1 Nov 2023 07:09:58 -0400",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "On Wed, 1 Nov 2023 at 16:43, Peter Eisentraut <[email protected]> wrote:\n>\n> Here is an updated version of this patch. It doesn't have any new\n> functionality, just a rebase and some minor adjustments.\n>\n> I have split up the one patch into several ones, which could be\n> considered incrementally, namely:\n>\n> v3-0001-abidw-option.patch\n>\n> This adds the abidw meson option, which produces the xml files with the\n> ABI description. With that, you can then implement a variety of\n> workflows, such as the abidiff proposed in the later patches, or\n> something rigged up via CI, or you can just build various versions\n> locally and compare them. With this patch, you get the files to compare\n> built automatically and don't have to remember to cover all the\n> libraries or which options to use.\n>\n> I think this patch is mostly pretty straightforward and agreeable,\n> subject to technical review in detail.\n>\n> TODO: documentation\n> TODO: Do we want a configure/make variant of this?\n>\n> v3-0002-Enable-abidw-option-on-Cirrus-CI.patch\n>\n> This adds the abidw option to some CI tasks. This was mostly used by me\n> during development to get feedback from other machines and to produce\n> base files for the subsequent abidiff patch. I'm not sure whether we\n> need it in isolation (other than for general testing that the option\n> works at all).\n>\n> v3-0003-abidiff-option.patch\n>\n> This adds the abidiff test suite that compares base files previously\n> produced with the abidw option to the currently built libraries. There\n> is clearly some uncertainty here whether the produced files are stable\n> enough, whether we want this particular workflow, what additional\n> burdens this would create, etc., so I'm not hung up on this right now,\n> it's mostly a demonstration.\n>\n> v3-0004-abidiff-support-files.patch\n>\n> This contains the support files for patch 0003, just split out because\n> they are bulky and boring.\n\nOne of the test has failed in cfbot at [1] with:\nabi-compliance-checker\n[12:04:10.537] The output from the failed tests:\n[12:04:10.537]\n[12:04:10.537] 129/282 postgresql:abidiff / plpgsql.abidiff FAIL 1.25s\n(exit status 4)\n[12:04:10.537]\n[12:04:10.537] --- command ---\n[12:04:10.537] 12:03:00 /usr/bin/abidiff\n/tmp/cirrus-ci-build/build/../src/pl/plpgsql/src/plpgsql.x86_64-linux.abi.xml\nsrc/pl/plpgsql/src/plpgsql.so\n[12:04:10.537] --- Listing only the last 100 lines from a long log. ---\n[12:04:10.537] 'NodeTag::T_RoleSpec' from value '66' to '67' at nodes.h:26:1\n[12:04:10.537] 'NodeTag::T_FuncCall' from value '67' to '68' at nodes.h:26:1\n[12:04:10.537] 'NodeTag::T_A_Star' from value '68' to '69' at nodes.h:26:1\n[12:04:10.537] 'NodeTag::T_A_Indices' from value '69' to '70' at nodes.h:26:1\n[12:04:10.537] 'NodeTag::T_A_Indirection' from value '70' to '71' at\nnodes.h:26:1\n[12:04:10.537] 'NodeTag::T_A_ArrayExpr' from value '71' to '72' at nodes.h:26:1\n[12:04:10.537] 'NodeTag::T_ResTarget' from value '72' to '73' at nodes.h:26:1\n[12:04:10.537] 'NodeTag::T_MultiAssignRef' from value '73' to '74' at\nnodes.h:26:1\n[12:04:10.537] 'NodeTag::T_SortBy' from value '74' to '75' at nodes.h:26:1\n[12:04:10.537] 'NodeTag::T_WindowDef' from value '75' to '76' at nodes.h:26:1\n....\n[12:04:10.592] -------\n[12:04:10.592]\n[12:04:10.592]\n[12:04:10.592] Summary of Failures:\n[12:04:10.592]\n[12:04:10.592] 129/282 postgresql:abidiff / plpgsql.abidiff FAIL 1.25s\n(exit status 4)\n\n[1] - https://cirrus-ci.com/task/5961614579466240\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 6 Jan 2024 22:55:25 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "On 06.01.24 18:25, vignesh C wrote:\n> One of the test has failed in cfbot at [1] with:\n> abi-compliance-checker\n> [12:04:10.537] The output from the failed tests:\n> [12:04:10.537]\n> [12:04:10.537] 129/282 postgresql:abidiff / plpgsql.abidiff FAIL 1.25s\n> (exit status 4)\n> [12:04:10.537]\n> [12:04:10.537] --- command ---\n> [12:04:10.537] 12:03:00 /usr/bin/abidiff\n> /tmp/cirrus-ci-build/build/../src/pl/plpgsql/src/plpgsql.x86_64-linux.abi.xml\n> src/pl/plpgsql/src/plpgsql.so\n> [12:04:10.537] --- Listing only the last 100 lines from a long log. ---\n> [12:04:10.537] 'NodeTag::T_RoleSpec' from value '66' to '67' at nodes.h:26:1\n> [12:04:10.537] 'NodeTag::T_FuncCall' from value '67' to '68' at nodes.h:26:1\n> [12:04:10.537] 'NodeTag::T_A_Star' from value '68' to '69' at nodes.h:26:1\n> [12:04:10.537] 'NodeTag::T_A_Indices' from value '69' to '70' at nodes.h:26:1\n> [12:04:10.537] 'NodeTag::T_A_Indirection' from value '70' to '71' at\n> nodes.h:26:1\n> [12:04:10.537] 'NodeTag::T_A_ArrayExpr' from value '71' to '72' at nodes.h:26:1\n> [12:04:10.537] 'NodeTag::T_ResTarget' from value '72' to '73' at nodes.h:26:1\n> [12:04:10.537] 'NodeTag::T_MultiAssignRef' from value '73' to '74' at\n> nodes.h:26:1\n> [12:04:10.537] 'NodeTag::T_SortBy' from value '74' to '75' at nodes.h:26:1\n> [12:04:10.537] 'NodeTag::T_WindowDef' from value '75' to '76' at nodes.h:26:1\n> ....\n> [12:04:10.592] -------\n> [12:04:10.592]\n> [12:04:10.592]\n> [12:04:10.592] Summary of Failures:\n> [12:04:10.592]\n> [12:04:10.592] 129/282 postgresql:abidiff / plpgsql.abidiff FAIL 1.25s\n> (exit status 4)\n> \n> [1] - https://cirrus-ci.com/task/5961614579466240\n\nThis is kind of intentional, as it shows the the test catches ABI changes.\n\nIf the patches were to be committed, then the base ABI file would be \nupdated.\n\n\n\n",
"msg_date": "Tue, 9 Jan 2024 17:35:46 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "On 2023-Nov-01, Peter Eisentraut wrote:\n\n> v3-0001-abidw-option.patch\n> \n> This adds the abidw meson option, which produces the xml files with the ABI\n> description. With that, you can then implement a variety of workflows, such\n> as the abidiff proposed in the later patches, or something rigged up via CI,\n> or you can just build various versions locally and compare them. With this\n> patch, you get the files to compare built automatically and don't have to\n> remember to cover all the libraries or which options to use.\n> \n> I think this patch is mostly pretty straightforward and agreeable, subject\n> to technical review in detail.\n\nI like this idea and I think we should integrate it with the objective\nof it becoming the workhorse of ABI-stability testing. However, I do\nnot think that the subsequent patches should be part of the tree at all;\ncertainly not the produced .xml files in your 0004, as that would be far\ntoo unstable and would cause a lot of pointless churn.\n\n> TODO: documentation\n\nYes, please.\n\n> TODO: Do we want a configure/make variant of this?\n\nNot needed IMO.\n\n\nThe way I see this working, is that we set up a buildfarm animal [per\narchitecture] that runs the new rules produced by the abidw option and\nstores the result locally, so that for stable branches it can turn red\nwhen an ABI-breaking change with the previous minor release of the same\nbranch is introduced. There's no point on it ever turning red in the\nmaster branch, since we're obviously not concerned with ABI changes there.\n\n(Perhaps we do need 0003 as an easy mechanism to run the comparison, but\nI'm not sure to what extent that would be actually helpful.)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 27 Feb 2024 14:25:46 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "On Tue, Feb 27, 2024 at 8:25 AM Alvaro Herrera <[email protected]> wrote:\n> The way I see this working, is that we set up a buildfarm animal [per\n> architecture] that runs the new rules produced by the abidw option and\n> stores the result locally, so that for stable branches it can turn red\n> when an ABI-breaking change with the previous minor release of the same\n> branch is introduced. There's no point on it ever turning red in the\n> master branch, since we're obviously not concerned with ABI changes there.\n\nABI stability doesn't seem like something that you can alert on. There\nare quite a few individual cases where the ABI was technically broken,\nin a way that these tools will complain about. And yet it was\ngenerally understood that these changes did not really break ABI\nstability, for various high-context reasons that no tool can possibly\nbe expected to understand. This will at least be true under our\nexisting practices, or anything like them.\n\nFor example, if you look at the \"Problems with Symbols, High Severity\"\nfrom the report I posted comparing REL_11_0 to REL_11_20, you'll see\nthat I removed _bt_pagedel() when backpatching a fix. That was\njustified by the fact that any extension that was calling that\nfunction was already hopelessly broken (I pointed this out at the\ntime).\n\nHaving some tooling in this area would be extremely useful. The\nabsolute number of false positives seems quite manageable, but the\nfact is that most individual complaints that the tooling makes are\nfalse positives. At least in some deeper sense.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 27 Feb 2024 08:45:12 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "On 2024-Feb-27, Peter Geoghegan wrote:\n\n> On Tue, Feb 27, 2024 at 8:25 AM Alvaro Herrera <[email protected]> wrote:\n> > The way I see this working, is that we set up a buildfarm animal [per\n> > architecture] that runs the new rules produced by the abidw option and\n> > stores the result locally, so that for stable branches it can turn red\n> > when an ABI-breaking change with the previous minor release of the same\n> > branch is introduced. There's no point on it ever turning red in the\n> > master branch, since we're obviously not concerned with ABI changes there.\n> \n> ABI stability doesn't seem like something that you can alert on.\n\nEh, I disagree. Since you can add suppression rules to the tree, I'd\nsay it definitely is.\n\nIf you commit something and it breaks ABI, we want to know as soon as\npossible -- for example suppose the ABI break occurs during a security\npatch at release time; if we get an alert about it immediately, we still\nhave time to fix it before the mess is released.\n\nNow, if you have an intentional ABI break, then you can let the testing\nsystem know about it so that it won't complain. We could for example\nhave src/abi/suppressions/REL_11_8.ini and\nsrc/abi/suppressions/REL_12_3.ini files (in the respective branches)\nwith the _bt_pagedel() change. You can add this file together with the\ncommit that introduces the change, if you know about it ahead of time,\nor as a separate commit after the buildfarm animal turns red. Or you\ncan fix your ABI break, if -- as is most likely -- it was unintentional.\n\nAgain -- this only matters for stable branches. We don't need to do\nanything for the master branch, as it would be far too noisy if we did\nthat.\n\nNow, maybe a buildfarm animal is not the right tool, and instead we need\na separate system that tests for it and emails pg-hackers when it breaks\nor whatever. That's fine with me, but it seems a pretty minor\nimplementation detail.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 27 Feb 2024 15:03:32 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "On Tue, Feb 27, 2024 at 9:03 AM Alvaro Herrera <[email protected]> wrote:\n> Now, maybe a buildfarm animal is not the right tool, and instead we need\n> a separate system that tests for it and emails pg-hackers when it breaks\n> or whatever. That's fine with me, but it seems a pretty minor\n> implementation detail.\n\nAnything that alerts on breakage is pretty much equivalent to having a\nbuildfarm animal.\n\nI have a feeling that there are going to be real problems with\nalerting, at least if it's introduced right away. I'd feel much better\nabout it if there was an existing body of suppressions, that more or\nless worked as a reference of agreed upon best practices. Can we do\nthat part first, rather than starting out with a blanket assumption\nthat everything that happened before now must have been perfect?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 27 Feb 2024 09:22:22 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "On 2024-Feb-27, Peter Geoghegan wrote:\n\n> I have a feeling that there are going to be real problems with\n> alerting, at least if it's introduced right away. I'd feel much better\n> about it if there was an existing body of suppressions, that more or\n> less worked as a reference of agreed upon best practices. Can we do\n> that part first, rather than starting out with a blanket assumption\n> that everything that happened before now must have been perfect?\n\nWell, I was describing a possible plan, not saying that we have to\nassume we've been perfect all along. I think the first step should be\nto add the tooling now (Meson rules as in Peter E's 0001 patch\nupthread, or something equivalent), then figure out what suppressions we\nneed in the supported back branches. This would let us build the corpus\nof best practices you want, I think.\n\nOnce we have clean runs with those, we can add BF animals or whatever.\nThe alerts don't have to be the first step. In fact, we can wait even\nlonger for the alerts.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 27 Feb 2024 15:34:11 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "On 27.02.24 14:25, Alvaro Herrera wrote:\n> I like this idea and I think we should integrate it with the objective\n> of it becoming the workhorse of ABI-stability testing. However, I do\n> not think that the subsequent patches should be part of the tree at all;\n> certainly not the produced .xml files in your 0004, as that would be far\n> too unstable and would cause a lot of pointless churn.\n\nLooking at this again, if we don't want the .xml files in the tree, then \nwe don't really need this patch series. Most of the delicate work in \nthe 0001 patch was to find the right abidw options combinations to \nreduce the variability in the .xml output files (--no-show-locs is an \nobvious example). If we don't want to record the .xml files in the \ntree, then we don't need all these complications.\n\nFor example, if we want to check the postgres backend ABI across minor \nversions, we could just compile it multiple times and compare the \nbinaries directly:\n\ngit checkout REL_16_0\nmeson setup build-0\nmeson compile -C build-0\n\ngit checkout REL_16_STABLE\nmeson setup build-1\nmeson compile -C build-1\n\nabidiff --no-added-syms build-0/src/backend/postgres \nbuild-1/src/backend/postgres\n\n\n> The way I see this working, is that we set up a buildfarm animal [per\n> architecture] that runs the new rules produced by the abidw option and\n> stores the result locally, so that for stable branches it can turn red\n> when an ABI-breaking change with the previous minor release of the same\n> branch is introduced. There's no point on it ever turning red in the\n> master branch, since we're obviously not concerned with ABI changes there.\n\nMaybe the way forward here is to write a buildfarm module for this, and \nthen see from there what further needs there are.\n\n\n\n",
"msg_date": "Mon, 4 Mar 2024 13:50:32 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
},
{
"msg_contents": "On Mon, Mar 4, 2024 at 7:50 AM Peter Eisentraut <[email protected]> wrote:\n> Looking at this again, if we don't want the .xml files in the tree, then\n> we don't really need this patch series.\n\nBased on this, I've updated the status of this patch in the CommitFest\napplication to Withdrawn. If that's not correct, please feel free to\nadjust.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Mar 2024 10:38:33 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: abi-compliance-checker"
}
] |
[
{
"msg_contents": "Hi, hackers.\nIn PostgreSQL 16 Beta 1, standalone backend was added to the backend type by this patch [1]. I think this patch will change the value of backend_type column in the pg_stat_activity view, but it's not explained in the documentation. The attached patch fixes monitoring.sgml.\n\n[1] Add BackendType for standalone backends\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=0c679464a837079acc75ff1d45eaa83f79e05690\n\nRegards, \nNoriyoshi Shinoda",
"msg_date": "Mon, 29 May 2023 00:39:08 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[16Beta1][doc] Add BackendType for standalone backends"
},
{
"msg_contents": "> On 29 May 2023, at 02:39, Shinoda, Noriyoshi (PN Japan FSIP) <[email protected]> wrote:\n\n> In PostgreSQL 16 Beta 1, standalone backend was added to the backend type by this patch [1]. I think this patch will change the value of backend_type column in the pg_stat_activity view, but it's not explained in the documentation. The attached patch fixes monitoring.sgml.\n\nNice catch, the documentation should indeed be updated with this to reflect the\npossible values. Will fix.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 30 May 2023 10:01:08 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [16Beta1][doc] Add BackendType for standalone backends"
}
] |
[
{
"msg_contents": "hi.\nreading through contrib/spi/insert_username.c\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=contrib/spi/insert_username.c;h=a2e1747ff74c7667665dcc334f54ad368885d83c;hb=HEAD\n36\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=contrib/spi/insert_username.c;h=a2e1747ff74c7667665dcc334f54ad368885d83c;hb=HEAD#l36>\n /* sanity checks from autoinc.c */\n37\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=contrib/spi/insert_username.c;h=a2e1747ff74c7667665dcc334f54ad368885d83c;hb=HEAD#l37>\n if (!CALLED_AS_TRIGGER(fcinfo))\n38\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=contrib/spi/insert_username.c;h=a2e1747ff74c7667665dcc334f54ad368885d83c;hb=HEAD#l38>\n /* internal error */\n39\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=contrib/spi/insert_username.c;h=a2e1747ff74c7667665dcc334f54ad368885d83c;hb=HEAD#l39>\n elog(ERROR, \"insert_username: not fired by trigger manager\");\n\nshould it be /* sanity checks from insert_username.c */ ?\n\nhi.reading through contrib/spi/insert_username.chttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=contrib/spi/insert_username.c;h=a2e1747ff74c7667665dcc334f54ad368885d83c;hb=HEAD 36 /* sanity checks from autoinc.c */ 37 if (!CALLED_AS_TRIGGER(fcinfo)) 38 /* internal error */ 39 elog(ERROR, \"insert_username: not fired by trigger manager\");should it be /* sanity checks from insert_username.c */ ?",
"msg_date": "Mon, 29 May 2023 12:31:37 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": true,
"msg_subject": "contrib/spi/insert_username.c comment typo?"
},
{
"msg_contents": "On Mon, May 29, 2023 at 11:31 AM jian he <[email protected]>\nwrote:\n> hi.\n> reading through contrib/spi/insert_username.c\n>\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=contrib/spi/insert_username.c;h=a2e1747ff74c7667665dcc334f54ad368885d83c;hb=HEAD\n> 36 /* sanity checks from autoinc.c */\n> 37 if (!CALLED_AS_TRIGGER(fcinfo))\n> 38 /* internal error */\n> 39 elog(ERROR, \"insert_username: not fired by trigger manager\");\n>\n> should it be /* sanity checks from insert_username.c */ ?\n\nI believe it's saying the checks were copied from autoinc.c.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Mon, May 29, 2023 at 11:31 AM jian he <[email protected]> wrote:> hi.> reading through contrib/spi/insert_username.c> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=contrib/spi/insert_username.c;h=a2e1747ff74c7667665dcc334f54ad368885d83c;hb=HEAD> 36 /* sanity checks from autoinc.c */> 37 if (!CALLED_AS_TRIGGER(fcinfo))> 38 /* internal error */> 39 elog(ERROR, \"insert_username: not fired by trigger manager\");>> should it be /* sanity checks from insert_username.c */ ?I believe it's saying the checks were copied from autoinc.c.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 29 May 2023 11:43:13 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: contrib/spi/insert_username.c comment typo?"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nPlease find attached a patch proposal to $SUBJECT.\n\nIndeed, we have seen occurrences in [1] that some slots were\nnot invalidated (while we expected vacuum to remove dead rows\nleading to slots invalidation on the standby).\n\nThough we don't have strong evidences that this\nwas due to transactions holding back global xmin (as vacuum did\nnot run in verbose mode), suspicion is high enough (as Tom pointed\nout that the test is broken on its face (see [1])).\n\nThe proposed patch:\n\n- set autovacuum = off on the primary (as autovacuum is the usual suspect\nfor holding global xmin).\n- Ensure that vacuum is able to remove dead rows before launching\nthe slots invalidation tests.\n- If after 10 attempts the vacuum is still not able to remove the dead\nrows then the slots invalidation tests are skipped: that should be pretty\nrare, as currently the majority of the tests are green (with only one attempt).\n\nWhile at it, the patch also addresses the nitpicks mentioned by Robert in [2].\n\n[1]: https://www.postgresql.org/message-id/flat/OSZPR01MB6310CFFD7D0DCD60A05DB1C3FD4A9%40OSZPR01MB6310.jpnprd01.prod.outlook.com#71898e088d2a57564a1bd9c41f3e6f36\n[2]: https://www.postgresql.org/message-id/CA%2BTgmobHGpU2ZkChgKifGDLaf_%2BmFA7njEpeTjfyNf_msCZYew%40mail.gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 30 May 2023 12:34:30 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Test slots invalidations in 035_standby_logical_decoding.pl only if\n dead rows are removed"
},
{
"msg_contents": "Hi,\n\nOn 5/30/23 12:34 PM, Drouvot, Bertrand wrote:\n> Hi hackers,\n> \n> Please find attached a patch proposal to $SUBJECT.\n> \n> Indeed, we have seen occurrences in [1] that some slots were\n> not invalidated (while we expected vacuum to remove dead rows\n> leading to slots invalidation on the standby).\n> \n> Though we don't have strong evidences that this\n> was due to transactions holding back global xmin (as vacuum did\n> not run in verbose mode), suspicion is high enough (as Tom pointed\n> out that the test is broken on its face (see [1])).\n> \n> The proposed patch:\n> \n> - set autovacuum = off on the primary (as autovacuum is the usual suspect\n> for holding global xmin).\n> - Ensure that vacuum is able to remove dead rows before launching\n> the slots invalidation tests.\n> - If after 10 attempts the vacuum is still not able to remove the dead\n> rows then the slots invalidation tests are skipped: that should be pretty\n> rare, as currently the majority of the tests are green (with only one attempt).\n> \n> While at it, the patch also addresses the nitpicks mentioned by Robert in [2].\n> \n> [1]: https://www.postgresql.org/message-id/flat/OSZPR01MB6310CFFD7D0DCD60A05DB1C3FD4A9%40OSZPR01MB6310.jpnprd01.prod.outlook.com#71898e088d2a57564a1bd9c41f3e6f36\n> [2]: https://www.postgresql.org/message-id/CA%2BTgmobHGpU2ZkChgKifGDLaf_%2BmFA7njEpeTjfyNf_msCZYew%40mail.gmail.com\n> \n\nPlease find attached V2 that, instead of the above proposal, waits for a new snapshot\nthat has a newer horizon before doing the vacuum (as proposed by Andres in [1]).\n\nSo, V2:\n\n- set autovacuum = off on the primary (as autovacuum is the usual suspect\nfor holding global xmin).\n- waits for a new snapshot that has a newer horizon before doing the vacuum(s).\n- addresses the nitpicks mentioned by Robert in [2].\n\nV2 also keeps the verbose mode for the vacuum(s) (as done in V1), as it may help\nfor further analysis if needed.\n\n[1]: https://www.postgresql.org/message-id/20230530152426.ensapay7pozh7ozn%40alap3.anarazel.de\n[2]: https://www.postgresql.org/message-id/CA%2BTgmobHGpU2ZkChgKifGDLaf_%2BmFA7njEpeTjfyNf_msCZYew%40mail.gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 31 May 2023 12:14:53 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "Hello Michael and Bertrand,\n\nI'd also like to note that even with FREEZE added [1], I happened to see\nthe test failure:\n5 # Failed test 'inactiveslot slot invalidation is logged with vacuum on pg_class'\n5 # at t/035_standby_logical_decoding.pl line 222.\n5\n5 # Failed test 'activeslot slot invalidation is logged with vacuum on pg_class'\n5 # at t/035_standby_logical_decoding.pl line 227.\n\nwhere 035_standby_logical_decoding_primary.log contains:\n...\n2024-01-09 07:44:26.480 UTC [820142] 035_standby_logical_decoding.pl LOG: statement: DROP TABLE conflict_test;\n2024-01-09 07:44:26.687 UTC [820142] 035_standby_logical_decoding.pl LOG: statement: VACUUM (VERBOSE, FREEZE) pg_class;\n2024-01-09 07:44:26.687 UTC [820142] 035_standby_logical_decoding.pl INFO: aggressively vacuuming \n\"testdb.pg_catalog.pg_class\"\n2024-01-09 07:44:27.099 UTC [820143] DEBUG: autovacuum: processing database \"testdb\"\n2024-01-09 07:44:27.102 UTC [820142] 035_standby_logical_decoding.pl INFO: finished vacuuming \n\"testdb.pg_catalog.pg_class\": index scans: 1\n pages: 0 removed, 11 remain, 11 scanned (100.00% of total)\n tuples: 0 removed, 423 remain, 4 are dead but not yet removable\n removable cutoff: 762, which was 2 XIDs old when operation ended\n new relfrozenxid: 762, which is 2 XIDs ahead of previous value\n frozen: 1 pages from table (9.09% of total) had 1 tuples frozen\n....\n\nThus just adding FREEZE is not enough, seemingly. It makes me wonder if\n0174c2d21 should be superseded by a patch like discussed (or just have\nautovacuum = off added)...\n\n09.01.2024 07:59, Michael Paquier wrote:\n> Alexander, does the test gain in stability once you begin using the\n> patch posted on [2], mentioned by Bertrand?\n>\n> (Also, perhaps we'd better move the discussion to the other thread\n> where the patch has been sent.)\n>\n> [2]: https://www.postgresql.org/message-id/[email protected]\n\n09.01.2024 08:29, Bertrand Drouvot wrote:\n> Alexander, pleae find attached v3 which is more or less a rebased version of it.\n\nBertrand, thank you for updating the patch!\n\nMichael, it definitely increases stability of the test (tens of iterations\nwith 20 tests in parallel performed successfully), although I've managed to\nsee another interesting failure (twice):\n13 # Failed test 'activeslot slot invalidation is logged with vacuum on pg_class'\n13 # at t/035_standby_logical_decoding.pl line 227.\n\npsql:<stdin>:1: INFO: vacuuming \"testdb.pg_catalog.pg_class\"\npsql:<stdin>:1: INFO: finished vacuuming \"testdb.pg_catalog.pg_class\": index scans: 1\npages: 0 removed, 11 remain, 11 scanned (100.00% of total)\ntuples: 4 removed, 419 remain, 0 are dead but not yet removable\nremovable cutoff: 754, which was 0 XIDs old when operation ended\n...\nWaiting for replication conn standby's replay_lsn to pass 0/403E6F8 on primary\n\nBut I see no VACUUM records in WAL:\nrmgr: Transaction len (rec/tot): 222/ 222, tx: 0, lsn: 0/0403E468, prev 0/0403E370, desc: INVALIDATION ; \ninval msgs: catcache 55 catcache 54 catcache 55 catcache 54 catcache 55 catcache 54 catcache 55 catcache 54 relcache \n2662 relcache 2663 relcache 3455 relcache 1259\nrmgr: Standby len (rec/tot): 234/ 234, tx: 0, lsn: 0/0403E548, prev 0/0403E468, desc: INVALIDATIONS ; \nrelcache init file inval dbid 16384 tsid 1663; inval msgs: catcache 55 catcache 54 catcache 55 catcache 54 catcache 55 \ncatcache 54 catcache 55 catcache 54 relcache 2662 relcache 2663 relcache 3455 relcache 1259\nrmgr: Heap len (rec/tot): 60/ 140, tx: 754, lsn: 0/0403E638, 
prev 0/0403E548, desc: INSERT off: 2, \nflags: 0x08, blkref #0: rel 1663/16384/16385 blk 0 FPW\nrmgr: Transaction len (rec/tot): 46/ 46, tx: 754, lsn: 0/0403E6C8, prev 0/0403E638, desc: COMMIT \n2024-01-09 13:40:59.873385 UTC\nrmgr: Standby len (rec/tot): 50/ 50, tx: 0, lsn: 0/0403E6F8, prev 0/0403E6C8, desc: RUNNING_XACTS \nnextXid 755 latestCompletedXid 754 oldestRunningXid 755\nrmgr: XLOG len (rec/tot): 30/ 30, tx: 0, lsn: 0/0403E730, prev 0/0403E6F8, desc: CHECKPOINT_REDO\nrmgr: Standby len (rec/tot): 50/ 50, tx: 0, lsn: 0/0403E750, prev 0/0403E730, desc: RUNNING_XACTS \nnextXid 755 latestCompletedXid 754 oldestRunningXid 755\nrmgr: XLOG len (rec/tot): 114/ 114, tx: 0, lsn: 0/0403E788, prev 0/0403E750, desc: \nCHECKPOINT_ONLINE redo 0/403E730; tli 1; prev tli 1; fpw true; xid 0:755; oid 24576; multi 1; offset 0; oldest xid 728 \nin DB 1; oldest multi 1 in DB 1; oldest/newest commit timestamp xid: 0/0; oldest running xid 755; online\nrmgr: Standby len (rec/tot): 50/ 50, tx: 0, lsn: 0/0403E800, prev 0/0403E788, desc: RUNNING_XACTS \nnextXid 755 latestCompletedXid 754 oldestRunningXid 755\n\n(Full logs are attached.)\n\n[1] https://www.postgresql.org/message-id/4fd52508-54d7-0202-5bd3-546c2295967f%40gmail.com\n\nBest regards,\nAlexander",
"msg_date": "Tue, 9 Jan 2024 20:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "On Tue, Jan 09, 2024 at 08:00:00PM +0300, Alexander Lakhin wrote:\n> Thus just adding FREEZE is not enough, seemingly. It makes me wonder if\n> 0174c2d21 should be superseded by a patch like discussed (or just have\n> autovacuum = off added)...\n\nAdding an extra FREEZE offers an extra insurance, so I don't see why\nit would be a problem to add it to stabilize the horizon conflicts on\nthe standbys.\n\n> 09.01.2024 07:59, Michael Paquier wrote:\n> Bertrand, thank you for updating the patch!\n> \n> Michael, it definitely increases stability of the test (tens of iterations\n> with 20 tests in parallel performed successfully), although I've managed to\n> see another interesting failure (twice):\n> 13 # Failed test 'activeslot slot invalidation is logged with vacuum on pg_class'\n> 13 # at t/035_standby_logical_decoding.pl line 227.\n\nSomething I'd like to confirm here: you still see this failure with\nthe patch, but without an extra FREEZE, right? If we do both, the\ntest would get more stable, wouldn't it?\n--\nMichael",
"msg_date": "Wed, 10 Jan 2024 13:26:14 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "Hi,\n\nOn Tue, Jan 09, 2024 at 08:00:00PM +0300, Alexander Lakhin wrote:\n> Michael, it definitely increases stability of the test (tens of iterations\n> with 20 tests in parallel performed successfully),\n\nThanks for testing!\n\n> although I've managed to\n> see another interesting failure (twice):\n> 13����� #�� Failed test 'activeslot slot invalidation is logged with vacuum on pg_class'\n> 13����� #�� at t/035_standby_logical_decoding.pl line 227.\n> \n\nLooking at the attached log files and particularly 1/regress_log_035_standby_logical_decoding:\n\n\"\n[11:05:28.118](13.993s) ok 24 - inactiveslot slot invalidation is logged with vacuum on pg_class\n[11:05:28.119](0.001s) not ok 25 - activeslot slot invalidation is logged with vacuum on pg_class\n\"\n\nThat seems weird, the inactive slot has been invalidated while the active one is not.\nWhile it takes a bit longer to invalidate an active slot, I don't think the test can\nmove on until both are invalidated (then leading to the tests 24 and 25)). I can\nsee the tests are very slow to run (13.993s for 24) but still don't get how 24 could\nsucceed while 25 does not.\n\nLooking at 2/regress_log_035_standby_logical_decoding:\n\n\"\n[13:41:02.076](20.279s) ok 23 - inactiveslot slot invalidation is logged with vacuum on pg_class\n[13:41:02.076](0.000s) not ok 24 - activeslot slot invalidation is logged with vacuum on pg_class\n\"\n\nSame \"weird\" behavior but this time the tests numbering are not the same (23 and 24).\nThat is even more weird as those tests should be the 24 and 25 ones.\n\nWould it be possible to also send the standby logs?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 10 Jan 2024 09:46:52 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "Hi,\n\n10.01.2024 07:26, Michael Paquier wrote:\n> On Tue, Jan 09, 2024 at 08:00:00PM +0300, Alexander Lakhin wrote:\n>> Thus just adding FREEZE is not enough, seemingly. It makes me wonder if\n>> 0174c2d21 should be superseded by a patch like discussed (or just have\n>> autovacuum = off added)...\n> Adding an extra FREEZE offers an extra insurance, so I don't see why\n> it would be a problem to add it to stabilize the horizon conflicts on\n> the standbys.\n\nAs 0174c2d21 added FREEZE already, I meant to add \"autovacuum = off\" or\napply a fix similar to what we're are discussing here.\n\n>\n>> Michael, it definitely increases stability of the test (tens of iterations\n>> with 20 tests in parallel performed successfully), although I've managed to\n>> see another interesting failure (twice):\n>> 13 # Failed test 'activeslot slot invalidation is logged with vacuum on pg_class'\n>> 13 # at t/035_standby_logical_decoding.pl line 227.\n> Something I'd like to confirm here: you still see this failure with\n> the patch, but without an extra FREEZE, right? If we do both, the\n> test would get more stable, wouldn't it?\n\nYes, I tested the patch as-is, without FREEZE, but it looks like it doesn't\nmatter in that case. And sorry for misleading information about missing\nVACUUM records in my previous message, please ignore it.\n\n10.01.2024 12:46, Bertrand Drouvot wrote:\n\n>> although I've managed to\n>> see another interesting failure (twice):\n>> 13 # Failed test 'activeslot slot invalidation is logged with vacuum on pg_class'\n>> 13 # at t/035_standby_logical_decoding.pl line 227.\n>>\n> Looking at the attached log files and particularly 1/regress_log_035_standby_logical_decoding:\n>\n> \"\n> [11:05:28.118](13.993s) ok 24 - inactiveslot slot invalidation is logged with vacuum on pg_class\n> [11:05:28.119](0.001s) not ok 25 - activeslot slot invalidation is logged with vacuum on pg_class\n> \"\n>\n> That seems weird, the inactive slot has been invalidated while the active one is not.\n> While it takes a bit longer to invalidate an active slot, I don't think the test can\n> move on until both are invalidated (then leading to the tests 24 and 25)). I can\n> see the tests are very slow to run (13.993s for 24) but still don't get how 24 could\n> succeed while 25 does not.\n>\n> ...\n>\n> Would it be possible to also send the standby logs?\n\nYes, please look at the attached logs. 
This time I've build postgres with\n-DWAL_DEBUG and ran tests with TEMP_CONFIG as below:\nwal_keep_size=1GB\nwal_debug = on\nlog_autovacuum_min_duration = 0\nlog_statement = 'all'\nlog_min_messages = INFO\n\nThe archive attached contains logs from four runs:\nrecovery-1-ok -- an example of successful run for reference\nrecovery-7-pruning and recovery-19-pruning -- failures with a failed\n subtest 'activeslot slot invalidation is logged with on-access pruning'\nrecovery-15-vacuum_pg_class -- a failure with a failed\n subtest 'activeslot slot invalidation is logged with vacuum on pg_class'\n\nThe distinction that I see in the failed run logs, for example in\nrecovery-15-vacuum_pg_class, 035_standby_logical_decoding_standby.log:\n2024-01-10 11:00:18.700 UTC [789618] LOG: REDO @ 0/4020220; LSN 0/4020250: prev 0/401FDE0; xid 753; len 20 - \nTransaction/COMMIT: 2024-01-10 11:00:18.471694+00\n2024-01-10 11:00:18.797 UTC [789618] LOG: REDO @ 0/4020250; LSN 0/4020288: prev 0/4020220; xid 0; len 24 - \nStandby/RUNNING_XACTS: nextXid 754 latestCompletedXid 753 oldestRunningXid 754\n2024-01-10 11:00:19.013 UTC [789618] LOG: REDO @ 0/4020288; LSN 0/40202C8: prev 0/4020250; xid 0; len 9; blkref #0: rel \n1663/16384/2610, blk 0 - Heap2/PRUNE: snapshotConflictHorizon: 752, nredirected: 0, ndead: 1, isCatalogRel: T, nunused: \n0, redirected: [], dead: [48], unused: []\n2024-01-10 11:00:19.111 UTC [789618] LOG: invalidating obsolete replication slot \"row_removal_inactiveslot\"\n2024-01-10 11:00:19.111 UTC [789618] DETAIL: The slot conflicted with xid horizon 752.\n2024-01-10 11:00:19.111 UTC [789618] CONTEXT: WAL redo at 0/4020288 for Heap2/PRUNE: snapshotConflictHorizon: 752, \nnredirected: 0, ndead: 1, isCatalogRel: T, nunused: 0, redirected: [], dead: [48], unused: []; blkref #0: rel \n1663/16384/2610, blk 0\n2024-01-10 11:00:29.109 UTC [789618] LOG: terminating process 790377 to release replication slot \"row_removal_activeslot\"\n2024-01-10 11:00:29.109 UTC [789618] DETAIL: The slot conflicted with xid horizon 752.\n2024-01-10 11:00:29.109 UTC [789618] CONTEXT: WAL redo at 0/4020288 for Heap2/PRUNE: snapshotConflictHorizon: 752, \nnredirected: 0, ndead: 1, isCatalogRel: T, nunused: 0, redirected: [], dead: [48], unused: []; blkref #0: rel \n1663/16384/2610, blk 0\n2024-01-10 11:00:30.144 UTC [790377] 035_standby_logical_decoding.pl ERROR: canceling statement due to conflict with \nrecovery\n2024-01-10 11:00:30.144 UTC [790377] 035_standby_logical_decoding.pl DETAIL: User was using a logical replication slot \nthat must be invalidated.\n2024-01-10 11:00:30.144 UTC [790377] 035_standby_logical_decoding.pl STATEMENT: START_REPLICATION SLOT \n\"row_removal_activeslot\" LOGICAL 0/0 (\"include-xids\" '0', \"skip-empty-xacts\" '1')\n2024-01-10 11:00:30.144 UTC [790377] 035_standby_logical_decoding.pl LOG: released logical replication slot \n\"row_removal_activeslot\"\n\nis an absent message 'obsolete replication slot \"row_removal_activeslot\"'\nand an additional record 'Standby/RUNNING_XACTS', which can be found in\n035_standby_logical_decoding_primary.log:\n2024-01-10 11:00:18.515 UTC [783410] LOG: xlog bg flush request write 0/4020250; flush: 0/4020250, current is write \n0/4020220; flush 0/4020220\n2024-01-10 11:00:18.646 UTC [783387] LOG: INSERT @ 0/4020288: - Standby/RUNNING_XACTS: nextXid 754 latestCompletedXid \n753 oldestRunningXid 754\n2024-01-10 11:00:18.702 UTC [790526] 035_standby_logical_decoding.pl LOG: statement: SELECT (select \ntxid_snapshot_xmin(txid_current_snapshot()) - 753) > 
0\n2024-01-10 11:00:18.724 UTC [783410] LOG: xlog bg flush request write 0/4020288; flush: 0/4020288, current is write \n0/4020250; flush 0/4020250\n\nSo perhaps it can affect an active slot invalidation?\n\nBest regards,\nAlexander",
"msg_date": "Wed, 10 Jan 2024 17:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "Hi,\n\nOn Wed, Jan 10, 2024 at 05:00:01PM +0300, Alexander Lakhin wrote:\n> 10.01.2024 12:46, Bertrand Drouvot wrote:\n> \n> > Would it be possible to also send the standby logs?\n> \n> Yes, please look at the attached logs. This time I've build postgres with\n> -DWAL_DEBUG and ran tests with TEMP_CONFIG as below:\n> wal_keep_size=1GB\n> wal_debug = on\n> log_autovacuum_min_duration = 0\n> log_statement = 'all'\n> log_min_messages = INFO\n> \n> The archive attached contains logs from four runs:\n> recovery-1-ok -- an example of successful run for reference\n> recovery-7-pruning and recovery-19-pruning -- failures with a failed\n> �subtest 'activeslot slot invalidation is logged with on-access pruning'\n> recovery-15-vacuum_pg_class -- a failure with a failed\n> �subtest 'activeslot slot invalidation is logged with vacuum on pg_class'\n\nThanks a lot for the testing!\n\n> is an absent message 'obsolete replication slot \"row_removal_activeslot\"'\n\nLooking at recovery-15-vacuum_pg_class/i035_standby_logical_decoding_standby.log here:\n\nYeah, the missing message has to come from InvalidatePossiblyObsoleteSlot().\n\nIn case of an active slot we first call ReportSlotInvalidation with the second\nparameter set to true (to emit the \"terminating\" message), then SIGTERM the active\nprocess and then (later) we should call the other ReportSlotInvalidation()\ncall with the second parameter set to false (to emit the message that we don't\nsee here).\n\nSo it means InvalidatePossiblyObsoleteSlot() did not trigger the second ReportSlotInvalidation()\ncall. \n\nThe thing it that it looks like we exited the loop in InvalidatePossiblyObsoleteSlot()\nbecause there is more messages from the startup process (789618) after the:\n\n\n\"\n2024-01-10 11:00:29.109 UTC [789618] LOG: terminating process 790377 to release replication slot \"row_removal_activeslot\"\n\"\n\none.\n\nDo you think you could try to add more debug messages in InvalidatePossiblyObsoleteSlot()\nto understand why the second call to ReportSlotInvalidation() is not done and IIUC\nwhy/how we exited the loop?\n\nFWIW, I did try to reproduce by launching pg_recvlogical and then kill -SIGSTOP\nit. Then producing a conflict, I'm able to get the first message and not the second\none (which is expected). But the startup process does not exit the loop, which is\nexpected here.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 10 Jan 2024 16:32:36 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "Hi Bertrand,\n\n10.01.2024 19:32, Bertrand Drouvot wrote:\n>\n>> is an absent message 'obsolete replication slot \"row_removal_activeslot\"'\n> Looking at recovery-15-vacuum_pg_class/i035_standby_logical_decoding_standby.log here:\n>\n> Yeah, the missing message has to come from InvalidatePossiblyObsoleteSlot().\n>\n> In case of an active slot we first call ReportSlotInvalidation with the second\n> parameter set to true (to emit the \"terminating\" message), then SIGTERM the active\n> process and then (later) we should call the other ReportSlotInvalidation()\n> call with the second parameter set to false (to emit the message that we don't\n> see here).\n>\n> So it means InvalidatePossiblyObsoleteSlot() did not trigger the second ReportSlotInvalidation()\n> call.\n\nI've found a way to reproduce the issue without slowing down a machine or\nrunning multiple tests in parallel. It's enough for this to add a delay to\nallow BackgroundWriterMain() to execute LogStandbySnapshot():\n@@ -692,2 +690,3 @@\n $node_primary->safe_psql('testdb', qq[UPDATE prun SET s = 'D';]);\n+$node_primary->safe_psql('testdb', qq[SELECT pg_sleep(15);]);\n $node_primary->safe_psql('testdb', qq[UPDATE prun SET s = 'E';]);\n\nWith this delay, I get the failure immediately:\n$ PROVE_TESTS=\"t/035*\" TEMP_CONFIG=/tmp/extra.config make check -s -C src/test/recovery\n# +++ tap check in src/test/recovery +++\nt/035_standby_logical_decoding.pl .. 47/?\n# Failed test 'activeslot slot invalidation is logged with on-access pruning'\n# at t/035_standby_logical_decoding.pl line 227.\n\n_primary.log contains:\n2024-01-11 09:37:01.731 UTC [67656] 035_standby_logical_decoding.pl STATEMENT: UPDATE prun SET s = 'D';\n2024-01-11 09:37:01.738 UTC [67664] 035_standby_logical_decoding.pl LOG: statement: SELECT pg_sleep(15);\n2024-01-11 09:37:01.738 UTC [67664] 035_standby_logical_decoding.pl LOG: xlog flush request 0/404D8F0; write 0/0; flush \n0/0 at character 8\n2024-01-11 09:37:01.738 UTC [67664] 035_standby_logical_decoding.pl CONTEXT: writing block 14 of relation base/16384/1247\n2024-01-11 09:37:01.738 UTC [67664] 035_standby_logical_decoding.pl STATEMENT: SELECT pg_sleep(15);\n2024-01-11 09:37:01.905 UTC [67204] LOG: xlog flush request 0/404DA58; write 0/404BB00; flush 0/404BB00\n2024-01-11 09:37:01.905 UTC [67204] CONTEXT: writing block 4 of relation base/16384/2673\n2024-01-11 09:37:12.514 UTC [67204] LOG: INSERT @ 0/4057768: - Standby/RUNNING_XACTS: nextXid 769 latestCompletedXid \n768 oldestRunningXid 769\n2024-01-11 09:37:12.514 UTC [67206] LOG: xlog bg flush request write 0/4057768; flush: 0/4057768, current is write \n0/4057730; flush 0/4057730\n2024-01-11 09:37:16.760 UTC [67712] 035_standby_logical_decoding.pl LOG: statement: UPDATE prun SET s = 'E';\n2024-01-11 09:37:16.760 UTC [67712] 035_standby_logical_decoding.pl LOG: INSERT @ 0/40577A8: - Heap2/PRUNE: \nsnapshotConflictHorizon: 768,...\n\nNote RUNNING_XACTS here...\n\n_standby.log contains:\n2024-01-11 09:37:16.842 UTC [67606] LOG: invalidating obsolete replication slot \"pruning_inactiveslot\"\n2024-01-11 09:37:16.842 UTC [67606] DETAIL: The slot conflicted with xid horizon 768.\n2024-01-11 09:37:16.842 UTC [67606] CONTEXT: WAL redo at 0/4057768 for Heap2/PRUNE: snapshotConflictHorizon: 768, ...\nand no 'invalidating obsolete replication slot \"pruning_activeslot\"' below.\n\nDebug logging added (see attached) gives more information:\n2024-01-11 09:37:16.842 UTC [67606] LOG: invalidating obsolete replication slot \"pruning_inactiveslot\"\n2024-01-11 
09:37:16.842 UTC [67606] DETAIL: The slot conflicted with xid horizon 768.\n...\n2024-01-11 09:37:16.842 UTC [67606] LOG: !!!InvalidatePossiblyObsoleteSlot| RS_INVAL_HORIZON, s: 0x7f7985475c10, \ns->effective_xmin: 0, s->effective_catalog_xmin: 769, snapshotConflictHorizon: 768\n...\n2024-01-11 09:37:16.842 UTC [67606] LOG: !!!InvalidatePossiblyObsoleteSlot| conflict: 0\n\nso the condition TransactionIdPrecedesOrEquals(s->effective_catalog_xmin,\n snapshotConflictHorizon) is not satisfied, hence conflict = 0 and it breaks\nthe loop in InvalidatePossiblyObsoleteSlot().\nSeveral lines above in the log we can see:\n2024-01-11 09:37:12.514 UTC [67606] LOG: REDO @ 0/4057730; LSN 0/4057768: prev 0/4057700; xid 0; len 24 - \nStandby/RUNNING_XACTS: nextXid 769 latestCompletedXid 768 oldestRunningXid 769\n2024-01-11 09:37:12.540 UTC [67643] 035_standby_logical_decoding.pl LOG: !!!LogicalConfirmReceivedLocation| \nMyReplicationSlot: 0x7f7985475c10, MyReplicationSlot->effective_catalog_xmin: 769\n\nand that's the first occurrence of xid 769 in the log.\n\nThe decoded stack trace for the LogicalConfirmReceivedLocation call is:\nogicalConfirmReceivedLocation at logical.c:1886:1\nProcessStandbyReplyMessage at walsender.c:2327:1\nProcessStandbyMessage at walsender.c:2188:1\nProcessRepliesIfAny at walsender.c:2121:5\nWalSndWaitForWal at walsender.c:1735:7\nlogical_read_xlog_page at walsender.c:1068:13\nReadPageInternal at xlogreader.c:1062:12\nXLogDecodeNextRecord at xlogreader.c:601:5\nXLogReadAhead at xlogreader.c:976:5\nXLogReadRecord at xlogreader.c:406:3\nXLogSendLogical at walsender.c:3229:5\nWalSndLoop at walsender.c:2658:7\nStartLogicalReplication at walsender.c:1477:2\nexec_replication_command at walsender.c:1985:6\nPostgresMain at postgres.c:4649:10\n\nBest regards,\nAlexander",
"msg_date": "Thu, 11 Jan 2024 13:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "Hi,\n\nOn Thu, Jan 11, 2024 at 01:00:00PM +0300, Alexander Lakhin wrote:\n> Hi Bertrand,\n> \n> 10.01.2024 19:32, Bertrand Drouvot wrote:\n> > \n> > > is an absent message 'obsolete replication slot \"row_removal_activeslot\"'\n> > Looking at recovery-15-vacuum_pg_class/i035_standby_logical_decoding_standby.log here:\n> > \n> > Yeah, the missing message has to come from InvalidatePossiblyObsoleteSlot().\n> > \n> > In case of an active slot we first call ReportSlotInvalidation with the second\n> > parameter set to true (to emit the \"terminating\" message), then SIGTERM the active\n> > process and then (later) we should call the other ReportSlotInvalidation()\n> > call with the second parameter set to false (to emit the message that we don't\n> > see here).\n> > \n> > So it means InvalidatePossiblyObsoleteSlot() did not trigger the second ReportSlotInvalidation()\n> > call.\n> \n> I've found a way to reproduce the issue without slowing down a machine or\n> running multiple tests in parallel. It's enough for this to add a delay to\n> allow BackgroundWriterMain() to execute LogStandbySnapshot():\n> @@ -692,2 +690,3 @@\n> �$node_primary->safe_psql('testdb', qq[UPDATE prun SET s = 'D';]);\n> +$node_primary->safe_psql('testdb', qq[SELECT pg_sleep(15);]);\n> �$node_primary->safe_psql('testdb', qq[UPDATE prun SET s = 'E';]);\n> \n> With this delay, I get the failure immediately:\n> $ PROVE_TESTS=\"t/035*\" TEMP_CONFIG=/tmp/extra.config make check -s -C src/test/recovery\n> # +++ tap check in src/test/recovery +++\n> t/035_standby_logical_decoding.pl .. 47/?\n> #�� Failed test 'activeslot slot invalidation is logged with on-access pruning'\n> #�� at t/035_standby_logical_decoding.pl line 227.\n\nThanks a lot for the testing!\n\nSo I think we have 2 issues here:\n\n1) The one you're mentioning above related to the on-access pruning test:\n\nI think the engine behavior is right here and that the test is racy. I'm\nproposing to bypass the active slot invalidation check for this particular test (\nas I don't see any \"easy\" way to take care of this race condition). The active slot\ninvalidation is already well covered in the other tests anyway. \n\nI'm proposing the attached v4-0001-Fix-035_standby_logical_decoding.pl-race-conditio.patch\nfor it.\n\n2) The fact that sometime we're getting a termination message which is not followed\nby an obsolete one (like as discussed in [1]).\n\nFor this one, I think that InvalidatePossiblyObsoleteSlot() is racy:\n\nIn case of an active slot we proceed in 2 steps:\n - terminate the backend holding the slot\n - report the slot as obsolete\n\nThis is racy because between the two we release the mutex on the slot, which\nmeans the effective_xmin and effective_catalog_xmin could advance during that time.\n\nI'm proposing the attached v1-0001-Fix-race-condition-in-InvalidatePossiblyObsoleteS.patch\nfor it.\n\nWould it be possible to re-launch your repro (the slow one, not the pg_sleep() one)\nwith bot patch applied and see how it goes? (Please note that v4 replaces v3 that\nyou're already using in your tests).\n\nIf it helps, I'll propose v1-0001-Fix-race-condition-in-InvalidatePossiblyObsoleteS.patch\ninto a dedicated hackers thread.\n\n[1]: https://www.postgresql.org/message-id/ZZ7GpII4bAYN%2BjT5%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 11 Jan 2024 14:58:24 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "11.01.2024 17:58, Bertrand Drouvot wrote:\n> So I think we have 2 issues here:\n>\n> 1) The one you're mentioning above related to the on-access pruning test:\n>\n> I think the engine behavior is right here and that the test is racy. I'm\n> proposing to bypass the active slot invalidation check for this particular test (\n> as I don't see any \"easy\" way to take care of this race condition). The active slot\n> invalidation is already well covered in the other tests anyway.\n>\n> I'm proposing the attached v4-0001-Fix-035_standby_logical_decoding.pl-race-conditio.patch\n> for it.\n>\n> 2) The fact that sometime we're getting a termination message which is not followed\n> by an obsolete one (like as discussed in [1]).\n>\n> For this one, I think that InvalidatePossiblyObsoleteSlot() is racy:\n>\n> In case of an active slot we proceed in 2 steps:\n> - terminate the backend holding the slot\n> - report the slot as obsolete\n>\n> This is racy because between the two we release the mutex on the slot, which\n> means the effective_xmin and effective_catalog_xmin could advance during that time.\n>\n> I'm proposing the attached v1-0001-Fix-race-condition-in-InvalidatePossiblyObsoleteS.patch\n> for it.\n>\n> Would it be possible to re-launch your repro (the slow one, not the pg_sleep() one)\n> with bot patch applied and see how it goes? (Please note that v4 replaces v3 that\n> you're already using in your tests).\n>\n> If it helps, I'll propose v1-0001-Fix-race-condition-in-InvalidatePossiblyObsoleteS.patch\n> into a dedicated hackers thread.\n>\n> [1]: https://www.postgresql.org/message-id/ZZ7GpII4bAYN%2BjT5%40ip-10-97-1-34.eu-west-3.compute.internal\n\nBertrand, I've relaunched tests in the same slowed down VM with both\npatches applied (but with no other modifications) and got a failure\nwith pg_class, similar to what we had seen before:\n9 # Failed test 'activeslot slot invalidation is logged with vacuum on pg_class'\n9 # at t/035_standby_logical_decoding.pl line 230.\n\nPlease look at the logs attached (I see there Standby/RUNNING_XACTS near\n'invalidating obsolete replication slot \"row_removal_inactiveslot\"').\n\nBest regards,\nAlexander",
"msg_date": "Thu, 11 Jan 2024 23:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "On Thu, Jan 11, 2024 at 11:00:01PM +0300, Alexander Lakhin wrote:\n> Bertrand, I've relaunched tests in the same slowed down VM with both\n> patches applied (but with no other modifications) and got a failure\n> with pg_class, similar to what we had seen before:\n> 9 # Failed test 'activeslot slot invalidation is logged with vacuum on pg_class'\n> 9 # at t/035_standby_logical_decoding.pl line 230.\n> \n> Please look at the logs attached (I see there Standby/RUNNING_XACTS near\n> 'invalidating obsolete replication slot \"row_removal_inactiveslot\"').\n\nStandby/RUNNING_XACTS is exactly why 039_end_of_wal.pl uses wal_level\n= minimal, because these lead to unpredictible records inserted,\nimpacting the reliability of the tests. We cannot do that here,\nobviously. That may be a long shot, but could it be possible to tweak\nthe test with a retry logic, retrying things if such a standby\nsnapshot is found because we know that the invalidation is not going\nto work anyway?\n--\nMichael",
"msg_date": "Fri, 12 Jan 2024 07:01:55 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
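As a rough illustration of the retry idea suggested above (redo the scenario when a standby snapshot record slipped into the relevant WAL range), a sketch in the TAP-test style of 035_standby_logical_decoding.pl. It assumes the test's existing $node_primary handle and 'testdb' database; pg_walinspect is a real extension, but the retry policy itself is only a hypothetical illustration, not part of any patch in this thread:

    # Remember where we are in the WAL before running the scenario.
    $node_primary->safe_psql('testdb', qq[CREATE EXTENSION pg_walinspect;]);
    my $start_lsn = $node_primary->safe_psql('testdb',
        qq[SELECT pg_current_wal_lsn();]);

    # ... run the vacuum / slot invalidation scenario here ...

    # Did a Standby/RUNNING_XACTS record land in that WAL range? If so,
    # the invalidation may legitimately not happen, so redo the scenario.
    my $snap_count = $node_primary->safe_psql('testdb',
        qq[SELECT count(*)
             FROM pg_get_wal_records_info('$start_lsn', pg_current_wal_lsn())
            WHERE resource_manager = 'Standby'
              AND record_type = 'RUNNING_XACTS';]);
    # retry (up to some limit) while $snap_count > 0 ...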
{
"msg_contents": "Hi,\n\nOn Fri, Jan 12, 2024 at 07:01:55AM +0900, Michael Paquier wrote:\n> On Thu, Jan 11, 2024 at 11:00:01PM +0300, Alexander Lakhin wrote:\n> > Bertrand, I've relaunched tests in the same slowed down VM with both\n> > patches applied (but with no other modifications) and got a failure\n> > with pg_class, similar to what we had seen before:\n> > 9 # Failed test 'activeslot slot invalidation is logged with vacuum on pg_class'\n> > 9 # at t/035_standby_logical_decoding.pl line 230.\n> > \n> > Please look at the logs attached (I see there Standby/RUNNING_XACTS near\n> > 'invalidating obsolete replication slot \"row_removal_inactiveslot\"').\n\nThanks! \n\nFor this one, the \"good\" news is that it looks like that we don’t see the\n\"terminating\" message not followed by an \"obsolete\" message (so the engine\nbehaves correctly) anymore.\n\nThere is simply nothing related to the row_removal_activeslot at all (the catalog_xmin\nadvanced and there is no conflict).\n\nAnd I agree that this is due to the Standby/RUNNING_XACTS that is \"advancing\" the\ncatalog_xmin of the active slot.\n\n> Standby/RUNNING_XACTS is exactly why 039_end_of_wal.pl uses wal_level\n> = minimal, because these lead to unpredictible records inserted,\n> impacting the reliability of the tests. We cannot do that here,\n> obviously. That may be a long shot, but could it be possible to tweak\n> the test with a retry logic, retrying things if such a standby\n> snapshot is found because we know that the invalidation is not going\n> to work anyway?\n\nI think it all depends what the xl_running_xacts does contain (means does it\n\"advance\" or not the catalog_xmin in our case).\n\nIn our case it does advance it (should it occurs) due to the \"select txid_current()\"\nthat is done in wait_until_vacuum_can_remove() in 035_standby_logical_decoding.pl.\n\nI suggest to make use of txid_current_snapshot() instead (that does not produce\na Transaction/COMMIT wal record, as opposed to txid_current()).\n\nI think that it could be \"enough\" for our case here, and it's what v5 attached is\nnow doing.\n\nLet's give v5 a try? (please apply v1-0001-Fix-race-condition-in-InvalidatePossiblyObsoleteS.patch\ntoo).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 12 Jan 2024 07:15:44 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
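A minimal sketch of the snapshot-based idea described above, assuming the test's existing $node_primary handle and 'testdb' database; the exact query is an illustration of the approach, not the attached v5 code:

    # txid_current_snapshot() only reads the current snapshot, so (unlike
    # txid_current()) it does not produce a Transaction/COMMIT WAL record
    # that the standby would have to replay.
    my $xid_horizon = $node_primary->safe_psql('testdb',
        qq[SELECT txid_snapshot_xmin(txid_current_snapshot());]);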
{
"msg_contents": "Hi,\n\n12.01.2024 10:15, Bertrand Drouvot wrote:\n>\n> For this one, the \"good\" news is that it looks like that we don’t see the\n> \"terminating\" message not followed by an \"obsolete\" message (so the engine\n> behaves correctly) anymore.\n>\n> There is simply nothing related to the row_removal_activeslot at all (the catalog_xmin\n> advanced and there is no conflict).\n\nYes, judging from all the failures that we see now, it looks like the\n0001-Fix-race-condition...patch works as expected.\n\n> And I agree that this is due to the Standby/RUNNING_XACTS that is \"advancing\" the\n> catalog_xmin of the active slot.\n>\n>> Standby/RUNNING_XACTS is exactly why 039_end_of_wal.pl uses wal_level\n>> = minimal, because these lead to unpredictible records inserted,\n>> impacting the reliability of the tests. We cannot do that here,\n>> obviously. That may be a long shot, but could it be possible to tweak\n>> the test with a retry logic, retrying things if such a standby\n>> snapshot is found because we know that the invalidation is not going\n>> to work anyway?\n> I think it all depends what the xl_running_xacts does contain (means does it\n> \"advance\" or not the catalog_xmin in our case).\n>\n> In our case it does advance it (should it occurs) due to the \"select txid_current()\"\n> that is done in wait_until_vacuum_can_remove() in 035_standby_logical_decoding.pl.\n>\n> I suggest to make use of txid_current_snapshot() instead (that does not produce\n> a Transaction/COMMIT wal record, as opposed to txid_current()).\n>\n> I think that it could be \"enough\" for our case here, and it's what v5 attached is\n> now doing.\n>\n> Let's give v5 a try? (please apply v1-0001-Fix-race-condition-in-InvalidatePossiblyObsoleteS.patch\n> too).\n\nUnfortunately, I've got the failure again (please see logs attached).\n(_primary.log can confirm that I have used exactly v5 — I see no\ntxid_current() calls there...)\n\nBest regards,\nAlexander",
"msg_date": "Fri, 12 Jan 2024 14:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "Hi,\n\nOn Fri, Jan 12, 2024 at 02:00:01PM +0300, Alexander Lakhin wrote:\n> Hi,\n> \n> 12.01.2024 10:15, Bertrand Drouvot wrote:\n> > \n> > For this one, the \"good\" news is that it looks like that we don’t see the\n> > \"terminating\" message not followed by an \"obsolete\" message (so the engine\n> > behaves correctly) anymore.\n> > \n> > There is simply nothing related to the row_removal_activeslot at all (the catalog_xmin\n> > advanced and there is no conflict).\n> \n> Yes, judging from all the failures that we see now, it looks like the\n> 0001-Fix-race-condition...patch works as expected.\n\nYeah, thanks for confirming, I'll create a dedicated hackers thread for that one.\n\n> > Let's give v5 a try? (please apply v1-0001-Fix-race-condition-in-InvalidatePossiblyObsoleteS.patch\n> > too).\n> \n> Unfortunately, I've got the failure again (please see logs attached).\n> (_primary.log can confirm that I have used exactly v5 — I see no\n> txid_current() calls there...)\n\nOkay ;-( Thanks for the testing. Then I can think of:\n\n1) Michael's proposal up-thread (means tweak the test with a retry logic, retrying\nthings if such a standby snapshot is found).\n\n2) Don't report a test error for active slots in case its catalog_xmin advanced.\n\nI'd vote for 2) as:\n\n- this is a corner case and the vast majority of the animals don't report any\nissues (means the active slot conflict detection is already well covered).\n\n- even on the same animal it should be pretty rare to not have an active slot \nconflict detection not covered at all (and the \"failing\" one would be probably\nmoving over time).\n\n- It may be possible that 1) ends up failing (as we'd need to put a limit on the\nretry logic anyhow).\n\nWhat do you think?\n\nAnd BTW, looking closely at wait_until_vacuum_can_remove(), I'm not sure it's\nfully correct, so I'll give it another look.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 12 Jan 2024 13:46:08 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 01:46:08PM +0000, Bertrand Drouvot wrote:\n> 1) Michael's proposal up-thread (means tweak the test with a retry logic, retrying\n> things if such a standby snapshot is found).\n> \n> 2) Don't report a test error for active slots in case its catalog_xmin advanced.\n> \n> I'd vote for 2) as:\n> \n> - this is a corner case and the vast majority of the animals don't report any\n> issues (means the active slot conflict detection is already well covered).\n> \n> - even on the same animal it should be pretty rare to not have an active slot \n> conflict detection not covered at all (and the \"failing\" one would be probably\n> moving over time).\n> \n> - It may be possible that 1) ends up failing (as we'd need to put a limit on the\n> retry logic anyhow).\n> \n> What do you think?\n> \n> And BTW, looking closely at wait_until_vacuum_can_remove(), I'm not sure it's\n> fully correct, so I'll give it another look.\n\nThe WAL records related to standby snapshots are playing a lot with\nthe randomness of the failures we are seeing. Alexander has mentioned\nofflist something else: using SIGSTOP on the bgwriter to avoid these\nrecords and make the test more stable. That would not be workable for\nWindows, but I could live with that knowing that logical decoding for\nstandbys has no platform-speficic tweak for the code paths we're\ntesting here, and that would put as limitation to skip the test for\n$windows_os.\n\nWhile thinking about that, a second idea came into my mind: a\nsuperuser-settable developer GUC to disable such WAL records to be\ngenerated within certain areas of the test. This requires a small\nimplementation, but nothing really huge, while being portable\neverywhere. And it is not the first time I've been annoyed with these\nrecords when wanting a predictible set of WAL records for some test\ncase.\n\nAnother possibility would be to move these records elsewhere, outside\nof the bgwriter, but we need such records at a good frequency for the\navailability of read-only standbys. And surely we'd want an on/off\nswitch anyway to get a full control for test sequences.\n--\nMichael",
"msg_date": "Mon, 15 Jan 2024 12:59:01 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
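As a rough sketch of the SIGSTOP idea mentioned above (Unix-only, which is why Windows would have to skip it), a TAP test could pause the primary's bgwriter around the invalidation scenario. The pg_stat_activity lookup and signal handling below are an assumption about how that could be wired up, not an actual patch:

    # The periodic XLOG_RUNNING_XACTS (standby snapshot) records come from
    # the primary's bgwriter, so pause that process while the scenario runs.
    my $bgwriter_pid = $node_primary->safe_psql('postgres',
        qq[SELECT pid FROM pg_stat_activity
            WHERE backend_type = 'background writer';]);
    kill 'STOP', $bgwriter_pid;

    # ... run the vacuum and check the slot invalidations here ...

    kill 'CONT', $bgwriter_pid;    # let the bgwriter resume afterwards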
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> While thinking about that, a second idea came into my mind: a\n> superuser-settable developer GUC to disable such WAL records to be\n> generated within certain areas of the test. This requires a small\n> implementation, but nothing really huge, while being portable\n> everywhere. And it is not the first time I've been annoyed with these\n> records when wanting a predictible set of WAL records for some test\n> case.\n\nHmm ... I see what you are after, but to what extent would this mean\nthat what we are testing is not our real-world behavior?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 14 Jan 2024 23:08:39 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "On Sun, Jan 14, 2024 at 11:08:39PM -0500, Tom Lane wrote:\n> Michael Paquier <[email protected]> writes:\n>> While thinking about that, a second idea came into my mind: a\n>> superuser-settable developer GUC to disable such WAL records to be\n>> generated within certain areas of the test. This requires a small\n>> implementation, but nothing really huge, while being portable\n>> everywhere. And it is not the first time I've been annoyed with these\n>> records when wanting a predictible set of WAL records for some test\n>> case.\n> \n> Hmm ... I see what you are after, but to what extent would this mean\n> that what we are testing is not our real-world behavior?\n\nDon't think so. We don't care much about these records when it comes\nto checking slot invalidation scenarios with a predictible XID\nhorizon, AFAIK.\n--\nMichael",
"msg_date": "Mon, 15 Jan 2024 13:11:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "Hi,\n\nOn Mon, Jan 15, 2024 at 01:11:26PM +0900, Michael Paquier wrote:\n> On Sun, Jan 14, 2024 at 11:08:39PM -0500, Tom Lane wrote:\n> > Michael Paquier <[email protected]> writes:\n> >> While thinking about that, a second idea came into my mind: a\n> >> superuser-settable developer GUC to disable such WAL records to be\n> >> generated within certain areas of the test. This requires a small\n> >> implementation, but nothing really huge, while being portable\n> >> everywhere. And it is not the first time I've been annoyed with these\n> >> records when wanting a predictible set of WAL records for some test\n> >> case.\n> > \n> > Hmm ... I see what you are after, but to what extent would this mean\n> > that what we are testing is not our real-world behavior?\n> \n> Don't think so. We don't care much about these records when it comes\n> to checking slot invalidation scenarios with a predictible XID\n> horizon, AFAIK.\n\nYeah, we want to test slot invalidation behavior so we need to ensure that such\nan invalidation occur (which is not the case if we get a xl_running_xacts in the\nmiddle) at the first place.\n\nOTOH I also see Tom's point: for example I think we'd not have discovered [1]\n(outside from the field) with such a developer GUC in place.\n\nWe did a few things in this thread, so to sum up what we've discovered:\n\n- a race condition in InvalidatePossiblyObsoleteSlot() (see [1])\n- we need to launch the vacuum(s) only if we are sure we got a newer XID horizon\n( proposal in in v6 attached)\n- we need a way to control how frequent xl_running_xacts are emmitted (to ensure\nthey are not triggered in a middle of an active slot invalidation test).\n\nI'm not sure it's possible to address Tom's concern and keep the test \"predictable\".\n\nSo, I think I'd vote for Michael's proposal to implement a superuser-settable\ndeveloper GUC (as sending a SIGSTOP on the bgwriter (and bypass $windows_os) would\nstill not address Tom's concern anyway).\n\nAnother option would be to \"sacrifice\" the full predictablity of the test (in\nfavor of real-world behavior testing)?\n\n[1]: https://www.postgresql.org/message-id/ZaTjW2Xh%2BTQUCOH0%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 15 Jan 2024 08:49:10 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "Hello Michael and Bertrand,\n\n15.01.2024 06:59, Michael Paquier wrote:\n> The WAL records related to standby snapshots are playing a lot with\n> the randomness of the failures we are seeing. Alexander has mentioned\n> offlist something else: using SIGSTOP on the bgwriter to avoid these\n> records and make the test more stable. That would not be workable for\n> Windows, but I could live with that knowing that logical decoding for\n> standbys has no platform-speficic tweak for the code paths we're\n> testing here, and that would put as limitation to skip the test for\n> $windows_os.\n\nI've found a way to implement pause/resume for Windows processed and it\nlooks acceptable to me if we can afford \"use Win32::API;\" on Windows\n(maybe the test could be skipped only if this perl module is absent).\nPlease look at the PoC patch for the test 035_standby_logical_decoding.\n(The patched test passes for me.)\n\nIf this approach looks promising to you, maybe we could add a submodule to\nperl/PostgreSQL/Test/ and use this functionality in other tests (e.g., in\n019_replslot_limit) as well.\n\nPersonally I think that having such a functionality for using in tests\nmight be useful not only to avoid some \"problematic\" behaviour but also to\ntest the opposite cases.\n\n> While thinking about that, a second idea came into my mind: a\n> superuser-settable developer GUC to disable such WAL records to be\n> generated within certain areas of the test. This requires a small\n> implementation, but nothing really huge, while being portable\n> everywhere. And it is not the first time I've been annoyed with these\n> records when wanting a predictible set of WAL records for some test\n> case.\n\nI see that the test in question exists in REL_16_STABLE, it means that a\nnew GUC would not help there?\n\nBest regards,\nAlexander",
"msg_date": "Mon, 15 Jan 2024 12:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "Hello,\n\n15.01.2024 12:00, Alexander Lakhin wrote:\n> If this approach looks promising to you, maybe we could add a submodule to\n> perl/PostgreSQL/Test/ and use this functionality in other tests (e.g., in\n> 019_replslot_limit) as well.\n>\n> Personally I think that having such a functionality for using in tests\n> might be useful not only to avoid some \"problematic\" behaviour but also to\n> test the opposite cases.\n\nAfter spending a few days on it, I've discovered two more issues:\nhttps://www.postgresql.org/message-id/16d6d9cc-f97d-0b34-be65-425183ed3721%40gmail.com\nhttps://www.postgresql.org/message-id/b0102688-6d6c-c86a-db79-e0e91d245b1a%40gmail.com\n\n(The latter is not related to bgwriter directly, but it was discovered\nthanks to the RUNNING_XACTS record flew in WAL in a lucky moment.)\n\nSo it becomes clear that the 035 test is not the only one, which might\nsuffer from bgwriter's activity, and inventing a way to stop bgwriter/\ncontrol it is a different subject, getting out of scope of the current\nissue.\n\n\n15.01.2024 11:49, Bertrand Drouvot wrote:\n> We did a few things in this thread, so to sum up what we've discovered:\n>\n> - a race condition in InvalidatePossiblyObsoleteSlot() (see [1])\n> - we need to launch the vacuum(s) only if we are sure we got a newer XID horizon\n> ( proposal in in v6 attached)\n> - we need a way to control how frequent xl_running_xacts are emmitted (to ensure\n> they are not triggered in a middle of an active slot invalidation test).\n>\n> I'm not sure it's possible to address Tom's concern and keep the test \"predictable\".\n>\n> So, I think I'd vote for Michael's proposal to implement a superuser-settable\n> developer GUC (as sending a SIGSTOP on the bgwriter (and bypass $windows_os) would\n> still not address Tom's concern anyway).\n>\n> Another option would be to \"sacrifice\" the full predictablity of the test (in\n> favor of real-world behavior testing)?\n>\n> [1]: https://www.postgresql.org/message-id/ZaTjW2Xh%2BTQUCOH0%40ip-10-97-1-34.eu-west-3.compute.internal\n\nSo, now we have the test 035 failing due to nondeterministic vacuum\nactivity in the first place, and due to bgwriter's activity in the second.\nMaybe it would be a right move to commit the fix, and then think about\nmore rare failures.\n\nThough I have a couple of question regarding the fix left, if you don't\nmind:\n1) The test has minor defects in the comments, that I noted before [1];\nwould you like to fix them in passing?\n\n> BTW, it looks like the comment:\n> # One way to produce recovery conflict is to create/drop a relation and\n> # launch a vacuum on pg_class with hot_standby_feedback turned off on the standby.\n> in scenario 3 is a copy-paste error.\n> Also, there are two \"Scenario 4\" in this test.\n>\n\n2) Shall we align the 035_standby_logical_decoding with\n031_recovery_conflict in regard to improving stability of vacuum?\nI see the following options for this:\na) use wait_until_vacuum_can_remove() and autovacuum = off in both tests;\nb) use FREEZE and autovacuum = off in both tests;\nc) use wait_until_vacuum_can_remove() in 035, FREEZE in 031, and\n autovacuum = off in both.\n\nI've re-tested the v6 patch today and confirmed that it makes the test\nmore stable. 
I ran (with bgwriter_delay = 10000 in temp.config) 20 tests in\nparallel and got failures ('inactiveslot slot invalidation is logged with\nvacuum on pg_authid') on iterations 2, 6, 6 with no patch applied.\n(With unlimited CPU, when average test duration is around 70 seconds.)\n\nBut with v6 applied, 60 iterations succeeded.\n\n[1] https://www.postgresql.org/message-id/[email protected]\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 19 Jan 2024 09:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "Hi,\n\nOn Fri, Jan 19, 2024 at 09:00:01AM +0300, Alexander Lakhin wrote:\n> Hello,\n> \n> 15.01.2024 12:00, Alexander Lakhin wrote:\n> > If this approach looks promising to you, maybe we could add a submodule to\n> > perl/PostgreSQL/Test/ and use this functionality in other tests (e.g., in\n> > 019_replslot_limit) as well.\n> > \n> > Personally I think that having such a functionality for using in tests\n> > might be useful not only to avoid some \"problematic\" behaviour but also to\n> > test the opposite cases.\n> \n> After spending a few days on it, I've discovered two more issues:\n> https://www.postgresql.org/message-id/16d6d9cc-f97d-0b34-be65-425183ed3721%40gmail.com\n> https://www.postgresql.org/message-id/b0102688-6d6c-c86a-db79-e0e91d245b1a%40gmail.com\n> \n> (The latter is not related to bgwriter directly, but it was discovered\n> thanks to the RUNNING_XACTS record flew in WAL in a lucky moment.)\n> \n> So it becomes clear that the 035 test is not the only one, which might\n> suffer from bgwriter's activity,\n\nYeah... thanks for sharing!\n\n> and inventing a way to stop bgwriter/\n> control it is a different subject, getting out of scope of the current\n> issue.\n\nAgree.\n\n> 15.01.2024 11:49, Bertrand Drouvot wrote:\n> > We did a few things in this thread, so to sum up what we've discovered:\n> > \n> > - a race condition in InvalidatePossiblyObsoleteSlot() (see [1])\n> > - we need to launch the vacuum(s) only if we are sure we got a newer XID horizon\n> > ( proposal in in v6 attached)\n> > - we need a way to control how frequent xl_running_xacts are emmitted (to ensure\n> > they are not triggered in a middle of an active slot invalidation test).\n> > \n> > I'm not sure it's possible to address Tom's concern and keep the test \"predictable\".\n> > \n> > So, I think I'd vote for Michael's proposal to implement a superuser-settable\n> > developer GUC (as sending a SIGSTOP on the bgwriter (and bypass $windows_os) would\n> > still not address Tom's concern anyway).\n> > \n> > Another option would be to \"sacrifice\" the full predictablity of the test (in\n> > favor of real-world behavior testing)?\n> > \n> > [1]: https://www.postgresql.org/message-id/ZaTjW2Xh%2BTQUCOH0%40ip-10-97-1-34.eu-west-3.compute.internal\n> \n> So, now we have the test 035 failing due to nondeterministic vacuum\n> activity in the first place, and due to bgwriter's activity in the second.\n\nYeah, that's also my understanding.\n\n> Maybe it would be a right move to commit the fix, and then think about\n> more rare failures.\n\n+1\n\n> Though I have a couple of question regarding the fix left, if you don't\n> mind:\n> 1) The test has minor defects in the comments, that I noted before [1];\n> would you like to fix them in passing?\n> \n> > BTW, it looks like the comment:\n> > # One way to produce recovery conflict is to create/drop a relation and\n> > # launch a vacuum on pg_class with hot_standby_feedback turned off on the standby.\n> > in scenario 3 is a copy-paste error.\n\nNice catch, thanks! Fixed in v7 attached.\n\n> > Also, there are two \"Scenario 4\" in this test.\n\nD'oh! 
Fixed in v7.\n\n> > \n> \n> 2) Shall we align the 035_standby_logical_decoding with\n> 031_recovery_conflict in regard to improving stability of vacuum?\n\nYeah, I think that could make sense.\n\n> I see the following options for this:\n> a) use wait_until_vacuum_can_remove() and autovacuum = off in both tests;\n> b) use FREEZE and autovacuum = off in both tests;\n> c) use wait_until_vacuum_can_remove() in 035, FREEZE in 031, and\n> �autovacuum = off in both.\n>\n\nI'd vote for a) as I've the feeling it's \"easier\" to understand (and I'm not\nsure using FREEZE would give full \"stabilization predictability\", at least for\n035_standby_logical_decoding.pl). That said I did not test what the outcome would\nbe for 031_recovery_conflict.pl by making use of a).\n\n> I've re-tested the v6 patch today and confirmed that it makes the test\n> more stable. I ran (with bgwriter_delay = 10000 in temp.config) 20 tests in\n> parallel and got failures ('inactiveslot slot invalidation is logged with\n> vacuum on pg_authid') on iterations 2, 6, 6 with no patch applied.\n> (With unlimited CPU, when average test duration is around 70 seconds.)\n> \n> But with v6 applied, 60 iterations succeeded.\n\nNice! Thanks for the testing!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 19 Jan 2024 09:03:01 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "On Fri, Jan 19, 2024 at 09:03:01AM +0000, Bertrand Drouvot wrote:\n> On Fri, Jan 19, 2024 at 09:00:01AM +0300, Alexander Lakhin wrote:\n>> 15.01.2024 12:00, Alexander Lakhin wrote:\n>>> If this approach looks promising to you, maybe we could add a submodule to\n>>> perl/PostgreSQL/Test/ and use this functionality in other tests (e.g., in\n>>> 019_replslot_limit) as well.\n>>> \n>>> Personally I think that having such a functionality for using in tests\n>>> might be useful not only to avoid some \"problematic\" behaviour but also to\n>>> test the opposite cases.\n>> \n>> After spending a few days on it, I've discovered two more issues:\n>> https://www.postgresql.org/message-id/16d6d9cc-f97d-0b34-be65-425183ed3721%40gmail.com\n>> https://www.postgresql.org/message-id/b0102688-6d6c-c86a-db79-e0e91d245b1a%40gmail.com\n>> \n>> (The latter is not related to bgwriter directly, but it was discovered\n>> thanks to the RUNNING_XACTS record flew in WAL in a lucky moment.)\n>> \n>> So it becomes clear that the 035 test is not the only one, which might\n>> suffer from bgwriter's activity,\n> \n> Yeah... thanks for sharing!\n> \n>> and inventing a way to stop bgwriter/\n>> control it is a different subject, getting out of scope of the current\n>> issue.\n> \n> Agree.\n> \n>> 15.01.2024 11:49, Bertrand Drouvot wrote:\n>> Maybe it would be a right move to commit the fix, and then think about\n>> more rare failures.\n> \n> +1\n\nYeah, agreed to make things more stable before making them fancier.\n\n>> 2) Shall we align the 035_standby_logical_decoding with\n>> 031_recovery_conflict in regard to improving stability of vacuum?\n> \n> Yeah, I think that could make sense.\n\nProbably. That can always be done as a separate change, after a few\nruns of the slow buildfarm members able to reproduce the failure.\n\n>> I see the following options for this:\n>> a) use wait_until_vacuum_can_remove() and autovacuum = off in both tests;\n>> b) use FREEZE and autovacuum = off in both tests;\n>> c) use wait_until_vacuum_can_remove() in 035, FREEZE in 031, and\n>> autovacuum = off in both.\n> \n> I'd vote for a) as I've the feeling it's \"easier\" to understand (and I'm not\n> sure using FREEZE would give full \"stabilization predictability\", at least for\n> 035_standby_logical_decoding.pl). That said I did not test what the outcome would\n> be for 031_recovery_conflict.pl by making use of a).\n\nYeah, I think I agree here with a), as v7 does for 035.\n\n+# Launch $sql and wait for a new snapshot that has a newer horizon before\n+# doing the vacuum with $vac_option on $to_vac.\n+sub wait_until_vacuum_can_remove\n\nThis had better document what the arguments of this routine are,\nbecause that's really unclear. $to_vac is the relation that will be\nvacuumed, for one.\n\nAlso, wouldn't it be better to document in the test why\ntxid_current_snapshot() is chosen? Contrary to txid_current(), it\ndoes not generate a Transaction/COMMIT to make the test more\npredictible, something you have mentioned upthread, and implied in the\ntest.\n\n- INSERT INTO flush_wal DEFAULT VALUES; -- see create table flush_wal\n\nThis removes two INSERTs on flush_wal and refactors the code to do the\nINSERT in wait_until_vacuum_can_remove(), using a SQL comment to\ndocument a reference about the reason why an INSERT is used. Couldn't\nyou just use a normal comment here?\n\n>> I've re-tested the v6 patch today and confirmed that it makes the test\n>> more stable. 
I ran (with bgwriter_delay = 10000 in temp.config) 20 tests in\n>> parallel and got failures ('inactiveslot slot invalidation is logged with\n>> vacuum on pg_authid') on iterations 2, 6, 6 with no patch applied.\n>> (With unlimited CPU, when average test duration is around 70 seconds.)\n>> \n>> But with v6 applied, 60 iterations succeeded.\n> \n> Nice! Thanks for the testing!\n\nI need to review what you have more thoroughly, but is it OK to assume\nthat both of you are happy with the latest version of the patch in\nterms of stability gained? That's still not the full picture, still a\ngood step towards that.\n--\nMichael",
"msg_date": "Mon, 22 Jan 2024 15:54:44 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
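For reference, a rough sketch of what a documented version of that helper could look like. The argument names follow the quoted signature, and the body only approximates the intent discussed in the thread (capture the horizon with txid_current_snapshot(), wait for it to advance, then vacuum and flush WAL); it is not the committed implementation, and it assumes the test's existing $node_primary and flush_wal table:

    # Launch $sql on the primary, wait until a later snapshot has a newer
    # xmin horizon (so the dead rows are guaranteed to be removable), then
    # run the vacuum.
    #   $vac_option - extra VACUUM options, e.g. 'full'
    #   $sql        - statement(s) that create the dead rows
    #   $to_vac     - the relation that will be vacuumed
    sub wait_until_vacuum_can_remove
    {
        my ($vac_option, $sql, $to_vac) = @_;

        # Current horizon, taken without generating a COMMIT record.
        my $xid_horizon = $node_primary->safe_psql('testdb',
            qq[SELECT txid_snapshot_xmin(txid_current_snapshot());]);

        # Produce the dead rows.
        $node_primary->safe_psql('testdb', $sql);

        # Wait for a newer horizon before vacuuming.
        $node_primary->poll_query_until('testdb',
            qq[SELECT txid_snapshot_xmin(txid_current_snapshot()) > $xid_horizon])
          or die "timed out waiting for a newer XID horizon";

        # Vacuum, then force a WAL flush so the standby replays it promptly.
        $node_primary->safe_psql('testdb',
            qq[VACUUM $vac_option verbose $to_vac;
               INSERT INTO flush_wal DEFAULT VALUES;]);
    }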
{
"msg_contents": "Hi,\n\nOn Mon, Jan 22, 2024 at 03:54:44PM +0900, Michael Paquier wrote:\n> On Fri, Jan 19, 2024 at 09:03:01AM +0000, Bertrand Drouvot wrote:\n> > On Fri, Jan 19, 2024 at 09:00:01AM +0300, Alexander Lakhin wrote:\n> +# Launch $sql and wait for a new snapshot that has a newer horizon before\n> +# doing the vacuum with $vac_option on $to_vac.\n> +sub wait_until_vacuum_can_remove\n> \n> This had better document what the arguments of this routine are,\n> because that's really unclear. $to_vac is the relation that will be\n> vacuumed, for one.\n\nAgree, done that way in v8 attached.\n\n> Also, wouldn't it be better to document in the test why\n> txid_current_snapshot() is chosen? Contrary to txid_current(), it\n> does not generate a Transaction/COMMIT to make the test more\n> predictible, something you have mentioned upthread, and implied in the\n> test.\n\nGood point, added more comments in v8 (but not mentioning txid_current() as \nafter giving more thought about it then I think it was not the right approach in\nany case).\n\n> \n> - INSERT INTO flush_wal DEFAULT VALUES; -- see create table flush_wal\n> \n> This removes two INSERTs on flush_wal and refactors the code to do the\n> INSERT in wait_until_vacuum_can_remove(), using a SQL comment to\n> document a reference about the reason why an INSERT is used. Couldn't\n> you just use a normal comment here?\n\nAgree, done in v8.\n\n> >> I've re-tested the v6 patch today and confirmed that it makes the test\n> >> more stable. I ran (with bgwriter_delay = 10000 in temp.config) 20 tests in\n> >> parallel and got failures ('inactiveslot slot invalidation is logged with\n> >> vacuum on pg_authid') on iterations 2, 6, 6 with no patch applied.\n> >> (With unlimited CPU, when average test duration is around 70 seconds.)\n> >> \n> >> But with v6 applied, 60 iterations succeeded.\n> > \n> > Nice! Thanks for the testing!\n> \n> I need to review what you have more thoroughly, but is it OK to assume\n> that both of you are happy with the latest version of the patch in\n> terms of stability gained? That's still not the full picture, still a\n> good step towards that.\n\nYeah, I can clearly see how this patch helps from a theoritical perspective but\nrely on Alexander's testing to see how it actually stabilizes the test.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 22 Jan 2024 09:07:45 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "On Mon, Jan 22, 2024 at 09:07:45AM +0000, Bertrand Drouvot wrote:\n> On Mon, Jan 22, 2024 at 03:54:44PM +0900, Michael Paquier wrote:\n>> Also, wouldn't it be better to document in the test why\n>> txid_current_snapshot() is chosen? Contrary to txid_current(), it\n>> does not generate a Transaction/COMMIT to make the test more\n>> predictible, something you have mentioned upthread, and implied in the\n>> test.\n> \n> Good point, added more comments in v8 (but not mentioning txid_current() as \n> after giving more thought about it then I think it was not the right approach in\n> any case).\n\nFine by me.\n\n>> - INSERT INTO flush_wal DEFAULT VALUES; -- see create table flush_wal\n>> \n>> This removes two INSERTs on flush_wal and refactors the code to do the\n>> INSERT in wait_until_vacuum_can_remove(), using a SQL comment to\n>> document a reference about the reason why an INSERT is used. Couldn't\n>> you just use a normal comment here?\n> \n> Agree, done in v8.\n\nI've rewritten some of that, and applied the patch down to v16. Let's\nsee how this stabilizes things, but that's likely going to take some\ntime as it depends on skink's mood.\n\n>> I need to review what you have more thoroughly, but is it OK to assume\n>> that both of you are happy with the latest version of the patch in\n>> terms of stability gained? That's still not the full picture, still a\n>> good step towards that.\n> \n> Yeah, I can clearly see how this patch helps from a theoritical perspective but\n> rely on Alexander's testing to see how it actually stabilizes the test.\n\nAnyway, that's not the end of it. What should we do for snapshot\nsnapshot records coming from the bgwriter? The slower the system, the\nhigher the odds of hitting a conflict with such records, even if the\nhorizon check should help.\n--\nMichael",
"msg_date": "Tue, 23 Jan 2024 14:50:06 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "Hi,\n\nOn Tue, Jan 23, 2024 at 02:50:06PM +0900, Michael Paquier wrote:\n> On Mon, Jan 22, 2024 at 09:07:45AM +0000, Bertrand Drouvot wrote:\n> \n> I've rewritten some of that, and applied the patch down to v16.\n\nThanks!\n\n> > Yeah, I can clearly see how this patch helps from a theoritical perspective but\n> > rely on Alexander's testing to see how it actually stabilizes the test.\n> \n> Anyway, that's not the end of it. What should we do for snapshot\n> snapshot records coming from the bgwriter?\n\nI've mixed feeling about it. On one hand we'll decrease even more the risk of\nseeing a xl_running_xacts in the middle of a test, but on the other hand I agree\nwith Tom's concern [1]: I think that we could miss \"corner cases/bug\" detection\n(like the one reported in [2]).\n\nWhat about?\n\n1) Apply \"wait_until_vacuum_can_remove() + autovacuum disabled\" where it makes\nsense and for tests that suffers from random xl_running_xacts. I can look at\n031_recovery_conflict.pl, do you have others in mind?\n2) fix [2]\n3) depending on how stabilized this test (and others that suffer from \"random\"\nxl_running_xacts) is, then think about the bgwriter.\n\n[1]: https://www.postgresql.org/message-id/1375923.1705291719%40sss.pgh.pa.us\n[2]: https://www.postgresql.org/message-id/flat/[email protected]\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 23 Jan 2024 08:07:36 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "Hello Bertrand and Michael,\n\n23.01.2024 11:07, Bertrand Drouvot wrote:\n> On Tue, Jan 23, 2024 at 02:50:06PM +0900, Michael Paquier wrote:\n>\n>> Anyway, that's not the end of it. What should we do for snapshot\n>> snapshot records coming from the bgwriter?\n> What about?\n>\n> 3) depending on how stabilized this test (and others that suffer from \"random\"\n> xl_running_xacts) is, then think about the bgwriter.\n\nA recent buildfarm failure [1] reminds me of that remaining question.\nHere we have a slow machine (a successful run, for example [2], shows\n541.13s duration of the test) and the following information logged:\n\n[13:55:13.725](34.411s) ok 25 - inactiveslot slot invalidation is logged with vacuum on pg_class\n[13:55:13.727](0.002s) not ok 26 - activeslot slot invalidation is logged with vacuum on pg_class\n[13:55:13.728](0.001s) # Failed test 'activeslot slot invalidation is logged with vacuum on pg_class'\n# at C:/prog/bf/root/HEAD/pgsql/src/test/recovery/t/035_standby_logical_decoding.pl line 229.\n[14:27:42.995](1949.267s) # poll_query_until timed out executing this query:\n# select (confl_active_logicalslot = 1) from pg_stat_database_conflicts where datname = 'testdb'\n# expecting this output:\n# t\n# last actual query output:\n# f\n# with stderr:\n[14:27:42.999](0.004s) not ok 27 - confl_active_logicalslot updated\n[14:27:43.000](0.001s) # Failed test 'confl_active_logicalslot updated'\n# at C:/prog/bf/root/HEAD/pgsql/src/test/recovery/t/035_standby_logical_decoding.pl line 235.\nTimed out waiting confl_active_logicalslot to be updated at \nC:/prog/bf/root/HEAD/pgsql/src/test/recovery/t/035_standby_logical_decoding.pl line 235.\n\n---\n035_standby_logical_decoding_standby.log:\n2024-06-06 13:55:07.715 UTC [9172:7] LOG: invalidating obsolete replication slot \"row_removal_inactiveslot\"\n2024-06-06 13:55:07.715 UTC [9172:8] DETAIL: The slot conflicted with xid horizon 754.\n2024-06-06 13:55:07.715 UTC [9172:9] CONTEXT: WAL redo at 0/4020A80 for Heap2/PRUNE_ON_ACCESS: snapshotConflictHorizon: \n754, isCatalogRel: T, nplans: 0, nredirected: 0, ndead: 1, nunused: 0, dead: [48]; blkref #0: rel 1663/16384/2610, blk 0\n2024-06-06 13:55:14.372 UTC [7532:1] [unknown] LOG: connection received: host=127.0.0.1 port=55328\n2024-06-06 13:55:14.381 UTC [7532:2] [unknown] LOG: connection authenticated: identity=\"EC2AMAZ-P7KGG90\\\\pgrunner\" \nmethod=sspi \n(C:/prog/bf/root/HEAD/pgsql.build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata/pg_hba.conf:2)\n2024-06-06 13:55:14.381 UTC [7532:3] [unknown] LOG: connection authorized: user=pgrunner database=postgres \napplication_name=035_standby_logical_decoding.pl\n2024-06-06 13:55:14.443 UTC [7532:4] 035_standby_logical_decoding.pl LOG: statement: select (confl_active_logicalslot = \n1) from pg_stat_database_conflicts where datname = 'testdb'\n2024-06-06 13:55:14.452 UTC [7532:5] 035_standby_logical_decoding.pl LOG: disconnection: session time: 0:00:00.090 \nuser=pgrunner database=postgres host=127.0.0.1 port=55328\n# (there is no `invalidating obsolete replication slot \"row_removal_activeslot\"` message)\n...\n2024-06-06 14:27:42.675 UTC [4032:4] 035_standby_logical_decoding.pl LOG: statement: select (confl_active_logicalslot = \n1) from pg_stat_database_conflicts where datname = 'testdb'\n2024-06-06 14:27:42.681 UTC [4032:5] 035_standby_logical_decoding.pl LOG: disconnection: session time: 0:00:00.080 \nuser=pgrunner database=postgres host=127.0.0.1 
port=58713\n2024-06-06 14:27:43.095 UTC [7892:2] FATAL: could not receive data from WAL stream: server closed the connection \nunexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\n\nIt's hard to determine from this info, why row_removal_activeslot was not\ninvalidated, but running this test on a slowed down Windows VM, I (still)\nget the same looking failures caused by RUNNING_XACTS appeared just before\n`invalidating obsolete replication slot \"row_removal_inactiveslot\"`.\nSo I would consider this failure as yet another result of bgwriter activity\nand add it to the list of known failures as such...\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-06-06%2012%3A36%3A11\n[2] https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=drongo&dt=2024-06-05%2017%3A03%3A13&stg=misc-check\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sat, 8 Jun 2024 07:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "Hi Alexander,\n\nOn Sat, Jun 08, 2024 at 07:00:00AM +0300, Alexander Lakhin wrote:\n> Hello Bertrand and Michael,\n> \n> 23.01.2024 11:07, Bertrand Drouvot wrote:\n> > On Tue, Jan 23, 2024 at 02:50:06PM +0900, Michael Paquier wrote:\n> > \n> > > Anyway, that's not the end of it. What should we do for snapshot\n> > > snapshot records coming from the bgwriter?\n> > What about?\n> > \n> > 3) depending on how stabilized this test (and others that suffer from \"random\"\n> > xl_running_xacts) is, then think about the bgwriter.\n> \n> A recent buildfarm failure [1] reminds me of that remaining question.\n> \n> It's hard to determine from this info, why row_removal_activeslot was not\n> invalidated, but running this test on a slowed down Windows VM, I (still)\n> get the same looking failures caused by RUNNING_XACTS appeared just before\n> `invalidating obsolete replication slot \"row_removal_inactiveslot\"`.\n> So I would consider this failure as yet another result of bgwriter activity\n> and add it to the list of known failures as such...\n\nThanks for the report! I think it makes sense to add it to the list of known\nfailures.\n\nOne way to deal with those corner cases could be to make use of injection points\naround places where RUNNING_XACTS is emitted, thoughts?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 10 Jun 2024 06:29:17 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
{
"msg_contents": "On Mon, Jun 10, 2024 at 06:29:17AM +0000, Bertrand Drouvot wrote:\n> Thanks for the report! I think it makes sense to add it to the list of known\n> failures.\n> \n> One way to deal with those corner cases could be to make use of injection points\n> around places where RUNNING_XACTS is emitted, thoughts?\n\nAh. You mean to force a wait in the code path generating the standby\nsnapshots for the sake of this test? That's interesting, I like it.\n--\nMichael",
"msg_date": "Mon, 10 Jun 2024 15:39:34 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
},
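To illustrate what the injection-point approach could look like from the test side: the injection_points test module provides SQL-callable attach/wakeup/detach functions, but the point name below is purely hypothetical (no such point exists in the code under discussion) and only shows the intended shape of "force a wait in the code path generating the standby snapshots":

    # Hypothetical point name; this assumes a point were added where the
    # bgwriter logs standby snapshots (LogStandbySnapshot).
    $node_primary->safe_psql('postgres', qq[CREATE EXTENSION injection_points;]);
    $node_primary->safe_psql('postgres',
        qq[SELECT injection_points_attach('bgwriter-log-standby-snapshot', 'wait');]);

    # ... run the slot invalidation scenario without interference ...

    $node_primary->safe_psql('postgres',
        qq[SELECT injection_points_wakeup('bgwriter-log-standby-snapshot');]);
    $node_primary->safe_psql('postgres',
        qq[SELECT injection_points_detach('bgwriter-log-standby-snapshot');]);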
{
"msg_contents": "Hi,\n\nOn Mon, Jun 10, 2024 at 03:39:34PM +0900, Michael Paquier wrote:\n> On Mon, Jun 10, 2024 at 06:29:17AM +0000, Bertrand Drouvot wrote:\n> > Thanks for the report! I think it makes sense to add it to the list of known\n> > failures.\n> > \n> > One way to deal with those corner cases could be to make use of injection points\n> > around places where RUNNING_XACTS is emitted, thoughts?\n> \n> Ah. You mean to force a wait in the code path generating the standby\n> snapshots for the sake of this test?\n\nYeah.\n\n> That's interesting, I like it.\n\nGreat, will look at it.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 10 Jun 2024 06:54:11 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test slots invalidations in 035_standby_logical_decoding.pl only\n if dead rows are removed"
}
] |
[
{
"msg_contents": "Good morning,\n\nMy name is Nick Mayer, and I had a question concerning PostgreSQL's EAL. Has PostgreSQL been put through any audit/security testing, and does it have an EAL? If so, would I be able to get this information? I would appreciate any assistance you are able to provide for this.\n\nThanks,\n\nNick Mayer\nCyber Engineer\nLockheed Martin\nEmail: [email protected]<mailto:[email protected]>\n\n\n\n\n\n\n\n\n\n\nGood morning,\n \nMy name is Nick Mayer, and I had a question concerning PostgreSQL’s EAL. Has PostgreSQL been put through any audit/security testing, and does it have an EAL? If so, would I be able to get this information? I would appreciate any assistance\n you are able to provide for this.\n \nThanks, \n \nNick Mayer\nCyber Engineer\nLockheed Martin\nEmail: [email protected]",
"msg_date": "Tue, 30 May 2023 13:48:10 +0000",
"msg_from": "\"Mayer, Nicholas J\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question - Does PostgreSQL have an Evaluation Assurance Level?"
},
{
"msg_contents": "On Tue, 2023-05-30 at 13:48 +0000, Mayer, Nicholas J wrote:\n> My name is Nick Mayer, and I had a question concerning PostgreSQL’s EAL. Has PostgreSQL\n> been put through any audit/security testing, and does it have an EAL? If so, would I be\n> able to get this information? I would appreciate any assistance you are able to provide for this.\n\nI have never heard of that, but I'll reply on the -general list, where the question is\nmore likely to reach the people who know.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 31 May 2023 21:08:13 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question - Does PostgreSQL have an Evaluation Assurance Level?"
},
{
"msg_contents": "Hi Laurenz,\r\n\r\nThanks for your reply but we are actually all set with this. We found out that while PostgreSQL does not have EAL, the 'Crunchy Data' does have EAL of 2. Please feel free to close/discontinue this question and discussion if you like.\r\n\r\n\r\nThanks,\r\n\r\nNick \r\n-----Original Message-----\r\nFrom: Laurenz Albe <[email protected]> \r\nSent: Wednesday, May 31, 2023 3:08 PM\r\nTo: Mayer, Nicholas J (US) <[email protected]>; [email protected]\r\nSubject: EXTERNAL: Re: Question - Does PostgreSQL have an Evaluation Assurance Level?\r\n\r\nOn Tue, 2023-05-30 at 13:48 +0000, Mayer, Nicholas J wrote:\r\n> My name is Nick Mayer, and I had a question concerning PostgreSQL’s \r\n> EAL. Has PostgreSQL been put through any audit/security testing, and \r\n> does it have an EAL? If so, would I be able to get this information? I would appreciate any assistance you are able to provide for this.\r\n\r\nI have never heard of that, but I'll reply on the -general list, where the question is more likely to reach the people who know.\r\n\r\nYours,\r\nLaurenz Albe\r\n",
"msg_date": "Wed, 31 May 2023 19:51:56 +0000",
"msg_from": "\"Mayer, Nicholas J\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: EXTERNAL: Re: Question - Does PostgreSQL have an Evaluation\n Assurance Level?"
},
{
"msg_contents": "On Wed, 2023-05-31 at 19:51 +0000, Mayer, Nicholas J wrote:\n> We found out that while PostgreSQL does not have EAL, the 'Crunchy Data' does have EAL of 2.\n\nI see. I guess you are aware that a closed source fork of PostgreSQL is probably no more\nsecure than the original. But this is more about ticking off checkboxes, right?\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 31 May 2023 22:30:43 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: EXTERNAL: Re: Question - Does PostgreSQL have an Evaluation\n Assurance Level?"
},
{
"msg_contents": "Hi Laurenz,\r\n\r\nThanks for this information. That is correct, we are just ticking off the checkboxes at the moment but I appreciate your feedback.\r\n\r\nThanks again,\r\n\r\nNick\r\n\r\n\r\n-----Original Message-----\r\nFrom: Laurenz Albe <[email protected]> \r\nSent: Wednesday, May 31, 2023 4:31 PM\r\nTo: Mayer, Nicholas J (US) <[email protected]>; [email protected]\r\nSubject: EXTERNAL: Re: EXTERNAL: Re: Question - Does PostgreSQL have an Evaluation Assurance Level?\r\n\r\nOn Wed, 2023-05-31 at 19:51 +0000, Mayer, Nicholas J wrote:\r\n> We found out that while PostgreSQL does not have EAL, the 'Crunchy Data' does have EAL of 2.\r\n\r\nI see. I guess you are aware that a closed source fork of PostgreSQL is probably no more secure than the original. But this is more about ticking off checkboxes, right?\r\n\r\nYours,\r\nLaurenz Albe\r\n",
"msg_date": "Wed, 31 May 2023 20:41:09 +0000",
"msg_from": "\"Mayer, Nicholas J\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: EXTERNAL: Re: Question - Does PostgreSQL have an Evaluation\n Assurance\n Level?"
}
] |
[
{
"msg_contents": "I found an error similar to others before ([1]) that is still persists as of head right now (0bcb3ca3b9).\n\nCREATE TABLE t (\n\tn INTEGER\n);\n\nSELECT *\n FROM (VALUES (1)) t(c)\n LEFT JOIN t ljl1 ON true\n LEFT JOIN LATERAL (WITH cte AS (SELECT * FROM t WHERE t.n = ljl1.n) SELECT * FROM cte) ljl2 ON ljl1.n = 1;\n\nERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1\n\nNote that the error does **not** occur if the CTE is unwrapped like this:\n\nSELECT *\n FROM (VALUES (1)) t(c)\n LEFT JOIN t ljl1 ON true\n LEFT JOIN LATERAL (SELECT * FROM t WHERE t.n = ljl1.n) ljl2 ON ljl1.n = 1;\n\n-markus\n\n[1] https://www.postgresql.org/message-id/CAHewXNnu7u1aT==WjnCRa+SzKb6s80hvwPP_9eMvvvtdyFdqjw@mail.gmail.com\n\n\n\n\n",
"msg_date": "Tue, 30 May 2023 16:47:39 +0200",
"msg_from": "Markus Winand <[email protected]>",
"msg_from_op": true,
"msg_subject": "ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1"
},
{
"msg_contents": "On Tue, May 30, 2023 at 10:48 PM Markus Winand <[email protected]>\nwrote:\n\n> I found an error similar to others before ([1]) that is still persists as\n> of head right now (0bcb3ca3b9).\n>\n> CREATE TABLE t (\n> n INTEGER\n> );\n>\n> SELECT *\n> FROM (VALUES (1)) t(c)\n> LEFT JOIN t ljl1 ON true\n> LEFT JOIN LATERAL (WITH cte AS (SELECT * FROM t WHERE t.n = ljl1.n)\n> SELECT * FROM cte) ljl2 ON ljl1.n = 1;\n>\n> ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1\n>\n> Note that the error does **not** occur if the CTE is unwrapped like this:\n>\n> SELECT *\n> FROM (VALUES (1)) t(c)\n> LEFT JOIN t ljl1 ON true\n> LEFT JOIN LATERAL (SELECT * FROM t WHERE t.n = ljl1.n) ljl2 ON ljl1.n =\n> 1;\n\n\nThanks for the report! Reproduced here. Also it can be reproduced with\nsubquery, as long as the subquery is not pulled up.\n\nSELECT *\n FROM (VALUES (1)) t(c)\n LEFT JOIN t ljl1 ON true\n LEFT JOIN LATERAL (SELECT * FROM t WHERE t.n = ljl1.n offset 0) ljl2 ON\nljl1.n = 1;\nERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1\n\nWhen we transform the first form of identity 3 to the second form, we've\nconverted Pb*c to Pbc in deconstruct_distribute_oj_quals. But we\nneglect to consider that rel C might be a RTE_SUBQUERY and contains\nquals that have lateral references to B. So the B vars in such quals\nhave wrong nulling bitmaps and we'd finally notice that when we do\nfix_upper_expr for the NestLoopParam expressions.\n\nThanks\nRichard\n\nOn Tue, May 30, 2023 at 10:48 PM Markus Winand <[email protected]> wrote:I found an error similar to others before ([1]) that is still persists as of head right now (0bcb3ca3b9).\n\nCREATE TABLE t (\n n INTEGER\n);\n\nSELECT *\n FROM (VALUES (1)) t(c)\n LEFT JOIN t ljl1 ON true\n LEFT JOIN LATERAL (WITH cte AS (SELECT * FROM t WHERE t.n = ljl1.n) SELECT * FROM cte) ljl2 ON ljl1.n = 1;\n\nERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1\n\nNote that the error does **not** occur if the CTE is unwrapped like this:\n\nSELECT *\n FROM (VALUES (1)) t(c)\n LEFT JOIN t ljl1 ON true\n LEFT JOIN LATERAL (SELECT * FROM t WHERE t.n = ljl1.n) ljl2 ON ljl1.n = 1;Thanks for the report! Reproduced here. Also it can be reproduced withsubquery, as long as the subquery is not pulled up.SELECT * FROM (VALUES (1)) t(c) LEFT JOIN t ljl1 ON true LEFT JOIN LATERAL (SELECT * FROM t WHERE t.n = ljl1.n offset 0) ljl2 ON ljl1.n = 1;ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1When we transform the first form of identity 3 to the second form, we'veconverted Pb*c to Pbc in deconstruct_distribute_oj_quals. But weneglect to consider that rel C might be a RTE_SUBQUERY and containsquals that have lateral references to B. So the B vars in such qualshave wrong nulling bitmaps and we'd finally notice that when we dofix_upper_expr for the NestLoopParam expressions.ThanksRichard",
"msg_date": "Wed, 31 May 2023 10:47:03 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1"
},
{
"msg_contents": "On Wed, May 31, 2023 at 10:47 AM Richard Guo <[email protected]> wrote:\n\n> On Tue, May 30, 2023 at 10:48 PM Markus Winand <[email protected]>\n> wrote:\n>\n>> I found an error similar to others before ([1]) that is still persists as\n>> of head right now (0bcb3ca3b9).\n>>\n>> CREATE TABLE t (\n>> n INTEGER\n>> );\n>>\n>> SELECT *\n>> FROM (VALUES (1)) t(c)\n>> LEFT JOIN t ljl1 ON true\n>> LEFT JOIN LATERAL (WITH cte AS (SELECT * FROM t WHERE t.n = ljl1.n)\n>> SELECT * FROM cte) ljl2 ON ljl1.n = 1;\n>>\n>> ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1\n>>\n>> Note that the error does **not** occur if the CTE is unwrapped like this:\n>>\n>> SELECT *\n>> FROM (VALUES (1)) t(c)\n>> LEFT JOIN t ljl1 ON true\n>> LEFT JOIN LATERAL (SELECT * FROM t WHERE t.n = ljl1.n) ljl2 ON ljl1.n =\n>> 1;\n>\n>\n> Thanks for the report! Reproduced here. Also it can be reproduced with\n> subquery, as long as the subquery is not pulled up.\n>\n> SELECT *\n> FROM (VALUES (1)) t(c)\n> LEFT JOIN t ljl1 ON true\n> LEFT JOIN LATERAL (SELECT * FROM t WHERE t.n = ljl1.n offset 0) ljl2 ON\n> ljl1.n = 1;\n> ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1\n>\n> When we transform the first form of identity 3 to the second form, we've\n> converted Pb*c to Pbc in deconstruct_distribute_oj_quals. But we\n> neglect to consider that rel C might be a RTE_SUBQUERY and contains\n> quals that have lateral references to B. So the B vars in such quals\n> have wrong nulling bitmaps and we'd finally notice that when we do\n> fix_upper_expr for the NestLoopParam expressions.\n>\n\nWe can identify in which form of identity 3 the plan is built up by\nchecking the relids of the B/C join's outer rel. If it's in the first\nform, the outer rel's relids must contain the A/B join. Otherwise it\nshould only contain B's relid. So I'm considering that maybe we can\nadjust the nulling bitmap for nestloop parameters according to that.\n\nAttached is a patch for that. Does this make sense?\n\nThanks\nRichard",
"msg_date": "Wed, 31 May 2023 14:36:45 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1"
},
{
"msg_contents": "\n> On 31.05.2023, at 08:36, Richard Guo <[email protected]> wrote:\n> \n> Attached is a patch for that. Does this make sense?\n> \n> Thanks\n> Richard\n> <v1-0001-Fix-nulling-bitmap-for-nestloop-parameters.patch>\n\nAll I can say is that it fixes the error for me — also for the non-simplified original query that I have.\n\n-markus\n\n",
"msg_date": "Wed, 31 May 2023 09:03:56 +0200",
"msg_from": "Markus Winand <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> On Wed, May 31, 2023 at 10:47 AM Richard Guo <[email protected]> wrote:\n>> When we transform the first form of identity 3 to the second form, we've\n>> converted Pb*c to Pbc in deconstruct_distribute_oj_quals. But we\n>> neglect to consider that rel C might be a RTE_SUBQUERY and contains\n>> quals that have lateral references to B. So the B vars in such quals\n>> have wrong nulling bitmaps and we'd finally notice that when we do\n>> fix_upper_expr for the NestLoopParam expressions.\n\nRight. One question that immediately raises is whether it's even safe\nto apply the identity if C has lateral references to B, because that\nalmost certainly means that C will produce different output when\njoined to a nulled B row than when joined to a not-nulled row.\nI think that's okay because if the B row will fail Pab then it doesn't\nmatter what row(s) C produces, but maybe I've missed something.\n\n> We can identify in which form of identity 3 the plan is built up by\n> checking the relids of the B/C join's outer rel. If it's in the first\n> form, the outer rel's relids must contain the A/B join. Otherwise it\n> should only contain B's relid. So I'm considering that maybe we can\n> adjust the nulling bitmap for nestloop parameters according to that.\n> Attached is a patch for that. Does this make sense?\n\nHmm. I don't really want to do it in identify_current_nestloop_params\nbecause that gets applied to all nestloop params, so it seems like\nthat risks masking bugs of other kinds. I'd rather do it in\nprocess_subquery_nestloop_params, which we know is only applied to\nsubquery LATERAL references. So more or less as attached.\n\n(I dropped the equal() assertions in process_subquery_nestloop_params\nbecause they've never caught anything and it'd be too complicated to\nmake them still work.)\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 09 Jun 2023 12:08:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1"
},
{
"msg_contents": "On Sat, Jun 10, 2023 at 12:08 AM Tom Lane <[email protected]> wrote:\n\n> Richard Guo <[email protected]> writes:\n> > We can identify in which form of identity 3 the plan is built up by\n> > checking the relids of the B/C join's outer rel. If it's in the first\n> > form, the outer rel's relids must contain the A/B join. Otherwise it\n> > should only contain B's relid. So I'm considering that maybe we can\n> > adjust the nulling bitmap for nestloop parameters according to that.\n> > Attached is a patch for that. Does this make sense?\n>\n> Hmm. I don't really want to do it in identify_current_nestloop_params\n> because that gets applied to all nestloop params, so it seems like\n> that risks masking bugs of other kinds. I'd rather do it in\n> process_subquery_nestloop_params, which we know is only applied to\n> subquery LATERAL references. So more or less as attached.\n\n\nYeah, that makes sense. process_subquery_nestloop_params is a better\nplace to do this adjustments. +1 to v2 patch.\n\nThanks\nRichard\n\nOn Sat, Jun 10, 2023 at 12:08 AM Tom Lane <[email protected]> wrote:Richard Guo <[email protected]> writes:\n> We can identify in which form of identity 3 the plan is built up by\n> checking the relids of the B/C join's outer rel. If it's in the first\n> form, the outer rel's relids must contain the A/B join. Otherwise it\n> should only contain B's relid. So I'm considering that maybe we can\n> adjust the nulling bitmap for nestloop parameters according to that.\n> Attached is a patch for that. Does this make sense?\n\nHmm. I don't really want to do it in identify_current_nestloop_params\nbecause that gets applied to all nestloop params, so it seems like\nthat risks masking bugs of other kinds. I'd rather do it in\nprocess_subquery_nestloop_params, which we know is only applied to\nsubquery LATERAL references. So more or less as attached.Yeah, that makes sense. process_subquery_nestloop_params is a betterplace to do this adjustments. +1 to v2 patch.ThanksRichard",
"msg_date": "Mon, 12 Jun 2023 10:44:00 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> Yeah, that makes sense. process_subquery_nestloop_params is a better\n> place to do this adjustments. +1 to v2 patch.\n\nPushed, then.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Jun 2023 10:02:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1"
},
{
"msg_contents": "On Mon, Jun 12, 2023 at 10:02 PM Tom Lane <[email protected]> wrote:\n\n> Richard Guo <[email protected]> writes:\n> > Yeah, that makes sense. process_subquery_nestloop_params is a better\n> > place to do this adjustments. +1 to v2 patch.\n>\n> Pushed, then.\n\n\nOh, wait ... It occurred to me that we may have this same issue with\nMemoize cache keys. In get_memoize_path we collect the cache keys from\ninnerpath's ppi_clauses and innerrel's lateral_vars, and the latter may\ncontain nullingrel markers that need adjustment. As an example,\nconsider the query below\n\nexplain (costs off)\nselect * from onek t1\n left join onek t2 on true\n left join lateral\n (select * from onek t3 where t3.two = t2.two offset 0) s\n on t2.unique1 = 1;\nERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/3\n\nAttached is a patch that does the same adjustments to innerrel's\nlateral_vars before they are added to MemoizePath->param_exprs.\n\nI was wondering if there are more places that need this kind of\nadjustments. After some thoughts I believe the Memoize cache keys\nshould be the last one regarding adjustments to nestloop parameters.\nAFAICS the lateral references in origin query would go to two places,\none is plan_params and the other is lateral_vars. And now we've handled\nboth of them.\n\nThanks\nRichard",
"msg_date": "Tue, 13 Jun 2023 11:33:00 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> Oh, wait ... It occurred to me that we may have this same issue with\n> Memoize cache keys. In get_memoize_path we collect the cache keys from\n> innerpath's ppi_clauses and innerrel's lateral_vars, and the latter may\n> contain nullingrel markers that need adjustment. As an example,\n> consider the query below\n\n> explain (costs off)\n> select * from onek t1\n> left join onek t2 on true\n> left join lateral\n> (select * from onek t3 where t3.two = t2.two offset 0) s\n> on t2.unique1 = 1;\n> ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/3\n\nGood catch --- I'll take a closer look tomorrow.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Jun 2023 23:34:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1"
},
{
"msg_contents": "I wrote:\n> Richard Guo <[email protected]> writes:\n>> Oh, wait ... It occurred to me that we may have this same issue with\n>> Memoize cache keys.\n\n> Good catch --- I'll take a closer look tomorrow.\n\nPushed after a little more fiddling with the comments.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Jun 2023 18:02:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 6:02 AM Tom Lane <[email protected]> wrote:\n\n> I wrote:\n> > Richard Guo <[email protected]> writes:\n> >> Oh, wait ... It occurred to me that we may have this same issue with\n> >> Memoize cache keys.\n>\n> > Good catch --- I'll take a closer look tomorrow.\n>\n> Pushed after a little more fiddling with the comments.\n\n\nI just realized that we may still have holes in this area. Until now\nwe're mainly focusing on LATERAL subquery, in which case the lateral\nreference Vars are copied into rel->subplan_params and we've already\nadjusted the nulling bitmaps there. But what about the lateral\nreference Vars in other cases?\n\nIn extract_lateral_references() we consider 5 cases,\n\n /* Fetch the appropriate variables */\n if (rte->rtekind == RTE_RELATION)\n vars = pull_vars_of_level((Node *) rte->tablesample, 0);\n else if (rte->rtekind == RTE_SUBQUERY)\n vars = pull_vars_of_level((Node *) rte->subquery, 1);\n else if (rte->rtekind == RTE_FUNCTION)\n vars = pull_vars_of_level((Node *) rte->functions, 0);\n else if (rte->rtekind == RTE_TABLEFUNC)\n vars = pull_vars_of_level((Node *) rte->tablefunc, 0);\n else if (rte->rtekind == RTE_VALUES)\n vars = pull_vars_of_level((Node *) rte->values_lists, 0);\n else\n {\n Assert(false);\n return; /* keep compiler quiet */\n }\n\nWe've handled the second case, i.e., RTE_SUBQUERY. It's not hard to\ncompose a query for each of the other 4 cases that shows that we need to\nadjust the nulling bitmaps for them too.\n\n1. RTE_RELATION with tablesample\n\nexplain (costs off)\nselect * from int8_tbl t1\n left join int8_tbl t2 on true\n left join lateral\n (select * from int8_tbl t3 TABLESAMPLE SYSTEM (t2.q1)) s\n on t2.q1 = 1;\nERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1\n\n2. RTE_FUNCTION\n\nexplain (costs off)\nselect * from int8_tbl t1\n left join int8_tbl t2 on true\n left join lateral\n (select * from generate_series(t2.q1, 100)) s\n on t2.q1 = 1;\nERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1\n\n3. RTE_TABLEFUNC\n\nexplain (costs off)\nselect * from xmltest2 t1\n left join xmltest2 t2 on true\n left join lateral\n xmltable('/d/r' PASSING t2.x COLUMNS a int)\n on t2._path = 'a';\nERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1\n\n4. RTE_VALUES\n\nexplain (costs off)\nselect * from int8_tbl t1\n left join int8_tbl t2 on true\n left join lateral\n (select q1 from (values(t2.q1), (t2.q1)) v(q1)) s\n on t2.q1 = 1;\nERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1\n\nSo it seems that we need to do nullingrel adjustments in a more common\nplace.\n\nAlso, there might be lateral references in the tlist, so the query below\nis supposed to also encounter the 'wrong varnullingrels' error.\n\nexplain (costs off)\nselect * from int8_tbl t1\n left join int8_tbl t2 on true\n left join lateral\n (select t2.q1 from int8_tbl t3) s\n on t2.q1 = 1;\nserver closed the connection unexpectedly\n\nBut as we can see, it triggers the Assert in try_nestloop_path.\n\n/* If we got past that, we shouldn't have any unsafe outer-join refs */\nAssert(!have_unsafe_outer_join_ref(root, outerrelids, inner_paramrels));\n\nI think it exposes a new issue. It seems that we extract a problematic\nlateral_relids from lateral references within PlaceHolderVars in\ncreate_lateral_join_info. I doubt that we should use ph_lateral\ndirectly. 
It seems more reasonable to me that we strip outer-join\nrelids from ph_lateral and then use that for lateral_relids.\n\nAny thoughts?\n\nThanks\nRichard\n\nOn Wed, Jun 14, 2023 at 6:02 AM Tom Lane <[email protected]> wrote:I wrote:\n> Richard Guo <[email protected]> writes:\n>> Oh, wait ... It occurred to me that we may have this same issue with\n>> Memoize cache keys.\n\n> Good catch --- I'll take a closer look tomorrow.\n\nPushed after a little more fiddling with the comments.I just realized that we may still have holes in this area. Until nowwe're mainly focusing on LATERAL subquery, in which case the lateralreference Vars are copied into rel->subplan_params and we've alreadyadjusted the nulling bitmaps there. But what about the lateralreference Vars in other cases?In extract_lateral_references() we consider 5 cases, /* Fetch the appropriate variables */ if (rte->rtekind == RTE_RELATION) vars = pull_vars_of_level((Node *) rte->tablesample, 0); else if (rte->rtekind == RTE_SUBQUERY) vars = pull_vars_of_level((Node *) rte->subquery, 1); else if (rte->rtekind == RTE_FUNCTION) vars = pull_vars_of_level((Node *) rte->functions, 0); else if (rte->rtekind == RTE_TABLEFUNC) vars = pull_vars_of_level((Node *) rte->tablefunc, 0); else if (rte->rtekind == RTE_VALUES) vars = pull_vars_of_level((Node *) rte->values_lists, 0); else { Assert(false); return; /* keep compiler quiet */ }We've handled the second case, i.e., RTE_SUBQUERY. It's not hard tocompose a query for each of the other 4 cases that shows that we need toadjust the nulling bitmaps for them too.1. RTE_RELATION with tablesampleexplain (costs off)select * from int8_tbl t1 left join int8_tbl t2 on true left join lateral (select * from int8_tbl t3 TABLESAMPLE SYSTEM (t2.q1)) s on t2.q1 = 1;ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/12. RTE_FUNCTIONexplain (costs off)select * from int8_tbl t1 left join int8_tbl t2 on true left join lateral (select * from generate_series(t2.q1, 100)) s on t2.q1 = 1;ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/13. RTE_TABLEFUNCexplain (costs off)select * from xmltest2 t1 left join xmltest2 t2 on true left join lateral xmltable('/d/r' PASSING t2.x COLUMNS a int) on t2._path = 'a';ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/14. RTE_VALUESexplain (costs off)select * from int8_tbl t1 left join int8_tbl t2 on true left join lateral (select q1 from (values(t2.q1), (t2.q1)) v(q1)) s on t2.q1 = 1;ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1So it seems that we need to do nullingrel adjustments in a more commonplace.Also, there might be lateral references in the tlist, so the query belowis supposed to also encounter the 'wrong varnullingrels' error.explain (costs off)select * from int8_tbl t1 left join int8_tbl t2 on true left join lateral (select t2.q1 from int8_tbl t3) s on t2.q1 = 1;server closed the connection unexpectedlyBut as we can see, it triggers the Assert in try_nestloop_path./* If we got past that, we shouldn't have any unsafe outer-join refs */Assert(!have_unsafe_outer_join_ref(root, outerrelids, inner_paramrels));I think it exposes a new issue. It seems that we extract a problematiclateral_relids from lateral references within PlaceHolderVars increate_lateral_join_info. I doubt that we should use ph_lateraldirectly. It seems more reasonable to me that we strip outer-joinrelids from ph_lateral and then use that for lateral_relids.Any thoughts?ThanksRichard",
"msg_date": "Wed, 14 Jun 2023 14:55:36 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> I just realized that we may still have holes in this area. Until now\n> we're mainly focusing on LATERAL subquery, in which case the lateral\n> reference Vars are copied into rel->subplan_params and we've already\n> adjusted the nulling bitmaps there. But what about the lateral\n> reference Vars in other cases?\n\nUgh.\n\n> So it seems that we need to do nullingrel adjustments in a more common\n> place.\n\nI agree: this suggests that we fixed it in the wrong place.\n\n> I think it exposes a new issue. It seems that we extract a problematic\n> lateral_relids from lateral references within PlaceHolderVars in\n> create_lateral_join_info. I doubt that we should use ph_lateral\n> directly. It seems more reasonable to me that we strip outer-join\n> relids from ph_lateral and then use that for lateral_relids.\n\nHmm. I don't have time to think hard about this today, but this\ndoes feel similar to our existing decision that parameterized paths\nshould be generated with minimal nullingrels bits on their outer\nreferences. We only thought about pushed-down join clauses when we\ndid that. But a lateral ref necessarily gives rise to parameterized\npath(s), and what we seem to be seeing is that those need to be\nhandled just the same as ones generated by pushing down join clauses.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Jun 2023 09:06:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1"
},
{
"msg_contents": "I wrote:\n> Richard Guo <[email protected]> writes:\n>> So it seems that we need to do nullingrel adjustments in a more common\n>> place.\n\n> I agree: this suggests that we fixed it in the wrong place.\n\nSo pursuant to that, 0001 attached reverts the code changes from bfd332b3f\nand 63e4f13d2 (keeping the test cases and some unrelated comment fixes).\nThen the question is what to do instead. I've not come up with a better\nidea than to hack it in identify_current_nestloop_params (per 0002), as\nyou proposed upthread. I don't like this too much, as it's on the hairy\nedge of making setrefs.c's nullingrel cross-checks completely useless for\nNestLoopParams; but the alternatives aren't attractive either.\n\n>> I think it exposes a new issue. It seems that we extract a problematic\n>> lateral_relids from lateral references within PlaceHolderVars in\n>> create_lateral_join_info. I doubt that we should use ph_lateral\n>> directly. It seems more reasonable to me that we strip outer-join\n>> relids from ph_lateral and then use that for lateral_relids.\n\nI experimented with that (0003) and it fixes your example query.\nI think it is functionally okay, because the lateral_relids just need\nto be a sufficient subset of the lateral references' requirements to\nensure we can evaluate them where needed; other mechanisms should ensure\nthat the right sorts of joins happen. It seems a bit unsatisfying\nthough, especially given that we just largely lobotomized setrefs.c's\ncross-checks for these same references. I don't have a better idea\nhowever, and beta2 is fast approaching.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 19 Jun 2023 15:37:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: wrong varnullingrels (b 3) (expected (b)) for Var 2/1"
}
] |
[
{
"msg_contents": "I want to report on my on-the-plane-to-PGCon project.\n\nThe idea was mentioned in [0]. genbki.pl already knows everything about \nsystem catalog indexes. If we add a \"please also make a syscache for \nthis one\" flag to the catalog metadata, we can have genbki.pl produce \nthe tables in syscache.c and syscache.h automatically.\n\nAside from avoiding the cumbersome editing of those tables, I think this \nlayout is also conceptually cleaner, as you can more easily see which \nsystem catalog indexes have syscaches and maybe ask questions about why \nor why not.\n\nAs a possible follow-up, I have also started work on generating the \nObjectProperty structure in objectaddress.c. One of the things you need \nfor that is making genbki.pl aware of the syscache information. There \nis some more work to be done there, but it's looking promising.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/CA%2BhUKGKdpDjKL2jgC-GpoL4DGZU1YPqnOFHbDqFkfRQcPaR5DQ%40mail.gmail.com",
"msg_date": "Tue, 30 May 2023 17:57:53 -0400",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "generate syscache info automatically"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n\n> The idea was mentioned in [0]. genbki.pl already knows everything about\n> system catalog indexes. If we add a \"please also make a syscache for \n> this one\" flag to the catalog metadata, we can have genbki.pl produce\n> the tables in syscache.c and syscache.h automatically.\n\n+1 on this worthwhile reduction of manual work. Tangentially, it\nreminded me of one of my least favourite parts of Catalog.pm, the\nregexes in ParseHeader():\n\n\n> diff --git a/src/backend/catalog/Catalog.pm b/src/backend/catalog/Catalog.pm\n> index 84aaeb002a..a727d692b7 100644\n> --- a/src/backend/catalog/Catalog.pm\n> +++ b/src/backend/catalog/Catalog.pm\n> @@ -110,7 +110,7 @@ sub ParseHeader\n> \t\t\t };\n> \t\t}\n> \t\telsif (\n> -\t\t\t/^DECLARE_(UNIQUE_)?INDEX(_PKEY)?\\(\\s*(\\w+),\\s*(\\d+),\\s*(\\w+),\\s*(.+)\\)/\n> +\t\t\t/^DECLARE_(UNIQUE_)?INDEX(_PKEY)?\\(\\s*(\\w+),\\s*(\\d+),\\s*(\\w+),\\s*(\\w+),\\s*(.+)\\)/\n> \t\t )\n> \t\t{\n> \t\t\tpush @{ $catalog{indexing} },\n> @@ -120,7 +120,8 @@ sub ParseHeader\n> \t\t\t\tindex_name => $3,\n> \t\t\t\tindex_oid => $4,\n> \t\t\t\tindex_oid_macro => $5,\n> -\t\t\t\tindex_decl => $6\n> +\t\t\t\ttable_name => $6,\n> +\t\t\t\tindex_decl => $7\n> \t\t\t };\n> \t\t}\n> \t\telsif (/^DECLARE_OID_DEFINING_MACRO\\(\\s*(\\w+),\\s*(\\d+)\\)/)\n\n\nNow that we require Perl 5.14, we could replace this parenthesis-\ncounting nightmare with named captures (introduced in Perl 5.10), which\nwould make the above change look like this instead (context expanded to\nshow the whole elsif block):\n\n \t\telsif (\n \t\t\t/^DECLARE_(UNIQUE_)?INDEX(_PKEY)?\\(\\s*\n \t\t\t (?<index_name>\\w+),\\s*\n \t\t\t (?<index_oid>\\d+),\\s*\n \t\t\t (?<index_oid_macro>\\w+),\\s*\n+\t\t\t (?<table_name>\\w+),\\s*\n \t\t\t (?<index_decl>.+)\n \t\t\t \\)/x\n \t\t )\n \t\t{\n \t\t\tpush @{ $catalog{indexing} },\n \t\t\t {\n \t\t\t\tis_unique => $1 ? 1 : 0,\n \t\t\t\tis_pkey => $2 ? 1 : 0,\n \t\t\t\t%+,\n \t\t\t };\n \t\t}\n\nFor other patterns without the optional bits in the keyword, it becomes\neven simpler, e.g.\n\n\t\tif (/^DECLARE_TOAST\\(\\s*\n\t\t\t (?<parent_table>\\w+),\\s*\n\t\t\t (?<toast_oid>\\d+),\\s*\n\t\t\t (?<toast_index_oid>\\d+)\\s*\n\t\t\t \\)/x\n\t\t )\n\t\t{\n\t\t\tpush @{ $catalog{toasting} }, {%+};\n\t \t}\n\n\nI'd be happy to submit a patch to do this for all the ParseHeader()\nregexes (in a separate thread) if others agree this is an improvement.\n\n- ilmari\n\n\n",
"msg_date": "Wed, 31 May 2023 18:02:57 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: generate syscache info automatically"
},
{
"msg_contents": "On 31.05.23 13:02, Dagfinn Ilmari Mannsåker wrote:\n> For other patterns without the optional bits in the keyword, it becomes\n> even simpler, e.g.\n> \n> \t\tif (/^DECLARE_TOAST\\(\\s*\n> \t\t\t (?<parent_table>\\w+),\\s*\n> \t\t\t (?<toast_oid>\\d+),\\s*\n> \t\t\t (?<toast_index_oid>\\d+)\\s*\n> \t\t\t \\)/x\n> \t\t )\n> \t\t{\n> \t\t\tpush @{ $catalog{toasting} }, {%+};\n> \t \t}\n> \n> \n> I'd be happy to submit a patch to do this for all the ParseHeader()\n> regexes (in a separate thread) if others agree this is an improvement.\n\nI would welcome such a patch.\n\n\n\n",
"msg_date": "Wed, 31 May 2023 17:24:01 -0400",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: generate syscache info automatically"
},
{
"msg_contents": "On Wed, May 31, 2023 at 4:58 AM Peter Eisentraut <[email protected]>\nwrote:\n>\n> I want to report on my on-the-plane-to-PGCon project.\n>\n> The idea was mentioned in [0]. genbki.pl already knows everything about\n> system catalog indexes. If we add a \"please also make a syscache for\n> this one\" flag to the catalog metadata, we can have genbki.pl produce\n> the tables in syscache.c and syscache.h automatically.\n>\n> Aside from avoiding the cumbersome editing of those tables, I think this\n> layout is also conceptually cleaner, as you can more easily see which\n> system catalog indexes have syscaches and maybe ask questions about why\n> or why not.\n\nWhen this has come up before, one objection was that index declarations\nshouldn't know about cache names and bucket sizes [1]. The second paragraph\nabove makes a reasonable case for that, however. I believe one alternative\nidea was for a script to read the enum, which would look something like\nthis:\n\n#define DECLARE_SYSCACHE(cacheid,indexname,numbuckets) cacheid\n\nenum SysCacheIdentifier\n{\nDECLARE_SYSCACHE(AGGFNOID, pg_aggregate_fnoid_index, 16) = 0,\n...\n};\n\n...which would then look up the other info in the usual way from Catalog.pm.\n\n> As a possible follow-up, I have also started work on generating the\n> ObjectProperty structure in objectaddress.c. One of the things you need\n> for that is making genbki.pl aware of the syscache information. There\n> is some more work to be done there, but it's looking promising.\n\nI haven't studied this, but it seems interesting.\n\nOne other possible improvement: syscache.c has a bunch of #include's, one\nfor each catalog with a cache, so there's still a bit of manual work in\nadding a cache, and the current #include list is a bit cumbersome. Perhaps\nit's worth it to have the script emit them as well?\n\nI also wonder if at some point it will make sense to split off a separate\nscript(s) for some things that are unrelated to the bootstrap data.\ngenbki.pl is getting pretty large, and there are additional things that\ncould be done with syscaches, e.g. inlined eq/hash functions for cache\nlookup [2].\n\n[1] https://www.postgresql.org/message-id/[email protected]\n[2]\nhttps://www.postgresql.org/message-id/20210831205906.4wk3s4lvgzkdaqpi%40alap3.anarazel.de\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, May 31, 2023 at 4:58 AM Peter Eisentraut <[email protected]> wrote:>> I want to report on my on-the-plane-to-PGCon project.>> The idea was mentioned in [0]. genbki.pl already knows everything about> system catalog indexes. If we add a \"please also make a syscache for> this one\" flag to the catalog metadata, we can have genbki.pl produce> the tables in syscache.c and syscache.h automatically.>> Aside from avoiding the cumbersome editing of those tables, I think this> layout is also conceptually cleaner, as you can more easily see which> system catalog indexes have syscaches and maybe ask questions about why> or why not.When this has come up before, one objection was that index declarations shouldn't know about cache names and bucket sizes [1]. The second paragraph above makes a reasonable case for that, however. 
I believe one alternative idea was for a script to read the enum, which would look something like this:#define DECLARE_SYSCACHE(cacheid,indexname,numbuckets) cacheidenum SysCacheIdentifier{DECLARE_SYSCACHE(AGGFNOID, pg_aggregate_fnoid_index, 16) = 0,...};...which would then look up the other info in the usual way from Catalog.pm.> As a possible follow-up, I have also started work on generating the> ObjectProperty structure in objectaddress.c. One of the things you need> for that is making genbki.pl aware of the syscache information. There> is some more work to be done there, but it's looking promising.I haven't studied this, but it seems interesting.One other possible improvement: syscache.c has a bunch of #include's, one for each catalog with a cache, so there's still a bit of manual work in adding a cache, and the current #include list is a bit cumbersome. Perhaps it's worth it to have the script emit them as well?I also wonder if at some point it will make sense to split off a separate script(s) for some things that are unrelated to the bootstrap data. genbki.pl is getting pretty large, and there are additional things that could be done with syscaches, e.g. inlined eq/hash functions for cache lookup [2].[1] https://www.postgresql.org/message-id/[email protected][2] https://www.postgresql.org/message-id/20210831205906.4wk3s4lvgzkdaqpi%40alap3.anarazel.de--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 15 Jun 2023 14:45:22 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: generate syscache info automatically"
},
{
"msg_contents": "Here is an updated patch set that adjusts for the recently introduced \nnamed captures. I will address the other issues later. I think the \nfirst few patches in the series can be considered nonetheless.\n\n\nOn 15.06.23 09:45, John Naylor wrote:\n> On Wed, May 31, 2023 at 4:58 AM Peter Eisentraut <[email protected] \n> <mailto:[email protected]>> wrote:\n> >\n> > I want to report on my on-the-plane-to-PGCon project.\n> >\n> > The idea was mentioned in [0]. genbki.pl <http://genbki.pl> already \n> knows everything about\n> > system catalog indexes. If we add a \"please also make a syscache for\n> > this one\" flag to the catalog metadata, we can have genbki.pl \n> <http://genbki.pl> produce\n> > the tables in syscache.c and syscache.h automatically.\n> >\n> > Aside from avoiding the cumbersome editing of those tables, I think this\n> > layout is also conceptually cleaner, as you can more easily see which\n> > system catalog indexes have syscaches and maybe ask questions about why\n> > or why not.\n> \n> When this has come up before, one objection was that index declarations \n> shouldn't know about cache names and bucket sizes [1]. The second \n> paragraph above makes a reasonable case for that, however. I believe one \n> alternative idea was for a script to read the enum, which would look \n> something like this:\n> \n> #define DECLARE_SYSCACHE(cacheid,indexname,numbuckets) cacheid\n> \n> enum SysCacheIdentifier\n> {\n> DECLARE_SYSCACHE(AGGFNOID, pg_aggregate_fnoid_index, 16) = 0,\n> ...\n> };\n> \n> ...which would then look up the other info in the usual way from Catalog.pm.\n> \n> > As a possible follow-up, I have also started work on generating the\n> > ObjectProperty structure in objectaddress.c. One of the things you need\n> > for that is making genbki.pl <http://genbki.pl> aware of the syscache \n> information. There\n> > is some more work to be done there, but it's looking promising.\n> \n> I haven't studied this, but it seems interesting.\n> \n> One other possible improvement: syscache.c has a bunch of #include's, \n> one for each catalog with a cache, so there's still a bit of manual work \n> in adding a cache, and the current #include list is a bit cumbersome. \n> Perhaps it's worth it to have the script emit them as well?\n> \n> I also wonder if at some point it will make sense to split off a \n> separate script(s) for some things that are unrelated to the bootstrap \n> data. genbki.pl <http://genbki.pl> is getting pretty large, and there \n> are additional things that could be done with syscaches, e.g. inlined \n> eq/hash functions for cache lookup [2].\n> \n> [1] https://www.postgresql.org/message-id/[email protected] \n> <https://www.postgresql.org/message-id/[email protected]>\n> [2] \n> https://www.postgresql.org/message-id/20210831205906.4wk3s4lvgzkdaqpi%40alap3.anarazel.de <https://www.postgresql.org/message-id/20210831205906.4wk3s4lvgzkdaqpi%40alap3.anarazel.de>\n> \n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com <http://www.enterprisedb.com>",
"msg_date": "Mon, 3 Jul 2023 07:45:39 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: generate syscache info automatically"
},
{
"msg_contents": "On 03.07.23 07:45, Peter Eisentraut wrote:\n> Here is an updated patch set that adjusts for the recently introduced \n> named captures. I will address the other issues later. I think the \n> first few patches in the series can be considered nonetheless.\n\nI have committed the 0001 patch, which was really a (code comment) bug fix.\n\nI think the patches 0002 and 0003 should be uncontroversial, so I'd like \nto commit them if no one objects.\n\nThe remaining patches are still WIP.\n\n\n> On 15.06.23 09:45, John Naylor wrote:\n>> On Wed, May 31, 2023 at 4:58 AM Peter Eisentraut <[email protected] \n>> <mailto:[email protected]>> wrote:\n>> >\n>> > I want to report on my on-the-plane-to-PGCon project.\n>> >\n>> > The idea was mentioned in [0]. genbki.pl <http://genbki.pl> already \n>> knows everything about\n>> > system catalog indexes. If we add a \"please also make a syscache for\n>> > this one\" flag to the catalog metadata, we can have genbki.pl \n>> <http://genbki.pl> produce\n>> > the tables in syscache.c and syscache.h automatically.\n>> >\n>> > Aside from avoiding the cumbersome editing of those tables, I think \n>> this\n>> > layout is also conceptually cleaner, as you can more easily see which\n>> > system catalog indexes have syscaches and maybe ask questions about \n>> why\n>> > or why not.\n>>\n>> When this has come up before, one objection was that index \n>> declarations shouldn't know about cache names and bucket sizes [1]. \n>> The second paragraph above makes a reasonable case for that, however. \n>> I believe one alternative idea was for a script to read the enum, \n>> which would look something like this:\n>>\n>> #define DECLARE_SYSCACHE(cacheid,indexname,numbuckets) cacheid\n>>\n>> enum SysCacheIdentifier\n>> {\n>> DECLARE_SYSCACHE(AGGFNOID, pg_aggregate_fnoid_index, 16) = 0,\n>> ...\n>> };\n>>\n>> ...which would then look up the other info in the usual way from \n>> Catalog.pm.\n>>\n>> > As a possible follow-up, I have also started work on generating the\n>> > ObjectProperty structure in objectaddress.c. One of the things you \n>> need\n>> > for that is making genbki.pl <http://genbki.pl> aware of the \n>> syscache information. There\n>> > is some more work to be done there, but it's looking promising.\n>>\n>> I haven't studied this, but it seems interesting.\n>>\n>> One other possible improvement: syscache.c has a bunch of #include's, \n>> one for each catalog with a cache, so there's still a bit of manual \n>> work in adding a cache, and the current #include list is a bit \n>> cumbersome. Perhaps it's worth it to have the script emit them as well?\n>>\n>> I also wonder if at some point it will make sense to split off a \n>> separate script(s) for some things that are unrelated to the bootstrap \n>> data. genbki.pl <http://genbki.pl> is getting pretty large, and there \n>> are additional things that could be done with syscaches, e.g. inlined \n>> eq/hash functions for cache lookup [2].\n>>\n>> [1] \n>> https://www.postgresql.org/message-id/[email protected] \n>> <https://www.postgresql.org/message-id/[email protected]>\n>> [2] \n>> https://www.postgresql.org/message-id/20210831205906.4wk3s4lvgzkdaqpi%40alap3.anarazel.de <https://www.postgresql.org/message-id/20210831205906.4wk3s4lvgzkdaqpi%40alap3.anarazel.de>\n>>\n>> -- \n>> John Naylor\n>> EDB: http://www.enterprisedb.com <http://www.enterprisedb.com>\n\n\n\n",
"msg_date": "Thu, 24 Aug 2023 16:03:29 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: generate syscache info automatically"
},
{
"msg_contents": "I have committed 0002 and 0003, and also a small bug fix in the \nObjectProperty entry for \"transforms\".\n\nI have also gotten the automatic generation of the ObjectProperty lookup \ntable working (with some warts).\n\nAttached is an updated patch set.\n\nOne win here is that the ObjectProperty lookup now uses a hash function \ninstead of a linear search. So hopefully the performance is better (to \nbe confirmed) and it could now be used for things like [0].\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/[email protected]\n\nThere was some discussion about whether the catalog files are the right \nplace to keep syscache information. Personally, I would find that more \nconvenient. (Note that we also recently moved the definitions of \nindexes and toast tables from files with the whole list into the various \ncatalog files.) But we can also keep it somewhere else. The important \nthing is that Catalog.pm can find it somewhere in a structured form.\n\nTo finish up the ObjectProperty generation, we also need to store some \nmore data, namely the OBJECT_* mappings. We probably do not want to \nstore those in the catalog headers; that looks like a layering violation \nto me.\n\nSo, there is some discussion to be had about where to put all this \nacross various use cases.\n\n\nOn 24.08.23 16:03, Peter Eisentraut wrote:\n> On 03.07.23 07:45, Peter Eisentraut wrote:\n>> Here is an updated patch set that adjusts for the recently introduced \n>> named captures. I will address the other issues later. I think the \n>> first few patches in the series can be considered nonetheless.\n> \n> I have committed the 0001 patch, which was really a (code comment) bug fix.\n> \n> I think the patches 0002 and 0003 should be uncontroversial, so I'd like \n> to commit them if no one objects.\n> \n> The remaining patches are still WIP.\n> \n> \n>> On 15.06.23 09:45, John Naylor wrote:\n>>> On Wed, May 31, 2023 at 4:58 AM Peter Eisentraut \n>>> <[email protected] <mailto:[email protected]>> wrote:\n>>> >\n>>> > I want to report on my on-the-plane-to-PGCon project.\n>>> >\n>>> > The idea was mentioned in [0]. genbki.pl <http://genbki.pl> \n>>> already knows everything about\n>>> > system catalog indexes. If we add a \"please also make a syscache for\n>>> > this one\" flag to the catalog metadata, we can have genbki.pl \n>>> <http://genbki.pl> produce\n>>> > the tables in syscache.c and syscache.h automatically.\n>>> >\n>>> > Aside from avoiding the cumbersome editing of those tables, I \n>>> think this\n>>> > layout is also conceptually cleaner, as you can more easily see which\n>>> > system catalog indexes have syscaches and maybe ask questions \n>>> about why\n>>> > or why not.\n>>>\n>>> When this has come up before, one objection was that index \n>>> declarations shouldn't know about cache names and bucket sizes [1]. \n>>> The second paragraph above makes a reasonable case for that, however. \n>>> I believe one alternative idea was for a script to read the enum, \n>>> which would look something like this:\n>>>\n>>> #define DECLARE_SYSCACHE(cacheid,indexname,numbuckets) cacheid\n>>>\n>>> enum SysCacheIdentifier\n>>> {\n>>> DECLARE_SYSCACHE(AGGFNOID, pg_aggregate_fnoid_index, 16) = 0,\n>>> ...\n>>> };\n>>>\n>>> ...which would then look up the other info in the usual way from \n>>> Catalog.pm.\n>>>\n>>> > As a possible follow-up, I have also started work on generating the\n>>> > ObjectProperty structure in objectaddress.c. 
One of the things \n>>> you need\n>>> > for that is making genbki.pl <http://genbki.pl> aware of the \n>>> syscache information. There\n>>> > is some more work to be done there, but it's looking promising.\n>>>\n>>> I haven't studied this, but it seems interesting.\n>>>\n>>> One other possible improvement: syscache.c has a bunch of #include's, \n>>> one for each catalog with a cache, so there's still a bit of manual \n>>> work in adding a cache, and the current #include list is a bit \n>>> cumbersome. Perhaps it's worth it to have the script emit them as well?\n>>>\n>>> I also wonder if at some point it will make sense to split off a \n>>> separate script(s) for some things that are unrelated to the \n>>> bootstrap data. genbki.pl <http://genbki.pl> is getting pretty large, \n>>> and there are additional things that could be done with syscaches, \n>>> e.g. inlined eq/hash functions for cache lookup [2].\n>>>\n>>> [1] \n>>> https://www.postgresql.org/message-id/[email protected] \n>>> <https://www.postgresql.org/message-id/[email protected]>\n>>> [2] \n>>> https://www.postgresql.org/message-id/20210831205906.4wk3s4lvgzkdaqpi%40alap3.anarazel.de <https://www.postgresql.org/message-id/20210831205906.4wk3s4lvgzkdaqpi%40alap3.anarazel.de>\n>>>\n>>> -- \n>>> John Naylor\n>>> EDB: http://www.enterprisedb.com <http://www.enterprisedb.com>\n> \n> \n>",
"msg_date": "Thu, 31 Aug 2023 13:23:00 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: generate syscache info automatically"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 6:23 PM Peter Eisentraut <[email protected]>\nwrote:\n\n> Attached is an updated patch set.\n>\n> One win here is that the ObjectProperty lookup now uses a hash function\n> instead of a linear search. So hopefully the performance is better (to\n> be confirmed) and it could now be used for things like [0].\n\n+ # XXX This one neither, but if I add it to @skip, PerfectHash will fail.\n(???)\n+ #FIXME: AttributeRelationId\n\nI took a quick look at this, and attached is the least invasive way to get\nit working for now, which is to bump the table size slightly. The comment\nsays doing this isn't reliable, but it happens to work in this case.\nPlaying around with the functions is hit-or-miss, and when that fails,\nsomehow the larger table saves the day.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 1 Sep 2023 18:31:14 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: generate syscache info automatically"
},
{
"msg_contents": "I wrote:\n\n> + # XXX This one neither, but if I add it to @skip, PerfectHash will\nfail. (???)\n> + #FIXME: AttributeRelationId\n>\n> I took a quick look at this, and attached is the least invasive way to\nget it working for now, which is to bump the table size slightly. The\ncomment says doing this isn't reliable, but it happens to work in this\ncase. Playing around with the functions is hit-or-miss, and when that\nfails, somehow the larger table saves the day.\n\nTo while away a rainy day, I poked at this a bit more and found the input\nis pathological with our current methods. Even with a large-ish exhaustive\nsearch, the two success are strange in that they only succeeded by\naccidentally bumping the table size up to where I got it to work before\n(77):\n\nWith multipliers (5, 19), it recognizes that the initial table size (75) is\na multiple of 5, so increases the table size to 76, which is a multiple of\n19, so it increases it again to 77 and succeeds.\n\nSame with (3, 76): 75 is a multiple of 3, so up to 76, which of course\ndivides 76, so bumps it to 77 likewise.\n\nTurning the loop into\n\na = (a ^ c) * 257;\nb = (b ^ c) * X;\n\n...seems to work very well.\n\nIn fact, now trying some powers-of-two for X before the primes works most\nof the time with our inputs, even for some unicode normalization functions,\non the first seed iteration. That likely won't make any difference in\npractice, but it's an interesting demo. I've attached these two draft ideas\nas text.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Sat, 2 Sep 2023 17:37:50 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: generate syscache info automatically"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 6:23 PM Peter Eisentraut <[email protected]>\nwrote:\n\n> Attached is an updated patch set.\n\n> There was some discussion about whether the catalog files are the right\n> place to keep syscache information. Personally, I would find that more\n> convenient. (Note that we also recently moved the definitions of\n> indexes and toast tables from files with the whole list into the various\n> catalog files.) But we can also keep it somewhere else. The important\n> thing is that Catalog.pm can find it somewhere in a structured form.\n\nI don't have anything further to say on how to fit it together, but I'll go\nahead share some other comments from a quick read of v3-0003:\n\n+ # XXX hardcoded exceptions\n+ # extension doesn't belong to extnamespace\n+ $attnum_namespace = undef if $catname eq 'pg_extension';\n+ # pg_database owner is spelled datdba\n+ $attnum_owner = \"Anum_pg_database_datdba\" if $catname eq 'pg_database';\n\nThese exceptions seem like true exceptions...\n\n+ # XXX?\n+ $name_syscache = \"SUBSCRIPTIONNAME\" if $catname eq 'pg_subscription';\n+ # XXX?\n+ $is_nsp_name_unique = 1 if $catname eq 'pg_collation';\n+ $is_nsp_name_unique = 1 if $catname eq 'pg_opclass';\n+ $is_nsp_name_unique = 1 if $catname eq 'pg_opfamily';\n+ $is_nsp_name_unique = 1 if $catname eq 'pg_subscription';\n\n... but as the XXX conveys, these indicate a failure to do the right thing.\nTrying to derive this info from the index declarations is currently\nfragile. E.g. making one small change to this regex:\n\n- if ($index->{index_decl} =~ /\\(\\w+name name_ops(,\n\\w+namespace oid_ops)?\\)/)\n+ if ($index->{index_decl} =~ /\\w+name name_ops(,\n\\w+namespace oid_ops)?\\)/)\n\n...causes \"is_nsp_name_unique\" to flip from false to true, and/or sets\n\"name_catcache_id\" where it wasn't before, for several entries. It's not\nquite clear what the \"rule\" is intended to be, or whether it's robust going\nforward.\n\nThat being the case, perhaps some effort should also be made to make it\neasy to verify the output matches HEAD (or rather, v3-0001). This is now\ndifficult because of the C99 \".foo = bar\" syntax, plus the alphabetical\nordering.\n\n+foreach my $catname (@catnames)\n+{\n+ print $objectproperty_info_fh qq{#include \"catalog/${catname}_d.h\"\\n};\n+}\n\nAssuming we have a better way to know which catalogs need\nobject properties, is there a todo to only #include those?\n\n> To finish up the ObjectProperty generation, we also need to store some\n> more data, namely the OBJECT_* mappings. We probably do not want to\n> store those in the catalog headers; that looks like a layering violation\n> to me.\n\nPossibly, but it's also convenient and, one could argue, more\nstraightforward than storing syscache id and num_buckets in the index info.\n\nOne last thing: I did try to make the hash function use uint16 for the key\n(to reduce loop iterations on nul bytes), and that didn't help, so we are\nleft with the ideas I mentioned earlier.\n\n(not changing CF status, because nothing specific is really required to\nchange at the moment, some things up in the air)\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Aug 31, 2023 at 6:23 PM Peter Eisentraut <[email protected]> wrote:> Attached is an updated patch set.> There was some discussion about whether the catalog files are the right> place to keep syscache information. Personally, I would find that more> convenient. 
(Note that we also recently moved the definitions of> indexes and toast tables from files with the whole list into the various> catalog files.) But we can also keep it somewhere else. The important> thing is that Catalog.pm can find it somewhere in a structured form.I don't have anything further to say on how to fit it together, but I'll go ahead share some other comments from a quick read of v3-0003:+\t# XXX hardcoded exceptions+\t# extension doesn't belong to extnamespace+\t$attnum_namespace = undef if $catname eq 'pg_extension';+\t# pg_database owner is spelled datdba+\t$attnum_owner = \"Anum_pg_database_datdba\" if $catname eq 'pg_database';These exceptions seem like true exceptions...+\t# XXX?+\t$name_syscache = \"SUBSCRIPTIONNAME\" if $catname eq 'pg_subscription';+\t# XXX?+\t$is_nsp_name_unique = 1 if $catname eq 'pg_collation';+\t$is_nsp_name_unique = 1 if $catname eq 'pg_opclass';+\t$is_nsp_name_unique = 1 if $catname eq 'pg_opfamily';+\t$is_nsp_name_unique = 1 if $catname eq 'pg_subscription';... but as the XXX conveys, these indicate a failure to do the right thing. Trying to derive this info from the index declarations is currently fragile. E.g. making one small change to this regex:- if ($index->{index_decl} =~ /\\(\\w+name name_ops(, \\w+namespace oid_ops)?\\)/)+ if ($index->{index_decl} =~ /\\w+name name_ops(, \\w+namespace oid_ops)?\\)/)...causes \"is_nsp_name_unique\" to flip from false to true, and/or sets \"name_catcache_id\" where it wasn't before, for several entries. It's not quite clear what the \"rule\" is intended to be, or whether it's robust going forward. That being the case, perhaps some effort should also be made to make it easy to verify the output matches HEAD (or rather, v3-0001). This is now difficult because of the C99 \".foo = bar\" syntax, plus the alphabetical ordering.+foreach my $catname (@catnames)+{+\tprint $objectproperty_info_fh qq{#include \"catalog/${catname}_d.h\"\\n};+}Assuming we have a better way to know which catalogs need object properties, is there a todo to only #include those?> To finish up the ObjectProperty generation, we also need to store some> more data, namely the OBJECT_* mappings. We probably do not want to> store those in the catalog headers; that looks like a layering violation> to me.Possibly, but it's also convenient and, one could argue, more straightforward than storing syscache id and num_buckets in the index info. One last thing: I did try to make the hash function use uint16 for the key (to reduce loop iterations on nul bytes), and that didn't help, so we are left with the ideas I mentioned earlier.(not changing CF status, because nothing specific is really required to change at the moment, some things up in the air)--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 11 Sep 2023 18:02:18 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: generate syscache info automatically"
},
{
"msg_contents": "Here is a rebased patch set, along with a summary of the questions I \nhave about these patches:\n\n\nv4-0001-Generate-syscache-info-from-catalog-files.patch\n\nWhat's a good syntax to declare a syscache? Currently, I have, for example\n\n-DECLARE_UNIQUE_INDEX_PKEY(pg_type_oid_index, 2703, TypeOidIndexId, \npg_type, btree(oid oid_ops));\n+DECLARE_UNIQUE_INDEX_PKEY(pg_type_oid_index, 2703, TypeOidIndexId, \npg_type, btree(oid oid_ops), TYPEOID, 64);\n\nI suppose a sensible alternative could be to leave the\nDECLARE_..._INDEX... alone and make a separate statement, like\n\nMAKE_SYSCACHE(pg_type_oid_index, TYPEOID, 64);\n\nThat's at least visually easier, because some of those\nDECLARE_... lines are getting pretty long.\n\nI would like to keep those MAKE_SYSCACHE lines in the catalog files.\nThat just makes it easier to reference everything together.\n\n\nv4-0002-Generate-ObjectProperty-from-catalog-files.patch\n\nSeveral questions here:\n\n* What's a good way to declare the mapping between catalog and object\n type?\n\n* How to select which catalogs have ObjectProperty entries generated?\n\n* Where to declare class descriptions? Or just keep the current hack in\n the patch.\n\n* How to declare the purpose of a catalog column, like \"this is the ACL\n column for this catalog\". This is currently done by name, but maybe\n it should be more explicit.\n\n* Question about how to pick the correct value for is_nsp_name_unique?\n\nThis second patch is clearly still WIP. I hope the first patch can be \nfinished soon, however.\n\n\nI was also amused to find the original commit fa351d5a0db that \nintroduced ObjectProperty, which contains the following comment:\n\n Performance isn't critical here, so there's no need to be smart\n about how we do the search.\n\nwhich I'm now trying to amend.\n\n\nOn 11.09.23 07:02, John Naylor wrote:\n> \n> On Thu, Aug 31, 2023 at 6:23 PM Peter Eisentraut <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> > Attached is an updated patch set.\n> \n> > There was some discussion about whether the catalog files are the right\n> > place to keep syscache information. Personally, I would find that more\n> > convenient. (Note that we also recently moved the definitions of\n> > indexes and toast tables from files with the whole list into the various\n> > catalog files.) But we can also keep it somewhere else. The important\n> > thing is that Catalog.pm can find it somewhere in a structured form.\n> \n> I don't have anything further to say on how to fit it together, but I'll \n> go ahead share some other comments from a quick read of v3-0003:\n> \n> + # XXX hardcoded exceptions\n> + # extension doesn't belong to extnamespace\n> + $attnum_namespace = undef if $catname eq 'pg_extension';\n> + # pg_database owner is spelled datdba\n> + $attnum_owner = \"Anum_pg_database_datdba\" if $catname eq 'pg_database';\n> \n> These exceptions seem like true exceptions...\n> \n> + # XXX?\n> + $name_syscache = \"SUBSCRIPTIONNAME\" if $catname eq 'pg_subscription';\n> + # XXX?\n> + $is_nsp_name_unique = 1 if $catname eq 'pg_collation';\n> + $is_nsp_name_unique = 1 if $catname eq 'pg_opclass';\n> + $is_nsp_name_unique = 1 if $catname eq 'pg_opfamily';\n> + $is_nsp_name_unique = 1 if $catname eq 'pg_subscription';\n> \n> ... but as the XXX conveys, these indicate a failure to do the right \n> thing. Trying to derive this info from the index declarations is \n> currently fragile. E.g. 
making one small change to this regex:\n> \n> - if ($index->{index_decl} =~ /\\(\\w+name name_ops(, \n> \\w+namespace oid_ops)?\\)/)\n> + if ($index->{index_decl} =~ /\\w+name name_ops(, \n> \\w+namespace oid_ops)?\\)/)\n> \n> ...causes \"is_nsp_name_unique\" to flip from false to true, and/or sets \n> \"name_catcache_id\" where it wasn't before, for several entries. It's not \n> quite clear what the \"rule\" is intended to be, or whether it's robust \n> going forward.\n> \n> That being the case, perhaps some effort should also be made to make it \n> easy to verify the output matches HEAD (or rather, v3-0001). This is now \n> difficult because of the C99 \".foo = bar\" syntax, plus the alphabetical \n> ordering.\n> \n> +foreach my $catname (@catnames)\n> +{\n> + print $objectproperty_info_fh qq{#include \"catalog/${catname}_d.h\"\\n};\n> +}\n> \n> Assuming we have a better way to know which catalogs need \n> object properties, is there a todo to only #include those?\n> \n> > To finish up the ObjectProperty generation, we also need to store some\n> > more data, namely the OBJECT_* mappings. We probably do not want to\n> > store those in the catalog headers; that looks like a layering violation\n> > to me.\n> \n> Possibly, but it's also convenient and, one could argue, more \n> straightforward than storing syscache id and num_buckets in the index info.\n> \n> One last thing: I did try to make the hash function use uint16 for the \n> key (to reduce loop iterations on nul bytes), and that didn't help, so \n> we are left with the ideas I mentioned earlier.\n> \n> (not changing CF status, because nothing specific is really required to \n> change at the moment, some things up in the air)\n> \n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com <http://www.enterprisedb.com>",
"msg_date": "Wed, 1 Nov 2023 17:13:03 -0400",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: generate syscache info automatically"
},
{
"msg_contents": "On Thu, Nov 2, 2023 at 4:13 AM Peter Eisentraut <[email protected]> wrote:\n>\n> Here is a rebased patch set, along with a summary of the questions I\n> have about these patches:\n\nThis is an excellent summary of the issues, thanks.\n\n> v4-0001-Generate-syscache-info-from-catalog-files.patch\n>\n> What's a good syntax to declare a syscache? Currently, I have, for example\n>\n> -DECLARE_UNIQUE_INDEX_PKEY(pg_type_oid_index, 2703, TypeOidIndexId,\n> pg_type, btree(oid oid_ops));\n> +DECLARE_UNIQUE_INDEX_PKEY(pg_type_oid_index, 2703, TypeOidIndexId,\n> pg_type, btree(oid oid_ops), TYPEOID, 64);\n>\n> I suppose a sensible alternative could be to leave the\n> DECLARE_..._INDEX... alone and make a separate statement, like\n>\n> MAKE_SYSCACHE(pg_type_oid_index, TYPEOID, 64);\n>\n> That's at least visually easier, because some of those\n> DECLARE_... lines are getting pretty long.\n\nProbably a good idea, and below I mention a third possible macro.\n\n> I would like to keep those MAKE_SYSCACHE lines in the catalog files.\n> That just makes it easier to reference everything together.\n\nThat seems fine. If we ever want to do something more invasive with\nthe syscaches, some of that can be copied/moved out into a separate\nscript, but what's there in 0001 is pretty small.\n\n> v4-0002-Generate-ObjectProperty-from-catalog-files.patch\n>\n> Several questions here:\n>\n> * What's a good way to declare the mapping between catalog and object\n> type?\n\nPerhaps this idea:\n\n> I suppose a sensible alternative could be to leave the\n> DECLARE_..._INDEX... alone and make a separate statement, like\n>\n> MAKE_SYSCACHE(pg_type_oid_index, TYPEOID, 64);\n\n...could be used, so something like\n\nDECLARE_OBJECT_INFO(OBJECT_ACCESS_METHOD);\n\n> * How to select which catalogs have ObjectProperty entries generated?\n\nWould the above work for that as well?\n\n> * Where to declare class descriptions? Or just keep the current hack in\n> the patch.\n\nI don't have an opinion on this.\n\n> * How to declare the purpose of a catalog column, like \"this is the ACL\n> column for this catalog\". This is currently done by name, but maybe\n> it should be more explicit.\n\nPerhaps we can have additional column macros, like BKI_OBJ_ACL or\nsomething, but that doesn't seem like it would result in less code.\n\n> * Question about how to pick the correct value for is_nsp_name_unique?\n\nHow is that known now -- I don't mean in the patch, but in general?\n\nAlso, I get most of the hard-wired exceptions, but\n\n+ # XXX?\n+ $name_syscache = \"SUBSCRIPTIONNAME\" if $catname eq 'pg_subscription';\n\nWhat is not right otherwise?\n\n+ if ($index->{index_decl} eq 'btree(oid oid_ops)')\n+ {\n+ $oid_index = $index->{index_oid_macro};\n+ $oid_syscache = $index->{syscache_name};\n+ }\n+ if ($index->{index_decl} =~ /\\(\\w+name name_ops(, \\w+namespace oid_ops)?\\)/)\n+ {\n+ $name_syscache = $index->{syscache_name};\n+ $is_nsp_name_unique = 1 if $index->{is_unique};\n+ }\n\nThe variables name_syscache and syscache_name are unfortunately close\nto eachother.\n\n\n",
"msg_date": "Wed, 10 Jan 2024 15:00:23 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: generate syscache info automatically"
},
{
"msg_contents": "On 10.01.24 09:00, John Naylor wrote:\n>> I suppose a sensible alternative could be to leave the\n>> DECLARE_..._INDEX... alone and make a separate statement, like\n>>\n>> MAKE_SYSCACHE(pg_type_oid_index, TYPEOID, 64);\n>>\n>> That's at least visually easier, because some of those\n>> DECLARE_... lines are getting pretty long.\n> \n> Probably a good idea, and below I mention a third possible macro.\n\nI updated the patch to use this style (but I swapped the first two \narguments from my example, so that the thing being created is named first).\n\nI also changed the names of the output files a bit to make them less \nconfusing. (I initially had some files named .c.h, which was weird, but \napparently necessary to avoid confusing the build system. But it's all \nclearer now.)\n\nOther than bugs and perhaps style opinions, I think the first patch is \npretty good now.\n\nI haven't changed much in the second patch, other than to update it for \nthe code changes made in the first patch. It's still very much \nWIP/preview at the moment.",
"msg_date": "Wed, 17 Jan 2024 13:46:02 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: generate syscache info automatically"
},
{
"msg_contents": "On Wed, Jan 17, 2024 at 7:46 PM Peter Eisentraut <[email protected]> wrote:\n>\n> I updated the patch to use this style (but I swapped the first two\n> arguments from my example, so that the thing being created is named first).\n>\n> I also changed the names of the output files a bit to make them less\n> confusing. (I initially had some files named .c.h, which was weird, but\n> apparently necessary to avoid confusing the build system. But it's all\n> clearer now.)\n>\n> Other than bugs and perhaps style opinions, I think the first patch is\n> pretty good now.\n\nLGTM. The only style consideration that occurred to me just now was\nMAKE_SYSCACHE, since it doesn't match the DECLARE_... macros. It seems\nlike the same thing from a code generation perspective. The other\nmacros refer to relations, so there is a difference, but maybe it\ndoesn't matter. I don't have a strong opinion.\n\n\n",
"msg_date": "Fri, 19 Jan 2024 12:28:28 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: generate syscache info automatically"
},
{
"msg_contents": "On 19.01.24 06:28, John Naylor wrote:\n> On Wed, Jan 17, 2024 at 7:46 PM Peter Eisentraut <[email protected]> wrote:\n>>\n>> I updated the patch to use this style (but I swapped the first two\n>> arguments from my example, so that the thing being created is named first).\n>>\n>> I also changed the names of the output files a bit to make them less\n>> confusing. (I initially had some files named .c.h, which was weird, but\n>> apparently necessary to avoid confusing the build system. But it's all\n>> clearer now.)\n>>\n>> Other than bugs and perhaps style opinions, I think the first patch is\n>> pretty good now.\n> \n> LGTM. The only style consideration that occurred to me just now was\n> MAKE_SYSCACHE, since it doesn't match the DECLARE_... macros. It seems\n> like the same thing from a code generation perspective. The other\n> macros refer to relations, so there is a difference, but maybe it\n> doesn't matter. I don't have a strong opinion.\n\nThe DECLARE_* macros map to actual BKI commands (\"declare toast\", \n\"declare index\"). So I wanted to use a different verb to distinguish \n\"generate code for this\" from those BKI commands.\n\n\n\n",
"msg_date": "Fri, 19 Jan 2024 15:03:37 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: generate syscache info automatically"
},
{
"msg_contents": "On Fri, Jan 19, 2024 at 9:03 PM Peter Eisentraut <[email protected]> wrote:\n>\n> The DECLARE_* macros map to actual BKI commands (\"declare toast\",\n> \"declare index\"). So I wanted to use a different verb to distinguish\n> \"generate code for this\" from those BKI commands.\n\nThat makes sense to me.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 07:33:13 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: generate syscache info automatically"
},
{
"msg_contents": "On 22.01.24 01:33, John Naylor wrote:\n> On Fri, Jan 19, 2024 at 9:03 PM Peter Eisentraut <[email protected]> wrote:\n>>\n>> The DECLARE_* macros map to actual BKI commands (\"declare toast\",\n>> \"declare index\"). So I wanted to use a different verb to distinguish\n>> \"generate code for this\" from those BKI commands.\n> \n> That makes sense to me.\n\nI have committed the first patch, and the buildfarm hiccup seems to have \npassed.\n\nI'll close this commitfest entry now. I have enough feedback to work on \nthe ObjectProperty generation, but I'll make a new entry for that.\n\n\n",
"msg_date": "Tue, 23 Jan 2024 19:58:30 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: generate syscache info automatically"
}
] |
[
{
"msg_contents": "Hi, hackers.\n PostgreSQL 16 Beta1, added last access time to pg_stat_all_tables and pg_stat_all_indexes views by this patch [1].\nAccording to the documentation [2], the data type of the columns added to these views is 'timestamptz'. \nHowever, columns of the same data type in pg_stat_all_tables.last_vacuum, last_analyze and other tables are unified to 'timestamp with time zone'. The attached patch changes the data type of the added column from timestamptz to timestamp with time zone.\n\n[1] pgstat: Track time of the last scan of a relation\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=c037471832e1ec3327f81eebbd8892e5c1042fe0\n[2] pg_stat_activity view\nhttps://www.postgresql.org/docs/16/monitoring-stats.html#MONITORING-PG-STAT-ALL-TABLES-VIEW\n\nRegards,\nNoriyoshi Shinoda",
"msg_date": "Wed, 31 May 2023 03:56:51 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[16Beta1][doc] pgstat: Track time of the last scan of a relation"
},
{
"msg_contents": "On Wed, 31 May 2023 at 15:57, Shinoda, Noriyoshi (PN Japan FSIP)\n<[email protected]> wrote:\n> According to the documentation [2], the data type of the columns added to these views is 'timestamptz'.\n> However, columns of the same data type in pg_stat_all_tables.last_vacuum, last_analyze and other tables are unified to 'timestamp with time zone'. The attached patch changes the data type of the added column from timestamptz to timestamp with time zone.\n\nI agree that it would be good to make those consistently use timestamp\nwith time zone for all columns of that type in the docs for\npg_stat_all_tables.\n\nMore generally, it might be good if we did it for the entire docs:\n\ndoc $ git grep \"<type>timestamptz</type>\" | wc -l\n17\ndoc $ git grep \"<type>timestamp with time zone</type>\" | wc -l\n74\n\nClearly \"timestamp with time zone\" is much more commonly used.\n\nThe bar is probably set a bit higher for changing the\nlonger-established ones, however.\n\nDavid\n\n\n",
"msg_date": "Wed, 31 May 2023 18:13:57 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [16Beta1][doc] pgstat: Track time of the last scan of a relation"
},
{
"msg_contents": "Hi, Thanks for your comment.\r\nAs you say, it would be difficult to unify the data types in all documents right now.\r\nThe patch I attached the other day unifies only the newly added columns in monitoring.sgml to \"timestamp with time zone\".\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n-----Original Message-----\r\nFrom: David Rowley <[email protected]> \r\nSent: Wednesday, May 31, 2023 3:14 PM\r\nTo: Shinoda, Noriyoshi (PN Japan FSIP) <[email protected]>\r\nCc: PostgreSQL-development <[email protected]>; [email protected]; [email protected]; [email protected]; [email protected]\r\nSubject: Re: [16Beta1][doc] pgstat: Track time of the last scan of a relation\r\n\r\nOn Wed, 31 May 2023 at 15:57, Shinoda, Noriyoshi (PN Japan FSIP) <[email protected]> wrote:\r\n> According to the documentation [2], the data type of the columns added to these views is 'timestamptz'.\r\n> However, columns of the same data type in pg_stat_all_tables.last_vacuum, last_analyze and other tables are unified to 'timestamp with time zone'. The attached patch changes the data type of the added column from timestamptz to timestamp with time zone.\r\n\r\nI agree that it would be good to make those consistently use timestamp with time zone for all columns of that type in the docs for pg_stat_all_tables.\r\n\r\nMore generally, it might be good if we did it for the entire docs:\r\n\r\ndoc $ git grep \"<type>timestamptz</type>\" | wc -l\r\n17\r\ndoc $ git grep \"<type>timestamp with time zone</type>\" | wc -l\r\n74\r\n\r\nClearly \"timestamp with time zone\" is much more commonly used.\r\n\r\nThe bar is probably set a bit higher for changing the longer-established ones, however.\r\n\r\nDavid\r\n",
"msg_date": "Thu, 1 Jun 2023 09:38:18 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [16Beta1][doc] pgstat: Track time of the last scan of a relation"
},
{
"msg_contents": "On Wed, 31 May 2023 at 15:57, Shinoda, Noriyoshi (PN Japan FSIP)\n<[email protected]> wrote:\n> PostgreSQL 16 Beta1, added last access time to pg_stat_all_tables and pg_stat_all_indexes views by this patch [1].\n> According to the documentation [2], the data type of the columns added to these views is 'timestamptz'.\n> However, columns of the same data type in pg_stat_all_tables.last_vacuum, last_analyze and other tables are unified to 'timestamp with time zone'. The attached patch changes the data type of the added column from timestamptz to timestamp with time zone.\n\nI've now pushed this change.\n\nDavid\n\n\n",
"msg_date": "Mon, 5 Jun 2023 17:36:52 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [16Beta1][doc] pgstat: Track time of the last scan of a relation"
},
{
"msg_contents": "Hi, David.\r\n\r\n> I've now pushed this change.\r\nThank you so much.\r\n\r\n-----Original Message-----\r\nFrom: David Rowley <[email protected]> \r\nSent: Monday, June 5, 2023 2:37 PM\r\nTo: Shinoda, Noriyoshi (PN Japan FSIP) <[email protected]>\r\nCc: PostgreSQL-development <[email protected]>; [email protected]; [email protected]; [email protected]; [email protected]\r\nSubject: Re: [16Beta1][doc] pgstat: Track time of the last scan of a relation\r\n\r\nOn Wed, 31 May 2023 at 15:57, Shinoda, Noriyoshi (PN Japan FSIP) <[email protected]> wrote:\r\n> PostgreSQL 16 Beta1, added last access time to pg_stat_all_tables and pg_stat_all_indexes views by this patch [1].\r\n> According to the documentation [2], the data type of the columns added to these views is 'timestamptz'.\r\n> However, columns of the same data type in pg_stat_all_tables.last_vacuum, last_analyze and other tables are unified to 'timestamp with time zone'. The attached patch changes the data type of the added column from timestamptz to timestamp with time zone.\r\n\r\nI've now pushed this change.\r\n\r\nDavid\r\n",
"msg_date": "Mon, 5 Jun 2023 07:04:38 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [16Beta1][doc] pgstat: Track time of the last scan of a relation"
}
] |
[
{
"msg_contents": "The SSL tests for pg_ctl restarts with an incorrect key passphrase run pg_ctl\nmanually and use the internal method _update_pid to set the server PID file\naccordingly. This is needed since $node->restart will BAIL in case the restart\nfails, which clearly isn't useful to anyone wanting to test restarts. This is\nthe only use of _update_pid outside of Cluster.pm.\n\nTo avoid this, the attached adds fail_ok functionality to restart() which makes\nit easier to use it in tests, and aligns it with how stop() and start() works.\nThe resulting SSL tests are also more readable IMO.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 31 May 2023 14:47:54 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Refactor ssl tests to avoid using internal PostgreSQL::Test::Cluster\n methods"
},
{
"msg_contents": "Hi Daniel,\n\nThanks for the patch.\n\nDaniel Gustafsson <[email protected]>, 31 May 2023 Çar, 15:48 tarihinde\nşunu yazdı:\n>\n> To avoid this, the attached adds fail_ok functionality to restart() which makes\n> it easier to use it in tests, and aligns it with how stop() and start() works.\n> The resulting SSL tests are also more readable IMO.\n\nI agree that it's more readable this way.\n\nI only have a few comments:\n\n>\n> + BAIL_OUT(\"pg_ctl restart failed\") unless $params{fail_ok};\n> + return 0;\n> + }\n>\n>\n> $self->_update_pid(1);\n> return;\n\nI was comparing this new restart function to start and stop functions.\nI see that restart() does not return a value if it's successful while\nothers return 1.\nIts return value is not checked anywhere, so it may not be useful at\nthe moment. But I feel like it would be nice to make it look like\nstart()/stop(). What do you think?\n\n> command_fails(\n> [ 'pg_ctl', '-D', $node->data_dir, '-l', $node->logfile, 'restart' ],\n> 'restart fails with incorrect SSL protocol bounds');\n\nThere are two other places where ssl tests restart the node like\nabove. We can call $node->restart in those lines too.\n\n\nBest,\n-- \nMelih Mutlu\nMicrosoft\n\n\n",
"msg_date": "Wed, 31 May 2023 16:46:23 +0300",
"msg_from": "Melih Mutlu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactor ssl tests to avoid using internal\n PostgreSQL::Test::Cluster\n methods"
},
{
"msg_contents": "> On 31 May 2023, at 15:46, Melih Mutlu <[email protected]> wrote:\n\n> I was comparing this new restart function to start and stop functions.\n> I see that restart() does not return a value if it's successful while\n> others return 1.\n> Its return value is not checked anywhere, so it may not be useful at\n> the moment. But I feel like it would be nice to make it look like\n> start()/stop(). What do you think?\n\nIt should absolutely return 1, that was an oversight. Fixed as well as\ndocumentation updated.\n\n>> command_fails(\n>> [ 'pg_ctl', '-D', $node->data_dir, '-l', $node->logfile, 'restart' ],\n>> 'restart fails with incorrect SSL protocol bounds');\n> \n> There are two other places where ssl tests restart the node like\n> above. We can call $node->restart in those lines too.\n\nFixed in the attached v2.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 31 May 2023 16:12:42 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactor ssl tests to avoid using internal\n PostgreSQL::Test::Cluster methods"
},
{
"msg_contents": "On 31/05/2023 15:47, Daniel Gustafsson wrote:\n> The SSL tests for pg_ctl restarts with an incorrect key passphrase run pg_ctl\n> manually and use the internal method _update_pid to set the server PID file\n> accordingly. This is needed since $node->restart will BAIL in case the restart\n> fails, which clearly isn't useful to anyone wanting to test restarts. This is\n> the only use of _update_pid outside of Cluster.pm.\n> \n> To avoid this, the attached adds fail_ok functionality to restart() which makes\n> it easier to use it in tests, and aligns it with how stop() and start() works.\n> The resulting SSL tests are also more readable IMO.\n\nMakes sense.\n\n> diff --git a/src/test/ssl/t/001_ssltests.pl b/src/test/ssl/t/001_ssltests.pl\n> index 76442de063..e33f648aae 100644\n> --- a/src/test/ssl/t/001_ssltests.pl\n> +++ b/src/test/ssl/t/001_ssltests.pl\n> @@ -85,10 +85,8 @@ switch_server_cert(\n> \tpassphrase_cmd => 'echo wrongpassword',\n> \trestart => 'no');\n> \n> -command_fails(\n> -\t[ 'pg_ctl', '-D', $node->data_dir, '-l', $node->logfile, 'restart' ],\n> -\t'restart fails with password-protected key file with wrong password');\n> -$node->_update_pid(0);\n> +$result = $node->restart(fail_ok => 1);\n> +is($result, 0, 'restart fails with password-protected key file with wrong password');\n> \n> switch_server_cert(\n> \t$node,\n\nIn principle, this makes the tests more lenient. If \"pg_ctl restart\" \nfails because of a timeout, for example, the PID file could be present. \nPreviously, the _update_pid(0) would error out on that, but now it's \naccepted. I think that's fine, but just wanted to point it out.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 14:37:15 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Refactor ssl tests to avoid using internal\n PostgreSQL::Test::Cluster methods"
},
{
"msg_contents": "> On 3 Jul 2023, at 13:37, Heikki Linnakangas <[email protected]> wrote:\n\n> If \"pg_ctl restart\" fails because of a timeout, for example, the PID file could be present. Previously, the _update_pid(0) would error out on that, but now it's accepted. I think that's fine, but just wanted to point it out.\n\nRevisiting an old patch. Agreed, I think that's fine, so I went ahead and\npushed this. Thanks for review!\n\nIt's unfortunately not that common for buildfarm animals to run the SSL tests,\nso I guess I'll keep an extra eye on the CFBot for this one.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 22 Sep 2023 13:52:26 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Refactor ssl tests to avoid using internal\n PostgreSQL::Test::Cluster methods"
}
] |
[
{
"msg_contents": "Hi,\n\nI've been working with a social network start-up that uses PostgreSQL as their\nonly database. Recently, they became interested in graph databases, largely\nbecause of an article [1] suggesting that a SQL database \"just chokes\" when it\nencounters a depth-five friends-of-friends query (for a million users having\n50 friends each).\n\nThe article didn't provide the complete SQL queries, so I had to buy the\nreferenced book to get the full picture. It turns out, the query was a simple\nself-join, which, of course, isn't very efficient. When we rewrote the query\nusing a modern RECURSIVE CTE, it worked but still took quite some time.\n\nOf course, there will always be a need for specific databases, and some queries\nwill run faster on them. But, if PostgreSQL could handle graph queries with a\nBig-O runtime similar to graph databases, scalability wouldn't be such a big\nworry.\n\nJust like the addition of the JSON type to PostgreSQL helped reduce the hype\naround NoSQL, maybe there's something simple that's missing in PostgreSQL that\ncould help us achieve the same Big-O class performance as graph databases for\nsome of these type of graph queries?\n\nLooking into the key differences between PostgreSQL and graph databases,\nit seems that one is how they store adjacent nodes. In SQL, a graph can be\nrepresented as one table for the Nodes and another table for the Edges.\nFor a friends-of-friends query, we would need to query Edges to find adjacent\nnodes, recursively.\n\nGraph databases, on the other hand, keep adjacent nodes immediately accessible\nby storing them with the node itself. This looks like a major difference in\nterms of how the data is stored.\n\nCould a hashset type help bridge this gap?\n\nThe idea would be to store adjacent nodes as a hashset column in a Nodes table.\n\nApache AGE is an option for users who really need a new graph query language.\nBut I believe there are more users who occasionally run into a graph problem and\nwould be glad if there was an efficient way to solve it in SQL without having\nto bring in a new database.\n\n/Joel\n\n[1] https://neo4j.com/news/how-much-faster-is-a-graph-database-really/\n\nHi,I've been working with a social network start-up that uses PostgreSQL as theironly database. Recently, they became interested in graph databases, largelybecause of an article [1] suggesting that a SQL database \"just chokes\" when itencounters a depth-five friends-of-friends query (for a million users having50 friends each).The article didn't provide the complete SQL queries, so I had to buy thereferenced book to get the full picture. It turns out, the query was a simpleself-join, which, of course, isn't very efficient. When we rewrote the queryusing a modern RECURSIVE CTE, it worked but still took quite some time.Of course, there will always be a need for specific databases, and some querieswill run faster on them. But, if PostgreSQL could handle graph queries with aBig-O runtime similar to graph databases, scalability wouldn't be such a bigworry.Just like the addition of the JSON type to PostgreSQL helped reduce the hypearound NoSQL, maybe there's something simple that's missing in PostgreSQL thatcould help us achieve the same Big-O class performance as graph databases forsome of these type of graph queries?Looking into the key differences between PostgreSQL and graph databases,it seems that one is how they store adjacent nodes. 
In SQL, a graph can berepresented as one table for the Nodes and another table for the Edges.For a friends-of-friends query, we would need to query Edges to find adjacentnodes, recursively.Graph databases, on the other hand, keep adjacent nodes immediately accessibleby storing them with the node itself. This looks like a major difference interms of how the data is stored.Could a hashset type help bridge this gap?The idea would be to store adjacent nodes as a hashset column in a Nodes table.Apache AGE is an option for users who really need a new graph query language.But I believe there are more users who occasionally run into a graph problem andwould be glad if there was an efficient way to solve it in SQL without havingto bring in a new database./Joel[1] https://neo4j.com/news/how-much-faster-is-a-graph-database-really/",
"msg_date": "Wed, 31 May 2023 16:09:55 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Do we want a hashset type?"
},
{
"msg_contents": "\n\nOn 5/31/23 16:09, Joel Jacobson wrote:\n> Hi,\n> \n> I've been working with a social network start-up that uses PostgreSQL as\n> their\n> only database. Recently, they became interested in graph databases, largely\n> because of an article [1] suggesting that a SQL database \"just chokes\"\n> when it\n> encounters a depth-five friends-of-friends query (for a million users having\n> 50 friends each).\n> \n> The article didn't provide the complete SQL queries, so I had to buy the\n> referenced book to get the full picture. It turns out, the query was a\n> simple\n> self-join, which, of course, isn't very efficient. When we rewrote the query\n> using a modern RECURSIVE CTE, it worked but still took quite some time.\n> \n> Of course, there will always be a need for specific databases, and some\n> queries\n> will run faster on them. But, if PostgreSQL could handle graph queries\n> with a\n> Big-O runtime similar to graph databases, scalability wouldn't be such a big\n> worry.\n> \n> Just like the addition of the JSON type to PostgreSQL helped reduce the hype\n> around NoSQL, maybe there's something simple that's missing in\n> PostgreSQL that\n> could help us achieve the same Big-O class performance as graph\n> databases for\n> some of these type of graph queries?\n> \n> Looking into the key differences between PostgreSQL and graph databases,\n> it seems that one is how they store adjacent nodes. In SQL, a graph can be\n> represented as one table for the Nodes and another table for the Edges.\n> For a friends-of-friends query, we would need to query Edges to find\n> adjacent\n> nodes, recursively.\n> \n> Graph databases, on the other hand, keep adjacent nodes immediately\n> accessible\n> by storing them with the node itself. This looks like a major difference in\n> terms of how the data is stored.\n> \n> Could a hashset type help bridge this gap?\n> \n> The idea would be to store adjacent nodes as a hashset column in a Nodes\n> table.\n> \n\nI think this needs a better explanation - what exactly is a hashset in\nthis context? Something like an array with a hash for faster lookup of\nunique elements, or what?\n\nPresumably it'd store whole adjacent nodes, not just some sort of node\nID. So what if a node is adjacent to many other nodes? What if a node is\nadded/deleted/modified?\n\nAFAICS the main problem is the lookups of adjacent nodes, generating\nlot of random I/O etc. Presumably it's not that hard to keep the\n\"relational\" schema with table for vertices/edges, and then an auxiliary\ntable with adjacent nodes grouped by node, possibly maintained by a\ncouple triggers. A bit like an \"aggregated index\" except the queries\nwould have to use it explicitly.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 31 May 2023 16:53:23 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Wed, May 31, 2023, at 16:53, Tomas Vondra wrote:\n> I think this needs a better explanation - what exactly is a hashset in\n> this context? Something like an array with a hash for faster lookup of\n> unique elements, or what?\n\nIn this context, by \"hashset\" I am indeed referring to a data structure similar\nto an array, where each element would be unique, and lookups would be faster\nthan arrays for larger number of elements due to hash-based lookups.\n\nThis data structure would store identifiers (IDs) of the nodes, not the complete\nnodes themselves.\n\n> Presumably it'd store whole adjacent nodes, not just some sort of node\n> ID. So what if a node is adjacent to many other nodes? What if a node is\n> added/deleted/modified?\n\nThat would require updating the hashset, which should be close to O(1) in\npractical applications.\n\n> AFAICS the main problem is the lookups of adjacent nodes, generating\n> lot of random I/O etc. Presumably it's not that hard to keep the\n> \"relational\" schema with table for vertices/edges, and then an auxiliary\n> table with adjacent nodes grouped by node, possibly maintained by a\n> couple triggers. A bit like an \"aggregated index\" except the queries\n> would have to use it explicitly.\n\nYes, auxiliary table would be good, since we don't want to duplicate all\nnode-related data, and only store the IDs in the adjacent nodes hashset.\n\n/Joel\n\n\n",
"msg_date": "Wed, 31 May 2023 17:40:22 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "\n\nOn 5/31/23 17:40, Joel Jacobson wrote:\n> On Wed, May 31, 2023, at 16:53, Tomas Vondra wrote:\n>> I think this needs a better explanation - what exactly is a hashset in\n>> this context? Something like an array with a hash for faster lookup of\n>> unique elements, or what?\n> \n> In this context, by \"hashset\" I am indeed referring to a data structure similar\n> to an array, where each element would be unique, and lookups would be faster\n> than arrays for larger number of elements due to hash-based lookups.\n> \n\nWhy would you want hash-based lookups? It should be fairly trivial to\nimplement as a user-defined data type, but in what cases are you going\nto ask \"does the hashset contain X\"?\n\n> This data structure would store identifiers (IDs) of the nodes, not the complete\n> nodes themselves.\n> \n\nHow does storing just the IDs solves anything? Isn't the main challenge\nthe random I/O when fetching the adjacent nodes? This does not really\nimprove that, no?\n\n>> Presumably it'd store whole adjacent nodes, not just some sort of node\n>> ID. So what if a node is adjacent to many other nodes? What if a node is\n>> added/deleted/modified?\n> \n> That would require updating the hashset, which should be close to O(1) in\n> practical applications.\n> \n\nBut you need to modify hashsets for all the adjacent nodes. Also, O(1)\ndoesn't say it's cheap. I wonder how expensive it'd be in practice.\n\n>> AFAICS the main problem is the lookups of adjacent nodes, generating\n>> lot of random I/O etc. Presumably it's not that hard to keep the\n>> \"relational\" schema with table for vertices/edges, and then an auxiliary\n>> table with adjacent nodes grouped by node, possibly maintained by a\n>> couple triggers. A bit like an \"aggregated index\" except the queries\n>> would have to use it explicitly.\n> \n> Yes, auxiliary table would be good, since we don't want to duplicate all\n> node-related data, and only store the IDs in the adjacent nodes hashset.\n> \n\nI may be missing something, but as mentioned, I don't quite see how this\nwould help. What exactly would this save us? If you create an index on\nthe edge node IDs, you should get the adjacent nodes pretty cheap from\nan IOS. Granted, a pre-built hashset is going to be faster, but if the\nnext step is fetching the node info, that's going to do a lot of random\nI/O, dwarfing all of this.\n\nIt's entirely possible I'm missing some important aspect. It'd be very\nuseful to have a couple example queries illustrating the issue, that we\ncould use to actually test different approaches.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 31 May 2023 18:59:35 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Wed, May 31, 2023, at 18:59, Tomas Vondra wrote:\n> How does storing just the IDs solves anything? Isn't the main challenge\n> the random I/O when fetching the adjacent nodes? This does not really\n> improve that, no?\n\nI'm thinking of a recursive query where a lot of time is just spent following\nall friends-of-friends, where the graph traversal is the heavy part,\nwhere the final set of nodes are only fetched at the end.\n\n> It's entirely possible I'm missing some important aspect. It'd be very\n> useful to have a couple example queries illustrating the issue, that we\n> could use to actually test different approaches.\n\nHere is an example using a real anonymised social network.\n\nwget https://snap.stanford.edu/data/soc-pokec-relationships.txt.gz\ngunzip soc-pokec-relationships.txt.gz\nCREATE TABLE edges (from_node INT, to_node INT);\n\\copy edges from soc-pokec-relationships.txt;\nALTER TABLE edges ADD PRIMARY KEY (from_node, to_node);\nSET work_mem TO '1GB';\nCREATE VIEW friends_of_friends AS\nWITH RECURSIVE friends_of_friends AS (\n SELECT \n ARRAY[5867::bigint] AS current, \n ARRAY[5867::bigint] AS found,\n 0 AS depth\n UNION ALL \n SELECT \n new_current, \n found || new_current, \n friends_of_friends.depth + 1\n FROM \n friends_of_friends\n CROSS JOIN LATERAL (\n SELECT\n array_agg(DISTINCT edges.to_node) AS new_current\n FROM\n edges\n WHERE\n from_node = ANY(friends_of_friends.current)\n ) q\n WHERE\n friends_of_friends.depth < 3\n)\nSELECT\n depth,\n coalesce(array_length(current, 1), 0)\nFROM\n friends_of_friends\nWHERE\n depth = 3;\n;\n\nSELECT COUNT(*) FROM edges;\n count\n----------\n 30622564\n(1 row)\n\nSELECT COUNT(DISTINCT from_node) FROM edges;\n count\n---------\n 1432693\n(1 row)\n\n-- Most connected user (worst-case) is user 5867 with 8763 friends:\nSELECT from_node, COUNT(*) FROM edges GROUP BY from_node ORDER BY COUNT(*) DESC LIMIT 1;\n from_node | count\n-----------+-------\n 5867 | 8763\n(1 row)\n\n-- Friend-of-friends query exactly at depth three:\n\nEXPLAIN ANALYZE\nSELECT * FROM friends_of_friends;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n CTE Scan on friends_of_friends (cost=6017516.90..6017517.60 rows=1 width=8) (actual time=2585.881..2586.334 rows=1 loops=1)\n Filter: (depth = 3)\n Rows Removed by Filter: 3\n CTE friends_of_friends\n -> Recursive Union (cost=0.00..6017516.90 rows=31 width=68) (actual time=0.005..2581.664 rows=4 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=68) (actual time=0.002..0.002 rows=1 loops=1)\n -> Subquery Scan on \"*SELECT* 2\" (cost=200583.71..601751.66 rows=3 width=68) (actual time=645.036..645.157 rows=1 loops=4)\n -> Nested Loop (cost=200583.71..601751.54 rows=3 width=68) (actual time=641.880..641.972 rows=1 loops=4)\n -> WorkTable Scan on friends_of_friends friends_of_friends_1 (cost=0.00..0.22 rows=3 width=68) (actual time=0.001..0.002 rows=1 loops=4)\n Filter: (depth < 3)\n Rows Removed by Filter: 0\n -> Aggregate (cost=200583.71..200583.72 rows=1 width=32) (actual time=850.997..850.998 rows=1 loops=3)\n -> Bitmap Heap Scan on edges (cost=27656.38..196840.88 rows=1497133 width=4) (actual time=203.239..423.534 rows=3486910 loops=3)\n Recheck Cond: (from_node = ANY (friends_of_friends_1.current))\n Heap Blocks: exact=117876\n -> Bitmap Index Scan on edges_pkey (cost=0.00..27282.10 rows=1497133 width=0) (actual time=198.047..198.047 rows=3486910 loops=3)\n 
Index Cond: (from_node = ANY (friends_of_friends_1.current))\n Planning Time: 1.414 ms\n Execution Time: 2588.288 ms\n(19 rows)\n\nI tested on PostgreSQL 15.2. For some reason I got a different slower query on HEAD:\n\nSELECT * FROM friends_of_friends;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n CTE Scan on friends_of_friends (cost=6576.67..6577.37 rows=1 width=8) (actual time=6412.693..6413.335 rows=1 loops=1)\n Filter: (depth = 3)\n Rows Removed by Filter: 3\n CTE friends_of_friends\n -> Recursive Union (cost=0.00..6576.67 rows=31 width=68) (actual time=0.008..6407.134 rows=4 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=68) (actual time=0.005..0.005 rows=1 loops=1)\n -> Subquery Scan on \"*SELECT* 2\" (cost=219.05..657.64 rows=3 width=68) (actual time=1600.747..1600.934 rows=1 loops=4)\n -> Nested Loop (cost=219.05..657.52 rows=3 width=68) (actual time=1594.906..1595.035 rows=1 loops=4)\n -> WorkTable Scan on friends_of_friends friends_of_friends_1 (cost=0.00..0.22 rows=3 width=68) (actual time=0.001..0.002 rows=1 loops=4)\n Filter: (depth < 3)\n Rows Removed by Filter: 0\n -> Aggregate (cost=219.05..219.06 rows=1 width=32) (actual time=2118.105..2118.105 rows=1 loops=3)\n -> Sort (cost=207.94..213.49 rows=2221 width=4) (actual time=1780.770..1925.853 rows=3486910 loops=3)\n Sort Key: edges.to_node\n Sort Method: quicksort Memory: 393217kB\n -> Index Only Scan using edges_pkey on edges (cost=0.56..84.48 rows=2221 width=4) (actual time=0.077..762.408 rows=3486910 loops=3)\n Index Cond: (from_node = ANY (friends_of_friends_1.current))\n Heap Fetches: 0\n Planning Time: 8.229 ms\n Execution Time: 6446.421 ms\n(20 rows)\n\n\n",
"msg_date": "Thu, 01 Jun 2023 09:02:21 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Thu, Jun 1, 2023, at 09:02, Joel Jacobson wrote:\n> Here is an example using a real anonymised social network.\n\nI realised the \"found\" column is not necessary in this particular example,\nsince we only care about the friends at the exact depth level. Simplified query:\n\nCREATE OR REPLACE VIEW friends_of_friends AS\nWITH RECURSIVE friends_of_friends AS (\n SELECT \n ARRAY[5867::bigint] AS current,\n 0 AS depth\n UNION ALL\n SELECT\n new_current,\n friends_of_friends.depth + 1\n FROM\n friends_of_friends\n CROSS JOIN LATERAL (\n SELECT\n array_agg(DISTINCT edges.to_node) AS new_current\n FROM\n edges\n WHERE\n from_node = ANY(friends_of_friends.current)\n ) q\n WHERE\n friends_of_friends.depth < 3\n)\nSELECT\n depth,\n coalesce(array_length(current, 1), 0)\nFROM\n friends_of_friends\nWHERE\n depth = 3;\n;\n\n-- PostgreSQL 15.2:\n\nEXPLAIN ANALYZE SELECT * FROM friends_of_friends;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n CTE Scan on friends_of_friends (cost=2687.88..2688.58 rows=1 width=8) (actual time=2076.362..2076.454 rows=1 loops=1)\n Filter: (depth = 3)\n Rows Removed by Filter: 3\n CTE friends_of_friends\n -> Recursive Union (cost=0.00..2687.88 rows=31 width=36) (actual time=0.008..2075.073 rows=4 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=36) (actual time=0.002..0.002 rows=1 loops=1)\n -> Subquery Scan on \"*SELECT* 2\" (cost=89.44..268.75 rows=3 width=36) (actual time=518.613..518.622 rows=1 loops=4)\n -> Nested Loop (cost=89.44..268.64 rows=3 width=36) (actual time=515.523..515.523 rows=1 loops=4)\n -> WorkTable Scan on friends_of_friends friends_of_friends_1 (cost=0.00..0.22 rows=3 width=36) (actual time=0.001..0.001 rows=1 loops=4)\n Filter: (depth < 3)\n Rows Removed by Filter: 0\n -> Aggregate (cost=89.44..89.45 rows=1 width=32) (actual time=687.356..687.356 rows=1 loops=3)\n -> Index Only Scan using edges_pkey on edges (cost=0.56..83.96 rows=2191 width=4) (actual time=0.139..290.996 rows=3486910 loops=3)\n Index Cond: (from_node = ANY (friends_of_friends_1.current))\n Heap Fetches: 0\n Planning Time: 0.557 ms\n Execution Time: 2076.990 ms\n(17 rows)\n\n\n",
"msg_date": "Thu, 01 Jun 2023 09:14:08 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On 2023-05-31 We 11:40, Joel Jacobson wrote:\n> On Wed, May 31, 2023, at 16:53, Tomas Vondra wrote:\n>> I think this needs a better explanation - what exactly is a hashset in\n>> this context? Something like an array with a hash for faster lookup of\n>> unique elements, or what?\n> In this context, by \"hashset\" I am indeed referring to a data structure similar\n> to an array, where each element would be unique, and lookups would be faster\n> than arrays for larger number of elements due to hash-based lookups.\n\n\nYeah, a fast lookup set type has long been on my \"blue sky\" wish list. \nSo +1 for pursuing the idea.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-05-31 We 11:40, Joel Jacobson\n wrote:\n\n\nOn Wed, May 31, 2023, at 16:53, Tomas Vondra wrote:\n\n\nI think this needs a better explanation - what exactly is a hashset in\nthis context? Something like an array with a hash for faster lookup of\nunique elements, or what?\n\n\n\nIn this context, by \"hashset\" I am indeed referring to a data structure similar\nto an array, where each element would be unique, and lookups would be faster\nthan arrays for larger number of elements due to hash-based lookups.\n\n\n\nYeah, a fast lookup set type has long been on my \"blue sky\" wish\n list. So +1 for pursuing the idea.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 1 Jun 2023 06:51:21 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Wed, 31 May 2023 at 18:40, Joel Jacobson <[email protected]> wrote:\n>\n> On Wed, May 31, 2023, at 16:53, Tomas Vondra wrote:\n> > I think this needs a better explanation - what exactly is a hashset in\n> > this context? Something like an array with a hash for faster lookup of\n> > unique elements, or what?\n>\n> In this context, by \"hashset\" I am indeed referring to a data structure similar\n> to an array, where each element would be unique, and lookups would be faster\n> than arrays for larger number of elements due to hash-based lookups.\n>\n> This data structure would store identifiers (IDs) of the nodes, not the complete\n> nodes themselves.\n\nHave you looked at roaring bitmaps? There is a pg_roaringbitmap\nextension [1] already available that offers very fast unions,\nintersections and membership tests over integer sets. I used it to get\nsome pretty impressive performance results for faceting search on\nlarge document sets. [2]\n\nDepending on the graph fan-outs and operations it might make sense in\nthe graph use case. For small sets it's probably not too different\nfrom the intarray extension in contrib. But for finding intersections\nover large sets (i.e. a join) it's very-very fast. If the workload is\ntraversal heavy it might make sense to even cache materialized\ntransitive closures up to some depth (a friend-of-a-friend list).\n\nRoaring bitmaps only support int4 right now, but that is easily\nfixable. And they need a relatively dense ID space to get the\nperformance boost, which seems essential to the approach. The latter\nissue means that it can't be easily dropped into GIN or B-tree indexes\nfor ctid storage.\n\n[1] https://github.com/ChenHuajun/pg_roaringbitmap\n[2] https://github.com/cybertec-postgresql/pgfaceting\n-- \nAnts Aasma\nwww.cybertec-postgresql.com\n\n\n",
"msg_date": "Fri, 2 Jun 2023 11:01:42 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "\n\nOn 6/1/23 09:14, Joel Jacobson wrote:\n> On Thu, Jun 1, 2023, at 09:02, Joel Jacobson wrote:\n>> Here is an example using a real anonymised social network.\n> \n> I realised the \"found\" column is not necessary in this particular example,\n> since we only care about the friends at the exact depth level. Simplified query:\n> \n> CREATE OR REPLACE VIEW friends_of_friends AS\n> WITH RECURSIVE friends_of_friends AS (\n> SELECT \n> ARRAY[5867::bigint] AS current,\n> 0 AS depth\n> UNION ALL\n> SELECT\n> new_current,\n> friends_of_friends.depth + 1\n> FROM\n> friends_of_friends\n> CROSS JOIN LATERAL (\n> SELECT\n> array_agg(DISTINCT edges.to_node) AS new_current\n> FROM\n> edges\n> WHERE\n> from_node = ANY(friends_of_friends.current)\n> ) q\n> WHERE\n> friends_of_friends.depth < 3\n> )\n> SELECT\n> depth,\n> coalesce(array_length(current, 1), 0)\n> FROM\n> friends_of_friends\n> WHERE\n> depth = 3;\n> ;\n> \n> -- PostgreSQL 15.2:\n> \n> EXPLAIN ANALYZE SELECT * FROM friends_of_friends;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> CTE Scan on friends_of_friends (cost=2687.88..2688.58 rows=1 width=8) (actual time=2076.362..2076.454 rows=1 loops=1)\n> Filter: (depth = 3)\n> Rows Removed by Filter: 3\n> CTE friends_of_friends\n> -> Recursive Union (cost=0.00..2687.88 rows=31 width=36) (actual time=0.008..2075.073 rows=4 loops=1)\n> -> Result (cost=0.00..0.01 rows=1 width=36) (actual time=0.002..0.002 rows=1 loops=1)\n> -> Subquery Scan on \"*SELECT* 2\" (cost=89.44..268.75 rows=3 width=36) (actual time=518.613..518.622 rows=1 loops=4)\n> -> Nested Loop (cost=89.44..268.64 rows=3 width=36) (actual time=515.523..515.523 rows=1 loops=4)\n> -> WorkTable Scan on friends_of_friends friends_of_friends_1 (cost=0.00..0.22 rows=3 width=36) (actual time=0.001..0.001 rows=1 loops=4)\n> Filter: (depth < 3)\n> Rows Removed by Filter: 0\n> -> Aggregate (cost=89.44..89.45 rows=1 width=32) (actual time=687.356..687.356 rows=1 loops=3)\n> -> Index Only Scan using edges_pkey on edges (cost=0.56..83.96 rows=2191 width=4) (actual time=0.139..290.996 rows=3486910 loops=3)\n> Index Cond: (from_node = ANY (friends_of_friends_1.current))\n> Heap Fetches: 0\n> Planning Time: 0.557 ms\n> Execution Time: 2076.990 ms\n> (17 rows)\n> \n\nI've been thinking about this a bit on the way back from pgcon. Per CPU\nprofile it seems most of the job is actually spent on qsort, calculating\nthe array_agg(distinct) bit. I don't think we build\n\nWe could replace this part by a hash-based set aggregate, which would be\nfaster, but I doubt that may yield a massive improvement that'd change\nthe fundamental cost.\n\nI forgot I did something like that a couple years back, implementing a\ncount_distinct() aggregate that was meant as a faster alternative to\ncount(distinct). 
And then I mostly abandoned it because the built-in\nsort-based approach improved significantly - it was still slower in\ncases, but the gap got small enough.\n\nAnyway, I hacked together a trivial set backed by an open addressing\nhash table:\n\n https://github.com/tvondra/hashset\n\nIt's super-basic, providing just some bare minimum of functions, but\nhopefully good enough for experiments.\n\n- hashset data type - hash table in varlena\n- hashset_init - create new hashset\n- hashset_add / hashset_contains - add value / check membership\n- hashset_merge - merge two hashsets\n- hashset_count - count elements\n- hashset_to_array - return\n- hashset(int) aggregate\n\nThis allows rewriting the recursive query like this:\n\n WITH RECURSIVE friends_of_friends AS (\n SELECT\n ARRAY[5867::bigint] AS current,\n 0 AS depth\n UNION ALL\n SELECT\n new_current,\n friends_of_friends.depth + 1\n FROM\n friends_of_friends\n CROSS JOIN LATERAL (\n SELECT\n hashset_to_array(hashset(edges.to_node)) AS new_current\n FROM\n edges\n WHERE\n from_node = ANY(friends_of_friends.current)\n ) q\n WHERE\n friends_of_friends.depth < 3\n )\n SELECT\n depth,\n coalesce(array_length(current, 1), 0)\n FROM\n friends_of_friends\n WHERE\n depth = 3;\n\nOn my laptop cuts the timing roughly in half - which is nice, but as I\nsaid I don't think it's not a fundamental speedup. The aggregate can be\nalso parallelized, but I don't think that changes much.\n\nFurthermore, this has a number of drawbacks too - e.g. it can't spill\ndata to disk, which might be an issue on more complex queries / larger\ndata sets.\n\nFWIW I wonder how representative this query is, considering it returns\n~1M node IDs, i.e. about 10% of the whole set of node IDs. Seems a bit\non the high side.\n\nI've also experimented with doing stuff from plpgsql procedure (that's\nwhat the non-aggregate functions are about). I saw this mostly as a way\nto do stuff that'd be hard to do in the recursive CTE, but it has a lot\nof additional execution overhead due to plpgsql. Maybe it we had some\nsmart trick to calculate adjacent nodes we could have a SRF written in C\nto get rid of the overhead.\n\nAnyway, this leads me to the question what graph databases are doing for\nthese queries, if they're faster in answering such queries (by how\nmuch?). I'm not very familiar with that stuff, but surely they do\nsomething smart - precalculating the data, some special indexing,\nduplicating the data in multiple places?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 5 Jun 2023 01:44:47 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Fri, Jun 2, 2023, at 10:01, Ants Aasma wrote:\n> Have you looked at roaring bitmaps? There is a pg_roaringbitmap\n> extension [1] already available that offers very fast unions,\n> intersections and membership tests over integer sets. I used it to get\n> some pretty impressive performance results for faceting search on\n> large document sets. [2]\n\nMany thanks for the tip!\n\nNew benchmark:\n\nWe already had since before:\n> wget https://snap.stanford.edu/data/soc-pokec-relationships.txt.gz\n> gunzip soc-pokec-relationships.txt.gz\n> CREATE TABLE edges (from_node INT, to_node INT);\n> \\copy edges from soc-pokec-relationships.txt;\n> ALTER TABLE edges ADD PRIMARY KEY (from_node, to_node);\n\nI've created a new `users` table from the `edges` table,\nwith a new `friends` roaringbitmap column:\n\nCREATE TABLE users AS\nSELECT from_node AS id, rb_build_agg(to_node) AS friends FROM edges GROUP BY 1;\nALTER TABLE users ADD PRIMARY KEY (id);\n\nOld query from before:\n\nCREATE OR REPLACE VIEW friends_of_friends AS\nWITH RECURSIVE friends_of_friends AS (\n SELECT\n ARRAY[5867::bigint] AS current,\n 0 AS depth\n UNION ALL\n SELECT\n new_current,\n friends_of_friends.depth + 1\n FROM\n friends_of_friends\n CROSS JOIN LATERAL (\n SELECT\n array_agg(DISTINCT edges.to_node) AS new_current\n FROM\n edges\n WHERE\n from_node = ANY(friends_of_friends.current)\n ) q\n WHERE\n friends_of_friends.depth < 3\n)\nSELECT\n coalesce(array_length(current, 1), 0) AS count_friends_at_depth_3\nFROM\n friends_of_friends\nWHERE\n depth = 3;\n;\n\nNew roaringbitmap-based query using users instead:\n\nCREATE OR REPLACE VIEW friends_of_friends_roaringbitmap AS\nWITH RECURSIVE friends_of_friends AS\n(\n SELECT\n friends,\n 1 AS depth\n FROM users WHERE id = 5867\n UNION ALL\n SELECT\n new_friends,\n friends_of_friends.depth + 1\n FROM\n friends_of_friends\n CROSS JOIN LATERAL (\n SELECT\n rb_or_agg(users.friends) AS new_friends\n FROM\n users\n WHERE\n users.id = ANY(rb_to_array(friends_of_friends.friends))\n ) q\n WHERE\n friends_of_friends.depth < 3\n)\nSELECT\n rb_cardinality(friends) AS count_friends_at_depth_3\nFROM\n friends_of_friends\nWHERE\n depth = 3\n;\n\nNote, depth is 1 at first level since we already have user 5867's friends in the users column.\n\nMaybe there is a better way to make use of the btree index on users.id,\nthan to convert the roaringbitmap to an array in order to use = ANY(...),\nthat is, this part: `users.id = ANY(rb_to_array(friends_of_friends.friends))`?\n\nOr maybe there is some entirely different but equivalent way of writing the query\nin a more efficient way?\n\n\nSELECT * FROM friends_of_friends;\n count_friends_at_depth_3\n--------------------------\n 1035293\n(1 row)\n\nSELECT * FROM friends_of_friends_roaringbitmap;\n count_friends_at_depth_3\n--------------------------\n 1035293\n(1 row)\n\nEXPLAIN ANALYZE SELECT * FROM friends_of_friends;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n CTE Scan on friends_of_friends (cost=5722.03..5722.73 rows=1 width=4) (actual time=2232.896..2233.289 rows=1 loops=1)\n Filter: (depth = 3)\n Rows Removed by Filter: 3\n CTE friends_of_friends\n -> Recursive Union (cost=0.00..5722.03 rows=31 width=36) (actual time=0.003..2228.707 rows=4 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=36) (actual time=0.001..0.001 rows=1 loops=1)\n -> Subquery Scan on \"*SELECT* 2\" (cost=190.59..572.17 rows=3 
width=36) (actual time=556.806..556.837 rows=1 loops=4)\n -> Nested Loop (cost=190.59..572.06 rows=3 width=36) (actual time=553.748..553.748 rows=1 loops=4)\n -> WorkTable Scan on friends_of_friends friends_of_friends_1 (cost=0.00..0.22 rows=3 width=36) (actual time=0.000..0.001 rows=1 loops=4)\n Filter: (depth < 3)\n Rows Removed by Filter: 0\n -> Aggregate (cost=190.59..190.60 rows=1 width=32) (actual time=737.427..737.427 rows=1 loops=3)\n -> Sort (cost=179.45..185.02 rows=2227 width=4) (actual time=577.192..649.812 rows=3486910 loops=3)\n Sort Key: edges.to_node\n Sort Method: quicksort Memory: 393217kB\n -> Index Only Scan using edges_pkey on edges (cost=0.56..55.62 rows=2227 width=4) (actual time=0.027..225.609 rows=3486910 loops=3)\n Index Cond: (from_node = ANY (friends_of_friends_1.current))\n Heap Fetches: 0\n Planning Time: 0.294 ms\n Execution Time: 2240.446 ms\n\nEXPLAIN ANALYZE SELECT * FROM friends_of_friends_roaringbitmap;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n CTE Scan on friends_of_friends (cost=799.30..800.00 rows=1 width=8) (actual time=492.925..492.930 rows=1 loops=1)\n Filter: (depth = 3)\n Rows Removed by Filter: 2\n CTE friends_of_friends\n -> Recursive Union (cost=0.43..799.30 rows=31 width=118) (actual time=0.061..492.842 rows=3 loops=1)\n -> Index Scan using users_pkey on users (cost=0.43..2.65 rows=1 width=118) (actual time=0.060..0.062 rows=1 loops=1)\n Index Cond: (id = 5867)\n -> Nested Loop (cost=26.45..79.63 rows=3 width=36) (actual time=164.244..164.244 rows=1 loops=3)\n -> WorkTable Scan on friends_of_friends friends_of_friends_1 (cost=0.00..0.22 rows=3 width=36) (actual time=0.000..0.001 rows=1 loops=3)\n Filter: (depth < 3)\n Rows Removed by Filter: 0\n -> Aggregate (cost=26.45..26.46 rows=1 width=32) (actual time=246.359..246.359 rows=1 loops=2)\n -> Index Scan using users_pkey on users users_1 (cost=0.43..26.42 rows=10 width=114) (actual time=0.074..132.318 rows=116336 loops=2)\n Index Cond: (id = ANY (rb_to_array(friends_of_friends_1.friends)))\n Planning Time: 0.257 ms\n Execution Time: 493.134 ms\n\n\n",
"msg_date": "Mon, 05 Jun 2023 11:27:53 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Mon, Jun 5, 2023, at 01:44, Tomas Vondra wrote:\n> Anyway, I hacked together a trivial set backed by an open addressing\n> hash table:\n>\n> https://github.com/tvondra/hashset\n>\n> It's super-basic, providing just some bare minimum of functions, but\n> hopefully good enough for experiments.\n>\n> - hashset data type - hash table in varlena\n> - hashset_init - create new hashset\n> - hashset_add / hashset_contains - add value / check membership\n> - hashset_merge - merge two hashsets\n> - hashset_count - count elements\n> - hashset_to_array - return\n> - hashset(int) aggregate\n>\n> This allows rewriting the recursive query like this:\n>\n> WITH RECURSIVE friends_of_friends AS (\n...\n\nNice! I get similar results, 737 ms (hashset) vs 1809 ms (array_agg).\n\nI think if you just add one more hashset function, it will be a win against roaringbitmap, which is 400 ms.\n\nThe missing function is an agg that takes hashset as input and returns hashset, similar to roaringbitmap's rb_or_agg().\n\nWith such a function, we could add an adjacent nodes hashset column to the `nodes` table, which would eliminate the need to scan the edges table for graph traversal:\n\nWe could then benchmark roaringbitmap against hashset querying the same table:\n\nCREATE TABLE users AS\nSELECT\n from_node AS id,\n rb_build_agg(to_node) AS friends_roaringbitmap,\n hashset(to_node) AS friends_hashset\nFROM edges\nGROUP BY 1;\n\n/Joel\n\n\n",
"msg_date": "Mon, 05 Jun 2023 21:52:23 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "\nOn 6/5/23 21:52, Joel Jacobson wrote:\n> On Mon, Jun 5, 2023, at 01:44, Tomas Vondra wrote:\n>> Anyway, I hacked together a trivial set backed by an open addressing\n>> hash table:\n>>\n>> https://github.com/tvondra/hashset\n>>\n>> It's super-basic, providing just some bare minimum of functions, but\n>> hopefully good enough for experiments.\n>>\n>> - hashset data type - hash table in varlena\n>> - hashset_init - create new hashset\n>> - hashset_add / hashset_contains - add value / check membership\n>> - hashset_merge - merge two hashsets\n>> - hashset_count - count elements\n>> - hashset_to_array - return\n>> - hashset(int) aggregate\n>>\n>> This allows rewriting the recursive query like this:\n>>\n>> WITH RECURSIVE friends_of_friends AS (\n> ...\n> \n> Nice! I get similar results, 737 ms (hashset) vs 1809 ms (array_agg).\n> \n> I think if you just add one more hashset function, it will be a win against roaringbitmap, which is 400 ms.\n> \n> The missing function is an agg that takes hashset as input and returns hashset, similar to roaringbitmap's rb_or_agg().\n> \n> With such a function, we could add an adjacent nodes hashset column to the `nodes` table, which would eliminate the need to scan the edges table for graph traversal:\n> \n\nI added a trivial version of such aggregate hashset(hashset), and if I\nrewrite the CTE like this:\n\n WITH RECURSIVE friends_of_friends AS (\n SELECT\n (select hashset(v) from values (5867) as s(v)) AS current,\n 0 AS depth\n UNION ALL\n SELECT\n new_current,\n friends_of_friends.depth + 1\n FROM\n friends_of_friends\n CROSS JOIN LATERAL (\n SELECT\n hashset(edges.to_node) AS new_current\n FROM\n edges\n WHERE\n from_node =\nANY(hashset_to_array(friends_of_friends.current))\n ) q\n WHERE\n friends_of_friends.depth < 3\n )\n SELECT\n depth,\n hashset_count(current)\n FROM\n friends_of_friends\n WHERE\n depth = 3;\n\nit cuts the timing to about 50% on my laptop, so maybe it'll be ~300ms\non your system. There's a bunch of opportunities for more improvements,\nas the hash table implementation is pretty naive/silly, the on-disk\nformat is wasteful and so on.\n\nBut before spending more time on that, it'd be interesting to know what\nwould be a competitive timing. I mean, what would be \"good enough\"? What\ntimings are achievable with graph databases?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 6 Jun 2023 13:20:40 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Tue, Jun 6, 2023, at 13:20, Tomas Vondra wrote:\n> it cuts the timing to about 50% on my laptop, so maybe it'll be ~300ms\n> on your system. There's a bunch of opportunities for more improvements,\n> as the hash table implementation is pretty naive/silly, the on-disk\n> format is wasteful and so on.\n>\n> But before spending more time on that, it'd be interesting to know what\n> would be a competitive timing. I mean, what would be \"good enough\"? What\n> timings are achievable with graph databases?\n\nYour hashset is now almost exactly as fast as the corresponding roaringbitmap query, +/- 1 ms on my machine.\n\nI tested Neo4j and the results are surprising; it appears to be significantly *slower*.\nHowever, I've probably misunderstood something, maybe I need to add some index or something.\nEven so, it's interesting it's apparently not fast \"by default\".\n\nThe query I tested:\nMATCH (user:User {id: '5867'})-[:FRIENDS_WITH*3..3]->(fof)\nRETURN COUNT(DISTINCT fof)\n\nHere is how I loaded the data into it:\n\n% pwd\n/Users/joel/Library/Application Support/Neo4j Desktop/Application/relate-data/dbmss/dbms-3837aa22-c830-4dcf-8668-ef8e302263c7\n\n% head import/*\n==> import/friendships.csv <==\n1,13,FRIENDS_WITH\n1,11,FRIENDS_WITH\n1,6,FRIENDS_WITH\n1,3,FRIENDS_WITH\n1,4,FRIENDS_WITH\n1,5,FRIENDS_WITH\n1,15,FRIENDS_WITH\n1,14,FRIENDS_WITH\n1,7,FRIENDS_WITH\n1,8,FRIENDS_WITH\n\n==> import/friendships_header.csv <==\n:START_ID(User),:END_ID(User),:TYPE\n\n==> import/users.csv <==\n1,User\n2,User\n3,User\n4,User\n5,User\n6,User\n7,User\n8,User\n9,User\n10,User\n\n==> import/users_header.csv <==\nid:ID(User),:LABEL\n\n% ./bin/neo4j-admin database import full --overwrite-destination --nodes=User=import/users_header.csv,import/users.csv --relationships=FRIENDS_WIDTH=import/friendships_header.csv,import/friendships.csv neo4j\n\n/Joel\n\n\n",
"msg_date": "Wed, 07 Jun 2023 16:21:52 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "\n\nOn 6/7/23 16:21, Joel Jacobson wrote:\n> On Tue, Jun 6, 2023, at 13:20, Tomas Vondra wrote:\n>> it cuts the timing to about 50% on my laptop, so maybe it'll be ~300ms\n>> on your system. There's a bunch of opportunities for more improvements,\n>> as the hash table implementation is pretty naive/silly, the on-disk\n>> format is wasteful and so on.\n>>\n>> But before spending more time on that, it'd be interesting to know what\n>> would be a competitive timing. I mean, what would be \"good enough\"? What\n>> timings are achievable with graph databases?\n> \n> Your hashset is now almost exactly as fast as the corresponding roaringbitmap query, +/- 1 ms on my machine.\n> \n\nInteresting, considering how dumb the the hash table implementation is.\n\n> I tested Neo4j and the results are surprising; it appears to be significantly *slower*.\n> However, I've probably misunderstood something, maybe I need to add some index or something.\n> Even so, it's interesting it's apparently not fast \"by default\".\n> \n\nNo idea how to fix that, but it's rather suspicious.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 7 Jun 2023 19:37:45 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Wed, Jun 7, 2023, at 19:37, Tomas Vondra wrote:\n> Interesting, considering how dumb the the hash table implementation is.\n\nThat's promising.\n\n>> I tested Neo4j and the results are surprising; it appears to be significantly *slower*.\n>> However, I've probably misunderstood something, maybe I need to add some index or something.\n>> Even so, it's interesting it's apparently not fast \"by default\".\n>> \n>\n> No idea how to fix that, but it's rather suspicious.\n\nI've had a graph-db expert review my benchmark, and he suggested adding an index:\n\nCREATE INDEX FOR (n:User) ON (n.id)\n\nThis did improve the execution time for Neo4j a bit, down from 819 ms to 528 ms, but PostgreSQL 299 ms is still a win.\n\nBenchmark here: https://github.com/joelonsql/graph-query-benchmarks\nNote, in this benchmark, I only test the naive RECURSIVE CTE approach using array_agg(DISTINCT ...).\nAnd I couldn't even test the most connected user with Neo4j, the query never finish for some reason,\nso I had to test with a less connected user.\n\nThe graph expert also said that other more realistic graph use-cases might be \"multi-relational\",\nand pointed me to a link: https://github.com/totogo/awesome-knowledge-graph#knowledge-graph-dataset\nNo idea how such multi-relational datasets would affect the benchmarks.\n\nI think we have already strong indicators that PostgreSQL with a hashset type will from a relative\nperformance perspective, do just fine processing basic graph queries, even with large datasets.\n\nThen there will always be the case when users primarily write very different graph queries all day long,\nwho might prefer a graph query *language*, like SQL/PGQ in SQL:2023, Cypher or Gremlin.\n\n/Joel\n\n\n",
"msg_date": "Thu, 08 Jun 2023 11:41:35 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On 6/8/23 11:41, Joel Jacobson wrote:\n> On Wed, Jun 7, 2023, at 19:37, Tomas Vondra wrote:\n>> Interesting, considering how dumb the the hash table implementation is.\n> \n> That's promising.\n> \n\nYeah, not bad for sleep-deprived on-plane hacking ...\n\nThere's a bunch of stuff that needs to be improved to make this properly\nusable, like:\n\n1) better hash table implementation\n\n2) input/output functions\n\n3) support for other types (now it only works with int32)\n\n4) I wonder if this might be done as an array-like polymorphic type.\n\n5) more efficient storage format, with versioning etc.\n\n6) regression tests\n\nWould you be interested in helping with / working on some of that? I\ndon't have immediate need for this stuff, so it's not very high on my\nTODO list.\n\n>>> I tested Neo4j and the results are surprising; it appears to be significantly *slower*.\n>>> However, I've probably misunderstood something, maybe I need to add some index or something.\n>>> Even so, it's interesting it's apparently not fast \"by default\".\n>>>\n>>\n>> No idea how to fix that, but it's rather suspicious.\n> \n> I've had a graph-db expert review my benchmark, and he suggested adding an index:\n> \n> CREATE INDEX FOR (n:User) ON (n.id)\n> \n> This did improve the execution time for Neo4j a bit, down from 819 ms to 528 ms, but PostgreSQL 299 ms is still a win.\n> \n> Benchmark here: https://github.com/joelonsql/graph-query-benchmarks\n> Note, in this benchmark, I only test the naive RECURSIVE CTE approach using array_agg(DISTINCT ...).\n> And I couldn't even test the most connected user with Neo4j, the query never finish for some reason,\n> so I had to test with a less connected user.\n> \n\nInteresting. I'd have expected the graph db to be much faster.\n\n> The graph expert also said that other more realistic graph use-cases might be \"multi-relational\",\n> and pointed me to a link: https://github.com/totogo/awesome-knowledge-graph#knowledge-graph-dataset\n> No idea how such multi-relational datasets would affect the benchmarks.\n> \n\nNot sure either, but I don't have ambition to improve everything at\nonce. If the hashset improves one practical use case, fine with me.\n\n> I think we have already strong indicators that PostgreSQL with a hashset type will from a relative\n> performance perspective, do just fine processing basic graph queries, even with large datasets.\n> \n> Then there will always be the case when users primarily write very different graph queries all day long,\n> who might prefer a graph query *language*, like SQL/PGQ in SQL:2023, Cypher or Gremlin.\n> \n\nRight. IMHO the query language is a separate thing, you still need to\nevaluate the query somehow - which is where hashset applies.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 8 Jun 2023 12:19:07 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Thu, Jun 8, 2023, at 12:19, Tomas Vondra wrote:\n> Would you be interested in helping with / working on some of that? I\n> don't have immediate need for this stuff, so it's not very high on my\n> TODO list.\n\nSure, I'm willing to help!\n\nI've attached a patch that works on some of the items on your list,\nincluding some additions to the README.md.\n\nThere were a bunch of places where `maxelements / 8` caused bugs,\nthat had to be changed to do proper integer ceiling division:\n\n- values = (int32 *) (set->data + set->maxelements / 8);\n+ values = (int32 *) (set->data + (set->maxelements + 7) / 8);\n\nSide note: I wonder if it would be good to add CEIL_DIV and FLOOR_DIV macros\nto the PostgreSQL source code in general, since it's easy to make this mistake,\nand quite verbose/error-prone to write it out manually everywhere.\nSuch macros could simplify code in e.g. numeric.c.\n\n> There's a bunch of stuff that needs to be improved to make this properly\n> usable, like:\n>\n> 1) better hash table implementation\nTODO\n\n> 2) input/output functions\nI've attempted to implement these.\nI thought comma separated values wrapped around curly braces felt as the most natural format,\nexample:\nSELECT '{1,2,3}'::hashset;\n\n> 3) support for other types (now it only works with int32)\nTODO\n\n> 4) I wonder if this might be done as an array-like polymorphic type.\nThat would be nice!\nI guess the work-around would be to store the actual value of non-int type\nin a lookup table, and then hash the int-based primary key in such table.\n\nDo you think later implementing polymorphic type support would\nmean a more or less complete rewrite, or can we carry on with int32-support\nand add it later on?\n\n> 5) more efficient storage format, with versioning etc.\nTODO\n\n> 6) regression tests\nI've added some regression tests.\n\n> Right. IMHO the query language is a separate thing, you still need to\n> evaluate the query somehow - which is where hashset applies.\n\nGood point, I fully agree.\n\n/Joel",
"msg_date": "Fri, 09 Jun 2023 12:58:18 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Fri, Jun 9, 2023 at 6:58 PM Joel Jacobson <[email protected]> wrote:\n\n> On Thu, Jun 8, 2023, at 12:19, Tomas Vondra wrote:\n> > Would you be interested in helping with / working on some of that? I\n> > don't have immediate need for this stuff, so it's not very high on my\n> > TODO list.\n>\n> Sure, I'm willing to help!\n>\n> I've attached a patch that works on some of the items on your list,\n> including some additions to the README.md.\n>\n> There were a bunch of places where `maxelements / 8` caused bugs,\n> that had to be changed to do proper integer ceiling division:\n>\n> - values = (int32 *) (set->data + set->maxelements / 8);\n> + values = (int32 *) (set->data + (set->maxelements + 7) / 8);\n>\n> Side note: I wonder if it would be good to add CEIL_DIV and FLOOR_DIV\n> macros\n> to the PostgreSQL source code in general, since it's easy to make this\n> mistake,\n> and quite verbose/error-prone to write it out manually everywhere.\n> Such macros could simplify code in e.g. numeric.c.\n>\n> > There's a bunch of stuff that needs to be improved to make this properly\n> > usable, like:\n> >\n> > 1) better hash table implementation\n> TODO\n>\n> > 2) input/output functions\n> I've attempted to implement these.\n> I thought comma separated values wrapped around curly braces felt as the\n> most natural format,\n> example:\n> SELECT '{1,2,3}'::hashset;\n>\n> > 3) support for other types (now it only works with int32)\n> TODO\n>\n> > 4) I wonder if this might be done as an array-like polymorphic type.\n> That would be nice!\n> I guess the work-around would be to store the actual value of non-int type\n> in a lookup table, and then hash the int-based primary key in such table.\n>\n> Do you think later implementing polymorphic type support would\n> mean a more or less complete rewrite, or can we carry on with int32-support\n> and add it later on?\n>\n> > 5) more efficient storage format, with versioning etc.\n> TODO\n>\n> > 6) regression tests\n> I've added some regression tests.\n>\n> > Right. IMHO the query language is a separate thing, you still need to\n> > evaluate the query somehow - which is where hashset applies.\n>\n> Good point, I fully agree.\n>\n> /Joel\n\n\n\n\nHi, I am quite new about C.....\nThe following function I have 3 questions.\n1. 7691,4201, I assume they are just random prime ints?\n2. I don't get the last return set, even the return type should be bool.\n3. I don't understand 13 in hash = (hash + 13) % set->maxelements;\n\nstatic bool\nhashset_contains_element(hashset_t *set, int32 value)\n{\n\tint\t\tbyte;\n\tint\t\tbit;\n\tuint32\thash;\n\tchar *bitmap;\n\tint32 *values;\n\n\thash = ((uint32) value * 7691 + 4201) % set->maxelements;\n\n\tbitmap = set->data;\n\tvalues = (int32 *) (set->data + set->maxelements / 8);\n\n\twhile (true)\n\t{\n\t\tbyte = (hash / 8);\n\t\tbit = (hash % 8);\n\n\t\t/* found an empty slot, value is not there */\n\t\tif ((bitmap[byte] & (0x01 << bit)) == 0)\n\t\t\treturn false;\n\n\t\t/* is it the same value? */\n\t\tif (values[hash] == value)\n\t\t\treturn true;\n\n\t\t/* move to the next element */\n\t\thash = (hash + 13) % set->maxelements;\n\t}\n\n\treturn set;\n}\n\nOn Fri, Jun 9, 2023 at 6:58 PM Joel Jacobson <[email protected]> wrote:On Thu, Jun 8, 2023, at 12:19, Tomas Vondra wrote:\n> Would you be interested in helping with / working on some of that? 
I\n> don't have immediate need for this stuff, so it's not very high on my\n> TODO list.\n\nSure, I'm willing to help!\n\nI've attached a patch that works on some of the items on your list,\nincluding some additions to the README.md.\n\nThere were a bunch of places where `maxelements / 8` caused bugs,\nthat had to be changed to do proper integer ceiling division:\n\n- values = (int32 *) (set->data + set->maxelements / 8);\n+ values = (int32 *) (set->data + (set->maxelements + 7) / 8);\n\nSide note: I wonder if it would be good to add CEIL_DIV and FLOOR_DIV macros\nto the PostgreSQL source code in general, since it's easy to make this mistake,\nand quite verbose/error-prone to write it out manually everywhere.\nSuch macros could simplify code in e.g. numeric.c.\n\n> There's a bunch of stuff that needs to be improved to make this properly\n> usable, like:\n>\n> 1) better hash table implementation\nTODO\n\n> 2) input/output functions\nI've attempted to implement these.\nI thought comma separated values wrapped around curly braces felt as the most natural format,\nexample:\nSELECT '{1,2,3}'::hashset;\n\n> 3) support for other types (now it only works with int32)\nTODO\n\n> 4) I wonder if this might be done as an array-like polymorphic type.\nThat would be nice!\nI guess the work-around would be to store the actual value of non-int type\nin a lookup table, and then hash the int-based primary key in such table.\n\nDo you think later implementing polymorphic type support would\nmean a more or less complete rewrite, or can we carry on with int32-support\nand add it later on?\n\n> 5) more efficient storage format, with versioning etc.\nTODO\n\n> 6) regression tests\nI've added some regression tests.\n\n> Right. IMHO the query language is a separate thing, you still need to\n> evaluate the query somehow - which is where hashset applies.\n\nGood point, I fully agree.\n\n/JoelHi, I am quite new about C.....The following function I have 3 questions.1. 7691,4201, I assume they are just random prime ints?2. I don't get the last return set, even the return type should be bool. 3. I don't understand 13 in hash = (hash + 13) % set->maxelements;static bool\nhashset_contains_element(hashset_t *set, int32 value)\n{\n\tint\t\tbyte;\n\tint\t\tbit;\n\tuint32\thash;\n\tchar *bitmap;\n\tint32 *values;\n\n\thash = ((uint32) value * 7691 + 4201) % set->maxelements;\n\n\tbitmap = set->data;\n\tvalues = (int32 *) (set->data + set->maxelements / 8);\n\n\twhile (true)\n\t{\n\t\tbyte = (hash / 8);\n\t\tbit = (hash % 8);\n\n\t\t/* found an empty slot, value is not there */\n\t\tif ((bitmap[byte] & (0x01 << bit)) == 0)\n\t\t\treturn false;\n\n\t\t/* is it the same value? */\n\t\tif (values[hash] == value)\n\t\t\treturn true;\n\n\t\t/* move to the next element */\n\t\thash = (hash + 13) % set->maxelements;\n\t}\n\n\treturn set;\n}",
"msg_date": "Fri, 9 Jun 2023 19:33:09 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Fri, Jun 9, 2023, at 13:33, jian he wrote:\n> Hi, I am quite new about C.....\n> The following function I have 3 questions.\n> 1. 7691,4201, I assume they are just random prime ints?\n\nYes, 7691 and 4201 are likely chosen as random prime numbers.\nIn hash functions, prime numbers are often used to help in evenly distributing\nthe hash values across the range and reduce the chance of collisions.\n\n> 2. I don't get the last return set, even the return type should be bool.\n\nThanks, you found a mistake!\n\nThe line\n\n return set;\n\nis actually unreachable and should be removed.\nThe function will always return either true or false within the while loop and\nnever reach the final return statement.\n\nI've attached a new incremental patch with this fix.\n\n> 3. I don't understand 13 in hash = (hash + 13) % set->maxelements;\n\nThe value 13 is used for linear probing [1] in handling hash collisions.\nLinear probing sequentially checks the next slot in the array when a collision\noccurs. 13, being a small prime number not near a power of 2, helps in uniformly\ndistributing data and ensuring that all slots are probed, as it's relatively prime\nto the hash table size.\n\nHm, I realise we actually don't ensure the hash table size and step size (13)\nare coprime. I've fixed that in the attached patch as well.\n\n[1] https://en.wikipedia.org/wiki/Linear_probing\n\n/Joel",
"msg_date": "Fri, 09 Jun 2023 13:56:28 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On 2023-06-09 Fr 07:56, Joel Jacobson wrote:\n> On Fri, Jun 9, 2023, at 13:33, jian he wrote:\n> > Hi, I am quite new about C.....\n> > The following function I have 3 questions.\n> > 1. 7691,4201, I assume they are just random prime ints?\n>\n> Yes, 7691 and 4201 are likely chosen as random prime numbers.\n> In hash functions, prime numbers are often used to help in evenly \n> distributing\n> the hash values across the range and reduce the chance of collisions.\n>\n> > 2. I don't get the last return set, even the return type should be bool.\n>\n> Thanks, you found a mistake!\n>\n> The line\n>\n> return set;\n>\n> is actually unreachable and should be removed.\n> The function will always return either true or false within the while \n> loop and\n> never reach the final return statement.\n>\n> I've attached a new incremental patch with this fix.\n>\n> > 3. I don't understand 13 in hash = (hash + 13) % set->maxelements;\n>\n> The value 13 is used for linear probing [1] in handling hash collisions.\n> Linear probing sequentially checks the next slot in the array when a \n> collision\n> occurs. 13, being a small prime number not near a power of 2, helps in \n> uniformly\n> distributing data and ensuring that all slots are probed, as it's \n> relatively prime\n> to the hash table size.\n>\n> Hm, I realise we actually don't ensure the hash table size and step \n> size (13)\n> are coprime. I've fixed that in the attached patch as well.\n>\n> [1] https://en.wikipedia.org/wiki/Linear_probing\n>\n>\n\n\nMaybe you can post a full patch as well as incremental?\n\nStylistically I think you should reduce reliance on magic numbers (like \n13). Probably need some #define's?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-09 Fr 07:56, Joel Jacobson\n wrote:\n\n\n\n\n\nOn Fri, Jun 9, 2023, at 13:33, jian he wrote:\n\n> Hi, I am quite new about C.....\n\n> The following function I have 3 questions.\n\n> 1. 7691,4201, I assume they are just random prime ints?\n\n\n\nYes, 7691 and 4201 are likely chosen as random prime numbers.\n\nIn hash functions, prime numbers are often used to help in\n evenly distributing\n\nthe hash values across the range and reduce the chance of\n collisions.\n\n\n\n> 2. I don't get the last return set, even the return type\n should be bool.\n\n\n\nThanks, you found a mistake!\n\n\n\nThe line\n\n\n\n return set;\n\n\n\nis actually unreachable and should be removed.\n\nThe function will always return either true or false within\n the while loop and\n\nnever reach the final return statement.\n\n\n\nI've attached a new incremental patch with this fix.\n\n\n\n> 3. I don't understand 13 in hash = (hash + 13) %\n set->maxelements;\n\n\n\nThe value 13 is used for linear probing [1] in handling hash\n collisions.\n\nLinear probing sequentially checks the next slot in the array\n when a collision\n\noccurs. 13, being a small prime number not near a power of 2,\n helps in uniformly\n\ndistributing data and ensuring that all slots are probed, as\n it's relatively prime\n\nto the hash table size.\n\n\n\nHm, I realise we actually don't ensure the hash table size\n and step size (13)\n\nare coprime. I've fixed that in the attached patch as well.\n\n\n[1] https://en.wikipedia.org/wiki/Linear_probing\n\n\n\n\n\n\n\n\n\n\nMaybe you can post a full patch as well as incremental?\nStylistically I think you should reduce reliance on magic numbers\n (like 13). 
Probably need some #define's?\n\n\ncheers\n\n\nandrew \n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 10 Jun 2023 11:46:00 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "in funcion. hashset_in\n\nint32 value = strtol(str, &endptr, 10);\nthere is no int32 value range check?\nimitate src/backend/utils/adt/int.c. the following way is what I came up\nwith.\n\nint64 value = strtol(str, &endptr, 10);\nif (errno == ERANGE || value < INT_MIN || value > INT_MAX)\nereturn(fcinfo->context, (Datum) 0,\n(errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\nerrmsg(\"value \\\"%s\\\" is out of range for type %s\", str,\n\"integer\")));\n\n set = hashset_add_element(set, (int32)value);\n\nalso it will infinity loop in hashset_in if supply the wrong value....\nexample select '{1,2s}'::hashset;\nI need kill -9 to kill the process.\n\nin funcion. hashset_in int32 value = strtol(str, &endptr, 10);there is no int32 value range check? imitate src/backend/utils/adt/int.c. the following way is what I came up with.int64 value = strtol(str, &endptr, 10);\t\tif (errno == ERANGE || value < INT_MIN || value > INT_MAX)\t\t\tereturn(fcinfo->context, (Datum) 0,\t\t\t\t\t(errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\t\t\t\t\t errmsg(\"value \\\"%s\\\" is out of range for type %s\", str,\t\t\t\t\t\t\t\"integer\"))); set = hashset_add_element(set, (int32)value);also it will infinity loop in hashset_in if supply the wrong value.... example select '{1,2s}'::hashset;I need kill -9 to kill the process.",
"msg_date": "Sat, 10 Jun 2023 23:51:30 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "\n\nOn 6/9/23 12:58, Joel Jacobson wrote:\n> On Thu, Jun 8, 2023, at 12:19, Tomas Vondra wrote:\n>> Would you be interested in helping with / working on some of that? I\n>> don't have immediate need for this stuff, so it's not very high on my\n>> TODO list.\n> \n> Sure, I'm willing to help!\n> \n> I've attached a patch that works on some of the items on your list,\n> including some additions to the README.md.\n> \n> There were a bunch of places where `maxelements / 8` caused bugs,\n> that had to be changed to do proper integer ceiling division:\n> \n> - values = (int32 *) (set->data + set->maxelements / 8);\n> + values = (int32 *) (set->data + (set->maxelements + 7) / 8);\n> \n> Side note: I wonder if it would be good to add CEIL_DIV and FLOOR_DIV macros\n> to the PostgreSQL source code in general, since it's easy to make this mistake,\n> and quite verbose/error-prone to write it out manually everywhere.\n> Such macros could simplify code in e.g. numeric.c.\n> \n\nYeah, it'd be good to have macros to calculate the sizes. We'll need\nthat in many places.\n\n>> There's a bunch of stuff that needs to be improved to make this properly\n>> usable, like:\n>>\n>> 1) better hash table implementation\n> TODO\n> \n>> 2) input/output functions\n> I've attempted to implement these.\n> I thought comma separated values wrapped around curly braces felt as the most natural format,\n> example:\n> SELECT '{1,2,3}'::hashset;\n> \n\n+1 to that. I'd mimic the array in/out functions as much as possible.\n\n>> 3) support for other types (now it only works with int32)\n> TODO\n> \n\nI think we should decide what types we want/need to support, and add one\nor two types early. Otherwise we'll have code / on-disk format making\nvarious assumptions about the type length etc.\n\nI have no idea what types people use as node IDs - is it likely we'll\nneed to support types passed by reference / varlena types? Or can we\njust assume it's int/bigint?\n\n>> 4) I wonder if this might be done as an array-like polymorphic type.\n> That would be nice!\n> I guess the work-around would be to store the actual value of non-int type\n> in a lookup table, and then hash the int-based primary key in such table.\n> \n> Do you think later implementing polymorphic type support would\n> mean a more or less complete rewrite, or can we carry on with int32-support\n> and add it later on?\n> \n\nI don't know. From the storage perspective it doesn't matter much, I\nthink, we would not need to change that. But I think adding a\npolymorphic type (array-like) would require changes to grammar, and\nthat's not possible for an extension. If there's a way, I'm not aware of\nit and I don't recall an extension doing that.\n\n>> 5) more efficient storage format, with versioning etc.\n> TODO\n> \n\nI think the main question is whether to serialize the hash table as is,\nor compact it in some way. The current code just uses the same thing for\nboth cases - on-disk format and in-memory representation (aggstate).\nThat's simple, but it also means the on-disk value is likely not well\ncompressible (because it's ~50% random data.\n\nI've thought about serializing just the values (as a simple array), but\nthat defeats the whole purpose of fast membership checks. 
I have two ideas:\n\na) sort the data, and use binary search for this compact variant (and\nthen expand it into \"full\" hash table if needed)\n\nb) use a more compact hash table (with load factor much closer to 1.0)\n\nNot sure which if this option is the right one, each has cost for\nconverting between formats (but large on-disk value is not free either).\n\nThat's roughly what I did for the tdigest extension.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
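A brief C sketch of option (a): if the compact on-disk form stores just the sorted values, membership checks can run a binary search directly on that array before (or instead of) expanding it into a full hash table. The function below is illustrative only and assumes nothing about the extension's actual serialization.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* membership test against a sorted int32 array, O(log n) */
static bool
sorted_set_contains(const int32_t *values, size_t nvalues, int32_t key)
{
    size_t  lo = 0;
    size_t  hi = nvalues;       /* search the half-open range [lo, hi) */

    while (lo < hi)
    {
        size_t  mid = lo + (hi - lo) / 2;

        if (values[mid] == key)
            return true;
        else if (values[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return false;
}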
"msg_date": "Sat, 10 Jun 2023 22:12:36 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "\n\nOn 6/10/23 17:46, Andrew Dunstan wrote:\n> \n> On 2023-06-09 Fr 07:56, Joel Jacobson wrote:\n>> On Fri, Jun 9, 2023, at 13:33, jian he wrote:\n>> > Hi, I am quite new about C.....\n>> > The following function I have 3 questions.\n>> > 1. 7691,4201, I assume they are just random prime ints?\n>>\n>> Yes, 7691 and 4201 are likely chosen as random prime numbers.\n>> In hash functions, prime numbers are often used to help in evenly\n>> distributing\n>> the hash values across the range and reduce the chance of collisions.\n>>\n>> > 2. I don't get the last return set, even the return type should be bool.\n>>\n>> Thanks, you found a mistake!\n>>\n>> The line\n>>\n>> return set;\n>>\n>> is actually unreachable and should be removed.\n>> The function will always return either true or false within the while\n>> loop and\n>> never reach the final return statement.\n>>\n>> I've attached a new incremental patch with this fix.\n>>\n>> > 3. I don't understand 13 in hash = (hash + 13) % set->maxelements;\n>>\n>> The value 13 is used for linear probing [1] in handling hash collisions.\n>> Linear probing sequentially checks the next slot in the array when a\n>> collision\n>> occurs. 13, being a small prime number not near a power of 2, helps in\n>> uniformly\n>> distributing data and ensuring that all slots are probed, as it's\n>> relatively prime\n>> to the hash table size.\n>>\n>> Hm, I realise we actually don't ensure the hash table size and step\n>> size (13)\n>> are coprime. I've fixed that in the attached patch as well.\n>>\n>> [1] https://en.wikipedia.org/wiki/Linear_probing\n>>\n>>\n> \n> \n> Maybe you can post a full patch as well as incremental?\n> \n\nI wonder if we should keep discussing this extension here, considering\nit's going to be out of core (at least for now). Not sure how many\npgsql-hackers are interested in this, so maybe we should just move it to\ngithub PRs or something ...\n\n\n> Stylistically I think you should reduce reliance on magic numbers (like\n> 13). Probably need some #define's?\n> \n\nYeah, absolutely. This was just pure laziness.\n\n\nregard\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 10 Jun 2023 22:26:39 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Sat, Jun 10, 2023, at 22:26, Tomas Vondra wrote:\n> On 6/10/23 17:46, Andrew Dunstan wrote:\n>> \n>> Maybe you can post a full patch as well as incremental?\n>> \n>\n> I wonder if we should keep discussing this extension here, considering\n> it's going to be out of core (at least for now). Not sure how many\n> pgsql-hackers are interested in this, so maybe we should just move it to\n> github PRs or something ...\n\nI think there are some good arguments that speaks in favour of including it in core:\n\n1. It's a fundamental data structure. Perhaps \"set\" would have been a better name,\nsince the use of hash functions from an end-user perspective is implementation\ndetails, but we cannot use that word since it's a reserved keyword, hence \"hashset\".\n\n2. The addition of SQL/PGQ in SQL:2023 is evidence of a general perceived need\namong SQL users to evaluate graph queries. Even if a future implementation of SQL/PGQ\nwould mean users wouldn't need to deal with the hashset type directly, the same\ntype could hopefully be used, in part or in whole, under the hood by the future \nSQL/PGQ implementation. If low-level functionality is useful on its own, I think it's\na benefit of exposing it to users, since it allows system testing of the component\nin isolation, even if it's primarily gonna be used as a smaller part of a larger more\nhigh-level component.\n\n3. I think there is a general need for hashset, experienced by myself, Andrew and\nI would guess lots of others users. The general pattern that will be improved is\nwhen you currently would do array_agg(DISTINCT ...)\nprobably there are other situations too, since it's a fundamental data structure.\n\nOn Sat, Jun 10, 2023, at 22:12, Tomas Vondra wrote:\n>>> 3) support for other types (now it only works with int32)\n> I think we should decide what types we want/need to support, and add one\n> or two types early. Otherwise we'll have code / on-disk format making\n> various assumptions about the type length etc.\n>\n> I have no idea what types people use as node IDs - is it likely we'll\n> need to support types passed by reference / varlena types? Or can we\n> just assume it's int/bigint?\n\nI think we should just support data types that would be sensible\nto use as a PRIMARY KEY in a fully normalised data model,\nwhich I believe would only include \"int\", \"bigint\" and \"uuid\".\n\n/Joel\n\n\n",
"msg_date": "Sun, 11 Jun 2023 12:26:39 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On 2023-06-11 Su 06:26, Joel Jacobson wrote:\n> On Sat, Jun 10, 2023, at 22:26, Tomas Vondra wrote:\n>> On 6/10/23 17:46, Andrew Dunstan wrote:\n>>> Maybe you can post a full patch as well as incremental?\n>>>\n>> I wonder if we should keep discussing this extension here, considering\n>> it's going to be out of core (at least for now). Not sure how many\n>> pgsql-hackers are interested in this, so maybe we should just move it to\n>> github PRs or something ...\n> I think there are some good arguments that speaks in favour of including it in core:\n>\n> 1. It's a fundamental data structure.\n\n\nThat's reason enough IMNSHO.\n\n\n> Perhaps \"set\" would have been a better name,\n> since the use of hash functions from an end-user perspective is implementation\n> details, but we cannot use that word since it's a reserved keyword, hence \"hashset\".\n\n\nRather than use \"hashset\", which as you say is based on an \nimplementation detail, I would prefer something like \"integer_set\" - \nwhat it's a set of.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-11 Su 06:26, Joel Jacobson\n wrote:\n\n\nOn Sat, Jun 10, 2023, at 22:26, Tomas Vondra wrote:\n\n\nOn 6/10/23 17:46, Andrew Dunstan wrote:\n\n\n\nMaybe you can post a full patch as well as incremental?\n\n\n\n\nI wonder if we should keep discussing this extension here, considering\nit's going to be out of core (at least for now). Not sure how many\npgsql-hackers are interested in this, so maybe we should just move it to\ngithub PRs or something ...\n\n\n\nI think there are some good arguments that speaks in favour of including it in core:\n\n1. It's a fundamental data structure. \n\n\n\nThat's reason enough IMNSHO.\n\n\n\n\nPerhaps \"set\" would have been a better name,\nsince the use of hash functions from an end-user perspective is implementation\ndetails, but we cannot use that word since it's a reserved keyword, hence \"hashset\".\n\n\n\nRather than use \"hashset\", which as you say is based on an\n implementation detail, I would prefer something like \"integer_set\"\n - what it's a set of.\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 11 Jun 2023 10:58:41 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "\n\nOn 6/11/23 12:26, Joel Jacobson wrote:\n> On Sat, Jun 10, 2023, at 22:26, Tomas Vondra wrote:\n>> On 6/10/23 17:46, Andrew Dunstan wrote:\n>>>\n>>> Maybe you can post a full patch as well as incremental?\n>>>\n>>\n>> I wonder if we should keep discussing this extension here, considering\n>> it's going to be out of core (at least for now). Not sure how many\n>> pgsql-hackers are interested in this, so maybe we should just move it to\n>> github PRs or something ...\n> \n> I think there are some good arguments that speaks in favour of including it in core:\n> \n> 1. It's a fundamental data structure. Perhaps \"set\" would have been a better name,\n> since the use of hash functions from an end-user perspective is implementation\n> details, but we cannot use that word since it's a reserved keyword, hence \"hashset\".\n> \n> 2. The addition of SQL/PGQ in SQL:2023 is evidence of a general perceived need\n> among SQL users to evaluate graph queries. Even if a future implementation of SQL/PGQ\n> would mean users wouldn't need to deal with the hashset type directly, the same\n> type could hopefully be used, in part or in whole, under the hood by the future \n> SQL/PGQ implementation. If low-level functionality is useful on its own, I think it's\n> a benefit of exposing it to users, since it allows system testing of the component\n> in isolation, even if it's primarily gonna be used as a smaller part of a larger more\n> high-level component.\n> \n> 3. I think there is a general need for hashset, experienced by myself, Andrew and\n> I would guess lots of others users. The general pattern that will be improved is\n> when you currently would do array_agg(DISTINCT ...)\n> probably there are other situations too, since it's a fundamental data structure.\n> \n\nI agree with all of that, but ...\n\nIt's just past feature freeze, so the earliest release this could appear\nin is 17, about 15 months away.\n\nOnce stuff gets added to core, it's tied to the release cycle, so no new\nfeatures in between.\n\nPresumably people would like to use the extension in the release they\nalready use, without backporting.\n\nPostgres is extensible for a reason, exactly so that we don't need to\nhave everything in core.\n\n> On Sat, Jun 10, 2023, at 22:12, Tomas Vondra wrote:\n>>>> 3) support for other types (now it only works with int32)\n>> I think we should decide what types we want/need to support, and add one\n>> or two types early. Otherwise we'll have code / on-disk format making\n>> various assumptions about the type length etc.\n>>\n>> I have no idea what types people use as node IDs - is it likely we'll\n>> need to support types passed by reference / varlena types? Or can we\n>> just assume it's int/bigint?\n> \n> I think we should just support data types that would be sensible\n> to use as a PRIMARY KEY in a fully normalised data model,\n> which I believe would only include \"int\", \"bigint\" and \"uuid\".\n> \n\nOK, makes sense.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 11 Jun 2023 17:03:04 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Sun, Jun 11, 2023, at 16:58, Andrew Dunstan wrote:\n>>On 2023-06-11 Su 06:26, Joel Jacobson wrote:\n>>Perhaps \"set\" would have been a better name, since the use of hash functions from an end-user perspective is >>implementation details, but we cannot use that word since it's a reserved keyword, hence \"hashset\".\n>\n>Rather than use \"hashset\", which as you say is based on an implementation detail, I would prefer something like\n>\"integer_set\" - what it's a set of.\n\nApologies for the confusion previously.\nUpon further reflection, I recognize that the term \"hash\" in \"hashset\"\nextends beyond mere representation of implementation.\nIt imparts key information about performance characteristics as well as functionality inherent to the set.\n\nIn hindsight, \"hashset\" does emerge as the most suitable terminology.\n\n/Joel\nOn Sun, Jun 11, 2023, at 16:58, Andrew Dunstan wrote:>>On 2023-06-11 Su 06:26, Joel Jacobson\n wrote:>>Perhaps \"set\" would have been a better name,\nsince the use of hash functions from an end-user perspective is >>implementation\ndetails, but we cannot use that word since it's a reserved keyword, hence \"hashset\".>>Rather than use \"hashset\", which as you say is based on an\n implementation detail, I would prefer something like>\"integer_set\"\n - what it's a set of.Apologies for the confusion previously.Upon further reflection, I recognize that the term \"hash\" in \"hashset\"extends beyond mere representation of implementation.It imparts key information about performance characteristics as well as functionality inherent to the set.In hindsight, \"hashset\" does emerge as the most suitable terminology./Joel",
"msg_date": "Sun, 11 Jun 2023 22:05:39 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Sun, Jun 11, 2023, at 17:03, Tomas Vondra wrote:\n> On 6/11/23 12:26, Joel Jacobson wrote:\n>> I think there are some good arguments that speaks in favour of including it in core:\n...\n>\n> I agree with all of that, but ...\n>\n> It's just past feature freeze, so the earliest release this could appear\n> in is 17, about 15 months away.\n>\n> Once stuff gets added to core, it's tied to the release cycle, so no new\n> features in between.\n>\n> Presumably people would like to use the extension in the release they\n> already use, without backporting.\n>\n> Postgres is extensible for a reason, exactly so that we don't need to\n> have everything in core.\n\nInteresting, I've never thought about this one before:\nWhat if something is deemed to be fundamental and therefore qualify for core inclusion,\nand at the same time is suitable to be made an extension.\nWould it be possible to ship such extension as pre-installed?\n\nWhat was the json/jsonb story, was it ever an extension before\nbeing included in core?\n\n/Joel\n\n\n",
"msg_date": "Sun, 11 Jun 2023 22:15:37 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On 2023-06-11 Su 16:15, Joel Jacobson wrote:\n> On Sun, Jun 11, 2023, at 17:03, Tomas Vondra wrote:\n>> On 6/11/23 12:26, Joel Jacobson wrote:\n>>> I think there are some good arguments that speaks in favour of including it in core:\n> ...\n>> I agree with all of that, but ...\n>>\n>> It's just past feature freeze, so the earliest release this could appear\n>> in is 17, about 15 months away.\n>>\n>> Once stuff gets added to core, it's tied to the release cycle, so no new\n>> features in between.\n>>\n>> Presumably people would like to use the extension in the release they\n>> already use, without backporting.\n>>\n>> Postgres is extensible for a reason, exactly so that we don't need to\n>> have everything in core.\n> Interesting, I've never thought about this one before:\n> What if something is deemed to be fundamental and therefore qualify for core inclusion,\n> and at the same time is suitable to be made an extension.\n> Would it be possible to ship such extension as pre-installed?\n>\n> What was the json/jsonb story, was it ever an extension before\n> being included in core?\n\n\nNo, and the difficulty is that an in-core type and associated functions \nwill have different oids, so migrating from one to the other would be at \nbest painful.\n\nSo it's a kind of now or never decision. I think extensions are \nexcellent for specialized types. But I don't regard a set type in that \nlight.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-11 Su 16:15, Joel Jacobson\n wrote:\n\n\nOn Sun, Jun 11, 2023, at 17:03, Tomas Vondra wrote:\n\n\nOn 6/11/23 12:26, Joel Jacobson wrote:\n\n\nI think there are some good arguments that speaks in favour of including it in core:\n\n\n\n...\n\n\n\nI agree with all of that, but ...\n\nIt's just past feature freeze, so the earliest release this could appear\nin is 17, about 15 months away.\n\nOnce stuff gets added to core, it's tied to the release cycle, so no new\nfeatures in between.\n\nPresumably people would like to use the extension in the release they\nalready use, without backporting.\n\nPostgres is extensible for a reason, exactly so that we don't need to\nhave everything in core.\n\n\n\nInteresting, I've never thought about this one before:\nWhat if something is deemed to be fundamental and therefore qualify for core inclusion,\nand at the same time is suitable to be made an extension.\nWould it be possible to ship such extension as pre-installed?\n\nWhat was the json/jsonb story, was it ever an extension before\nbeing included in core?\n\n\n\n\nNo, and the difficulty is that an in-core type and associated\n functions will have different oids, so migrating from one to the\n other would be at best painful.\nSo it's a kind of now or never decision. I think extensions are\n excellent for specialized types. But I don't regard a set type in\n that light.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 12 Jun 2023 08:46:28 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "\n\nOn 6/11/23 22:15, Joel Jacobson wrote:\n> On Sun, Jun 11, 2023, at 17:03, Tomas Vondra wrote:\n>> On 6/11/23 12:26, Joel Jacobson wrote:\n>>> I think there are some good arguments that speaks in favour of including it in core:\n> ...\n>>\n>> I agree with all of that, but ...\n>>\n>> It's just past feature freeze, so the earliest release this could appear\n>> in is 17, about 15 months away.\n>>\n>> Once stuff gets added to core, it's tied to the release cycle, so no new\n>> features in between.\n>>\n>> Presumably people would like to use the extension in the release they\n>> already use, without backporting.\n>>\n>> Postgres is extensible for a reason, exactly so that we don't need to\n>> have everything in core.\n> \n> Interesting, I've never thought about this one before:\n> What if something is deemed to be fundamental and therefore qualify for core inclusion,\n> and at the same time is suitable to be made an extension.\n> Would it be possible to ship such extension as pre-installed?\n> \n\nI think it's always a matter of judgment - I don't think there's some\nclear set into a stone. If something is \"fundamental\" and can be done in\nan extension, there's always the option to have it in contrib (with all\nthe limitations I mentioned).\n\nFWIW I'm not strictly against adding it to contrib, if there's agreement\nit's worth it. But if we consider it to be a fundamental data structure\nand widely useful, maybe we should consider making it a built-in data\ntype, with fixed OID etc. That'd allow support at the SQL grammar level,\nand perhaps also from the proposed SQL/PGQ (and GQL?). AFAIK moving data\ntypes from extension (even if from contrib) to core is not painless.\n\nEither way it might be a nice / useful first patch, I guess.\n\n> What was the json/jsonb story, was it ever an extension before\n> being included in core?\n\nI don't recall what the exact story was, but I guess the \"json\" type was\nadded to core very long ago (before we started to push back a bit), and\nwe added some SQL grammar stuff too, which can't be done from extension.\nSo when we added jsonb (much later than json), there wasn't much point\nin not having it in core.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 12 Jun 2023 15:00:07 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "\n\nOn 6/12/23 14:46, Andrew Dunstan wrote:\n> \n> On 2023-06-11 Su 16:15, Joel Jacobson wrote:\n>> On Sun, Jun 11, 2023, at 17:03, Tomas Vondra wrote:\n>>> On 6/11/23 12:26, Joel Jacobson wrote:\n>>>> I think there are some good arguments that speaks in favour of including it in core:\n>> ...\n>>> I agree with all of that, but ...\n>>>\n>>> It's just past feature freeze, so the earliest release this could appear\n>>> in is 17, about 15 months away.\n>>>\n>>> Once stuff gets added to core, it's tied to the release cycle, so no new\n>>> features in between.\n>>>\n>>> Presumably people would like to use the extension in the release they\n>>> already use, without backporting.\n>>>\n>>> Postgres is extensible for a reason, exactly so that we don't need to\n>>> have everything in core.\n>> Interesting, I've never thought about this one before:\n>> What if something is deemed to be fundamental and therefore qualify for core inclusion,\n>> and at the same time is suitable to be made an extension.\n>> Would it be possible to ship such extension as pre-installed?\n>>\n>> What was the json/jsonb story, was it ever an extension before\n>> being included in core?\n> \n> \n> No, and the difficulty is that an in-core type and associated functions\n> will have different oids, so migrating from one to the other would be at\n> best painful.\n> \n> So it's a kind of now or never decision. I think extensions are\n> excellent for specialized types. But I don't regard a set type in that\n> light.\n> \n\nPerhaps. So you're proposing to have this as a regular built-in type?\nIt's hard for me to judge how popular this feature would be, but I guess\npeople often use arrays while they actually want set semantics ...\n\nIf we do that, I wonder if we could do something similar to arrays, with\nthe polymorphism and SQL grammar support. Does SQL have any concept of\nsets (or arrays, for that matter) as data types?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 12 Jun 2023 15:06:53 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On 2023-06-12 Mo 09:00, Tomas Vondra wrote:\n>\n>> What was the json/jsonb story, was it ever an extension before\n>> being included in core?\n> I don't recall what the exact story was, but I guess the \"json\" type was\n> added to core very long ago (before we started to push back a bit), and\n> we added some SQL grammar stuff too, which can't be done from extension.\n> So when we added jsonb (much later than json), there wasn't much point\n> in not having it in core.\n>\n>\n\nNot quite.\n\nThe json type as added in 9.2 (Sept 2012) and jsonb in 9.4 (Dec 2014). I \nwouldn't call those far apart or very long ago. Neither included any \ngrammar changes AFAIR.\n\nBut if they had been added as extensions we'd probably be in a whole lot \nmore trouble now implementing SQL/JSON, so whether that was foresight or \nlaziness I think the we landed on our feet there.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-12 Mo 09:00, Tomas Vondra\n wrote:\n\n\n\n\n\n\nWhat was the json/jsonb story, was it ever an extension before\nbeing included in core?\n\n\n\nI don't recall what the exact story was, but I guess the \"json\" type was\nadded to core very long ago (before we started to push back a bit), and\nwe added some SQL grammar stuff too, which can't be done from extension.\nSo when we added jsonb (much later than json), there wasn't much point\nin not having it in core.\n\n\n\n\n\n\nNot quite. \n\nThe json type as added in 9.2 (Sept 2012) and jsonb in 9.4 (Dec\n 2014). I wouldn't call those far apart or very long ago. Neither\n included any grammar changes AFAIR.\n\nBut if they had been added as extensions we'd probably be in a\n whole lot more trouble now implementing SQL/JSON, so whether that\n was foresight or laziness I think the we landed on our feet there.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 12 Jun 2023 09:19:08 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Sat, Jun 10, 2023, at 17:46, Andrew Dunstan wrote:\n> Maybe you can post a full patch as well as incremental?\n\nAttached patch is based on tvondra's last commit 375b072.\n\n> Stylistically I think you should reduce reliance on magic numbers (like 13). Probably need some #define's?\n\nGreat idea, fixed, I've added a HASHSET_STEP definition (set to the value 13).\n\nOn Sat, Jun 10, 2023, at 17:51, jian he wrote:\n> int32 value = strtol(str, &endptr, 10);\n> there is no int32 value range check? \n> imitate src/backend/utils/adt/int.c. the following way is what I came up with.\n>\n>\n> int64 value = strtol(str, &endptr, 10);\n>\n> if (errno == ERANGE || value < INT_MIN || value > INT_MAX)\n\nThanks, fixed like suggested, except I used PG_INT32_MIN and PG_INT32_MAX,\nwhich explicitly represent the maximum value for a 32-bit integer,\nregardless of the platform or C implementation.\n\n> also it will infinity loop in hashset_in if supply the wrong value.... \n> example select '{1,2s}'::hashset;\n> I need kill -9 to kill the process. \n\nThanks. I've added a new test, `sql/invalid.sql` with that example query.\n\nHere is a summary of all other changes:\n \n* README.md: Added sections Usage, Data types, Functions and Aggregate Functions\n\n* Added test/ directory with some tests.\n\n* Added \"not production-ready\" notice at top of README, warning for breaking\nchanges and no migration scripts, until our first release.\n\n* Change version from 1.0.0 to 0.0.1 to indicate current status.\n\n* Added CEIL_DIV macro\n\n* Implemented hashset_in(), hashset_out()\n The syntax for the serialized format is comma separated integer values\n wrapped around curly braces, e.g '{1,2,3}'\n\n* Implemented hashset_recv() to match the existing hashset_send()\n\n* Removed/rewrote some tdigest related comments\n\n/Joel",
"msg_date": "Mon, 12 Jun 2023 17:43:28 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Sat, Jun 10, 2023, at 22:12, Tomas Vondra wrote:\n>>> 1) better hash table implementation\n\nI noticed src/include/common/hashfn.h provides implementation\nof the Jenkins/lookup3 hash function, and thought maybe\nwe could simply use it in hashset?\n\nHowever, I noticed that according to SMHasher [1],\nJenkins/lookup3 has some quality problems:\n\"UB, 28% bias, collisions, 30% distr, BIC\"\n\nNot sure if that's true or maybe not a problem in the PostgreSQL implementation?\n\nAccording to SHHasher, the two fastest 32/64-bit hash functions\nfor non-cryptographic purposes without any quality problems\nthat are also portable seems to be these two:\n\nwyhash v4.1 (64-bit) [2]\nMiB/sec: 22513.04\ncycl./hash: 29.01\nsize: 474\n\nxxh3low (xxHash v3, 64-bit, low 32-bits part) [3]\nMiB/sec: 20722.94\ncycl./hash: 30.26\nsize: 756\n\n[1] https://github.com/rurban/smhasher\n[2] https://github.com/wangyi-fudan/wyhash\n[3] https://github.com/Cyan4973/xxHash\n\n>>> 5) more efficient storage format, with versioning etc.\n> I think the main question is whether to serialize the hash table as is,\n> or compact it in some way. The current code just uses the same thing for\n> both cases - on-disk format and in-memory representation (aggstate).\n> That's simple, but it also means the on-disk value is likely not well\n> compressible (because it's ~50% random data.\n>\n> I've thought about serializing just the values (as a simple array), but\n> that defeats the whole purpose of fast membership checks. I have two ideas:\n>\n> a) sort the data, and use binary search for this compact variant (and\n> then expand it into \"full\" hash table if needed)\n>\n> b) use a more compact hash table (with load factor much closer to 1.0)\n>\n> Not sure which if this option is the right one, each has cost for\n> converting between formats (but large on-disk value is not free either).\n>\n> That's roughly what I did for the tdigest extension.\n\nIs the choice of hash function (and it's in-memory representation)\nindependent of the on-disk format question, i.e. could we work\non experimenting and evaluating different hash functions first,\nto optimise for speed and quality, and then when done, proceed\nand optimise for space, or are the two intertwined somehow?\n\n/Joel\n\n\n",
"msg_date": "Mon, 12 Jun 2023 19:34:58 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "\n\nOn 6/12/23 19:34, Joel Jacobson wrote:\n> On Sat, Jun 10, 2023, at 22:12, Tomas Vondra wrote:\n>>>> 1) better hash table implementation\n> \n> I noticed src/include/common/hashfn.h provides implementation\n> of the Jenkins/lookup3 hash function, and thought maybe\n> we could simply use it in hashset?\n> \n> However, I noticed that according to SMHasher [1],\n> Jenkins/lookup3 has some quality problems:\n> \"UB, 28% bias, collisions, 30% distr, BIC\"\n> \n> Not sure if that's true or maybe not a problem in the PostgreSQL implementation?\n> > According to SHHasher, the two fastest 32/64-bit hash functions\n> for non-cryptographic purposes without any quality problems\n> that are also portable seems to be these two:\n> \n> wyhash v4.1 (64-bit) [2]\n> MiB/sec: 22513.04\n> cycl./hash: 29.01\n> size: 474\n> \n> xxh3low (xxHash v3, 64-bit, low 32-bits part) [3]\n> MiB/sec: 20722.94\n> cycl./hash: 30.26\n> size: 756\n> \n> [1] https://github.com/rurban/smhasher\n> [2] https://github.com/wangyi-fudan/wyhash\n> [3] https://github.com/Cyan4973/xxHash\n> \n\nBut those are numbers for large keys - if you restrict the input to\n4B-16B (which is what we planned to do by focusing on int, bigint and\nuuid), there's no significant difference:\n\nlookup3:\n\nSmall key speed test - 4-byte keys - 30.17 cycles/hash\nSmall key speed test - 8-byte keys - 31.00 cycles/hash\nSmall key speed test - 16-byte keys - 49.00 cycles/hash\n\nxxh3low:\n\nSmall key speed test - 4-byte keys - 29.00 cycles/hash\nSmall key speed test - 8-byte keys - 29.58 cycles/hash\nSmall key speed test - 16-byte keys - 37.00 cycles/hash\n\nBut you can try doing some measurements, of course. Or just do profiling\nto see how much time we spend in the hash function - I'd bet it's pretty\ntiny fraction of the total time.\n\nAs for the \"quality\" issues - it's the same algorithm in Postgres, so it\nhas the same issues. I don't if that has measurable impact, though. I'd\nguess it does not, particularly for \"reasonably small\" sets.\n\n>>>> 5) more efficient storage format, with versioning etc.\n>> I think the main question is whether to serialize the hash table as is,\n>> or compact it in some way. The current code just uses the same thing for\n>> both cases - on-disk format and in-memory representation (aggstate).\n>> That's simple, but it also means the on-disk value is likely not well\n>> compressible (because it's ~50% random data.\n>>\n>> I've thought about serializing just the values (as a simple array), but\n>> that defeats the whole purpose of fast membership checks. I have two ideas:\n>>\n>> a) sort the data, and use binary search for this compact variant (and\n>> then expand it into \"full\" hash table if needed)\n>>\n>> b) use a more compact hash table (with load factor much closer to 1.0)\n>>\n>> Not sure which if this option is the right one, each has cost for\n>> converting between formats (but large on-disk value is not free either).\n>>\n>> That's roughly what I did for the tdigest extension.\n> \n> Is the choice of hash function (and it's in-memory representation)\n> independent of the on-disk format question, i.e. 
could we work\n> on experimenting and evaluating different hash functions first,\n> to optimise for speed and quality, and then when done, proceed\n> and optimise for space, or are the two intertwined somehow?\n> \n\nNot sure what you mean by \"optimizing for space\" - I imagined the\non-disk format would just use the same hash table with tiny amount of\nfree space (say 10% and not ~%50).\n\n\nMy suggestion is to be lazy, just use the lookup3 we have in hashfn.c\n(through hash_bytes or something), and at the same time make it possible\nto switch to a different function in the future. I'd store and ID of the\nhash function in the set, so that we can support a different algorithm\nin the future, if we choose to.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
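To make the idea of storing a hash function ID concrete, here is a minimal sketch (not code from the patch) of dispatching on such an ID when hashing an element. hash_bytes_uint32() and murmurhash32() are the helpers that actually exist in src/include/common/hashfn.h; the HASHSET_HASHFN_* constants and hashset_hash_value() are illustrative names only.

#include "postgres.h"
#include "common/hashfn.h"

/* illustrative IDs; the patch may number these differently */
#define HASHSET_HASHFN_LOOKUP3   1   /* Jenkins lookup3 via hash_bytes_uint32() */
#define HASHSET_HASHFN_MURMUR32  2   /* murmurhash32() finalizer */

/* sketch: pick the element hash based on the ID stored in the set */
static uint32
hashset_hash_value(int32 hashfn_id, int32 value)
{
    switch (hashfn_id)
    {
        case HASHSET_HASHFN_LOOKUP3:
            return hash_bytes_uint32((uint32) value);
        case HASHSET_HASHFN_MURMUR32:
            return murmurhash32((uint32) value);
        default:
            elog(ERROR, "unknown hash function ID: %d", hashfn_id);
    }
    return 0;                   /* not reached, keeps compilers quiet */
}

Keeping the ID next to the data means a value serialized with one algorithm can still be read correctly if a different default is adopted later.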
"msg_date": "Mon, 12 Jun 2023 21:58:56 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Mon, Jun 12, 2023, at 21:58, Tomas Vondra wrote:\n> But those are numbers for large keys - if you restrict the input to\n> 4B-16B (which is what we planned to do by focusing on int, bigint and\n> uuid), there's no significant difference:\n\nOh, sorry, I completely failed to read the meaning of the Columns.\n\n> lookup3:\n>\n> Small key speed test - 4-byte keys - 30.17 cycles/hash\n> Small key speed test - 8-byte keys - 31.00 cycles/hash\n> Small key speed test - 16-byte keys - 49.00 cycles/hash\n>\n> xxh3low:\n>\n> Small key speed test - 4-byte keys - 29.00 cycles/hash\n> Small key speed test - 8-byte keys - 29.58 cycles/hash\n> Small key speed test - 16-byte keys - 37.00 cycles/hash\n\nThe winner of the \"Small key speed test\" competition seems to be:\n\nahash64 \"ahash 64bit\":\nSmall key speed test - 4-byte keys - 24.00 cycles/hash\nSmall key speed test - 8-byte keys - 24.00 cycles/hash\nSmall key speed test - 16-byte keys - 26.98 cycles/hash\n\nLooks like it's a popular one, e.g. it's used by Rust in their std::collections::HashSet.\n\nAnother notable property of ahash64 is that it's \"DOS resistant\",\nbut it isn't crucial for our use case, since we exclusively target\nauto-generated primary keys which are not influenced by end-users.\n\n> Not sure what you mean by \"optimizing for space\" - I imagined the\n> on-disk format would just use the same hash table with tiny amount of\n> free space (say 10% and not ~%50).\n\nWith \"optimizing for space\" I meant trying to find some alternative or\nintermediate data structure that is more likely to be compressible,\nlike your idea of sorting the data.\nWhat I wondered was if the on-disk format would be affected by\nthe choice of hash function. I guess it wouldn't, if the hashset\nis created by adding the elements one-by-one by iterating\nthrough the elements by reading the on-disk format.\nBut I thought maybe something more advanced could be\ndone, where conversion between the on-disk format\nand the in-memory format could be done without naively\niterating through all elements, i.e. something less complex\nthan O(n).\nNo idea what that mechanism would be though.\n\n> My suggestion is to be lazy, just use the lookup3 we have in hashfn.c\n> (through hash_bytes or something), and at the same time make it possible\n> to switch to a different function in the future. I'd store and ID of the\n> hash function in the set, so that we can support a different algorithm\n> in the future, if we choose to.\n\nSounds good to me.\n\nSmart idea to include the hash function algorithm ID in the set,\nto allow implementing a different one in the future!\n\n/Joel\n\n\n",
"msg_date": "Mon, 12 Jun 2023 22:36:01 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Mon, Jun 12, 2023, at 22:36, Joel Jacobson wrote:\n> On Mon, Jun 12, 2023, at 21:58, Tomas Vondra wrote:\n>> My suggestion is to be lazy, just use the lookup3 we have in hashfn.c\n>> (through hash_bytes or something), and at the same time make it possible\n>> to switch to a different function in the future. I'd store and ID of the\n>> hash function in the set, so that we can support a different algorithm\n>> in the future, if we choose to.\n\nhashset is now using hash_bytes_uint32() from hashfn.h\n\nOther changes in the same commit:\n\n* Introduce hashfn_id field to specify hash function ID\n* Implement hashset_send and hashset_recv and add C-test using libpq\n* Add operators and operator classes for hashset comparison, sorting\n and distinct queries\n\nLooks good? If so, I wonder what's best to focus on next?\nPerhaps adding support for bigint? Other ideas?\n\n/Joel",
"msg_date": "Tue, 13 Jun 2023 20:50:50 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Tue, Jun 13, 2023, at 20:50, Joel Jacobson wrote:\n> hashset is now using hash_bytes_uint32() from hashfn.h\n\nI spotted a problem in the ordering logic of the comparison functions.\n\nThe issue was with handling hashsets containing empty positions,\ncausing non-lexicographic ordering.\n\nThe updated implementation now correctly iterates over the hashsets,\nskipping any empty positions, which results in proper comparison\nand ordering of elements present in the hashset.\n\nNew patch attached.",
"msg_date": "Tue, 13 Jun 2023 22:23:57 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Mon, 12 Jun 2023 at 22:37, Tomas Vondra <[email protected]>\nwrote:\n\n> Perhaps. So you're proposing to have this as a regular built-in type?\n> It's hard for me to judge how popular this feature would be, but I guess\n> people often use arrays while they actually want set semantics ...\n>\n\nPerspective from a potential user: I'm currently working on something where\nan array-like structure with fast membership test performance would be very\nuseful. The main type of query is doing an =ANY(the set) filter, where the\nset could contain anywhere from very few to thousands of entries (ints in\nour case). So we'd want the same index usage as =ANY(array) but would like\nfaster row checking than we get with an array when other indexes are used.\n\nOur app runs connecting to either an embedded postgres database that we\ncontrol or an external one controlled by customers - this is typically RDS\nor some other cloud vendor's DB. Having such a type as a separate extension\nwould make it unusable for us until all relevant cloud vendors decided that\nit was popular enough to include - something that may never happen, or even\nif it did, now any time soon.\n\nCheers\n\nTom\n\nOn Mon, 12 Jun 2023 at 22:37, Tomas Vondra <[email protected]> wrote:\nPerhaps. So you're proposing to have this as a regular built-in type?\nIt's hard for me to judge how popular this feature would be, but I guess\npeople often use arrays while they actually want set semantics ...Perspective from a potential user: I'm currently working on something where an array-like structure with fast membership test performance would be very useful. The main type of query is doing an =ANY(the set) filter, where the set could contain anywhere from very few to thousands of entries (ints in our case). So we'd want the same index usage as =ANY(array) but would like faster row checking than we get with an array when other indexes are used.Our app runs connecting to either an embedded postgres database that we control or an external one controlled by customers - this is typically RDS or some other cloud vendor's DB. Having such a type as a separate extension would make it unusable for us until all relevant cloud vendors decided that it was popular enough to include - something that may never happen, or even if it did, now any time soon.CheersTom",
"msg_date": "Wed, 14 Jun 2023 14:01:09 +0930",
"msg_from": "Tom Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Wed, Jun 14, 2023, at 06:31, Tom Dunstan wrote:\n> On Mon, 12 Jun 2023 at 22:37, Tomas Vondra \n> <[email protected]> wrote:\n>> Perhaps. So you're proposing to have this as a regular built-in type?\n>> It's hard for me to judge how popular this feature would be, but I guess\n>> people often use arrays while they actually want set semantics ...\n>\n> Perspective from a potential user: I'm currently working on something \n> where an array-like structure with fast membership test performance \n> would be very useful. The main type of query is doing an =ANY(the set) \n> filter, where the set could contain anywhere from very few to thousands \n> of entries (ints in our case). So we'd want the same index usage as \n> =ANY(array) but would like faster row checking than we get with an \n> array when other indexes are used.\n\nThanks for providing an interesting use-case.\n\nIf you would like to help, one thing that would be helpful,\nwould be a complete runnable sql script,\nthat demonstrates exactly the various array-based queries\nyou currently use, with random data that resembles\nreality as closely as possible, i.e. the same number of rows\nin the tables, and similar distribution of values etc.\n\nThis would be helpful in terms of documentation,\nas I think it would be good to provide Usage examples\nthat are based on real-life scenarios.\n\nIt would also be helpful to create realistic benchmarks when\nevaluating and optimising the performance.\n\n> Our app runs connecting to either an embedded postgres database that we \n> control or an external one controlled by customers - this is typically \n> RDS or some other cloud vendor's DB. Having such a type as a separate \n> extension would make it unusable for us until all relevant cloud \n> vendors decided that it was popular enough to include - something that \n> may never happen, or even if it did, now any time soon.\n\nGood point.\n\n\n",
"msg_date": "Wed, 14 Jun 2023 09:56:21 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On 6/14/23 06:31, Tom Dunstan wrote:\n> On Mon, 12 Jun 2023 at 22:37, Tomas Vondra\n> <[email protected] <mailto:[email protected]>>\n> wrote:\n> \n> Perhaps. So you're proposing to have this as a regular built-in type?\n> It's hard for me to judge how popular this feature would be, but I guess\n> people often use arrays while they actually want set semantics ...\n> \n> \n> Perspective from a potential user: I'm currently working on something\n> where an array-like structure with fast membership test performance\n> would be very useful. The main type of query is doing an =ANY(the set)\n> filter, where the set could contain anywhere from very few to thousands\n> of entries (ints in our case). So we'd want the same index usage as\n> =ANY(array) but would like faster row checking than we get with an array\n> when other indexes are used.\n> \n\nWe kinda already do this since PG14 (commit 50e17ad281), actually. If\nthe list is long enough (9 values or more), we'll build a hash table\nduring query execution. So pretty much exactly what you're asking for.\n\n> Our app runs connecting to either an embedded postgres database that we\n> control or an external one controlled by customers - this is typically\n> RDS or some other cloud vendor's DB. Having such a type as a separate\n> extension would make it unusable for us until all relevant cloud vendors\n> decided that it was popular enough to include - something that may never\n> happen, or even if it did, now any time soon.\n> \n\nUnderstood, but that's really a problem / choice of the cloud vendors.\n\nThe thing is, adding stuff to core is not free - it means the community\nbecomes responsible for maintenance, testing, fixing issues, etc. It's\nnot feasible (or desirable) to have all extensions in core, and cloud\nvendors generally do have ways to support some pre-vetted extensions\nthat they deem useful enough. Granted, it means vetting/maintenance for\nthem, but that's kinda the point of managed services. And it'd not be\nfree for us either.\n\nAnyway, that's mostly irrelevant, as PG14 already does the hash table\nfor this kind of queries. And I'm not strictly against adding some of\nthis into core, if it ends up being useful enough.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 14 Jun 2023 11:44:23 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Wed, Jun 14, 2023, at 11:44, Tomas Vondra wrote:\n>> Perspective from a potential user: I'm currently working on something\n>> where an array-like structure with fast membership test performance\n>> would be very useful. The main type of query is doing an =ANY(the set)\n>> filter, where the set could contain anywhere from very few to thousands\n>> of entries (ints in our case). So we'd want the same index usage as\n>> =ANY(array) but would like faster row checking than we get with an array\n>> when other indexes are used.\n>> \n>\n> We kinda already do this since PG14 (commit 50e17ad281), actually. If\n> the list is long enough (9 values or more), we'll build a hash table\n> during query execution. So pretty much exactly what you're asking for.\n\nWould it be feasible to teach the planner to utilize the internal hash table of\nhashset directly? In the case of arrays, the hash table construction is an\nad hoc operation, whereas with hashset, the hash table already exists, which\ncould potentially lead to a faster execution.\n\nEssentially, the aim would be to support:\n\n=ANY(hashset)\n\nInstead of the current:\n\n=ANY(hashset_to_array(hashset))\n\nThoughts?\n\n/Joel\n\n\n",
"msg_date": "Wed, 14 Jun 2023 14:57:07 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "\n\nOn 6/14/23 14:57, Joel Jacobson wrote:\n> On Wed, Jun 14, 2023, at 11:44, Tomas Vondra wrote:\n>>> Perspective from a potential user: I'm currently working on something\n>>> where an array-like structure with fast membership test performance\n>>> would be very useful. The main type of query is doing an =ANY(the set)\n>>> filter, where the set could contain anywhere from very few to thousands\n>>> of entries (ints in our case). So we'd want the same index usage as\n>>> =ANY(array) but would like faster row checking than we get with an array\n>>> when other indexes are used.\n>>>\n>>\n>> We kinda already do this since PG14 (commit 50e17ad281), actually. If\n>> the list is long enough (9 values or more), we'll build a hash table\n>> during query execution. So pretty much exactly what you're asking for.\n> \n> Would it be feasible to teach the planner to utilize the internal hash table of\n> hashset directly? In the case of arrays, the hash table construction is an\n> ad hoc operation, whereas with hashset, the hash table already exists, which\n> could potentially lead to a faster execution.\n> \n> Essentially, the aim would be to support:\n> \n> =ANY(hashset)\n> \n> Instead of the current:\n> \n> =ANY(hashset_to_array(hashset))\n> \n> Thoughts?\n\nThat should be possible, but probably only when hashset is a built-in\ndata type (maybe polymorphic).\n\nI don't know if it'd be worth it, the general idea is that building the\nhash table is way cheaper than repeated lookups in an array. Yeah, it\nmight save something, but likely only a tiny fraction of the runtime.\n\nIt's definitely something I'd leave out of v0, personally.\n\n=ANY(set) should probably work with an implicit ARRAY cast, I believe.\nIt'll do the ad hoc build, ofc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 14 Jun 2023 15:16:26 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On 2023-06-14 We 05:44, Tomas Vondra wrote:\n>\n> The thing is, adding stuff to core is not free - it means the community\n> becomes responsible for maintenance, testing, fixing issues, etc. It's\n> not feasible (or desirable) to have all extensions in core, and cloud\n> vendors generally do have ways to support some pre-vetted extensions\n> that they deem useful enough. Granted, it means vetting/maintenance for\n> them, but that's kinda the point of managed services. And it'd not be\n> free for us either.\n\n\nI agree it's a judgement call. But the code burden here seems pretty \nsmall, far smaller than, say, the SQL/JSON patches. And I think the \nrange of applications that could benefit is quite significant.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-14 We 05:44, Tomas Vondra\n wrote:\n\n\nThe thing is, adding stuff to core is not free - it means the community\nbecomes responsible for maintenance, testing, fixing issues, etc. It's\nnot feasible (or desirable) to have all extensions in core, and cloud\nvendors generally do have ways to support some pre-vetted extensions\nthat they deem useful enough. Granted, it means vetting/maintenance for\nthem, but that's kinda the point of managed services. And it'd not be\nfree for us either.\n\n\n\nI agree it's a judgement call. But the code burden here seems\n pretty small, far smaller than, say, the SQL/JSON patches. And I\n think the range of applications that could benefit is quite\n significant.\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 14 Jun 2023 10:38:19 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Wed, Jun 14, 2023, at 15:16, Tomas Vondra wrote:\n> On 6/14/23 14:57, Joel Jacobson wrote:\n>> Would it be feasible to teach the planner to utilize the internal hash table of\n>> hashset directly? In the case of arrays, the hash table construction is an\n...\n> It's definitely something I'd leave out of v0, personally.\n\nOK, thanks for guidance, I'll stay away from it.\n\nI've been doing some preparatory work on this todo item:\n\n> 3) support for other types (now it only works with int32)\n\nI've renamed the type from \"hashset\" to \"int4hashset\",\nand the SQL-functions are now prefixed with \"int4\"\nwhen necessary. The overloaded functions with\nint4hashset as input parameters don't need to be prefixed,\ne.g. hashset_add(int4hashset, int).\n\nOther changes since last update (4e60615):\n\n* Support creation of empty hashset using '{}'::hashset\n* Introduced a new function hashset_capacity() to return the current capacity\n of a hashset.\n* Refactored hashset initialization:\n - Replaced hashset_init(int) with int4hashset() to initialize an empty hashset\n with zero capacity.\n - Added int4hashset_with_capacity(int) to initialize a hashset with\n a specified capacity.\n* Improved README.md and testing\n\nAs a next step, I'm planning on adding int8 support.\n\nLooks and sounds good?\n\n/Joel",
"msg_date": "Wed, 14 Jun 2023 23:04:32 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Wed, 14 Jun 2023 at 19:14, Tomas Vondra <[email protected]>\nwrote:\n\n> > ...So we'd want the same index usage as\n> > =ANY(array) but would like faster row checking than we get with an array\n> > when other indexes are used.\n>\n> We kinda already do this since PG14 (commit 50e17ad281), actually. If\n> the list is long enough (9 values or more), we'll build a hash table\n> during query execution. So pretty much exactly what you're asking for.\n>\n\nHa! That is great. Unfortunately we can't rely on it as we have customers\nusing versions back to 12. But good to know that it's available when we\nbump the required versions.\n\nThanks\n\nTom\n\nOn Wed, 14 Jun 2023 at 19:14, Tomas Vondra <[email protected]> wrote:\n> ...So we'd want the same index usage as\n> =ANY(array) but would like faster row checking than we get with an array\n> when other indexes are used.\n\nWe kinda already do this since PG14 (commit 50e17ad281), actually. If\nthe list is long enough (9 values or more), we'll build a hash table\nduring query execution. So pretty much exactly what you're asking for.Ha! That is great. Unfortunately we can't rely on it as we have customers using versions back to 12. But good to know that it's available when we bump the required versions.ThanksTom",
"msg_date": "Thu, 15 Jun 2023 09:27:31 +0930",
"msg_from": "Tom Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 5:04 AM Joel Jacobson <[email protected]> wrote:\n\n> On Wed, Jun 14, 2023, at 15:16, Tomas Vondra wrote:\n> > On 6/14/23 14:57, Joel Jacobson wrote:\n> >> Would it be feasible to teach the planner to utilize the internal hash\n> table of\n> >> hashset directly? In the case of arrays, the hash table construction is\n> an\n> ...\n> > It's definitely something I'd leave out of v0, personally.\n>\n> OK, thanks for guidance, I'll stay away from it.\n>\n> I've been doing some preparatory work on this todo item:\n>\n> > 3) support for other types (now it only works with int32)\n>\n> I've renamed the type from \"hashset\" to \"int4hashset\",\n> and the SQL-functions are now prefixed with \"int4\"\n> when necessary. The overloaded functions with\n> int4hashset as input parameters don't need to be prefixed,\n> e.g. hashset_add(int4hashset, int).\n>\n> Other changes since last update (4e60615):\n>\n> * Support creation of empty hashset using '{}'::hashset\n> * Introduced a new function hashset_capacity() to return the current\n> capacity\n> of a hashset.\n> * Refactored hashset initialization:\n> - Replaced hashset_init(int) with int4hashset() to initialize an empty\n> hashset\n> with zero capacity.\n> - Added int4hashset_with_capacity(int) to initialize a hashset with\n> a specified capacity.\n> * Improved README.md and testing\n>\n> As a next step, I'm planning on adding int8 support.\n>\n> Looks and sounds good?\n>\n> /Joel\n\n\nstill playing around with hashset-0.0.1-a8a282a.patch.\n\nI think \"postgres.h\" should be on the top, (someone have said it on another\nemail thread, I forgot who said that)\n\nIn my\nlocal /home/jian/postgres/pg16/include/postgresql/server/libpq/pqformat.h:\n\n> /*\n> * Append a binary integer to a StringInfo buffer\n> *\n> * This function is deprecated; prefer use of the functions above.\n> */\n> static inline void\n> pq_sendint(StringInfo buf, uint32 i, int b)\n\n\nSo I changed to pq_sendint32.\n\nending and beginning, and in between white space should be stripped. The\nfollowing c example seems ok for now. but I am not sure, I don't know how\nto glue it in hashset_in.\n\nforgive me the patch name....\n\n/*\ngcc /home/jian/Desktop/regress_pgsql/strip_white_space.c && ./a.out\n*/\n\n#include<stdio.h>\n#include<stdint.h>\n#include<string.h>\n#include<stdbool.h>\n#include <ctype.h>\n#include<stdlib.h>\n\n/*\n * array_isspace() --- a non-locale-dependent isspace()\n *\n * We used to use isspace() for parsing array values, but that has\n * undesirable results: an array value might be silently interpreted\n * differently depending on the locale setting. 
Now we just hard-wire\n * the traditional ASCII definition of isspace().\n */\nstatic bool\narray_isspace(char ch)\n{\nif (ch == ' ' ||\nch == '\\t' ||\nch == '\\n' ||\nch == '\\r' ||\nch == '\\v' ||\nch == '\\f')\nreturn true;\nreturn false;\n}\n\nint main(void)\n{\n long *temp = malloc(10 * sizeof(long));\n memset(temp,0,10);\n char source[5][50] = {{0}};\n snprintf(source[0],sizeof(source[0]),\"%s\",\" { 1 , 20 }\");\n snprintf(source[1],sizeof(source[0]),\"%s\",\" { 1 ,20 , 30 \");\n snprintf(source[2],sizeof(source[0]),\"%s\",\" {1 ,20 , 30 \");\n snprintf(source[3],sizeof(source[0]),\"%s\",\" {1 , 20 , 30 }\");\n snprintf(source[4],sizeof(source[0]),\"%s\",\" {1 , 20 , 30 }\n\");\n /* Make a modifiable copy of the input */\nchar *p;\n char string_save[50];\n\n for(int j = 0; j < 5; j++)\n {\n snprintf(string_save,sizeof(string_save),\"%s\",source[j]);\n p = string_save;\n\n int i = 0;\n while (array_isspace(*p))\n p++;\n if (*p != '{')\n {\n printf(\"line: %d should be {\\n\",__LINE__);\n exit(EXIT_FAILURE);\n }\n\n for (;;)\n {\n char *q;\n if (*p == '{')\n p++;\n temp[i] = strtol(p, &q,10);\n printf(\"temp[j=%d] [%d]=%ld\\n\",j,i,temp[i]);\n\n if (*q == '}' && (*(q+1) == '\\0'))\n {\n printf(\"all works ok now exit\\n\");\n break;\n }\n if( !array_isspace(*q) && *q != ',')\n {\n printf(\"wrong format. program will exit\\n\");\n exit(EXIT_FAILURE);\n }\n while(array_isspace(*q))\n q++;\n if(*q != ',')\n break;\n else\n p = q+1;\n i++;\n }\n }\n}",
"msg_date": "Thu, 15 Jun 2023 10:22:09 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 5:04 AM Joel Jacobson <[email protected]> wrote:\n\n> On Wed, Jun 14, 2023, at 15:16, Tomas Vondra wrote:\n> > On 6/14/23 14:57, Joel Jacobson wrote:\n> >> Would it be feasible to teach the planner to utilize the internal hash\n> table of\n> >> hashset directly? In the case of arrays, the hash table construction is\n> an\n> ...\n> > It's definitely something I'd leave out of v0, personally.\n>\n> OK, thanks for guidance, I'll stay away from it.\n>\n> I've been doing some preparatory work on this todo item:\n>\n> > 3) support for other types (now it only works with int32)\n>\n> I've renamed the type from \"hashset\" to \"int4hashset\",\n> and the SQL-functions are now prefixed with \"int4\"\n> when necessary. The overloaded functions with\n> int4hashset as input parameters don't need to be prefixed,\n> e.g. hashset_add(int4hashset, int).\n>\n> Other changes since last update (4e60615):\n>\n> * Support creation of empty hashset using '{}'::hashset\n> * Introduced a new function hashset_capacity() to return the current\n> capacity\n> of a hashset.\n> * Refactored hashset initialization:\n> - Replaced hashset_init(int) with int4hashset() to initialize an empty\n> hashset\n> with zero capacity.\n> - Added int4hashset_with_capacity(int) to initialize a hashset with\n> a specified capacity.\n> * Improved README.md and testing\n>\n> As a next step, I'm planning on adding int8 support.\n>\n> Looks and sounds good?\n>\n> /Joel\n\nI am not sure the following results are correct.\nwith cte as (\n select hashset(x) as x\n ,hashset_capacity(hashset(x))\n ,hashset_count(hashset(x))\n from generate_series(1,10) g(x))\nselect *\n ,'|' as delim\n , hashset_add(x,11111::int)\n ,hashset_capacity(hashset_add(x,11111::int))\n ,hashset_count(hashset_add(x,11111::int))\nfrom cte \\gx\n\n\nresults:\n-[ RECORD 1 ]----+-----------------------------\nx | {8,1,10,3,9,4,6,2,11111,5,7}\nhashset_capacity | 64\nhashset_count | 10\ndelim | |\nhashset_add | {8,1,10,3,9,4,6,2,11111,5,7}\nhashset_capacity | 64\nhashset_count | 11\n\nbut:\nwith cte as(select '{1,2}'::int4hashset as x) select\nx,hashset_add(x,3::int) from cte;\n\nreturns\n x | hashset_add\n-------+-------------\n {1,2} | {3,1,2}\n(1 row)\nlast simple query seems more sensible to me.\n\nOn Thu, Jun 15, 2023 at 5:04 AM Joel Jacobson <[email protected]> wrote:On Wed, Jun 14, 2023, at 15:16, Tomas Vondra wrote:\n> On 6/14/23 14:57, Joel Jacobson wrote:\n>> Would it be feasible to teach the planner to utilize the internal hash table of\n>> hashset directly? In the case of arrays, the hash table construction is an\n...\n> It's definitely something I'd leave out of v0, personally.\n\nOK, thanks for guidance, I'll stay away from it.\n\nI've been doing some preparatory work on this todo item:\n\n> 3) support for other types (now it only works with int32)\n\nI've renamed the type from \"hashset\" to \"int4hashset\",\nand the SQL-functions are now prefixed with \"int4\"\nwhen necessary. The overloaded functions with\nint4hashset as input parameters don't need to be prefixed,\ne.g. 
hashset_add(int4hashset, int).\n\nOther changes since last update (4e60615):\n\n* Support creation of empty hashset using '{}'::hashset\n* Introduced a new function hashset_capacity() to return the current capacity\n of a hashset.\n* Refactored hashset initialization:\n - Replaced hashset_init(int) with int4hashset() to initialize an empty hashset\n with zero capacity.\n - Added int4hashset_with_capacity(int) to initialize a hashset with\n a specified capacity.\n* Improved README.md and testing\n\nAs a next step, I'm planning on adding int8 support.\n\nLooks and sounds good?\n\n/JoelI am not sure the following results are correct.with cte as ( select hashset(x) as x ,hashset_capacity(hashset(x)) ,hashset_count(hashset(x)) from generate_series(1,10) g(x))select * ,'|' as delim , hashset_add(x,11111::int) ,hashset_capacity(hashset_add(x,11111::int)) ,hashset_count(hashset_add(x,11111::int))from cte \\gxresults: -[ RECORD 1 ]----+-----------------------------x | {8,1,10,3,9,4,6,2,11111,5,7}hashset_capacity | 64hashset_count | 10delim | |hashset_add | {8,1,10,3,9,4,6,2,11111,5,7}hashset_capacity | 64hashset_count | 11but:with cte as(select '{1,2}'::int4hashset as x) select x,hashset_add(x,3::int) from cte;returns x | hashset_add-------+------------- {1,2} | {3,1,2}(1 row)last simple query seems more sensible to me.",
"msg_date": "Thu, 15 Jun 2023 12:29:14 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Thu, Jun 15, 2023, at 04:22, jian he wrote:\n> Attachments:\n> * temp.patch\n\nThanks for good suggestions.\nNew patch attached:\n\nEnhance parsing and reorder headers in hashset module\n\nAllow whitespaces in hashset input and reorder the inclusion of\nheader files, placing PostgreSQL headers first. Additionally, update\ndeprecated pq_sendint calls to pq_sendint32. Add tests for improved\nparsing functionality.\n\n/Joel",
"msg_date": "Thu, 15 Jun 2023 10:06:40 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "In hashset/test/sql/order.sql, can we add the following to test\nwhether the optimizer\nwill use our index.\n\nCREATE INDEX ON test_int4hashset_order (int4hashset_col\n int4hashset_btree_ops);\n\n-- to make sure that this work with just two rows\nSET enable_seqscan TO off;\n\nexplain(costs off) SELECT * FROM test_int4hashset_order WHERE\nint4hashset_col = '{1,2}'::int4hashset;\nreset enable_seqscan;\n\nSince most contrib modules, one module, only one test file, maybe we need\nto consolidate all the test sql files to one sql file (int4hashset.sql)?\n--------------\nI didn't install the extension directly. I copied the hashset--0.0.1.sql to\nanother place, using gcc to compile these functions.\ngcc -I/home/jian/postgres/2023_05_25_beta5421/include/server -fPIC -c\n/home/jian/hashset/hashset.c\ngcc -shared -o /home/jian/hashset/hashset.so /home/jian/hashset/hashset.o\nthen modify hashset--0.0.1.sql then in psql \\i fullsqlfilename to create\nthese functions, types.\n\nBecause even make\nPG_CONFIG=/home/jian/postgres/2023_05_25_beta5421/bin/pg_config still has\nan error.\n fatal error: libpq-fe.h: No such file or directory\n 3 | #include <libpq-fe.h>\n\nIs there any way to put test_send_recv.c to sql test file?\nAttached is a patch slightly modified README.md. feel free to change, since\ni am not native english speaker...",
"msg_date": "Thu, 15 Jun 2023 17:44:45 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Thu, Jun 15, 2023, at 11:44, jian he wrote:\n> I didn't install the extension directly. I copied the \n> hashset--0.0.1.sql to another place, using gcc to compile these \n> functions. \n..\n> Because even make \n> PG_CONFIG=/home/jian/postgres/2023_05_25_beta5421/bin/pg_config still \n> has an error.\n> fatal error: libpq-fe.h: No such file or directory\n> 3 | #include <libpq-fe.h>\n\nWhat platform are you on?\nYou seem to be missing the postgresql dev package.\nFor instance, here is how to compile and install the extension on Ubuntu 22.04.1 LTS:\n\nsudo apt install postgresql-15 postgresql-server-dev-15 postgresql-client-15\ngit clone https://github.com/tvondra/hashset.git\ncd hashset\nmake\nsudo make install\nmake installcheck\n\n> Is there any way to put test_send_recv.c to sql test file? \n\nUnfortunately, there doesn't seem to be a way to test *_recv() functions from SQL,\nsince they take `internal` as input. The only way I could figure out to test them\nwas to write a C-program using libpq's binary mode.\n\nI also note that the test_send_recv test was broken; I had forgot to change\nthe type from \"hashset\" to \"int4hashset\". Fixed in attached commit.\n\nOn Ubuntu, you can now run the test by specifying to connect via the UNIX socket:\n\nPGHOST=/var/run/postgresql make run_c_tests\ncd test/c_tests && ./test_send_recv.sh\ntest test_send_recv ... ok\n\n> Attached is a patch slightly modified README.md. feel free to change, \n> since i am not native english speaker... \n> Attachments:\n> * 0001-add-instruction-using-PG_CONFIG-to-install-extension.patch\n\nThanks, will have a look later.\n\n/Joel",
"msg_date": "Thu, 15 Jun 2023 17:55:56 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Thu, Jun 15, 2023, at 06:29, jian he wrote:\n> I am not sure the following results are correct.\n> with cte as (\n> select hashset(x) as x\n> ,hashset_capacity(hashset(x))\n> ,hashset_count(hashset(x))\n> from generate_series(1,10) g(x))\n> select *\n> ,'|' as delim\n> , hashset_add(x,11111::int)\n> ,hashset_capacity(hashset_add(x,11111::int))\n> ,hashset_count(hashset_add(x,11111::int))\n> from cte \\gx\n>\n>\n> results: \n> -[ RECORD 1 ]----+-----------------------------\n> x | {8,1,10,3,9,4,6,2,11111,5,7}\n> hashset_capacity | 64\n> hashset_count | 10\n> delim | |\n> hashset_add | {8,1,10,3,9,4,6,2,11111,5,7}\n> hashset_capacity | 64\n> hashset_count | 11\n\nNice catch, you found a bug!\n\nFixed in attached patch:\n\n---\nEnsure hashset_add and hashset_merge operate on copied data\n\nPreviously, the hashset_add() and hashset_merge() functions were\nmodifying the original hashset in-place. This was leading to unexpected\nresults because the original data in the hashset was being altered.\n\nThis commit introduces the macro PG_GETARG_INT4HASHSET_COPY(), ensuring\na copy of the hashset is created and modified, leaving the original\nhashset untouched.\n\nThis adjustment ensures hashset_add() and hashset_merge() operate\ncorrectly on the copied hashset and prevent modification of the\noriginal data.\n\nA new regression test file `reported_bugs.sql` has been added to\nvalidate the proper functionality of these changes. Future reported\nbugs and their corresponding tests will also be added to this file.\n---\n\nI wonder if this function:\n\nstatic int4hashset_t *\nint4hashset_copy(int4hashset_t *src)\n{\n\treturn src;\n}\n\n...that was previously named hashset_copy(),\nshould be implemented to actually copy the struct,\ninstead of just returning the input?\n\nIt is being used by int4hashset_agg_combine() like this:\n\n/* copy the hashset into the right long-lived memory context */\noldcontext = MemoryContextSwitchTo(aggcontext);\nsrc = int4hashset_copy(src);\nMemoryContextSwitchTo(oldcontext);\n\n/Joel",
"msg_date": "Thu, 15 Jun 2023 23:05:38 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Thu, Jun 15, 2023, at 11:44, jian he wrote:\n> In hashset/test/sql/order.sql, can we add the following to test whether \n> the optimizer will use our index.\n>\n> CREATE INDEX ON test_int4hashset_order (int4hashset_col \n> int4hashset_btree_ops);\n>\n> -- to make sure that this work with just two rows\n> SET enable_seqscan TO off; \n>\n> explain(costs off) SELECT * FROM test_int4hashset_order WHERE \n> int4hashset_col = '{1,2}'::int4hashset;\n> reset enable_seqscan;\n\nNot sure I can see the value of that test,\nsince we've already tested the comparison functions,\nwhich are used by the int4hashset_btree_ops operator class.\n\nI think a test that verifies the btree index is actually used,\nwould more be a test of the query planner than hashset.\n\nI might be missing something here, please tell me if so.\n\n> Since most contrib modules, one module, only one test file, maybe we \n> need to consolidate all the test sql files to one sql file \n> (int4hashset.sql)?\n\nI've also made the same observation; I wonder if it's by design\nor by coincidence? I think multiple test files improves modularity,\nisolation and overall organisation of the testing.\n\nAs long as we are developing in the pre-release phase,\nI think it's beneficial and affordable with rigorous testing.\n\nHowever, if hashset would ever be considered\nfor core inclusion, then we should consolidate all tests into\none file and retain only essential tests, thereby minimizing\nimpact on PostgreSQL's overall test suite runtime\nwhere every millisecond matters.\n\n> Attached is a patch slightly modified README.md. feel free to change, \n> since i am not native english speaker... \n>\n> Attachments:\n> * 0001-add-instruction-using-PG_CONFIG-to-install-extension.patch\n\nThanks, improvements incorporated with some minor changes.\n\n/Joel",
"msg_date": "Fri, 16 Jun 2023 02:27:33 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "New patch attached:\n\nAdd customizable params to int4hashset() and collision count function\n\nThis commit enhances int4hashset() by introducing adjustable capacity,\nload, and growth factors, providing flexibility for performance optimization.\n\nAlso added is a new function, hashset_collisions(), to report collision\ncounts, aiding in performance tuning.\n\nAggregate functions are renamed to hashset_agg() for consistency with\narray_agg() and range_agg().\n\nA new test file, test/sql/benchmark.sql, is added for evaluating the\nperformance of hash functions. It's not run automatically by\nmake installcheck.\n\nThe adjustable parameters and the naive hash function are useful for testing\nand performance comparison. However, to keep things simple and streamlined\nfor users, these features are likely to be removed in the final release,\nemphasizing the use of well-optimized default settings.\n\nSQL-function indentation is also adjusted to align with the PostgreSQL\nsource repo, improving readability.\n\nIn the benchmark results below, it was a bit surprising the naive hash\nfunction had no collisions, but that only held true when the input\nelements were sequential integers. When tested with random integers,\nall three hash functions caused collisions.\n\nTiming results not statistical significant, the purpose is just to\ngive an idea of the execution times.\n\n*** Elements in sequence 1..100000\n- Testing default hash function (Jenkins/lookup3)\npsql:test/sql/benchmark.sql:23: NOTICE: hashset_count: 100000\npsql:test/sql/benchmark.sql:23: NOTICE: hashset_capacity: 262144\npsql:test/sql/benchmark.sql:23: NOTICE: hashset_collisions: 31195\nDO\nTime: 1342.564 ms (00:01.343)\n- Testing Murmurhash32\npsql:test/sql/benchmark.sql:40: NOTICE: hashset_count: 100000\npsql:test/sql/benchmark.sql:40: NOTICE: hashset_capacity: 262144\npsql:test/sql/benchmark.sql:40: NOTICE: hashset_collisions: 30879\nDO\nTime: 1297.823 ms (00:01.298)\n- Testing naive hash function\npsql:test/sql/benchmark.sql:57: NOTICE: hashset_count: 100000\npsql:test/sql/benchmark.sql:57: NOTICE: hashset_capacity: 262144\npsql:test/sql/benchmark.sql:57: NOTICE: hashset_collisions: 0\nDO\nTime: 1400.936 ms (00:01.401)\n*** Testing 100000 random ints\n setseed\n---------\n\n(1 row)\n\nTime: 3.591 ms\n- Testing default hash function (Jenkins/lookup3)\npsql:test/sql/benchmark.sql:77: NOTICE: hashset_count: 100000\npsql:test/sql/benchmark.sql:77: NOTICE: hashset_capacity: 262144\npsql:test/sql/benchmark.sql:77: NOTICE: hashset_collisions: 30919\nDO\nTime: 1415.497 ms (00:01.415)\n setseed\n---------\n\n(1 row)\n\nTime: 1.282 ms\n- Testing Murmurhash32\npsql:test/sql/benchmark.sql:95: NOTICE: hashset_count: 100000\npsql:test/sql/benchmark.sql:95: NOTICE: hashset_capacity: 262144\npsql:test/sql/benchmark.sql:95: NOTICE: hashset_collisions: 30812\nDO\nTime: 2079.202 ms (00:02.079)\n setseed\n---------\n\n(1 row)\n\nTime: 0.122 ms\n- Testing naive hash function\npsql:test/sql/benchmark.sql:113: NOTICE: hashset_count: 100000\npsql:test/sql/benchmark.sql:113: NOTICE: hashset_capacity: 262144\npsql:test/sql/benchmark.sql:113: NOTICE: hashset_collisions: 30822\nDO\nTime: 1613.965 ms (00:01.614)\n\n/Joel",
"msg_date": "Fri, 16 Jun 2023 12:24:43 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "similar to (int[] || int4) and (int4 || int[])\nshould we expect ('{1,2}'::int4hashset || 3) == (3 ||\n '{1,2}'::int4hashset) == (select hashset_add('{1,2}'::int4hashset,3)); *?*\n\nThe following is the general idea on how to make it work by looking at\nsimilar code....\nCREATE OPERATOR || (\n leftarg = int4hashset,\n rightarg = int4,\n function = int4hashset_add,\n commutator = ||\n);\n\nCREATE OR REPLACE FUNCTION int4_add_int4hashset(int4, int4hashset)\nRETURNS int4hashset\nLANGUAGE sql\nIMMUTABLE PARALLEL SAFE STRICT COST 1\nRETURN $2 || $1;\n\nCREATE OPERATOR || (\n leftarg = int4,\n rightarg = int4hashset,\n function = int4_add_int4hashset,\n commutator = ||\n);\nwhile creating an operator. I am not sure how to specify\nNEGATOR,RESTRICT,JOIN clause.\n-----------------------------------------------------------------------------------------------------------------------------\nalso. I think the following query should return one row only? but now it\ndoesn't.\nselect hashset_cmp('{1,2}','{2,1}')\nunion\nselect hashset_cmp('{1,2}','{1,2,1}')\nunion\nselect hashset_cmp('{1,2}','{1,2}');\n----------------------------------------------------------------------------------------------------------------------\nsimilar to elem_contained_by_range, range_contains_elem. we should already\nconsider the operator *<@* and @*>? *\n\nCREATE OR REPLACE FUNCTION elem_contained_by_hashset(int4, int4hashset)\nRETURNS bool\nLANGUAGE sql\nIMMUTABLE PARALLEL SAFE STRICT COST 1\nRETURN hashset_contains ($2,$1);\n\nIs the integer contained in the int4hashset?\ninteger <@ int4hashset → boolean\n1 <@ int4hashset'{1,7}' → t\n\nCREATE OPERATOR <@ (\n leftarg = integer,\n rightarg = int4hashset,\n function = elem_contained_by_hashset\n);\n\nint4hashset @> integer → boolean\nDoes the int4hashset contain the element?\nint4hashset'{1,7}' @> 1 → t\n\nCREATE OPERATOR @> (\n leftarg = int4hashset,\n rightarg = integer,\n function = hashset_contains\n);\n-------------------\n\nsimilar to (int[] || int4) and (int4 || int[])should we expect ('{1,2}'::int4hashset || 3) == (3 || '{1,2}'::int4hashset) == (select hashset_add('{1,2}'::int4hashset,3)); ?The following is the general idea on how to make it work by looking at similar code....CREATE OPERATOR || ( leftarg = int4hashset, rightarg = int4, function = int4hashset_add, commutator = ||);CREATE OR REPLACE FUNCTION int4_add_int4hashset(int4, int4hashset)RETURNS int4hashsetLANGUAGE sqlIMMUTABLE PARALLEL SAFE STRICT COST 1RETURN $2 || $1;CREATE OPERATOR || ( leftarg = int4, rightarg = int4hashset, function = int4_add_int4hashset, commutator = ||);while creating an operator. I am not sure how to specify NEGATOR,RESTRICT,JOIN clause.-----------------------------------------------------------------------------------------------------------------------------also. I think the following query should return one row only? but now it doesn't.select hashset_cmp('{1,2}','{2,1}')union select hashset_cmp('{1,2}','{1,2,1}')union select hashset_cmp('{1,2}','{1,2}');----------------------------------------------------------------------------------------------------------------------similar to elem_contained_by_range, range_contains_elem. we should already consider the operator <@ and @>? 
CREATE OR REPLACE FUNCTION elem_contained_by_hashset(int4, int4hashset)RETURNS boolLANGUAGE sqlIMMUTABLE PARALLEL SAFE STRICT COST 1RETURN hashset_contains ($2,$1);Is the integer contained in the int4hashset?integer <@ int4hashset → boolean1 <@ int4hashset'{1,7}' → tCREATE OPERATOR <@ ( leftarg = integer, rightarg = int4hashset, function = elem_contained_by_hashset);int4hashset @> integer → booleanDoes the int4hashset contain the element?int4hashset'{1,7}' @> 1 → tCREATE OPERATOR @> ( leftarg = int4hashset, rightarg = integer, function = hashset_contains);-------------------",
"msg_date": "Fri, 16 Jun 2023 19:57:05 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Fri, Jun 16, 2023, at 13:57, jian he wrote:\n> similar to (int[] || int4) and (int4 || int[])\n> should we expect ('{1,2}'::int4hashset || 3) == (3 || \n> '{1,2}'::int4hashset) == (select hashset_add('{1,2}'::int4hashset,3)); \n> *?*\n\nGood idea, makes sense to support it.\nImplemented in attached patch.\n\n> CREATE OPERATOR || (\n> leftarg = int4,\n> rightarg = int4hashset,\n> function = int4_add_int4hashset,\n> commutator = ||\n> );\n> while creating an operator. I am not sure how to specify \n> NEGATOR,RESTRICT,JOIN clause.\n\nI don't think we need those for this operator, might be wrong though.\n\n> -----------------------------------------------------------------------------------------------------------------------------\n> also. I think the following query should return one row only? but now \n> it doesn't.\n> select hashset_cmp('{1,2}','{2,1}')\n> union \n> select hashset_cmp('{1,2}','{1,2,1}')\n> union \n> select hashset_cmp('{1,2}','{1,2}');\n\nGood point.\n\nI realise int4hashset_hash() is broken,\nsince two int4hashset's that are considered equal,\ncan by coincidence get different hashes:\n\nSELECT '{1,2}'::int4hashset = '{2,1}'::int4hashset;\n ?column?\n----------\n t\n(1 row)\n\nSELECT hashset_hash('{1,2}'::int4hashset);\n hashset_hash\n--------------\n 990882385\n(1 row)\n\nSELECT hashset_hash('{2,1}'::int4hashset);\n hashset_hash\n--------------\n 996377797\n(1 row)\n\nDo we have any ideas on how to fix this without sacrificing performance?\n\nWe of course want to avoid having to sort the hashsets,\nwhich is the naive solution.\n\nTo understand why this is happening, consider this example:\n\nSELECT '{1,2}'::int4hashset;\n int4hashset\n-------------\n {1,2}\n(1 row)\n\nSELECT '{2,1}'::int4hashset;\n int4hashset\n-------------\n {2,1}\n(1 row)\n\nIf the hash of `1` and `2` modulus the capacity results in the same value,\nthey will be attempted to be inserted into the same position,\nand since the input text is parsed left-to-right, in the first case `1` will win\nthe first position, and `2` will get a collision and try the next position.\n\nIn the second case, the opposite happens.\n\nSince we do modulus capacity, the position depends on the capacity,\nwhich is why the output string can be different for the same input.\n\nSELECTint4hashset() || 1 || 2 || 3;\n {3,1,2}\n\nSELECTint4hashset(capacity:=1) || 1 || 2 || 3;\n {3,1,2}\n\nSELECTint4hashset(capacity:=2) || 1 || 2 || 3;\n {3,1,2}\n\nSELECTint4hashset(capacity:=3) || 1 || 2 || 3;\n {3,2,1}\n\nSELECTint4hashset(capacity:=4) || 1 || 2 || 3;\n {3,1,2}\n\nSELECTint4hashset(capacity:=5) || 1 || 2 || 3;\n {1,2,3}\n\nSELECTint4hashset(capacity:=6) || 1 || 2 || 3;\n {1,3,2}\n\n\n> ----------------------------------------------------------------------------------------------------------------------\n> similar to elem_contained_by_range, range_contains_elem. we should \n> already consider the operator *<@* and @*>? *\n\nThat could perhaps be nice.\nApart from possible syntax convenience,\nare there any other benefits than just using the function hashset_contains(int4hashset, integer) directly?\n\n/Joel",
"msg_date": "Fri, 16 Jun 2023 17:42:14 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Fri, Jun 16, 2023, at 17:42, Joel Jacobson wrote:\n> I realise int4hashset_hash() is broken,\n> since two int4hashset's that are considered equal,\n> can by coincidence get different hashes:\n...\n> Do we have any ideas on how to fix this without sacrificing performance?\n\nThe problem was due to hashset_hash() function accumulating the hashes\nof individual elements in a non-commutative manner. As a consequence, the\nfinal hash value was sensitive to the order in which elements were inserted\ninto the hashset. This behavior led to inconsistencies, as logically\nequivalent sets (i.e., sets with the same elements but in different orders)\nproduced different hash values.\n\nSolved by modifying the hashset_hash() function to use a commutative operation\nwhen combining the hashes of individual elements. This change ensures that the\nfinal hash value is independent of the element insertion order, and logically\nequivalent sets produce the same hash.\n\nAn somewhat unfortunate side-effect of this fix, is that we can no longer\nvisually sort the hashset output format, since it's not lexicographically sorted.\nI think this is an acceptable trade-off for a hashset type,\nsince the only alternative I see would be to sort the elements,\nbut then it wouldn't be a hashset, but a treeset, which different\nBig-O complexity.\n\nNew patch is attached, which will henceforth always be a complete patch,\nto avoid the hassle of having to assemble incremental patches.\n\n/Joel",
"msg_date": "Sat, 17 Jun 2023 02:38:23 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "\nOn 2023-06-16 Fr 20:38, Joel Jacobson wrote:\n>\n> New patch is attached, which will henceforth always be a complete patch,\n> to avoid the hassle of having to assemble incremental patches.\n\n\nCool, thanks.\n\n\nA couple of random thoughts:\n\n\n. It might be worth sending a version number with the send function \n(c.f. jsonb_send / jsonb_recv). That way would would not be tied forever \nto some wire representation.\n\n. I think there are some important set operations missing: most notably \nintersection, slightly less importantly asymmetric and symmetric \ndifference. I have no idea how easy these would be to add, but even for \nyour stated use I should have thought set intersection would be useful \n(\"Who is a member of both this set of friends and that set of friends?\").\n\n. While supporting int4 only is OK for now, I think we would at least \nwant to support int8, and probably UUID since a number of systems I know \nof use that as an object identifier.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 18 Jun 2023 12:45:32 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
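On the version-number point: the idea, as in jsonb_send()/jsonb_recv(), is simply to prefix the binary representation with a small version tag so the wire format can change later without breaking older readers. A rough standalone sketch of the principle follows (plain C, not PostgreSQL code; the real send/recv functions would build the bytea with the pqformat.h routines such as pq_begintypsend(), and a real wire format would use network byte order rather than memcpy of host-order integers).

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define HASHSET_SEND_VERSION 1          /* illustrative version tag */

static size_t set_serialize(const int32_t *elems, int32_t n, uint8_t *out)
{
    size_t off = 0;

    out[off++] = HASHSET_SEND_VERSION;  /* version tag goes first */
    memcpy(out + off, &n, sizeof(n));   /* element count (host order, demo only) */
    off += sizeof(n);
    memcpy(out + off, elems, (size_t) n * sizeof(*elems));   /* payload */
    off += (size_t) n * sizeof(*elems);
    return off;
}

static int32_t set_deserialize(const uint8_t *in, int32_t *elems)
{
    size_t off = 0;
    int32_t n;

    if (in[off++] != HASHSET_SEND_VERSION)   /* reject unknown versions */
        return -1;
    memcpy(&n, in + off, sizeof(n));
    off += sizeof(n);
    memcpy(elems, in + off, (size_t) n * sizeof(*elems));
    return n;
}

int main(void)
{
    int32_t elems[] = {1, 2, 3};
    uint8_t buf[64];
    int32_t back[8];

    size_t len = set_serialize(elems, 3, buf);
    int32_t n = set_deserialize(buf, back);

    printf("serialized %zu bytes, got back %d elements\n", len, n);
    return 0;
}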
{
"msg_contents": "On Sun, Jun 18, 2023, at 18:45, Andrew Dunstan wrote:\n> . It might be worth sending a version number with the send function \n> (c.f. jsonb_send / jsonb_recv). That way would would not be tied forever \n> to some wire representation.\n\nGreat idea; implemented.\n\n> . I think there are some important set operations missing: most notably \n> intersection, slightly less importantly asymmetric and symmetric \n> difference. I have no idea how easy these would be to add, but even for \n> your stated use I should have thought set intersection would be useful \n> (\"Who is a member of both this set of friends and that set of friends?\").\n\nAnother great idea; implemented.\n\n> . While supporting int4 only is OK for now, I think we would at least \n> want to support int8, and probably UUID since a number of systems I know \n> of use that as an object identifier.\n\nI agree that's probably the most logical thing to focus on next. I'm on it.\n\nNew patch attached.\n\n/Joel",
"msg_date": "Sun, 18 Jun 2023 21:57:57 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Sat, Jun 17, 2023 at 8:38 AM Joel Jacobson <[email protected]> wrote:\n>\n> On Fri, Jun 16, 2023, at 17:42, Joel Jacobson wrote:\n> > I realise int4hashset_hash() is broken,\n> > since two int4hashset's that are considered equal,\n> > can by coincidence get different hashes:\n> ...\n> > Do we have any ideas on how to fix this without sacrificing performance?\n>\n> The problem was due to hashset_hash() function accumulating the hashes\n> of individual elements in a non-commutative manner. As a consequence, the\n> final hash value was sensitive to the order in which elements were inserted\n> into the hashset. This behavior led to inconsistencies, as logically\n> equivalent sets (i.e., sets with the same elements but in different orders)\n> produced different hash values.\n>\n> Solved by modifying the hashset_hash() function to use a commutative operation\n> when combining the hashes of individual elements. This change ensures that the\n> final hash value is independent of the element insertion order, and logically\n> equivalent sets produce the same hash.\n>\n> An somewhat unfortunate side-effect of this fix, is that we can no longer\n> visually sort the hashset output format, since it's not lexicographically sorted.\n> I think this is an acceptable trade-off for a hashset type,\n> since the only alternative I see would be to sort the elements,\n> but then it wouldn't be a hashset, but a treeset, which different\n> Big-O complexity.\n>\n> New patch is attached, which will henceforth always be a complete patch,\n> to avoid the hassle of having to assemble incremental patches.\n>\n> /Joel\n\n\nselect hashset_contains('{1,2}'::int4hashset,NULL::int);\nshould return null?\n---------------------------------------------------------------------------------\nSELECT attname\n ,pc.relname\n ,CASE attstorage\n WHEN 'p' THEN 'plain'\n WHEN 'e' THEN 'external'\n WHEN 'm' THEN 'main'\n WHEN 'x' THEN 'extended'\n END AS storage\nFROM pg_attribute pa\njoin pg_class pc on pc.oid = pa.attrelid\nwhere attnum > 0 and pa.attstorage = 'e';\n\nIn my system catalog, it seems only the hashset type storage =\n'external'. 
most is extended.....\nI am not sure the consequence of switch from external to extended.\n------------------------------------------------------------------------------------------------------------\nselect hashset_hash('{-1,1}') as a1\n ,hashset_hash('{1,-2}') as a2\n ,hashset_hash('{-3,1}') as a3\n ,hashset_hash('{4,1}') as a4;\nreturns:\n a1 | a2 | a3 | a4\n-------------+-----------+------------+------------\n -1735582196 | 998516167 | 1337000903 | 1305426029\n(1 row)\n\nvalues {a1,a2,a3,a4} should be monotone increasing, based on the\nfunction int4hashset_cmp, but now it's not.\nso the following queries failed.\n\n--should return only one row.\nselect hashset_cmp('{2,1}','{3,1}')\nunion\nselect hashset_cmp('{3,1}','{4,1}')\nunion\nselect hashset_cmp('{1,3}','{4,1}');\n\nselect hashset_cmp('{9,10,11}','{10,9,-11}') =\nhashset_cmp('{9,10,11}','{10,9,-1}'); --should be true\nselect '{2,1}'::int4hashset > '{7}'::int4hashset; --should be false.\nbased on int array comparison,.\n-----------------------------------------------------------------------------------------\nI comment out following lines in hashset-api.c somewhere between {810,829}\n\n// if (a->hash < b->hash)\n// PG_RETURN_INT32(-1);\n// else if (a->hash > b->hash)\n// PG_RETURN_INT32(1);\n\n// if (a->nelements < b->nelements)\n// PG_RETURN_INT32(-1);\n// else if (a->nelements > b->nelements)\n// PG_RETURN_INT32(1);\n\n// Assert(a->nelements == b->nelements);\n\nSo hashset_cmp will directly compare int array. the above queries works.\n\n{int4hashset_equals,int4hashset_neq} two special cases of hashset_cmp.\nmaybe we can just wrap it just like int4hashset_le?\n\nnow store 10 element int4hashset need 99 bytes, similar one dimension\nbigint array with length 10, occupy 101 byte....\n\nin int4hashset_send, newly add struct attributes/member {load_factor\ngrowth_factor ncollisions hash} also need send to buf?\n\n\n",
"msg_date": "Mon, 19 Jun 2023 08:00:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
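One way to address the comparison issues raised in this review is to define the total order purely in terms of the element values, as suggested above for int arrays: compare the two sets as sorted element lists, so the order is transitive and consistent with set equality. The standalone C sketch below only illustrates that ordering rule (and ignores the cost of sorting on every comparison); it is not the extension's hashset_cmp().

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

static int cmp_int32(const void *a, const void *b)
{
    int32_t x = *(const int32_t *) a;
    int32_t y = *(const int32_t *) b;
    return (x > y) - (x < y);
}

/* Lexicographic comparison of the sorted element lists; on a common prefix
 * the shorter set sorts first.  Note: sorts the caller's arrays in place. */
static int set_cmp(int32_t *a, int na, int32_t *b, int nb)
{
    qsort(a, (size_t) na, sizeof(int32_t), cmp_int32);
    qsort(b, (size_t) nb, sizeof(int32_t), cmp_int32);

    for (int i = 0; i < na && i < nb; i++)
        if (a[i] != b[i])
            return (a[i] > b[i]) - (a[i] < b[i]);
    return (na > nb) - (na < nb);
}

int main(void)
{
    int32_t s1[] = {2, 1};
    int32_t s2[] = {3, 1};
    int32_t s3[] = {1, 3};

    /* cmp({2,1},{3,1}) and cmp({2,1},{1,3}) agree, as the review expects. */
    printf("%d %d\n", set_cmp(s1, 2, s2, 2), set_cmp(s1, 2, s3, 2));
    return 0;
}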
{
"msg_contents": "On Mon, Jun 19, 2023, at 02:00, jian he wrote:\n> select hashset_contains('{1,2}'::int4hashset,NULL::int);\n> should return null?\n\nHmm, that's a good philosophical question.\n\nI notice Tomas Vondra in the initial commit opted for allowing NULL inputs,\ntreating them as empty sets, e.g. in int4hashset_add() we create a\nnew hashset if the first argument is NULL.\n\nI guess the easiest perhaps most consistent NULL-handling strategy\nwould be to just mark all relevant functions STRICT except for the agg ones\nsince we probably want to allow skipping over rows with NULL values\nwithout the entire result becoming NULL.\n\nBut if we're not just going the STRICT route, then I think it's a bit more tricky,\nsince you could argue the hashset_contains() example should return FALSE\nsince the set doesn't contain the NULL value, but OTOH, since we don't\nstore NULL values, we don't know if has ever been added, hence a NULL\nresult would perhaps make more sense.\n\nI think I lean on thinking that if we want to be \"NULL-friendly\", like we\ncurrently are in hashset_add(), it would probably be most user-friendly\nto be consistent and let all functions return non-null return values in\nall cases where it is not unreasonable.\n\nSince we're essentially designing a set-theoretic system, I think we should\naim for the logical \"soundness\" property of it and think about how we can\nverify that it is.\n\nThoughts?\n\n/Joel\n\n\n",
"msg_date": "Mon, 19 Jun 2023 08:50:46 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "\n\nOn 6/18/23 18:45, Andrew Dunstan wrote:\n> \n> On 2023-06-16 Fr 20:38, Joel Jacobson wrote:\n>>\n>> New patch is attached, which will henceforth always be a complete patch,\n>> to avoid the hassle of having to assemble incremental patches.\n> \n> \n> Cool, thanks.\n> \n\nIt might still be convenient to keep it split into smaller, easier to\nreview, parts. A patch that introduces basic functionality and then\npatches adding various \"advanced\" features.\n\n> \n> A couple of random thoughts:\n> \n> \n> . It might be worth sending a version number with the send function\n> (c.f. jsonb_send / jsonb_recv). That way would would not be tied forever\n> to some wire representation.\n> \n> . I think there are some important set operations missing: most notably\n> intersection, slightly less importantly asymmetric and symmetric\n> difference. I have no idea how easy these would be to add, but even for\n> your stated use I should have thought set intersection would be useful\n> (\"Who is a member of both this set of friends and that set of friends?\").\n> \n> . While supporting int4 only is OK for now, I think we would at least\n> want to support int8, and probably UUID since a number of systems I know\n> of use that as an object identifier.\n> \n\nI agree we should aim to support a wider range of data types. Could we\nhave a polymorphic type, similar to what we do for arrays and ranges? In\nfact, CREATE TYPE allows specifying ELEMENT, so wouldn't it be possible\nto implement this as a special variant of an array? Would be better than\nhaving a set of functions for every supported data type.\n\n(Note: It might still be possible to have a special implementation for\nselected fixed-length data types, as it allows optimization at compile\ntime. But that could be done later.)\n\n\nThe other thing I've been thinking about is the SQL syntax and what does\nthe SQL standard says about this.\n\nAFAICS the standard only defines arrays and multisets. Arrays are pretty\nmuch the thing we have, including the ARRAY[] constructor etc. Multisets\nare similar to hashset discussed here, except that it tracks the number\nof elements for each value (which would be trivial in hashset).\n\nSo if we want to make this a built-in feature, maybe we should aim to do\nthe multiset thing, with the standard SQL syntax? Extending the grammar\nshould not be hard, I think. I'm not sure of the underlying code\n(ArrayType, ARRAY_SUBLINK stuff, etc.) we could reuse or if we'd need a\nlot of separate code doing that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 19 Jun 2023 11:21:53 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Mon, Jun 19, 2023 at 2:51 PM Joel Jacobson <[email protected]> wrote:\n>\n> On Mon, Jun 19, 2023, at 02:00, jian he wrote:\n> > select hashset_contains('{1,2}'::int4hashset,NULL::int);\n> > should return null?\n>\n> Hmm, that's a good philosophical question.\n>\n> I notice Tomas Vondra in the initial commit opted for allowing NULL\ninputs,\n> treating them as empty sets, e.g. in int4hashset_add() we create a\n> new hashset if the first argument is NULL.\n>\n> I guess the easiest perhaps most consistent NULL-handling strategy\n> would be to just mark all relevant functions STRICT except for the agg\nones\n> since we probably want to allow skipping over rows with NULL values\n> without the entire result becoming NULL.\n>\n> But if we're not just going the STRICT route, then I think it's a bit\nmore tricky,\n> since you could argue the hashset_contains() example should return FALSE\n> since the set doesn't contain the NULL value, but OTOH, since we don't\n> store NULL values, we don't know if has ever been added, hence a NULL\n> result would perhaps make more sense.\n>\n> I think I lean on thinking that if we want to be \"NULL-friendly\", like we\n> currently are in hashset_add(), it would probably be most user-friendly\n> to be consistent and let all functions return non-null return values in\n> all cases where it is not unreasonable.\n>\n> Since we're essentially designing a set-theoretic system, I think we\nshould\n> aim for the logical \"soundness\" property of it and think about how we can\n> verify that it is.\n>\n> Thoughts?\n>\n> /Joel\n\nhashset_to_array function should be strict?\n\nI noticed hashset_symmetric_difference and hashset_difference handle null\nin a different way, seems they should handle null in a consistent way?\n\nselect '{1,2,NULL}'::int[] operator (pg_catalog.@>) '{NULL}'::int[]; --false\nselect '{1,2,NULL}'::int[] operator (pg_catalog.&&) '{NULL}'::int[];\n--false.\nSo similarly I guess hashset_contains should be false.\nselect hashset_contains('{1,2}'::int4hashset,NULL::int);\n\nOn Mon, Jun 19, 2023 at 2:51 PM Joel Jacobson <[email protected]> wrote:>> On Mon, Jun 19, 2023, at 02:00, jian he wrote:> > select hashset_contains('{1,2}'::int4hashset,NULL::int);> > should return null?>> Hmm, that's a good philosophical question.>> I notice Tomas Vondra in the initial commit opted for allowing NULL inputs,> treating them as empty sets, e.g. 
in int4hashset_add() we create a> new hashset if the first argument is NULL.>> I guess the easiest perhaps most consistent NULL-handling strategy> would be to just mark all relevant functions STRICT except for the agg ones> since we probably want to allow skipping over rows with NULL values> without the entire result becoming NULL.>> But if we're not just going the STRICT route, then I think it's a bit more tricky,> since you could argue the hashset_contains() example should return FALSE> since the set doesn't contain the NULL value, but OTOH, since we don't> store NULL values, we don't know if has ever been added, hence a NULL> result would perhaps make more sense.>> I think I lean on thinking that if we want to be \"NULL-friendly\", like we> currently are in hashset_add(), it would probably be most user-friendly> to be consistent and let all functions return non-null return values in> all cases where it is not unreasonable.>> Since we're essentially designing a set-theoretic system, I think we should> aim for the logical \"soundness\" property of it and think about how we can> verify that it is.>> Thoughts?>> /Joelhashset_to_array function should be strict?I noticed hashset_symmetric_difference and hashset_difference handle null in a different way, seems they should handle null in a consistent way?select '{1,2,NULL}'::int[] operator (pg_catalog.@>) '{NULL}'::int[]; --falseselect '{1,2,NULL}'::int[] operator (pg_catalog.&&) '{NULL}'::int[]; --false.So similarly I guess hashset_contains should be false.select hashset_contains('{1,2}'::int4hashset,NULL::int);",
"msg_date": "Mon, 19 Jun 2023 17:49:48 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Mon, Jun 19, 2023, at 11:21, Tomas Vondra wrote:\n> AFAICS the standard only defines arrays and multisets. Arrays are pretty\n> much the thing we have, including the ARRAY[] constructor etc. Multisets\n> are similar to hashset discussed here, except that it tracks the number\n> of elements for each value (which would be trivial in hashset).\n>\n> So if we want to make this a built-in feature, maybe we should aim to do\n> the multiset thing, with the standard SQL syntax? Extending the grammar\n> should not be hard, I think. I'm not sure of the underlying code\n> (ArrayType, ARRAY_SUBLINK stuff, etc.) we could reuse or if we'd need a\n> lot of separate code doing that.\n\nMultisets handle duplicates uniquely, this may bring unexpected issues. Sets\nand multisets have distinct utility in C++, Rust, Java, etc. However, sets are\nmore fundamental and prevalent in std libs than multisets.\n\nDespite SQL's multiset possibility, a distinct hashset type is my preference,\nhelping appropriate data structure choice and reducing misuse.\n\nThe necessity of multisets is vague beyond standards compliance.\n\n/Joel\n\n\n",
"msg_date": "Mon, 19 Jun 2023 13:33:35 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On 2023-06-19 Mo 05:21, Tomas Vondra wrote:\n>\n> On 6/18/23 18:45, Andrew Dunstan wrote:\n>> On 2023-06-16 Fr 20:38, Joel Jacobson wrote:\n>>> New patch is attached, which will henceforth always be a complete patch,\n>>> to avoid the hassle of having to assemble incremental patches.\n>>\n>> Cool, thanks.\n>>\n> It might still be convenient to keep it split into smaller, easier to\n> review, parts. A patch that introduces basic functionality and then\n> patches adding various \"advanced\" features.\n>\n>> A couple of random thoughts:\n>>\n>>\n>> . It might be worth sending a version number with the send function\n>> (c.f. jsonb_send / jsonb_recv). That way would would not be tied forever\n>> to some wire representation.\n>>\n>> . I think there are some important set operations missing: most notably\n>> intersection, slightly less importantly asymmetric and symmetric\n>> difference. I have no idea how easy these would be to add, but even for\n>> your stated use I should have thought set intersection would be useful\n>> (\"Who is a member of both this set of friends and that set of friends?\").\n>>\n>> . While supporting int4 only is OK for now, I think we would at least\n>> want to support int8, and probably UUID since a number of systems I know\n>> of use that as an object identifier.\n>>\n> I agree we should aim to support a wider range of data types. Could we\n> have a polymorphic type, similar to what we do for arrays and ranges? In\n> fact, CREATE TYPE allows specifying ELEMENT, so wouldn't it be possible\n> to implement this as a special variant of an array? Would be better than\n> having a set of functions for every supported data type.\n>\n> (Note: It might still be possible to have a special implementation for\n> selected fixed-length data types, as it allows optimization at compile\n> time. But that could be done later.)\n\n\nInteresting idea. There's also the keyword SETOF that we could possibly \nmake use of.\n\n\n>\n>\n> The other thing I've been thinking about is the SQL syntax and what does\n> the SQL standard says about this.\n>\n> AFAICS the standard only defines arrays and multisets. Arrays are pretty\n> much the thing we have, including the ARRAY[] constructor etc. Multisets\n> are similar to hashset discussed here, except that it tracks the number\n> of elements for each value (which would be trivial in hashset).\n>\n> So if we want to make this a built-in feature, maybe we should aim to do\n> the multiset thing, with the standard SQL syntax? Extending the grammar\n> should not be hard, I think. I'm not sure of the underlying code\n> (ArrayType, ARRAY_SUBLINK stuff, etc.) we could reuse or if we'd need a\n> lot of separate code doing that.\n>\n>\n\nYes, Multisets (a.k.a. bags and a large number of other names) would be \ninteresting. But I wouldn't like to abandon pure sets either. Maybe a \ntypmod indicating the allowed multiplicity of the type?\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-19 Mo 05:21, Tomas Vondra\n wrote:\n\n\n\n\nOn 6/18/23 18:45, Andrew Dunstan wrote:\n\n\n\nOn 2023-06-16 Fr 20:38, Joel Jacobson wrote:\n\n\n\nNew patch is attached, which will henceforth always be a complete patch,\nto avoid the hassle of having to assemble incremental patches.\n\n\n\n\nCool, thanks.\n\n\n\n\nIt might still be convenient to keep it split into smaller, easier to\nreview, parts. 
A patch that introduces basic functionality and then\npatches adding various \"advanced\" features.\n\n\n\n\nA couple of random thoughts:\n\n\n. It might be worth sending a version number with the send function\n(c.f. jsonb_send / jsonb_recv). That way would would not be tied forever\nto some wire representation.\n\n. I think there are some important set operations missing: most notably\nintersection, slightly less importantly asymmetric and symmetric\ndifference. I have no idea how easy these would be to add, but even for\nyour stated use I should have thought set intersection would be useful\n(\"Who is a member of both this set of friends and that set of friends?\").\n\n. While supporting int4 only is OK for now, I think we would at least\nwant to support int8, and probably UUID since a number of systems I know\nof use that as an object identifier.\n\n\n\n\nI agree we should aim to support a wider range of data types. Could we\nhave a polymorphic type, similar to what we do for arrays and ranges? In\nfact, CREATE TYPE allows specifying ELEMENT, so wouldn't it be possible\nto implement this as a special variant of an array? Would be better than\nhaving a set of functions for every supported data type.\n\n(Note: It might still be possible to have a special implementation for\nselected fixed-length data types, as it allows optimization at compile\ntime. But that could be done later.)\n\n\n\nInteresting idea. There's also the keyword SETOF that we could\n possibly make use of.\n\n\n\n\n\n\n\nThe other thing I've been thinking about is the SQL syntax and what does\nthe SQL standard says about this.\n\nAFAICS the standard only defines arrays and multisets. Arrays are pretty\nmuch the thing we have, including the ARRAY[] constructor etc. Multisets\nare similar to hashset discussed here, except that it tracks the number\nof elements for each value (which would be trivial in hashset).\n\nSo if we want to make this a built-in feature, maybe we should aim to do\nthe multiset thing, with the standard SQL syntax? Extending the grammar\nshould not be hard, I think. I'm not sure of the underlying code\n(ArrayType, ARRAY_SUBLINK stuff, etc.) we could reuse or if we'd need a\nlot of separate code doing that.\n\n\n\n\n\n\nYes, Multisets (a.k.a. bags and a large number of other names)\n would be interesting. But I wouldn't like to abandon pure sets\n either. Maybe a typmod indicating the allowed multiplicity of the\n type?\n\n\n\n\n\ncheers\n\n\nandrew\n\n\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 19 Jun 2023 07:50:31 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Mon, Jun 19, 2023, at 11:49, jian he wrote:\n> hashset_to_array function should be strict?\n>\n> I noticed hashset_symmetric_difference and hashset_difference handle \n> null in a different way, seems they should handle null in a consistent \n> way?\n\nYes, I agree, they should be consistent.\n\nI've thought a bit more on this, and came to the conclusion that I think it\nwould be easiest, safest and least confusing to just mark all functions STRICT.\n\nThat way, it's the user's responsibility to ensure null operands are not passed\nto the functions, which is simply a WHERE ... or FILTER (WHERE ...). And if\nmaking a mistake and passing, it's better to make the entire result blow up by\nletting the result be NULL, than to silently ignore the operand or return some\ntrue/false value that is questionable.\n\nSQL has a quite unique NULL handling compared to other languages, so I think\nit's better to let the user use the full arsenal of SQL to deal with nulls,\nrather than trying to shoehorn some null semantics into a set-theoretic system.\n\n/Joel\n\n\n",
"msg_date": "Mon, 19 Jun 2023 13:54:36 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On 6/19/23 13:50, Andrew Dunstan wrote:\n> \n> On 2023-06-19 Mo 05:21, Tomas Vondra wrote:\n>> On 6/18/23 18:45, Andrew Dunstan wrote:\n>>> On 2023-06-16 Fr 20:38, Joel Jacobson wrote:\n>>>> New patch is attached, which will henceforth always be a complete patch,\n>>>> to avoid the hassle of having to assemble incremental patches.\n>>> Cool, thanks.\n>>>\n>> It might still be convenient to keep it split into smaller, easier to\n>> review, parts. A patch that introduces basic functionality and then\n>> patches adding various \"advanced\" features.\n>>\n>>> A couple of random thoughts:\n>>>\n>>>\n>>> . It might be worth sending a version number with the send function\n>>> (c.f. jsonb_send / jsonb_recv). That way would would not be tied forever\n>>> to some wire representation.\n>>>\n>>> . I think there are some important set operations missing: most notably\n>>> intersection, slightly less importantly asymmetric and symmetric\n>>> difference. I have no idea how easy these would be to add, but even for\n>>> your stated use I should have thought set intersection would be useful\n>>> (\"Who is a member of both this set of friends and that set of friends?\").\n>>>\n>>> . While supporting int4 only is OK for now, I think we would at least\n>>> want to support int8, and probably UUID since a number of systems I know\n>>> of use that as an object identifier.\n>>>\n>> I agree we should aim to support a wider range of data types. Could we\n>> have a polymorphic type, similar to what we do for arrays and ranges? In\n>> fact, CREATE TYPE allows specifying ELEMENT, so wouldn't it be possible\n>> to implement this as a special variant of an array? Would be better than\n>> having a set of functions for every supported data type.\n>>\n>> (Note: It might still be possible to have a special implementation for\n>> selected fixed-length data types, as it allows optimization at compile\n>> time. But that could be done later.)\n> \n> \n> Interesting idea. There's also the keyword SETOF that we could possibly\n> make use of.\n> \n> \n>> The other thing I've been thinking about is the SQL syntax and what does\n>> the SQL standard says about this.\n>>\n>> AFAICS the standard only defines arrays and multisets. Arrays are pretty\n>> much the thing we have, including the ARRAY[] constructor etc. Multisets\n>> are similar to hashset discussed here, except that it tracks the number\n>> of elements for each value (which would be trivial in hashset).\n>>\n>> So if we want to make this a built-in feature, maybe we should aim to do\n>> the multiset thing, with the standard SQL syntax? Extending the grammar\n>> should not be hard, I think. I'm not sure of the underlying code\n>> (ArrayType, ARRAY_SUBLINK stuff, etc.) we could reuse or if we'd need a\n>> lot of separate code doing that.\n>>\n>>\n> \n> Yes, Multisets (a.k.a. bags and a large number of other names) would be\n> interesting. But I wouldn't like to abandon pure sets either. Maybe a\n> typmod indicating the allowed multiplicity of the type?\n> \n\nMaybe, although I'm not sure if that can be specified with a multiset\nconstructor, i.e. when using MULTISET[...] in places where we now use\nARRAY[...] to specify arrays.\n\nI was thinking more about having one set of operators, one considering\nthe duplicity (and thus doing what SQL standard says) and one ignoring\nit (thus treating MULTISETS as plain sets).\n\nAnyway, I'm just thinking aloud. 
I'm not sure if this is the way to go,\nbut it'd be silly to end up implementing stuff unnecessarily and/or\ninventing something that contradicts the SQL standard (or is somehow\ninconsistent with similar stuff).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 19 Jun 2023 13:59:17 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "\n\nOn 6/19/23 13:33, Joel Jacobson wrote:\n> On Mon, Jun 19, 2023, at 11:21, Tomas Vondra wrote:\n>> AFAICS the standard only defines arrays and multisets. Arrays are pretty\n>> much the thing we have, including the ARRAY[] constructor etc. Multisets\n>> are similar to hashset discussed here, except that it tracks the number\n>> of elements for each value (which would be trivial in hashset).\n>>\n>> So if we want to make this a built-in feature, maybe we should aim to do\n>> the multiset thing, with the standard SQL syntax? Extending the grammar\n>> should not be hard, I think. I'm not sure of the underlying code\n>> (ArrayType, ARRAY_SUBLINK stuff, etc.) we could reuse or if we'd need a\n>> lot of separate code doing that.\n> \n> Multisets handle duplicates uniquely, this may bring unexpected issues. Sets\n> and multisets have distinct utility in C++, Rust, Java, etc. However, sets are\n> more fundamental and prevalent in std libs than multisets.\n> \n\nWhat unexpected issues you mean? Sure, if someone uses multisets as if\nthey were sets (so ignoring the handling of duplicates), things will go\nbooom! quickly.\n\nI imagined (if we ended up doing MULTISET) we'd provide interface (e.g.\noperators) that'd allow perhaps help with this.\n\n> Despite SQL's multiset possibility, a distinct hashset type is my preference,\n> helping appropriate data structure choice and reducing misuse.\n> \n> The necessity of multisets is vague beyond standards compliance.\n> \n\nTrue - we haven't had any requests/proposal to implement MULTISETs.\n\nI've looked at the SQL standard primarily to check if maybe there's some\nprecedent that'd give us guidance on the SQL syntax etc. And I think\nmultisets are that - even if we end up not implementing them, it'd be\nsad to have unnecessarily inconsistent syntax (in case someone decides\nto add multisets in the future).\n\nWe could invent \"SET\" data type, so while standard has ARRAY / MULTISET,\nwe'd have ARRAY / MULTISET / SET, and the difference between the last\ntwo would be just handling of duplicates.\n\nThe other way to look at sets is that they are pretty similar to arrays,\nexcept that there are no duplicates and order does not matter. Sure, the\non-disk format and code is different, but from the SQL perspective it'd\nbe nice to allow using sets in most places where arrays are allowed\n(which is what the standard does for MULTISETS, more or less).\n\nThat'd mean we could probably search through gram.y for places working\nwith arrays (\"ARRAY array_expr\", \"ARRAY select_with_parens\", ...) and\nmake them work with sets too, say by having SET_SUBLINK instead of\nARRAY_SUBLINK, set_expression instead of array_expression, etc.\n\nThis might be also \"consistent\" with defining hashset type using CREATE\nTYPE with ELEMENT, because we consider the type to be \"array\". So that\nwould be polymorphic type, but we don't have pre-defined array for every\ntype (and I'm not sure we want to).\n\nOf course, maybe there's some fatal flaw in these idea, I don't know.\nAnd I don't want to move the goalposts too far - but it seems like this\nmight make some stuff actually simpler to implement (by piggy-backing on\nthe existing array infrastructure).\n\n\nA mostly unrelated thought - I wonder if this might be somehow related\nto the foreign key array patch ([1] might be the most recent attempt in\nthis direction). 
Not to hashset itself, but I recalled these patches\nbecause it'd mean we don't need the separate \"edges\" link table (so the\nhashset column would be the thing backing the FK).\n\n[1]\nhttps://www.postgresql.org/message-id/CAJvoCut7zELHnBSC8HrM6p-R6q-NiBN1STKhqnK5fPE-9%3DGq3g%40mail.gmail.com\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 19 Jun 2023 14:59:01 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> Yes, Multisets (a.k.a. bags and a large number of other names) would be \n> interesting. But I wouldn't like to abandon pure sets either. Maybe a \n> typmod indicating the allowed multiplicity of the type?\n\nI don't think trying to use typmod to carry fundamental semantic\ninformation will work, because we drop it in too many places\n(e.g. there's no way to pass it through a function). If you want\nboth sets and multisets, they'll need to be two different container\ntypes, even if code is shared under the hood.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Jun 2023 09:32:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Mon, Jun 19, 2023, at 14:59, Tomas Vondra wrote:\n> What unexpected issues you mean? Sure, if someone uses multisets as if\n> they were sets (so ignoring the handling of duplicates), things will go\n> booom! quickly.\n\nThe unexpected issues I had in mind are subtle bugs due to treating multisets\nas sets, which could go undetected due to having no duplicates initially.\nMultisets might initially therefore seem equal, but later diverge due to\ndifferent element counts, leading to hard-to-detect issues.\n\n> I imagined (if we ended up doing MULTISET) we'd provide interface (e.g.\n> operators) that'd allow perhaps help with this.\n\nMight help. But still think providing both structures would be a more foolproof\nsolution, offering users the choice to select what's best for their use-case.\n\n>> Despite SQL's multiset possibility, a distinct hashset type is my preference,\n>> helping appropriate data structure choice and reducing misuse.\n>> \n>> The necessity of multisets is vague beyond standards compliance.\n>\n> True - we haven't had any requests/proposal to implement MULTISETs.\n>\n> I've looked at the SQL standard primarily to check if maybe there's some\n> precedent that'd give us guidance on the SQL syntax etc. And I think\n> multisets are that - even if we end up not implementing them, it'd be\n> sad to have unnecessarily inconsistent syntax (in case someone decides\n> to add multisets in the future).\n>\n> We could invent \"SET\" data type, so while standard has ARRAY / MULTISET,\n> we'd have ARRAY / MULTISET / SET, and the difference between the last\n> two would be just handling of duplicates.\n\nIs the idea to use the \"SET\" keyword for the syntax?\nIsn't it a risk that will be confusing, since \"SET\" is currently\nonly used for configuration and update operations?\n\n> The other way to look at sets is that they are pretty similar to arrays,\n> except that there are no duplicates and order does not matter. Sure, the\n> on-disk format and code is different, but from the SQL perspective it'd\n> be nice to allow using sets in most places where arrays are allowed\n> (which is what the standard does for MULTISETS, more or less).\n>\n> That'd mean we could probably search through gram.y for places working\n> with arrays (\"ARRAY array_expr\", \"ARRAY select_with_parens\", ...) and\n> make them work with sets too, say by having SET_SUBLINK instead of\n> ARRAY_SUBLINK, set_expression instead of array_expression, etc.\n>\n> This might be also \"consistent\" with defining hashset type using CREATE\n> TYPE with ELEMENT, because we consider the type to be \"array\". 
So that\n> would be polymorphic type, but we don't have pre-defined array for every\n> type (and I'm not sure we want to).\n>\n> Of course, maybe there's some fatal flaw in these idea, I don't know.\n> And I don't want to move the goalposts too far - but it seems like this\n> might make some stuff actually simpler to implement (by piggy-backing on\n> the existing array infrastructure).\n\nI think it's very interesting thoughts and ambitions.\n\nI wonder though, from a user-perspective, if a new hashset type still\nwouldn't just be considered simpler, than introducing new SQL syntax?\n\nHowever, it would be interesting to see how the piggy-backing on the\nexisting array infrastructure would look in practise code-wise though.\n\nI think it's still meaningful to continue hacking on the int4-type\nhashset extension, to see if we can agree on the semantics,\nespecially around null handling and sorting.\n\n> A mostly unrelated thought - I wonder if this might be somehow related\n> to the foreign key array patch ([1] might be the most recent attempt in\n> this direction). Not to hashset itself, but I recalled these patches\n> because it'd mean we don't need the separate \"edges\" link table (so the\n> hashset column would be the think backing the FK).\n>\n> [1]\n> https://www.postgresql.org/message-id/CAJvoCut7zELHnBSC8HrM6p-R6q-NiBN1STKhqnK5fPE-9%3DGq3g%40mail.gmail.com\n\nI remember that one! We tried to revive that one, but didn't manage to keep it alive.\nIt's a really good idea though. Good idea to see if there might be synergies\nbetween arrays and hashsets in this area, since if we envision the elements in\na hashset mostly will be PKs, then it would be nice to enforce reference\nintegrity.\n\n/Joel\n\n\n\n",
"msg_date": "Tue, 20 Jun 2023 00:50:55 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "\n\nOn 6/20/23 00:50, Joel Jacobson wrote:\n> On Mon, Jun 19, 2023, at 14:59, Tomas Vondra wrote:\n>> What unexpected issues you mean? Sure, if someone uses multisets as if\n>> they were sets (so ignoring the handling of duplicates), things will go\n>> booom! quickly.\n> \n> The unexpected issues I had in mind are subtle bugs due to treating multisets\n> as sets, which could go undetected due to having no duplicates initially.\n> Multisets might initially therefore seem equal, but later diverge due to\n> different element counts, leading to hard-to-detect issues.\n> \n\nUnderstood.\n\n>> I imagined (if we ended up doing MULTISET) we'd provide interface (e.g.\n>> operators) that'd allow perhaps help with this.\n> \n> Might help. But still think providing both structures would be a more foolproof\n> solution, offering users the choice to select what's best for their use-case.\n> \n\nYeah. Not confusing people is better.\n\n>>> Despite SQL's multiset possibility, a distinct hashset type is my preference,\n>>> helping appropriate data structure choice and reducing misuse.\n>>>\n>>> The necessity of multisets is vague beyond standards compliance.\n>>\n>> True - we haven't had any requests/proposal to implement MULTISETs.\n>>\n>> I've looked at the SQL standard primarily to check if maybe there's some\n>> precedent that'd give us guidance on the SQL syntax etc. And I think\n>> multisets are that - even if we end up not implementing them, it'd be\n>> sad to have unnecessarily inconsistent syntax (in case someone decides\n>> to add multisets in the future).\n>>\n>> We could invent \"SET\" data type, so while standard has ARRAY / MULTISET,\n>> we'd have ARRAY / MULTISET / SET, and the difference between the last\n>> two would be just handling of duplicates.\n> \n> Is the idea to use the \"SET\" keyword for the syntax?\n> Isn't it a risk that will be confusing, since \"SET\" is currently\n> only used for configuration and update operations?\n> \n\nI haven't tried doing that, so not sure if there would be any conflicts\nin the grammar. But I can't think of a case that'd be confusing for\nusers - when setting internal GUC variables it's a completely different\ncontext, there's no use for SQL-level collections (arrays, sets, ...).\n\nFor UPDATE, it'd be pretty clear too, I think. It's possible to do\n\n UPDATE table SET col = SET[1,2,3]\n\nand it's clear the first is the command SET, while the second is a set\nconstructor. For SELECT there'd be conflict, and for ALTER TABLE it'd be\npossible to do\n\n ALTER TABLE table ALTER COLUMN col SET DEFAULT SET[1,2,3];\n\nSeems clear to me too, I think.\n\n\n>> The other way to look at sets is that they are pretty similar to arrays,\n>> except that there are no duplicates and order does not matter. Sure, the\n>> on-disk format and code is different, but from the SQL perspective it'd\n>> be nice to allow using sets in most places where arrays are allowed\n>> (which is what the standard does for MULTISETS, more or less).\n>>\n>> That'd mean we could probably search through gram.y for places working\n>> with arrays (\"ARRAY array_expr\", \"ARRAY select_with_parens\", ...) and\n>> make them work with sets too, say by having SET_SUBLINK instead of\n>> ARRAY_SUBLINK, set_expression instead of array_expression, etc.\n>>\n>> This might be also \"consistent\" with defining hashset type using CREATE\n>> TYPE with ELEMENT, because we consider the type to be \"array\". 
So that\n>> would be polymorphic type, but we don't have pre-defined array for every\n>> type (and I'm not sure we want to).\n>>\n>> Of course, maybe there's some fatal flaw in these idea, I don't know.\n>> And I don't want to move the goalposts too far - but it seems like this\n>> might make some stuff actually simpler to implement (by piggy-backing on\n>> the existing array infrastructure).\n> \n> I think it's very interesting thoughts and ambitions.\n> \n> I wonder though, from a user-perspective, if a new hashset type still\n> wouldn't just be considered simpler, than introducing new SQL syntax?\n> \n\nIt's a matter of personal taste, I guess. I'm fine with calling function\nAPI and what not, but a sensible SQL syntax seems nicer.\n\n> However, it would be interesting to see how the piggy-backing on the\n> existing array infrastructure would look in practise code-wise though.\n> \n> I think it's still meaningful to continue hacking on the int4-type\n> hashset extension, to see if we can agree on the semantics,\n> especially around null handling and sorting.\n> \n\nDefinitely. It certainly was not my intention to derail the work by\nproposing more and more stuff. So feel free to pursue what makes sense\nto you / helps the use case.\n\n\nTBH I don't particularly see why we'd want to sort sets.\n\nI wonder if the SQL standard says something about these things (for\nMULTISETs), especially for the NULL handling. If it does, I'd try to\nstick with those rules.\n\n>> A mostly unrelated thought - I wonder if this might be somehow related\n>> to the foreign key array patch ([1] might be the most recent attempt in\n>> this direction). Not to hashset itself, but I recalled these patches\n>> because it'd mean we don't need the separate \"edges\" link table (so the\n>> hashset column would be the think backing the FK).\n>>\n>> [1]\n>> https://www.postgresql.org/message-id/CAJvoCut7zELHnBSC8HrM6p-R6q-NiBN1STKhqnK5fPE-9%3DGq3g%40mail.gmail.com\n> \n> I remember that one! We tried to revive that one, but didn't manage to keep it alive.\n> It's a really good idea though. Good idea to see if there might be synergies\n> between arrays and hashsets in this area, since if we envision the elements in\n> a hashset mostly will be PKs, then it would be nice to enforce reference\n> integrity.\n\nI haven't followed that at all, but I wonder how difficult would it be\nto also support other collection types (like sets) and not just arrays.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 20 Jun 2023 02:04:27 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Tue, Jun 20, 2023, at 02:04, Tomas Vondra wrote:\n> For UPDATE, it'd be pretty clear too, I think. It's possible to do\n>\n> UPDATE table SET col = SET[1,2,3]\n>\n> and it's clear the first is the command SET, while the second is a set\n> constructor. For SELECT there'd be conflict, and for ALTER TABLE it'd be\n> possible to do\n>\n> ALTER TABLE table ALTER COLUMN col SET DEFAULT SET[1,2,3];\n>\n> Seems clear to me too, I think.\n...\n> It's a matter of personal taste, I guess. I'm fine with calling function\n> API and what not, but a sensible SQL syntax seems nicer.\n\nNow when I see it written out, I actually agree looks nice.\n\n>> I think it's still meaningful to continue hacking on the int4-type\n>> hashset extension, to see if we can agree on the semantics,\n>> especially around null handling and sorting.\n>> \n>\n> Definitely. It certainly was not my intention to derail the work by\n> proposing more and more stuff. So feel free to pursue what makes sense\n> to you / helps the use case.\n\nOK, cool, and didn't mean at all that you did. I appreciate the long-term\nperspective, otherwise our short-term work might go wasted.\n\n> TBH I don't particularly see why we'd want to sort sets.\n\nMe neither, sorting sets in the conventional, visually coherent sense\n(i.e., lexicographically) doesn't seem necessary. However, for ORDER BY hashset\nfunctionality, we need a we need a stable and deterministic method.\n\nThis can be achieved performance-efficiently by computing a commutative hash of\nthe hashset, XORing each new value's hash with set->hash:\n\n\t\tset->hash ^= hash;\n\n...and then sort primarily by set->hash.\n\nThough resulting in an apparently random order, this approach, already employed\nin int4hashset_add_element() and int4hashset_cmp(), ensures a deterministic and\nstable sorting order.\n\nI think this an acceptable trade-off, better than not supporting ORDER BY.\n\nJian He had some comments on hashset_cmp() which I will look at.\n\n/Joel\n\n\n",
"msg_date": "Tue, 20 Jun 2023 07:38:56 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
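A sketch of the ordering idea described in the message above, with hypothetical field names (this is not the extension's struct or comparator): the set-level hash is maintained commutatively as elements are added, and the comparator looks at that hash first, then the element count, so logically equal sets compare equal and the resulting order is stable even though it looks arbitrary.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical container header -- field names are assumptions for illustration. */
typedef struct
{
    uint32_t hash;        /* maintained incrementally as elements are added */
    int32_t  nelements;
} set_hdr;

static uint32_t elem_hash(int32_t v)
{
    uint32_t h = (uint32_t) v * 2654435761u;   /* multiplicative mix, demo only */
    return h ^ (h >> 16);
}

/* Demo add(): assumes the caller never adds a duplicate (a real set checks). */
static void add(set_hdr *s, int32_t v)
{
    s->hash ^= elem_hash(v);                   /* commutative: order-independent */
    s->nelements++;
}

/* Stable but visually arbitrary ordering: set-level hash first, then count.
 * Logically equal sets get equal hash and count, so they compare equal;
 * an element-wise tie-break could be added for rare collisions of unequal sets. */
static int set_order(const set_hdr *a, const set_hdr *b)
{
    if (a->hash != b->hash)
        return (a->hash > b->hash) ? 1 : -1;
    return (a->nelements > b->nelements) - (a->nelements < b->nelements);
}

int main(void)
{
    set_hdr a = {0, 0}, b = {0, 0};

    add(&a, 1); add(&a, 2);
    add(&b, 2); add(&b, 1);                    /* same elements, other order */

    printf("cmp = %d\n", set_order(&a, &b));   /* prints 0: equal sets sort equal */
    return 0;
}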
{
"msg_contents": "On Mon, Jun 19, 2023, at 02:00, jian he wrote:\n> select hashset_contains('{1,2}'::int4hashset,NULL::int);\n> should return null?\n\nI agree, it should.\n\nI've now changed all functions except int4hashset() (the init function)\nand the aggregate functions to be STRICT.\nI think this patch is OK to send as an incremental one, since it's an isolated change:\n\nApply STRICT to hashset functions; clean up null handling in hashset-api.c\n\nSet hashset functions to be STRICT, thereby letting the system reject null\ninputs automatically. This change reflects the nature of hashset as an\nimplementation of a set-theoretic system, where null values are conceptually\nunusual.\n\nAlongside, the hashset-api.c code has been refactored for clarity, consolidating\nnull checks and assignments into single lines.\n\nA 'strict' test case has been added to account for these changes.\n\n/Joel",
"msg_date": "Tue, 20 Jun 2023 12:59:32 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "\n\nOn 6/20/23 12:59, Joel Jacobson wrote:\n> On Mon, Jun 19, 2023, at 02:00, jian he wrote:\n>> select hashset_contains('{1,2}'::int4hashset,NULL::int);\n>> should return null?\n> \n> I agree, it should.\n> \n> I've now changed all functions except int4hashset() (the init function)\n> and the aggregate functions to be STRICT.\n\nI don't think this is correct / consistent with what we do elsewhere.\nIMHO it's perfectly fine to have a hashset containing a NULL value,\nbecause then it can affect results of membership checks.\n\nConsider these IN / ANY queries:\n\n test=# select 4 in (1,2,3);\n ?column?\n ----------\n f\n (1 row)\n\n test=# select 4 = ANY(ARRAY[1,2,3]);\n ?column?\n ----------\n f\n (1 row)\n\nnow add a NULL:\n\n test=# select 4 in (1,2,3,null);\n ?column?\n ----------\n\n (1 row)\n\n test=# select 4 = ANY(ARRAY[1,2,3,NULL]);\n ?column?\n ----------\n\n (1 row)\n\nI don't see why a (hash)set should behave any differently. It's true\narrays don't behave like this:\n\n test=# select array[1,2,3,4,NULL] @> ARRAY[5];\n ?column?\n ----------\n f\n (1 row)\n\nbut I'd say that's more an anomaly than something we should replicate.\n\nThis is also what the SQL standard does for multisets - there's SQL:20nn\ndraft at http://www.wiscorp.com/SQLStandards.html, and the <member\npredicate> section (p. 475) explains how this should work with NULL.\n\nSo if we see a set as a special case of multiset (with no duplicates),\nthen we have to handle NULLs this way too. It'd be weird to have this\nbehavior inconsistent.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 20 Jun 2023 14:10:17 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
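For reference, the IN/ANY behaviour shown above follows the usual three-valued membership rule: a match yields true; a miss yields false only when no NULL was involved, and unknown (NULL) otherwise, because the NULL might have been the sought value. A standalone C sketch of that rule (illustrative only, not the extension's contains function):

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

typedef enum { TVL_FALSE, TVL_TRUE, TVL_UNKNOWN } tvl;   /* three-valued logic */

/* Membership under three-valued logic: a hit is TRUE; a miss is FALSE only
 * if no NULL was seen, otherwise UNKNOWN ("it might have been the NULL"). */
static tvl contains(const int32_t *elems, int n, bool has_null, int32_t value)
{
    for (int i = 0; i < n; i++)
        if (elems[i] == value)
            return TVL_TRUE;
    return has_null ? TVL_UNKNOWN : TVL_FALSE;
}

int main(void)
{
    int32_t elems[] = {1, 2, 3};

    /* 4 IN (1,2,3)      -> FALSE  (prints 0)
     * 4 IN (1,2,3,NULL) -> UNKNOWN, i.e. NULL  (prints 2) */
    printf("%d %d\n",
           contains(elems, 3, false, 4),
           contains(elems, 3, true, 4));
    return 0;
}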
{
"msg_contents": "\n\nOn 6/20/23 14:10, Tomas Vondra wrote:\n> ...\n>\n> This is also what the SQL standard does for multisets - there's SQL:20nn\n> draft at http://www.wiscorp.com/SQLStandards.html, and the <member\n> predicate> section (p. 475) explains how this should work with NULL.\n> \n\nBTW I just notices there's also a multiset proposal at the wiscorp page:\n\n http://www.wiscorp.com/sqlmultisets.zip\n\nIt's just the initial proposal and I'm not sure how much it changed over\ntime, but it likely provide way more context for the choices than the\n(rather dry) SQL standard.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 20 Jun 2023 14:23:29 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Tue, Jun 20, 2023, at 14:10, Tomas Vondra wrote:\n> On 6/20/23 12:59, Joel Jacobson wrote:\n>> On Mon, Jun 19, 2023, at 02:00, jian he wrote:\n>>> select hashset_contains('{1,2}'::int4hashset,NULL::int);\n>>> should return null?\n>> \n>> I agree, it should.\n>> \n>> I've now changed all functions except int4hashset() (the init function)\n>> and the aggregate functions to be STRICT.\n>\n> I don't think this is correct / consistent with what we do elsewhere.\n> IMHO it's perfectly fine to have a hashset containing a NULL value,\n\nThe reference to consistency with what we do elsewhere might not be entirely\napplicable in this context, since the set feature we're designing is a new beast\nin the SQL landscape.\n\nI think adhering to the theoretical purity of sets by excluding NULLs aligns us\nwith set theory, simplifies our code, and parallels set implementations in other\nlanguages.\n\nI think we have an opportunity here to innovate and potentially influence a\nfuture set concept in the SQL standard.\n\nHowever, I see how one could argue against this reasoning, on the basis that\nPostgreSQL users might be more familiar with and expect NULLs can exist\neverywhere in all data structures.\n\nA different perspective is to look at what use-cases we can foresee.\n\nI've been trying hard, but I can't find compelling use-cases where a NULL element\nin a set would offer a more natural SQL query than handling NULLs within SQL and\nkeeping the set NULL-free.\n\nDoes anyone else have a strong realistic example where including NULLs in the\nset would simplify the SQL query?\n\n/Joel\n\n\n",
"msg_date": "Tue, 20 Jun 2023 16:56:12 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Tue, Jun 20, 2023, at 16:56, Joel Jacobson wrote:\n> I think we have an opportunity here to innovate and potentially influence a\n> future set concept in the SQL standard.\n\nAdding to my previous note - If there's a worry about future SQL standards\nintroducing SETs with NULLs, causing compatibility issues, we could address it\nproactively. We could set up set functions to throw errors when passed NULL\ninputs, rather than being STRICT. This keeps our theoretical alignment now, and\noffers a smooth transition if standards evolve.\n\nConsidering we have a flag field in the struct, we could use it to indicate\nwhether a value stored on disk was written with NULL support or not.\n\n/Joel\n\n\n",
"msg_date": "Tue, 20 Jun 2023 18:20:33 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "\n\nOn 6/20/23 16:56, Joel Jacobson wrote:\n> On Tue, Jun 20, 2023, at 14:10, Tomas Vondra wrote:\n>> On 6/20/23 12:59, Joel Jacobson wrote:\n>>> On Mon, Jun 19, 2023, at 02:00, jian he wrote:\n>>>> select hashset_contains('{1,2}'::int4hashset,NULL::int);\n>>>> should return null?\n>>>\n>>> I agree, it should.\n>>>\n>>> I've now changed all functions except int4hashset() (the init function)\n>>> and the aggregate functions to be STRICT.\n>>\n>> I don't think this is correct / consistent with what we do elsewhere.\n>> IMHO it's perfectly fine to have a hashset containing a NULL value,\n> \n> The reference to consistency with what we do elsewhere might not be entirely\n> applicable in this context, since the set feature we're designing is a new beast\n> in the SQL landscape.\n> \n\nI don't see how it's new, considering relational algebra is pretty much\nbased on (multi)sets, and the three-valued logic with NULL values is\npretty well established part of that.\n\n> I think adhering to the theoretical purity of sets by excluding NULLs aligns us\n> with set theory, simplifies our code, and parallels set implementations in other\n> languages.\n> \n\nI don't see how that would be more theoretically pure, really. The\nthree-valued logic is a well established part of relational algebra, so\nnot respecting that is more a violation of the purity.\n\n> I think we have an opportunity here to innovate and potentially influence a\n> future set concept in the SQL standard.\n> \n\nI doubt this going to influence what the SQL standard says, especially\nbecause it already defined the behavior for MULTISETS (of which the sets\nare a special case, pretty much). So this has 0% chance of success.\n\n> However, I see how one could argue against this reasoning, on the basis that\n> PostgreSQL users might be more familiar with and expect NULLs can exist\n> everywhere in all data structures.\n> \n\nRight, it's what we already do for similar cases, and if you have NULLS\nin the data, you better be aware of the behavior. Granted, some people\nare surprised by three-valued logic, but using a different behavior for\nsome new features would just increase the confusion.\n\n> A different perspective is to look at what use-cases we can foresee.\n> \n> I've been trying hard, but I can't find compelling use-cases where a NULL element\n> in a set would offer a more natural SQL query than handling NULLs within SQL and\n> keeping the set NULL-free.\n> \n\nIMO if you have NULL values in the data, you better be aware of it and\nhandle the case accordingly (e.g. by filtering them out when building\nthe set). If you don't have NULLs in the data, there's no issue.\n\nAnd in the graph case, I don't see why you'd have any NULLs, considering\nwe're dealing with adjacent nodes, and if there's adjacent node, it's ID\nis not NULL.\n\n> Does anyone else have a strong realistic example where including NULLs in the\n> set would simplify the SQL query?\n> \n\nI'm sure there are cases where you have NULLs in the dat aand need to\nfilter them out, but that's just natural consequence of having NULLs. If\nyou have them you better know what NULLs do ...\n\nIt's too early to make any strong statements, but it's going to be hard\nto convince me we should handle NULLs differently from what we already\ndo elsewhere.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 20 Jun 2023 18:25:52 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 12:25 AM Tomas Vondra\n<[email protected]> wrote:\n>\n>\n>\n> On 6/20/23 16:56, Joel Jacobson wrote:\n> > On Tue, Jun 20, 2023, at 14:10, Tomas Vondra wrote:\n> >> On 6/20/23 12:59, Joel Jacobson wrote:\n> >>> On Mon, Jun 19, 2023, at 02:00, jian he wrote:\n> >>>> select hashset_contains('{1,2}'::int4hashset,NULL::int);\n> >>>> should return null?\n> >>>\n> >>> I agree, it should.\n> >>>\n> >>> I've now changed all functions except int4hashset() (the init function)\n> >>> and the aggregate functions to be STRICT.\n> >>\n> >> I don't think this is correct / consistent with what we do elsewhere.\n> >> IMHO it's perfectly fine to have a hashset containing a NULL value,\n> >\n> > The reference to consistency with what we do elsewhere might not be entirely\n> > applicable in this context, since the set feature we're designing is a new beast\n> > in the SQL landscape.\n> >\n>\n> I don't see how it's new, considering relational algebra is pretty much\n> based on (multi)sets, and the three-valued logic with NULL values is\n> pretty well established part of that.\n>\n> > I think adhering to the theoretical purity of sets by excluding NULLs aligns us\n> > with set theory, simplifies our code, and parallels set implementations in other\n> > languages.\n> >\n>\n> I don't see how that would be more theoretically pure, really. The\n> three-valued logic is a well established part of relational algebra, so\n> not respecting that is more a violation of the purity.\n>\n> > I think we have an opportunity here to innovate and potentially influence a\n> > future set concept in the SQL standard.\n> >\n>\n> I doubt this going to influence what the SQL standard says, especially\n> because it already defined the behavior for MULTISETS (of which the sets\n> are a special case, pretty much). So this has 0% chance of success.\n>\n> > However, I see how one could argue against this reasoning, on the basis that\n> > PostgreSQL users might be more familiar with and expect NULLs can exist\n> > everywhere in all data structures.\n> >\n>\n> Right, it's what we already do for similar cases, and if you have NULLS\n> in the data, you better be aware of the behavior. Granted, some people\n> are surprised by three-valued logic, but using a different behavior for\n> some new features would just increase the confusion.\n>\n> > A different perspective is to look at what use-cases we can foresee.\n> >\n> > I've been trying hard, but I can't find compelling use-cases where a NULL element\n> > in a set would offer a more natural SQL query than handling NULLs within SQL and\n> > keeping the set NULL-free.\n> >\n>\n> IMO if you have NULL values in the data, you better be aware of it and\n> handle the case accordingly (e.g. by filtering them out when building\n> the set). If you don't have NULLs in the data, there's no issue.\n>\n> And in the graph case, I don't see why you'd have any NULLs, considering\n> we're dealing with adjacent nodes, and if there's adjacent node, it's ID\n> is not NULL.\n>\n> > Does anyone else have a strong realistic example where including NULLs in the\n> > set would simplify the SQL query?\n> >\n>\n> I'm sure there are cases where you have NULLs in the dat aand need to\n> filter them out, but that's just natural consequence of having NULLs. 
If\n> you have them you better know what NULLs do ...\n>\n> It's too early to make any strong statements, but it's going to be hard\n> to convince me we should handle NULLs differently from what we already\n> do elsewhere.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\n> http://www.wiscorp.com/sqlmultisets.zip\n\n> Conceptually, a multiset is an unordered collection of elements, all of the same type, with dupli-\n> cates permitted. Unlike arrays, a multiset is an unbounded collection, with no declared maximum\n> cardinality. This does not mean that the user can insert elements in a multiset without limit, just\n> that the standard does not mandate that there should be a limit. This is analogous to tables, which\n> have no declared maximum number of rows.\n\nPostgres arrays don't have size limits.\nunordered means no need to use subscript?\nSo multiset is a more limited array type?\n\nnull is fine. but personally I feel like so far the hashset main\nfeature is the quickly aggregate unique value using hashset.\nI found using hashset count distinct (non null values) is quite faster.\n\n\n",
"msg_date": "Wed, 21 Jun 2023 02:08:48 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On 6/20/23 20:08, jian he wrote:\n> On Wed, Jun 21, 2023 at 12:25 AM Tomas Vondra\n> ...\n>> http://www.wiscorp.com/sqlmultisets.zip\n> \n>> Conceptually, a multiset is an unordered collection of elements, all of the same type, with dupli-\n>> cates permitted. Unlike arrays, a multiset is an unbounded collection, with no declared maximum\n>> cardinality. This does not mean that the user can insert elements in a multiset without limit, just\n>> that the standard does not mandate that there should be a limit. This is analogous to tables, which\n>> have no declared maximum number of rows.\n> \n> Postgres arrays don't have size limits.\n\nRight. You can say int[5] but we don't enforce that limit (I haven't\nchecked why, but presumably because we had arrays before the standard\nexisted, and it was more like a list in LISP or something.)\n\n> unordered means no need to use subscript?\n\nYeah - there's no obvious way to subscript the items when there's no\nimplicit ordering.\n\n> So multiset is a more limited array type?\n> \n\nYes and no - both are collection types, so there are similarities and\ndifferences. Multiset does not need to keep the ordering, so in this\nsense it's a relaxed version of array.\n\n\n> null is fine. but personally I feel like so far the hashset main\n> feature is the quickly aggregate unique value using hashset.\n> I found using hashset count distinct (non null values) is quite faster.\n\nTrue. That's related to fast membership checks.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 20 Jun 2023 20:43:24 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
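A quick demonstration of the unenforced array size limit mentioned above, runnable on stock PostgreSQL (arr_demo is just a throwaway example name):

CREATE TEMP TABLE arr_demo (a int[5]);
INSERT INTO arr_demo VALUES ('{1,2,3,4,5,6,7}');  -- accepted, the declared bound is not enforced
SELECT cardinality(a) FROM arr_demo;              -- 7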
{
"msg_contents": "On Tue, Jun 20, 2023, at 18:25, Tomas Vondra wrote:\n> On 6/20/23 16:56, Joel Jacobson wrote:\n>> The reference to consistency with what we do elsewhere might not be entirely\n>> applicable in this context, since the set feature we're designing is a new beast\n>> in the SQL landscape.\n>\n> I don't see how it's new, considering relational algebra is pretty much\n> based on (multi)sets, and the three-valued logic with NULL values is\n> pretty well established part of that.\n\nWhat I meant was that the SET-feature is new; since it doesn't exist in PostgreSQL nor SQL.\n\n>> I think adhering to the theoretical purity of sets by excluding NULLs aligns us\n>> with set theory, simplifies our code, and parallels set implementations in other\n>> languages.\n>\n> I don't see how that would be more theoretically pure, really. The\n> three-valued logic is a well established part of relational algebra, so\n> not respecting that is more a violation of the purity.\n\nHmm, I think it's pure in different ways;\nSet Theory is well established and is based on two-values logic,\nbut at the same time SQL's three-valued logic is also well established.\n\n>> I think we have an opportunity here to innovate and potentially influence a\n>> future set concept in the SQL standard.\n>\n> I doubt this going to influence what the SQL standard says, especially\n> because it already defined the behavior for MULTISETS (of which the sets\n> are a special case, pretty much). So this has 0% chance of success.\n\nOK. 0% is 1% too low for me to work with. :)\n\n>> However, I see how one could argue against this reasoning, on the basis that\n>> PostgreSQL users might be more familiar with and expect NULLs can exist\n>> everywhere in all data structures.\n>\n> Right, it's what we already do for similar cases, and if you have NULLS\n> in the data, you better be aware of the behavior. Granted, some people\n> are surprised by three-valued logic, but using a different behavior for\n> some new features would just increase the confusion.\n\nGood point.\n\n>> I've been trying hard, but I can't find compelling use-cases where a NULL element\n>> in a set would offer a more natural SQL query than handling NULLs within SQL and\n>> keeping the set NULL-free.\n>\n> IMO if you have NULL values in the data, you better be aware of it and\n> handle the case accordingly (e.g. by filtering them out when building\n> the set). If you don't have NULLs in the data, there's no issue.\n\nAs long as the data model and queries would ensure there can never be\nany NULLs, fine, then there's is no issue.\n\n> And in the graph case, I don't see why you'd have any NULLs, considering\n> we're dealing with adjacent nodes, and if there's adjacent node, it's ID\n> is not NULL.\n\nMe neither, can't see the need for any NULLs there.\n\n>> Does anyone else have a strong realistic example where including NULLs in the\n>> set would simplify the SQL query?\n>\n> I'm sure there are cases where you have NULLs in the dat aand need to\n> filter them out, but that's just natural consequence of having NULLs. 
If\n> you have them you better know what NULLs do ...\n\nWhat I tried to find was an example of when you wouldn't want to\nfilter out the NULLs, when you would want to include the NULL\nin the set.\n\nIf we could just find one such realistic use-case, that would be very\nhelpful, since it would then kill my argument completely that we couldn't\ndo without storing a NULL in the set.\n\n> It's too early to make any strong statements, but it's going to be hard\n> to convince me we should handle NULLs differently from what we already\n> do elsewhere.\n\nI think it's a trade-off, and I don't have any strong preference for the simplicity\nof a classical two-valued set-theoretic system vs a three-valued\nmultiset-based one. I was 51/49 but given your feedback I'm now 49/51.\n\nI think the next step is to think about how the hashset type should work\nwith three-valued logic, and then implement it to get a feeling for it.\n\nFor instance, how should hashset_count() work?\n\nGiven the query,\n\nSELECT hashset_count('{1,2,3,null}'::int4hashset);\n\nShould we,\n\na) treat NULL as a distinct value and return 4?\n\nb) ignore NULL and return 3?\n\nc) return NULL? (since the presence of NULL can be thought to render the entire count indeterminate)\n\nI think my personal preference is (b) since it is then consistent with how COUNT() works.\n\n/Joel\n\n\n",
"msg_date": "Thu, 22 Jun 2023 07:51:28 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
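For comparison, option (b) above would match how the built-in aggregates already treat NULL; a quick check on stock PostgreSQL:

SELECT count(x), count(*), count(DISTINCT x)
FROM (VALUES (1), (2), (3), (NULL)) t(x);
-- count(x) = 3 (NULL ignored), count(*) = 4, count(DISTINCT x) = 3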
{
"msg_contents": "On Tue, Jun 20, 2023, at 14:10, Tomas Vondra wrote:\n> This is also what the SQL standard does for multisets - there's SQL:20nn\n> draft at http://www.wiscorp.com/SQLStandards.html, and the <member\n> predicate> section (p. 475) explains how this should work with NULL.\n\nI've looked again at the paper you mentioned and found something intriguing\nin section 2.6 (b). I'm a bit puzzled about this: why would we want to return\nnull when we're certain it's not null but just doesn't have any elements?\n\nIn the same vein, it says, \"If it has more than one element, an exception is\nraised.\" Makes sense to me, but what about when there are no elements at all?\nWhy not raise an exception in that case too?\n\nThe ELEMENT function is designed to do one simple thing: return the element of\na multiset if the multiset has only 1 element. This seems very similar to how\nour INTO STRICT operates, right?\n\nThe SQL:20nn seems to still be in draft form, and I can't help but wonder if we\nshould propose a bit of an improvement here:\n\n\"If it doesn't have exactly one element, an exception is raised.\"\n\nMeaning, it would raise an exception both if there are more elements,\nor zero elements (no elements).\n\nI think this would make the semantics more intuitive and less surprising.\n\n/Joel\n\n\n",
"msg_date": "Thu, 22 Jun 2023 19:52:10 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
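The INTO STRICT parallel drawn above is easy to check on stock PostgreSQL; PL/pgSQL raises in both the zero-row and the multi-row case (editorial illustration):

DO $$
DECLARE v int;
BEGIN
  SELECT x INTO STRICT v FROM (VALUES (1), (2)) t(x) WHERE x > 10;
  -- ERROR:  query returned no rows
END $$;

DO $$
DECLARE v int;
BEGIN
  SELECT x INTO STRICT v FROM (VALUES (1), (2)) t(x);
  -- ERROR:  query returned more than one row
END $$;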
{
"msg_contents": "On 6/22/23 19:52, Joel Jacobson wrote:\n> On Tue, Jun 20, 2023, at 14:10, Tomas Vondra wrote:\n>> This is also what the SQL standard does for multisets - there's SQL:20nn\n>> draft at http://www.wiscorp.com/SQLStandards.html, and the <member\n>> predicate> section (p. 475) explains how this should work with NULL.\n> \n> I've looked again at the paper you mentioned and found something intriguing\n> in section 2.6 (b). I'm a bit puzzled about this: why would we want to return\n> null when we're certain it's not null but just doesn't have any elements?\n> \n> In the same vein, it says, \"If it has more than one element, an exception is\n> raised.\" Makes sense to me, but what about when there are no elements at all?\n> Why not raise an exception in that case too?\n> \n> The ELEMENT function is designed to do one simple thing: return the element of\n> a multiset if the multiset has only 1 element. This seems very similar to how\n> our INTO STRICT operates, right?\n> \n\nI agree this looks a bit weird, but that's what I mentioned - this is an\ninitial a proposal, outlining the idea. Inevitably some of the stuff\nwill get reworked or just left out of the final version. It's useful\nmostly to explain the motivation / goal.\n\nI believe that's the case here - I don't think the ELEMENT got into the\nstandard at all, and the NULL rules for the MEMBER OF clause seem not to\nhave these strange bits.\n\n> The SQL:20nn seems to still be in draft form, and I can't help but wonder if we\n> should propose a bit of an improvement here:\n> \n> \"If it doesn't have exactly one element, an exception is raised.\"\n> \n> Meaning, it would raise an exception both if there are more elements,\n> or zero elements (no elements).\n> \n> I think this would make the semantics more intuitive and less surprising.\n> \n\nWell, the simple truth is the draft is freely available, but you'd need\nto buy the final version. It doesn't mean it's still being worked on or\nthat no SQL standard was released since then. In fact, SQL 2023 was\nreleased a couple weeks ago [1].\n\nIt'd be interesting to know the version that actually got into the SQL\nstandard (if at all), but I don't have access to the standard yet.\n\nregards\n\n\n[1] https://www.iso.org/standard/76584.html\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 23 Jun 2023 00:27:00 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "I played around array_func.c\nmany of the code can be used for multiset data type.\nnow I imagine multiset as something like one dimension array. (nested is\nsomehow beyond the imagination...).\n\n* A standard varlena array has the following internal structure:\n * <vl_len_> - standard varlena header word\n * <ndim> - number of dimensions of the array\n * <dataoffset> - offset to stored data, or 0 if no nulls bitmap\n * <elemtype> - element type OID\n * <dimensions> - length of each array axis (C array of int)\n * <lower bnds> - lower boundary of each dimension (C array of int)\n * <null bitmap> - bitmap showing locations of nulls (OPTIONAL)\n * <actual data> - whatever is the stored data\n\nin set/multiset, we don't need {ndim,lower bnds}, since we are only one\ndimension, also we don't need subscript.\nSo for set we can have following\n* int32 vl_len_; /* varlena header (do not touch directly!) */\n* int32 capacity; /* # of capacity */\n* int32 dataoffset; /* offset to data, or 0 if no bitmap */\n* int32 nelements; /* number of items added to the hashset */\n* Oid elemtype; /* element type OID */\n * <null bitmap> - bitmap showing locations of nulls (OPTIONAL)\n * <bitmap> - bitmap showing this slot is empty or not ( I am not sure\nthis part)\n * <actual data> - whatever is the stored data\n\nmany of the code in array_func.c can be reused.\narray_isspace ==> set_isspace\nArrayMetaState ==> SetMetastate\nArrayCount ==> SetCount (similar to ArrayCount return the dimension\nof set, should be zero (empty set) or one)\nArrayParseState ==> SetParseState\nReadArrayStr ==> ReadSetStr\n\nattached is a demo shows that use array_func.c to parse cstring. have\nsimilar effect of array_in.\nfor multiset_in set type input function. if no duplicate required then\nmultiset_in would just like array, so more code can be copied from\narray_func.c\nbut if unique required then we need first palloc0(capacity * datums (size\nper type)) then put valid value into to a specific slot?\n\n\n\n\nOn Fri, Jun 23, 2023 at 6:27 AM Tomas Vondra <[email protected]>\nwrote:\n\n> On 6/22/23 19:52, Joel Jacobson wrote:\n> > On Tue, Jun 20, 2023, at 14:10, Tomas Vondra wrote:\n> >> This is also what the SQL standard does for multisets - there's SQL:20nn\n> >> draft at http://www.wiscorp.com/SQLStandards.html, and the <member\n> >> predicate> section (p. 475) explains how this should work with NULL.\n> >\n> > I've looked again at the paper you mentioned and found something\n> intriguing\n> > in section 2.6 (b). I'm a bit puzzled about this: why would we want to\n> return\n> > null when we're certain it's not null but just doesn't have any elements?\n> >\n> > In the same vein, it says, \"If it has more than one element, an\n> exception is\n> > raised.\" Makes sense to me, but what about when there are no elements at\n> all?\n> > Why not raise an exception in that case too?\n> >\n> > The ELEMENT function is designed to do one simple thing: return the\n> element of\n> > a multiset if the multiset has only 1 element. This seems very similar\n> to how\n> > our INTO STRICT operates, right?\n> >\n>\n> I agree this looks a bit weird, but that's what I mentioned - this is an\n> initial a proposal, outlining the idea. Inevitably some of the stuff\n> will get reworked or just left out of the final version. 
It's useful\n> mostly to explain the motivation / goal.\n>\n> I believe that's the case here - I don't think the ELEMENT got into the\n> standard at all, and the NULL rules for the MEMBER OF clause seem not to\n> have these strange bits.\n>\n> > The SQL:20nn seems to still be in draft form, and I can't help but\n> wonder if we\n> > should propose a bit of an improvement here:\n> >\n> > \"If it doesn't have exactly one element, an exception is raised.\"\n> >\n> > Meaning, it would raise an exception both if there are more elements,\n> > or zero elements (no elements).\n> >\n> > I think this would make the semantics more intuitive and less surprising.\n> >\n>\n> Well, the simple truth is the draft is freely available, but you'd need\n> to buy the final version. It doesn't mean it's still being worked on or\n> that no SQL standard was released since then. In fact, SQL 2023 was\n> released a couple weeks ago [1].\n>\n> It'd be interesting to know the version that actually got into the SQL\n> standard (if at all), but I don't have access to the standard yet.\n>\n> regards\n>\n>\n> [1] https://www.iso.org/standard/76584.html\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\n\n-- \n I recommend David Deutsch's <<The Beginning of Infinity>>\n\n Jian",
"msg_date": "Fri, 23 Jun 2023 14:40:34 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Fri, Jun 23, 2023, at 08:40, jian he wrote:\n> I played around array_func.c\n> many of the code can be used for multiset data type.\n> now I imagine multiset as something like one dimension array. (nested \n> is somehow beyond the imagination...).\n\nAre you suggesting it might be a better idea to start over completely\nand work on a new code base that is based on arrayfuncs.c,\nand aim for MULTISET/SET or anyhashset from start, that would not\nonly support int4/int8/uuid but any type?\n\n/Joel\n\n\n",
"msg_date": "Fri, 23 Jun 2023 10:23:14 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Fri, Jun 23, 2023 at 4:23 PM Joel Jacobson <[email protected]> wrote:\n\n> On Fri, Jun 23, 2023, at 08:40, jian he wrote:\n> > I played around array_func.c\n> > many of the code can be used for multiset data type.\n> > now I imagine multiset as something like one dimension array. (nested\n> > is somehow beyond the imagination...).\n>\n> Are you suggesting it might be a better idea to start over completely\n> and work on a new code base that is based on arrayfuncs.c,\n> and aim for MULTISET/SET or anyhashset from start, that would not\n> only support int4/int8/uuid but any type?\n>\n> /Joel\n>\n\nselect prosrc from pg_proc where proname ~*\n'(hash.*extended)|(extended.*hash)';\nreturn around 30 rows.\nso it's a bit generic?\n\nI tend to think set/multiset as a one dimension array, so the textual input\nshould be like a one dimension array.\nuse array_func.c functions to parse and validate the input.\nSo different types, one input validation function.\n\nDoes this make sense?\n\nOn Fri, Jun 23, 2023 at 4:23 PM Joel Jacobson <[email protected]> wrote:On Fri, Jun 23, 2023, at 08:40, jian he wrote:\n> I played around array_func.c\n> many of the code can be used for multiset data type.\n> now I imagine multiset as something like one dimension array. (nested \n> is somehow beyond the imagination...).\n\nAre you suggesting it might be a better idea to start over completely\nand work on a new code base that is based on arrayfuncs.c,\nand aim for MULTISET/SET or anyhashset from start, that would not\nonly support int4/int8/uuid but any type?\n\n/Joel\nselect prosrc from pg_proc where proname ~* '(hash.*extended)|(extended.*hash)';return around 30 rows.so it's a bit generic? I tend to think set/multiset as a one dimension array, so the textual input should be like a one dimension array. use array_func.c functions to parse and validate the input.So different types, one input validation function. Does this make sense?",
"msg_date": "Fri, 23 Jun 2023 16:52:27 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On 2023-06-23 Fr 04:23, Joel Jacobson wrote:\n> On Fri, Jun 23, 2023, at 08:40, jian he wrote:\n>> I played around array_func.c\n>> many of the code can be used for multiset data type.\n>> now I imagine multiset as something like one dimension array. (nested\n>> is somehow beyond the imagination...).\n> Are you suggesting it might be a better idea to start over completely\n> and work on a new code base that is based on arrayfuncs.c,\n> and aim for MULTISET/SET or anyhashset from start, that would not\n> only support int4/int8/uuid but any type?\n>\n\nBefore we run too far down this rabbit hole, let's discuss the storage \nimplications of using multisets. ISTM that for small base datums like \nintegers it will be a substantial increase in size, since you'll need an \naddition int for the item count, unless some very clever tricks are played.\n\nAs for this older discussion referred to upthread, if the SQL Standards \nCommittee hasn't acted on it by now it seem reasonable to think they are \nunlikely to.\n\nJust for reference, Here's some description of Oracle's suport for \nMultisets from \n<https://docs.oracle.com/en/database/oracle/oracle-database/23/sqlrf/Oracle-Support-for-Optional-Features-of-SQLFoundation2011.html#GUID-3BA98AEC-FAAD-4F21-A6AD-F696B5D36D56>:\n\n> Multisets in the standard are supported as nested table types in \n> Oracle. The Oracle nested table data type based on a scalar type ST is \n> equivalent, in standard terminology, to a multiset of rows having a \n> single field of type ST and named column_value. The Oracle nested \n> table type based on an object type is equivalent to a multiset of \n> structured type in the standard.\n>\n> Oracle supports the following elements of this feature on nested \n> tables using the same syntax as the standard has for multisets:\n>\n> The CARDINALITY function\n>\n> The SET function\n>\n> The MEMBER predicate\n>\n> The IS A SET predicate\n>\n> The COLLECT aggregate\n>\n> All other aspects of this feature are supported with non-standard \n> syntax, as follows:\n>\n> To create an empty multiset, denoted MULTISET[] in the standard, \n> use an empty constructor of the nested table type.\n>\n> To obtain the sole element of a multiset with one element, denoted \n> ELEMENT (<multiset value expression>) in the standard, use a scalar \n> subquery to select the single element from the nested table.\n>\n> To construct a multiset by enumeration, use the constructor of the \n> nested table type.\n>\n> To construct a multiset by query, use CAST with a multiset \n> argument, casting to the nested table type.\n>\n> To unnest a multiset, use the TABLE operator in the FROM clause.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-23 Fr 04:23, Joel Jacobson\n wrote:\n\n\nOn Fri, Jun 23, 2023, at 08:40, jian he wrote:\n\n\nI played around array_func.c\nmany of the code can be used for multiset data type.\nnow I imagine multiset as something like one dimension array. (nested \nis somehow beyond the imagination...).\n\n\n\nAre you suggesting it might be a better idea to start over completely\nand work on a new code base that is based on arrayfuncs.c,\nand aim for MULTISET/SET or anyhashset from start, that would not\nonly support int4/int8/uuid but any type?\n\n\n\n\n\nBefore we run too far down this rabbit hole, let's discuss the\n storage implications of using multisets. 
ISTM that for small base\n datums like integers it will be a substantial increase in size,\n since you'll need an addition int for the item count, unless some\n very clever tricks are played.\nAs for this older discussion referred to upthread, if the SQL\n Standards Committee hasn't acted on it by now it seem reasonable\n to think they are unlikely to.\nJust for reference, Here's some description of Oracle's suport\n for Multisets from\n<https://docs.oracle.com/en/database/oracle/oracle-database/23/sqlrf/Oracle-Support-for-Optional-Features-of-SQLFoundation2011.html#GUID-3BA98AEC-FAAD-4F21-A6AD-F696B5D36D56>:\n\nMultisets in the standard are supported as\n nested table types in Oracle. The Oracle nested table data type\n based on a scalar type ST is equivalent, in standard\n terminology, to a multiset of rows having a single field of type\n ST and named column_value. The Oracle nested table type based on\n an object type is equivalent to a multiset of structured type in\n the standard.\n\n Oracle supports the following elements of this feature on nested\n tables using the same syntax as the standard has for multisets:\n\n The CARDINALITY function\n\n The SET function\n\n The MEMBER predicate\n\n The IS A SET predicate\n\n The COLLECT aggregate\n\n All other aspects of this feature are supported with\n non-standard syntax, as follows:\n\n To create an empty multiset, denoted MULTISET[] in the\n standard, use an empty constructor of the nested table type.\n\n To obtain the sole element of a multiset with one element,\n denoted ELEMENT (<multiset value expression>) in the\n standard, use a scalar subquery to select the single element\n from the nested table.\n\n To construct a multiset by enumeration, use the constructor\n of the nested table type.\n\n To construct a multiset by query, use CAST with a multiset\n argument, casting to the nested table type.\n\n To unnest a multiset, use the TABLE operator in the FROM\n clause.\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 23 Jun 2023 07:47:50 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "\n\nOn 6/23/23 13:47, Andrew Dunstan wrote:\n> \n> On 2023-06-23 Fr 04:23, Joel Jacobson wrote:\n>> On Fri, Jun 23, 2023, at 08:40, jian he wrote:\n>>> I played around array_func.c\n>>> many of the code can be used for multiset data type.\n>>> now I imagine multiset as something like one dimension array. (nested \n>>> is somehow beyond the imagination...).\n>> Are you suggesting it might be a better idea to start over completely\n>> and work on a new code base that is based on arrayfuncs.c,\n>> and aim for MULTISET/SET or anyhashset from start, that would not\n>> only support int4/int8/uuid but any type?\n>>\n> \n> Before we run too far down this rabbit hole, let's discuss the storage\n> implications of using multisets. ISTM that for small base datums like\n> integers it will be a substantial increase in size, since you'll need an\n> addition int for the item count, unless some very clever tricks are played.\n> \n\nI honestly don't quite understand what exactly is meant by the proposal\nto \"reuse array_func.c for multisets\". We're implementing sets, not\nmultisets (those were mentioned only to illustrate behavior). And the\nwhole point is that sets are not arrays - no duplicates, ordering does\nnot matter (so no index).\n\nI mentioned that maybe we can model sets based on arrays (say, gram.y\nwould do similar stuff for SET[] and ARRAY[], polymorphism), not that we\nshould store sets as arrays. Would it be possible - maybe, if we extend\narrays to also maintain some hash hash table. But I'd bet that'll just\nmake arrays more complex, and will make sets slower.\n\nOr maybe I just don't understand the proposal. Perhaps it'd be best if\njian wrote a patch illustrating the idea, and showing how it performs\ncompared to the current approach.\n\nAs for the storage size, I don't think an extra \"count\" field would make\nany measurable difference. If we're storing a hash table, we're bound to\nhave a couple percent of wasted space due to load factor (likely between\n0.75 and 0.9).\n\n> As for this older discussion referred to upthread, if the SQL Standards\n> Committee hasn't acted on it by now it seem reasonable to think they are\n> unlikely to.\n> \n\nAFAIK multisets are included in SQL 2023, pretty much matching the draft\nwe discussed earlier. Yeah, it's unlikely to change in the future.\n\n> Just for reference, Here's some description of Oracle's suport for\n> Multisets from\n> <https://docs.oracle.com/en/database/oracle/oracle-database/23/sqlrf/Oracle-Support-for-Optional-Features-of-SQLFoundation2011.html#GUID-3BA98AEC-FAAD-4F21-A6AD-F696B5D36D56>:\n> \n\ngood to know\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 23 Jun 2023 15:12:37 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Thu, Jun 22, 2023, at 07:51, Joel Jacobson wrote:\n> For instance, how should hashset_count() work?\n>\n> Given the query,\n>\n> SELECT hashset_count('{1,2,3,null}'::int4hashset);\n>\n> Should we,\n>\n> a) threat NULL as a distinct value and return 4?\n>\n> b) ignore NULL and return 3?\n>\n> c) return NULL? (since the presence of NULL can be thought to render \n> the entire count indeterminate)\n>\n> I think my personal preference is (b) since it is then consistent with \n> how COUNT() works.\n\nHaving thought a bit more on this matter,\nI think it's better to remove hashset_count() since the semantics are not obvious,\nand instead provide a hashset_cardinality() function, that would obviously\ninclude a possible null value in the number of elements:\n\nSELECT hashset_cardinality('{1,2,3,null}'::int4hashset);\n4\n\nSELECT hashset_cardinality('{null}'::int4hashset);\n1\n\nSELECT hashset_cardinality('{null,null}'::int4hashset);\n1\n\nSELECT hashset_cardinality('{}'::int4hashset);\n0\n\nSELECT hashset_cardinality(NULL::int4hashset);\nNULL\n\nSounds good?\n\n/Joel\n\n\n",
"msg_date": "Sat, 24 Jun 2023 10:33:25 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
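The proposed semantics mirror cardinality() on arrays, with the difference that the set collapses duplicate elements (including duplicate NULLs); on stock PostgreSQL:

SELECT cardinality(ARRAY[1,2,3,NULL]);   -- 4
SELECT cardinality('{}'::int[]);         -- 0
SELECT cardinality(NULL::int[]);         -- NULL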
{
"msg_contents": "New version of int4hashset_contains() that should follow the same\nGeneral Rules as MULTISET's MEMBER OF (8.16 <member predicate>).\n\nThe first rule is to return False if the cardinality is 0 (zero).\nHowever, we must first check if the first argument is null,\nin which case the cardinality cannot be 0 (zero),\nso if the first argument is null then we return Unknown\n(represented as null).\n\nWe then proceed and check if the set is empty,\nwhich is defined as nelements being 0 (zero)\nas well as the new null_element field being false.\nIf the set is empty, then we always return False,\nregardless of the second argument, that is,\neven if it would be null we would still return False,\nsince the set is empty and can therefore not contain\nany element.\n\nThe second rule is to return Unknown (represented as null)\nif any of the arguments are null. We've already checked that\nthe first argument is not null, so now we check the second\nargument, and return Unknown (represented as null) if it is null.\n\nThe third rule is to check for the element, and return True if\nthe set contains the element. Otherwise, if the set contains\nthe null element, we don't know if the element we're checking\nfor is in the set, so we then return Unknown (represented as null).\nFinally, if the set doesn't contain the null element and nor the\nelement we're checking for, then we return False.\n\nDatum\nint4hashset_contains(PG_FUNCTION_ARGS)\n{\n\tint4hashset_t *set;\n\tint32\t\t\tvalue;\n\tbool\t\t\tresult;\n\n\tif (PG_ARGISNULL(0))\n\t\tPG_RETURN_NULL();\n\n\tset = PG_GETARG_INT4HASHSET(0);\n\n\tif (set->nelements == 0 && !set->null_element)\n\t\tPG_RETURN_BOOL(false);\n\n\tif (PG_ARGISNULL(1))\n\t\tPG_RETURN_NULL();\n\n\tvalue = PG_GETARG_INT32(1);\n\tresult = int4hashset_contains_element(set, value);\n\n\tif (!result && set->null_element)\n\t\tPG_RETURN_NULL();\n\n\tPG_RETURN_BOOL(result);\n}\n\nExample queries and expected results:\n\nSELECT hashset_contains(NULL::int4hashset, NULL::int); -- null\nSELECT hashset_contains(NULL::int4hashset, 1::int); -- null\nSELECT hashset_contains('{}'::int4hashset, NULL::int); -- false\nSELECT hashset_contains('{}'::int4hashset, 1::int); -- false\nSELECT hashset_contains('{null}'::int4hashset, NULL::int); -- null\nSELECT hashset_contains('{null}'::int4hashset, 1::int); -- null\nSELECT hashset_contains('{1}'::int4hashset, NULL::int); -- null\nSELECT hashset_contains('{1}'::int4hashset, 1::int); -- true\nSELECT hashset_contains('{1}'::int4hashset, 2::int); -- false\n\nLooks good?\n\n/Joel\n\n\n",
"msg_date": "Sat, 24 Jun 2023 21:16:30 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Sat, Jun 24, 2023, at 21:16, Joel Jacobson wrote:\n> New version of int4hashset_contains() that should follow the same\n> General Rules as MULTISET's MEMBER OF (8.16 <member predicate>).\n...\n> SELECT hashset_contains('{}'::int4hashset, NULL::int); -- false\n...\n> SELECT hashset_contains('{null}'::int4hashset, NULL::int); -- null\n\nWhen it comes to SQL, the general rule of thumb is that expressions and functions \nhandling null usually return the null value. This is why it might feel a bit out \nof the ordinary to return False when checking if an empty set contains NULL.\n\nHowever, that's my understanding of the General Rules on page 553 of \nISO/IEC 9075-2:2023(E). Rule 3 Case a) specifically states:\n\n \"If N is 0 (zero), then the <member predicate> is False.\",\n\nwhere N is the cardinality, and for an empty set, that's 0 (zero).\n\nRule 3 Case b) goes on to say:\n\n \"If at least one of XV and MV is the null value, then the \n <member predicate> is Unknown.\"\n\nBut since b) follows a), and the condition for a) already matches, b) is out of \nthe running. This leads me to believe that the result of:\n\n SELECT hashset_contains('{}'::int4hashset, NULL::int);\n\nwould be False, according to the General Rules.\n\nNow, this is based on the assumption that the Case conditions are evaluated in \nsequence, stopping at the first match. Does that assumption hold water?\n\nApplying the same rules, we'd have to return Unknown (which we represent as\nnull) for:\n\n SELECT hashset_contains('{null}'::int4hashset, NULL::int);\n\nHere, since the cardinality N is 1, Case a) doesn't apply, but Case b) does \nsince XV is null.\n\nLooking ahead, we're entertaining the possibility of a future SET SQL-syntax\nfeature and wondering how our hashset type could be adapted to be compatible and\nreusable for such a development. It's a common prediction that any future SET\nsyntax feature would probably operate on Three-Valued Logic. Therefore, it's key\nfor our hashset to handle null values, whether storing, identifying, or adding\nthem.\n\nBut here's my two cents, and remember it's just a personal viewpoint. I'm not so \nsure that the hashset type functions need to mirror the corresponding MULTISET \nlanguage constructs exactly. In my book, our hashset catalog functions could\ntake a more clear-cut route with null handling, as long as our data structure is\nprepared to handle null values.\n\nThink about this possibility:\n\nhashset_contains_null(int4hashset) -> boolean\nhashset_add_null(int4hashset) -> int4hashset\nhashset_contains(..., NULL) -> ERROR\nhashset_add(..., NULL) -> ERROR\n\nIn my mind, this explicit null handling could simplify things, clear up any\npotential confusion, and at the same time pave the way for compatibility with\nany future SET SQL-syntax feature.\n\nThoughts?\n\n/Joel\n\n\n",
"msg_date": "Sun, 25 Jun 2023 11:42:52 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "> Or maybe I just don't understand the proposal. Perhaps it'd be best if\n> jian wrote a patch illustrating the idea, and showing how it performs\n> compared to the current approach.\n\ncurrently joel's idea is a int4hashset. based on the code first tomas wrote.\nit looks like a non-nested an collection of unique int4. external text\nformat looks like {int4, int4,int4}\nstructure looks like (header + capacity slots * int4).\nWithin the capacity slots, some slots are empty, some have unique values.\n\nThe textual int4hashset looks like a one dimensional array.\nso I copied/imitated src/backend/utils/adt/arrayfuncs.c code, rewrote a\nslight generic hashset input and output function.\n\nsee the attached c file.\nIt works fine for non-null input output for {int4hashset, int8hashset,\ntimestamphashset,intervalhashset,uuidhashset).",
"msg_date": "Sun, 25 Jun 2023 21:32:08 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "\n\nOn 6/25/23 15:32, jian he wrote:\n>> Or maybe I just don't understand the proposal. Perhaps it'd be best if\n>> jian wrote a patch illustrating the idea, and showing how it performs\n>> compared to the current approach.\n> \n> currently joel's idea is a int4hashset. based on the code first tomas wrote.\n> it looks like a non-nested an collection of unique int4. external text\n> format looks like {int4, int4,int4}\n> structure looks like (header + capacity slots * int4).\n> Within the capacity slots, some slots are empty, some have unique values.\n> \n> The textual int4hashset looks like a one dimensional array.\n> so I copied/imitated src/backend/utils/adt/arrayfuncs.c code, rewrote a\n> slight generic hashset input and output function.\n> \n> see the attached c file.\n> It works fine for non-null input output for {int4hashset, int8hashset,\n> timestamphashset,intervalhashset,uuidhashset).\n\nSo how do you define a table with a \"set\" column? I mean, with the\noriginal patch we could have done\n\n CREATE TABLE (a int4hashset);\n\nand then store / query this. How do you do that with this approach?\n\nI've looked at the patch only very briefly - it's really difficult to\ngrok such patches - large, with half the comments possibly obsolete etc.\nSo what does reusing the array code give us, really?\n\nI'm not against reusing some of the array code, but arrays seem to be\nmuch more elaborate (multiple dimensions, ...) so the code needs to do\nsignificantly more stuff in various cases.\n\nWhen I previously suggested that maybe we should get \"inspiration\" from\nthe array code, I was mostly talking about (a) type polymorphism, i.e.\ndoing sets for arbitrary types, and (b) integrating this into grammar\n(instead of using functions).\n\nI don't see how copying arrayfuncs.c like this achieves either of these\nthings. It still hardcodes just a handful of selected data types, and\nthe array polymorphism relies on automatic creation of array type for\nevery scalar type.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 25 Jun 2023 20:56:30 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
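To make the question concrete: with the original patch in this thread the type is used like any other column type. A sketch, assuming the int4hashset extension from the patch is installed (graph_node is just an example table name):

CREATE TABLE graph_node (node_id int PRIMARY KEY, neighbors int4hashset);
INSERT INTO graph_node VALUES (1, '{2,3}');
SELECT hashset_contains(neighbors, 3) FROM graph_node WHERE node_id = 1;  -- true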
{
"msg_contents": "On Sun, Jun 25, 2023, at 11:42, Joel Jacobson wrote:\n> SELECT hashset_contains('{}'::int4hashset, NULL::int);\n>\n> would be False, according to the General Rules.\n>\n...\n> Applying the same rules, we'd have to return Unknown (which we represent as\n> null) for:\n>\n> SELECT hashset_contains('{null}'::int4hashset, NULL::int);\n>\n\nAha! I just discovered to my surprise that the corresponding array\nqueries gives the same result:\n\nSELECT NULL = ANY(ARRAY[]::int[]);\n ?column?\n----------\n f\n(1 row)\n\nSELECT NULL = ANY(ARRAY[NULL]::int[]);\n ?column?\n----------\n\n(1 row)\n\nI have no more objections; let's stick to the same null semantics as arrays and multisets.\n\n/Joel\n\n\n",
"msg_date": "Sun, 25 Jun 2023 22:35:47 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
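A couple more data points in the same vein; x = ANY(array) already produces the three-valued results the proposed hashset_contains() is aiming for:

SELECT 1 = ANY(ARRAY[1, NULL]);   -- true
SELECT 2 = ANY(ARRAY[1, NULL]);   -- null (the NULL element could be 2)
SELECT 2 = ANY(ARRAY[1, 3]);      -- false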
{
"msg_contents": "On Mon, Jun 26, 2023 at 2:56 AM Tomas Vondra <[email protected]>\nwrote:\n>\n>\n>\n> On 6/25/23 15:32, jian he wrote:\n> >> Or maybe I just don't understand the proposal. Perhaps it'd be best if\n> >> jian wrote a patch illustrating the idea, and showing how it performs\n> >> compared to the current approach.\n> >\n> > currently joel's idea is a int4hashset. based on the code first tomas\nwrote.\n> > it looks like a non-nested an collection of unique int4. external text\n> > format looks like {int4, int4,int4}\n> > structure looks like (header + capacity slots * int4).\n> > Within the capacity slots, some slots are empty, some have unique\nvalues.\n> >\n> > The textual int4hashset looks like a one dimensional array.\n> > so I copied/imitated src/backend/utils/adt/arrayfuncs.c code, rewrote a\n> > slight generic hashset input and output function.\n> >\n> > see the attached c file.\n> > It works fine for non-null input output for {int4hashset, int8hashset,\n> > timestamphashset,intervalhashset,uuidhashset).\n>\n> So how do you define a table with a \"set\" column? I mean, with the\n> original patch we could have done\n>\n> CREATE TABLE (a int4hashset);\n>\n> and then store / query this. How do you do that with this approach?\n>\n> I've looked at the patch only very briefly - it's really difficult to\n> grok such patches - large, with half the comments possibly obsolete etc.\n> So what does reusing the array code give us, really?\n>\n> I'm not against reusing some of the array code, but arrays seem to be\n> much more elaborate (multiple dimensions, ...) so the code needs to do\n> significantly more stuff in various cases.\n>\n> When I previously suggested that maybe we should get \"inspiration\" from\n> the array code, I was mostly talking about (a) type polymorphism, i.e.\n> doing sets for arbitrary types, and (b) integrating this into grammar\n> (instead of using functions).\n>\n> I don't see how copying arrayfuncs.c like this achieves either of these\n> things. It still hardcodes just a handful of selected data types, and\n> the array polymorphism relies on automatic creation of array type for\n> every scalar type.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\nYou are right.\nI misread sql-createtype.html about type input_function that can take 3\narguments (cstring, oid, integer) part.\nI thought while creating data types, I can pass different params to the\ninput_function.\n\nOn Mon, Jun 26, 2023 at 2:56 AM Tomas Vondra <[email protected]> wrote:>>>> On 6/25/23 15:32, jian he wrote:> >> Or maybe I just don't understand the proposal. Perhaps it'd be best if> >> jian wrote a patch illustrating the idea, and showing how it performs> >> compared to the current approach.> >> > currently joel's idea is a int4hashset. based on the code first tomas wrote.> > it looks like a non-nested an collection of unique int4. external text> > format looks like {int4, int4,int4}> > structure looks like (header + capacity slots * int4).> > Within the capacity slots, some slots are empty, some have unique values.> >> > The textual int4hashset looks like a one dimensional array.> > so I copied/imitated src/backend/utils/adt/arrayfuncs.c code, rewrote a> > slight generic hashset input and output function.> >> > see the attached c file.> > It works fine for non-null input output for {int4hashset, int8hashset,> > timestamphashset,intervalhashset,uuidhashset).>> So how do you define a table with a \"set\" column? 
I mean, with the> original patch we could have done>> CREATE TABLE (a int4hashset);>> and then store / query this. How do you do that with this approach?>> I've looked at the patch only very briefly - it's really difficult to> grok such patches - large, with half the comments possibly obsolete etc.> So what does reusing the array code give us, really?>> I'm not against reusing some of the array code, but arrays seem to be> much more elaborate (multiple dimensions, ...) so the code needs to do> significantly more stuff in various cases.>> When I previously suggested that maybe we should get \"inspiration\" from> the array code, I was mostly talking about (a) type polymorphism, i.e.> doing sets for arbitrary types, and (b) integrating this into grammar> (instead of using functions).>> I don't see how copying arrayfuncs.c like this achieves either of these> things. It still hardcodes just a handful of selected data types, and> the array polymorphism relies on automatic creation of array type for> every scalar type.>>> regards>> --> Tomas Vondra> EnterpriseDB: http://www.enterprisedb.com> The Enterprise PostgreSQL CompanyYou are right.I misread sql-createtype.html about type input_function that can take 3 arguments (cstring, oid, integer) part.I thought while creating data types, I can pass different params to the input_function.",
"msg_date": "Mon, 26 Jun 2023 10:04:05 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
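For reference, the three-argument input function signature being discussed can be inspected directly for the array code (output assumed to be the bare type list, since array_in has no named parameters):

SELECT pg_get_function_arguments('array_in'::regproc);   -- cstring, oid, integer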
{
"msg_contents": "On Mon, Jun 26, 2023 at 4:36 AM Joel Jacobson <[email protected]> wrote:\n>\n> On Sun, Jun 25, 2023, at 11:42, Joel Jacobson wrote:\n> > SELECT hashset_contains('{}'::int4hashset, NULL::int);\n> >\n> > would be False, according to the General Rules.\n> >\n> ...\n> > Applying the same rules, we'd have to return Unknown (which we\nrepresent as\n> > null) for:\n> >\n> > SELECT hashset_contains('{null}'::int4hashset, NULL::int);\n> >\n>\n> Aha! I just discovered to my surprise that the corresponding array\n> queries gives the same result:\n>\n> SELECT NULL = ANY(ARRAY[]::int[]);\n> ?column?\n> ----------\n> f\n> (1 row)\n>\n> SELECT NULL = ANY(ARRAY[NULL]::int[]);\n> ?column?\n> ----------\n>\n> (1 row)\n>\n> I have no more objections; let's stick to the same null semantics as\narrays and multisets.\n>\n> /Joel\n\nCan you try to glue the attached to the hashset data type input function.\nthe attached will parse cstring with double quote and not. so '{1,2,3}' ==\n'{\"1\",\"2\",\"3\"}'. obviously quote will preserve the inner string as is.\ncurrently int4hashset input is delimited by comma, if you want deal with\nrange then you need escape the comma.",
"msg_date": "Mon, 26 Jun 2023 19:06:21 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
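The quoting behavior described above is the same as the array input syntax, which can serve as a reference point on stock PostgreSQL:

SELECT '{1,2,3}'::int[] = '{"1","2","3"}'::int[];   -- true
SELECT '{"a,b",c}'::text[];                         -- {"a,b",c}: the quotes protect the embedded comma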
{
"msg_contents": "On Mon, Jun 26, 2023, at 13:06, jian he wrote:\n> Can you try to glue the attached to the hashset data type input \n> function.\n> the attached will parse cstring with double quote and not. so '{1,2,3}' \n> == '{\"1\",\"2\",\"3\"}'. obviously quote will preserve the inner string as \n> is.\n> currently int4hashset input is delimited by comma, if you want deal \n> with range then you need escape the comma.\n\nNot sure what you're trying to do here; what's the problem with\nthe current int4hashset_in()?\n\nI think it might be best to focus on null semantics / three-valued logic\nbefore moving on and trying to implement support for more types,\notherwise we would need to rewrite more code if we find general\nthinkos that are problems in all types.\n\nHelp wanted to reason about what the following queries should return:\n\nSELECT hashset_union(NULL::int4hashset, '{}'::int4hashset);\n\nSELECT hashset_intersection(NULL::int4hashset, '{}'::int4hashset);\n\nSELECT hashset_difference(NULL::int4hashset, '{}'::int4hashset);\n\nSELECT hashset_symmetric_difference(NULL::int4hashset, '{}'::int4hashset);\n\nShould they return NULL, the empty set or something else?\n\nI've renamed hashset_merge() -> hashset_union() to better match\nSQL's MULTISET feature which has a MULTISET UNION.\n\n/Joel\n\n\n",
"msg_date": "Mon, 26 Jun 2023 22:55:03 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Mon, Jun 26, 2023 at 4:55 PM Joel Jacobson <[email protected]> wrote:\n\n> On Mon, Jun 26, 2023, at 13:06, jian he wrote:\n> > Can you try to glue the attached to the hashset data type input\n> > function.\n> > the attached will parse cstring with double quote and not. so '{1,2,3}'\n> > == '{\"1\",\"2\",\"3\"}'. obviously quote will preserve the inner string as\n> > is.\n> > currently int4hashset input is delimited by comma, if you want deal\n> > with range then you need escape the comma.\n>\n> Not sure what you're trying to do here; what's the problem with\n> the current int4hashset_in()?\n>\n> I think it might be best to focus on null semantics / three-valued logic\n> before moving on and trying to implement support for more types,\n> otherwise we would need to rewrite more code if we find general\n> thinkos that are problems in all types.\n>\n> Help wanted to reason about what the following queries should return:\n>\n> SELECT hashset_union(NULL::int4hashset, '{}'::int4hashset);\n>\n> SELECT hashset_intersection(NULL::int4hashset, '{}'::int4hashset);\n>\n> SELECT hashset_difference(NULL::int4hashset, '{}'::int4hashset);\n>\n> SELECT hashset_symmetric_difference(NULL::int4hashset, '{}'::int4hashset);\n>\n> Should they return NULL, the empty set or something else?\n>\n> I've renamed hashset_merge() -> hashset_union() to better match\n> SQL's MULTISET feature which has a MULTISET UNION.\n>\n\nShouldn't they return the same thing that left(NULL::text,1) returns?\n(NULL)...\nTypically any operation on NULL is NULL.\n\nKirk...\n\nOn Mon, Jun 26, 2023 at 4:55 PM Joel Jacobson <[email protected]> wrote:On Mon, Jun 26, 2023, at 13:06, jian he wrote:\n> Can you try to glue the attached to the hashset data type input \n> function.\n> the attached will parse cstring with double quote and not. so '{1,2,3}' \n> == '{\"1\",\"2\",\"3\"}'. obviously quote will preserve the inner string as \n> is.\n> currently int4hashset input is delimited by comma, if you want deal \n> with range then you need escape the comma.\n\nNot sure what you're trying to do here; what's the problem with\nthe current int4hashset_in()?\n\nI think it might be best to focus on null semantics / three-valued logic\nbefore moving on and trying to implement support for more types,\notherwise we would need to rewrite more code if we find general\nthinkos that are problems in all types.\n\nHelp wanted to reason about what the following queries should return:\n\nSELECT hashset_union(NULL::int4hashset, '{}'::int4hashset);\n\nSELECT hashset_intersection(NULL::int4hashset, '{}'::int4hashset);\n\nSELECT hashset_difference(NULL::int4hashset, '{}'::int4hashset);\n\nSELECT hashset_symmetric_difference(NULL::int4hashset, '{}'::int4hashset);\n\nShould they return NULL, the empty set or something else?\n\nI've renamed hashset_merge() -> hashset_union() to better match\nSQL's MULTISET feature which has a MULTISET UNION.Shouldn't they return the same thing that left(NULL::text,1) returns? (NULL)...Typically any operation on NULL is NULL.Kirk...",
"msg_date": "Mon, 26 Jun 2023 18:26:42 -0400",
"msg_from": "Kirk Wolak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Tue, Jun 27, 2023 at 4:55 AM Joel Jacobson <[email protected]> wrote:\n>\n> On Mon, Jun 26, 2023, at 13:06, jian he wrote:\n> > Can you try to glue the attached to the hashset data type input\n> > function.\n> > the attached will parse cstring with double quote and not. so '{1,2,3}'\n> > == '{\"1\",\"2\",\"3\"}'. obviously quote will preserve the inner string as\n> > is.\n> > currently int4hashset input is delimited by comma, if you want deal\n> > with range then you need escape the comma.\n>\n> Not sure what you're trying to do here; what's the problem with\n> the current int4hashset_in()?\n>\n> I think it might be best to focus on null semantics / three-valued logic\n> before moving on and trying to implement support for more types,\n> otherwise we would need to rewrite more code if we find general\n> thinkos that are problems in all types.\n>\n> Help wanted to reason about what the following queries should return:\n>\n> SELECT hashset_union(NULL::int4hashset, '{}'::int4hashset);\n>\n> SELECT hashset_intersection(NULL::int4hashset, '{}'::int4hashset);\n>\n> SELECT hashset_difference(NULL::int4hashset, '{}'::int4hashset);\n>\n> SELECT hashset_symmetric_difference(NULL::int4hashset, '{}'::int4hashset);\n>\n> Should they return NULL, the empty set or something else?\n>\n> I've renamed hashset_merge() -> hashset_union() to better match\n> SQL's MULTISET feature which has a MULTISET UNION.\n>\n> /Joel\n\nin SQLMultiSets.pdf(previously thread) I found a related explanation\non page 45, 46.\n\n(CASE WHEN OP1 IS NULL OR OP2 IS NULL THEN NULL ELSE MULTISET ( SELECT\nT1.V FROM UNNEST (OP1) AS T1 (V) INTERSECT SQ SELECT T2.V FROM UNNEST\n(OP2) AS T2 (V) ) END)\n\nCASE WHEN OP1 IS NULL OR OP2 IS NULL THEN NULL ELSE MULTISET ( SELECT\nT1.V FROM UNNEST (OP1) AS T1 (V) UNION SQ SELECT T2.V FROM UNNEST\n(OP2) AS T2 (V) ) END\n\n(CASE WHEN OP1 IS NULL OR OP2 IS NULL THEN NULL ELSE MULTISET ( SELECT\nT1.V FROM UNNEST (OP1) AS T1 (V) EXCEPT SQ SELECT T2.V FROM UNNEST\n(OP2) AS T2 (V) ) END)\n\nIn page11,\n>\n> Unlike the corresponding table operators UNION, INTERSECT and EXCEPT, we have chosen ALL as the default, since this is the most natural interpretation of MULTISET UNION, etc\n\nalso in page 11 aggregate name FUSION. (I like the name.................)\n\n\n",
"msg_date": "Tue, 27 Jun 2023 10:35:53 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
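The quoted rules translate fairly directly to stock SQL over arrays. A rough sketch of the UNION case (op1/op2 are placeholder names; duplicates are removed here since the thread is about sets rather than multisets):

SELECT CASE
         WHEN op1 IS NULL OR op2 IS NULL THEN NULL
         ELSE (SELECT array_agg(v)
               FROM (SELECT unnest(op1) AS v
                     UNION
                     SELECT unnest(op2)) s)
       END AS set_union
FROM (VALUES (ARRAY[1,2], ARRAY[2,3])) t(op1, op2);
-- {1,2,3} (element order not significant; note that array_agg() over zero rows
-- yields NULL rather than '{}', so the empty-set case would need COALESCE)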
{
"msg_contents": "On Tue, Jun 27, 2023, at 04:35, jian he wrote:\n> in SQLMultiSets.pdf(previously thread) I found a related explanation\n> on page 45, 46.\n>\n> (CASE WHEN OP1 IS NULL OR OP2 IS NULL THEN NULL ELSE MULTISET ( SELECT\n> T1.V FROM UNNEST (OP1) AS T1 (V) INTERSECT SQ SELECT T2.V FROM UNNEST\n> (OP2) AS T2 (V) ) END)\n>\n> CASE WHEN OP1 IS NULL OR OP2 IS NULL THEN NULL ELSE MULTISET ( SELECT\n> T1.V FROM UNNEST (OP1) AS T1 (V) UNION SQ SELECT T2.V FROM UNNEST\n> (OP2) AS T2 (V) ) END\n>\n> (CASE WHEN OP1 IS NULL OR OP2 IS NULL THEN NULL ELSE MULTISET ( SELECT\n> T1.V FROM UNNEST (OP1) AS T1 (V) EXCEPT SQ SELECT T2.V FROM UNNEST\n> (OP2) AS T2 (V) ) END)\n\nThanks! This was exactly what I was looking for, I knew I've seen it but failed to find it.\n\nAttached is a new incremental patch as well as a full patch, since this is a substantial change:\n\n Align null semantics with SQL:2023 array and multiset standards\n\n * Introduced a new boolean field, null_element, in the int4hashset_t type.\n\n * Rename hashset_count() to hashset_cardinality().\n\n * Rename hashset_merge() to hashset_union().\n\n * Rename hashset_equals() to hashset_eq().\n\n * Rename hashset_neq() to hashset_ne().\n\n * Add hashset_to_sorted_array().\n\n * Handle null semantics to work as in arrays and multisets.\n\n * Update int4hashset_add() to allow creating a new set if none exists.\n\n * Use more portable int32 typedef instead of int32_t.\n\n This also adds a thorough test suite in array-and-multiset-semantics.sql,\n which aims to test all relevant combinations of operations and values.\n\n Makefile | 2 +-\n README.md | 6 ++--\n hashset--0.0.1.sql | 37 +++++++++++---------\n hashset-api.c | 208 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--------------------------\n hashset.c | 12 ++++++-\n hashset.h | 11 +++---\n test/expected/array-and-multiset-semantics.out | 365 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n test/expected/basic.out | 12 +++----\n test/expected/reported_bugs.out | 6 ++--\n test/expected/strict.out | 114 ------------------------------------------------------------\n test/expected/table.out | 8 ++---\n test/sql/array-and-multiset-semantics.sql | 232 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n test/sql/basic.sql | 4 +--\n test/sql/benchmark.sql | 14 ++++----\n test/sql/reported_bugs.sql | 6 ++--\n test/sql/strict.sql | 32 -----------------\n test/sql/table.sql | 2 +-\n 17 files changed, 823 insertions(+), 248 deletions(-)\n\n/Joel",
"msg_date": "Tue, 27 Jun 2023 10:26:44 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Tue, Jun 27, 2023, at 10:26, Joel Jacobson wrote:\n> Attachments:\n> * hashset-0.0.1-b7e5614-full.patch\n> * hashset-0.0.1-b7e5614-incremental.patch\n\nTo help verify that the semantics, I thought it might be helpful to provide\na comprehensive set of examples that tries to cover all different ways of varying\nthe arguments to the functions.\n\nPlease let me know if you find any possible errors or if you think it looks good.\n\nSELECT NULL::int4hashset;\n int4hashset\n-------------\n\n(1 row)\n\nSELECT '{}'::int4hashset;\n int4hashset\n-------------\n {}\n(1 row)\n\nSELECT int4hashset();\n int4hashset\n-------------\n {}\n(1 row)\n\nSELECT '{NULL}'::int4hashset;\n int4hashset\n-------------\n {NULL}\n(1 row)\n\nSELECT '{NULL,NULL}'::int4hashset;\n int4hashset\n-------------\n {NULL}\n(1 row)\n\nSELECT '{1,3,2,NULL,2,NULL,3,1}'::int4hashset;\n int4hashset\n--------------\n {2,1,3,NULL}\n(1 row)\n\nSELECT hashset_add(NULL, NULL);\n hashset_add\n-------------\n {NULL}\n(1 row)\n\nSELECT hashset_add(NULL, 1);\n hashset_add\n-------------\n {1}\n(1 row)\n\nSELECT hashset_add('{}', 1);\n hashset_add\n-------------\n {1}\n(1 row)\n\nSELECT hashset_add('{NULL}', 1);\n hashset_add\n-------------\n {1,NULL}\n(1 row)\n\nSELECT hashset_add('{1}', 1);\n hashset_add\n-------------\n {1}\n(1 row)\n\nSELECT hashset_add('{1}', 2);\n hashset_add\n-------------\n {1,2}\n(1 row)\n\nSELECT hashset_add('{1}', NULL);\n hashset_add\n-------------\n {1,NULL}\n(1 row)\n\nSELECT hashset_contains(NULL, NULL);\n hashset_contains\n------------------\n\n(1 row)\n\nSELECT hashset_contains('{}', NULL);\n hashset_contains\n------------------\n f\n(1 row)\n\nSELECT hashset_contains('{NULL}', NULL);\n hashset_contains\n------------------\n\n(1 row)\n\nSELECT hashset_contains('{1}', 1);\n hashset_contains\n------------------\n t\n(1 row)\n\nSELECT hashset_contains('{1,NULL}', 1);\n hashset_contains\n------------------\n t\n(1 row)\n\nSELECT hashset_contains('{1}', 2);\n hashset_contains\n------------------\n f\n(1 row)\n\nSELECT hashset_contains('{1,NULL}', 2);\n hashset_contains\n------------------\n\n(1 row)\n\nSELECT hashset_to_array(NULL);\n hashset_to_array\n------------------\n\n(1 row)\n\nSELECT hashset_to_array('{}');\n hashset_to_array\n------------------\n {}\n(1 row)\n\nSELECT hashset_to_array('{NULL}');\n hashset_to_array\n------------------\n {NULL}\n(1 row)\n\nSELECT hashset_to_array('{3,1,NULL,2}');\n hashset_to_array\n------------------\n {1,3,2,NULL}\n(1 row)\n\nSELECT hashset_to_sorted_array(NULL);\n hashset_to_sorted_array\n-------------------------\n\n(1 row)\n\nSELECT hashset_to_sorted_array('{}');\n hashset_to_sorted_array\n-------------------------\n {}\n(1 row)\n\nSELECT hashset_to_sorted_array('{NULL}');\n hashset_to_sorted_array\n-------------------------\n {NULL}\n(1 row)\n\nSELECT hashset_to_sorted_array('{3,1,NULL,2}');\n hashset_to_sorted_array\n-------------------------\n {1,2,3,NULL}\n(1 row)\n\nSELECT hashset_cardinality(NULL);\n hashset_cardinality\n---------------------\n\n(1 row)\n\nSELECT hashset_cardinality('{}');\n hashset_cardinality\n---------------------\n 0\n(1 row)\n\nSELECT hashset_cardinality('{NULL}');\n hashset_cardinality\n---------------------\n 1\n(1 row)\n\nSELECT hashset_cardinality('{NULL,NULL}');\n hashset_cardinality\n---------------------\n 1\n(1 row)\n\nSELECT hashset_cardinality('{1}');\n hashset_cardinality\n---------------------\n 1\n(1 row)\n\nSELECT hashset_cardinality('{1,1}');\n hashset_cardinality\n---------------------\n 1\n(1 
row)\n\nSELECT hashset_cardinality('{1,2}');\n hashset_cardinality\n---------------------\n 2\n(1 row)\n\nSELECT hashset_cardinality('{1,2,NULL}');\n hashset_cardinality\n---------------------\n 3\n(1 row)\n\nSELECT hashset_union(NULL, NULL);\n hashset_union\n---------------\n\n(1 row)\n\nSELECT hashset_union(NULL, '{}');\n hashset_union\n---------------\n\n(1 row)\n\nSELECT hashset_union('{}', NULL);\n hashset_union\n---------------\n\n(1 row)\n\nSELECT hashset_union('{}', '{}');\n hashset_union\n---------------\n {}\n(1 row)\n\nSELECT hashset_union('{}', '{NULL}');\n hashset_union\n---------------\n {NULL}\n(1 row)\n\nSELECT hashset_union('{NULL}', '{}');\n hashset_union\n---------------\n {NULL}\n(1 row)\n\nSELECT hashset_union('{NULL}', '{NULL}');\n hashset_union\n---------------\n {NULL}\n(1 row)\n\nSELECT hashset_union('{}', '{1}');\n hashset_union\n---------------\n {1}\n(1 row)\n\nSELECT hashset_union('{1}', '{}');\n hashset_union\n---------------\n {1}\n(1 row)\n\nSELECT hashset_union('{1}', '{1}');\n hashset_union\n---------------\n {1}\n(1 row)\n\nSELECT hashset_union('{1}', NULL);\n hashset_union\n---------------\n\n(1 row)\n\nSELECT hashset_union(NULL, '{1}');\n hashset_union\n---------------\n\n(1 row)\n\nSELECT hashset_union('{1}', '{NULL}');\n hashset_union\n---------------\n {1,NULL}\n(1 row)\n\nSELECT hashset_union('{NULL}', '{1}');\n hashset_union\n---------------\n {1,NULL}\n(1 row)\n\nSELECT hashset_union('{1}', '{2}');\n hashset_union\n---------------\n {1,2}\n(1 row)\n\nSELECT hashset_union('{1,2}', '{2,3}');\n hashset_union\n---------------\n {3,1,2}\n(1 row)\n\nSELECT hashset_intersection(NULL, NULL);\n hashset_intersection\n----------------------\n\n(1 row)\n\nSELECT hashset_intersection(NULL, '{}');\n hashset_intersection\n----------------------\n\n(1 row)\n\nSELECT hashset_intersection('{}', NULL);\n hashset_intersection\n----------------------\n\n(1 row)\n\nSELECT hashset_intersection('{}', '{}');\n hashset_intersection\n----------------------\n {}\n(1 row)\n\nSELECT hashset_intersection('{}', '{NULL}');\n hashset_intersection\n----------------------\n {}\n(1 row)\n\nSELECT hashset_intersection('{NULL}', '{}');\n hashset_intersection\n----------------------\n {}\n(1 row)\n\nSELECT hashset_intersection('{NULL}', '{NULL}');\n hashset_intersection\n----------------------\n {NULL}\n(1 row)\n\nSELECT hashset_intersection('{}', '{1}');\n hashset_intersection\n----------------------\n {}\n(1 row)\n\nSELECT hashset_intersection('{1}', '{}');\n hashset_intersection\n----------------------\n {}\n(1 row)\n\nSELECT hashset_intersection('{1}', '{1}');\n hashset_intersection\n----------------------\n {1}\n(1 row)\n\nSELECT hashset_intersection('{1}', NULL);\n hashset_intersection\n----------------------\n\n(1 row)\n\nSELECT hashset_intersection(NULL, '{1}');\n hashset_intersection\n----------------------\n\n(1 row)\n\nSELECT hashset_intersection('{1}', '{NULL}');\n hashset_intersection\n----------------------\n {}\n(1 row)\n\nSELECT hashset_intersection('{NULL}', '{1}');\n hashset_intersection\n----------------------\n {}\n(1 row)\n\nSELECT hashset_intersection('{1}', '{2}');\n hashset_intersection\n----------------------\n {}\n(1 row)\n\nSELECT hashset_intersection('{1,2}', '{2,3}');\n hashset_intersection\n----------------------\n {2}\n(1 row)\n\nSELECT hashset_difference(NULL, NULL);\n hashset_difference\n--------------------\n\n(1 row)\n\nSELECT hashset_difference(NULL, '{}');\n hashset_difference\n--------------------\n\n(1 row)\n\nSELECT hashset_difference('{}', 
NULL);\n hashset_difference\n--------------------\n\n(1 row)\n\nSELECT hashset_difference('{}', '{}');\n hashset_difference\n--------------------\n {}\n(1 row)\n\nSELECT hashset_difference('{}', '{NULL}');\n hashset_difference\n--------------------\n {}\n(1 row)\n\nSELECT hashset_difference('{NULL}', '{}');\n hashset_difference\n--------------------\n {NULL}\n(1 row)\n\nSELECT hashset_difference('{NULL}', '{NULL}');\n hashset_difference\n--------------------\n {}\n(1 row)\n\nSELECT hashset_difference('{}', '{1}');\n hashset_difference\n--------------------\n {}\n(1 row)\n\nSELECT hashset_difference('{1}', '{}');\n hashset_difference\n--------------------\n {1}\n(1 row)\n\nSELECT hashset_difference('{1}', '{1}');\n hashset_difference\n--------------------\n {}\n(1 row)\n\nSELECT hashset_difference('{1}', NULL);\n hashset_difference\n--------------------\n\n(1 row)\n\nSELECT hashset_difference(NULL, '{1}');\n hashset_difference\n--------------------\n\n(1 row)\n\nSELECT hashset_difference('{1}', '{NULL}');\n hashset_difference\n--------------------\n {1}\n(1 row)\n\nSELECT hashset_difference('{NULL}', '{1}');\n hashset_difference\n--------------------\n {NULL}\n(1 row)\n\nSELECT hashset_difference('{1}', '{2}');\n hashset_difference\n--------------------\n {1}\n(1 row)\n\nSELECT hashset_difference('{1,2}', '{2,3}');\n hashset_difference\n--------------------\n {1}\n(1 row)\n\nSELECT hashset_symmetric_difference(NULL, NULL);\n hashset_symmetric_difference\n------------------------------\n\n(1 row)\n\nSELECT hashset_symmetric_difference(NULL, '{}');\n hashset_symmetric_difference\n------------------------------\n\n(1 row)\n\nSELECT hashset_symmetric_difference('{}', NULL);\n hashset_symmetric_difference\n------------------------------\n\n(1 row)\n\nSELECT hashset_symmetric_difference('{}', '{}');\n hashset_symmetric_difference\n------------------------------\n {}\n(1 row)\n\nSELECT hashset_symmetric_difference('{}', '{NULL}');\n hashset_symmetric_difference\n------------------------------\n {NULL}\n(1 row)\n\nSELECT hashset_symmetric_difference('{NULL}', '{}');\n hashset_symmetric_difference\n------------------------------\n {NULL}\n(1 row)\n\nSELECT hashset_symmetric_difference('{NULL}', '{NULL}');\n hashset_symmetric_difference\n------------------------------\n {}\n(1 row)\n\nSELECT hashset_symmetric_difference('{}', '{1}');\n hashset_symmetric_difference\n------------------------------\n {1}\n(1 row)\n\nSELECT hashset_symmetric_difference('{1}', '{}');\n hashset_symmetric_difference\n------------------------------\n {1}\n(1 row)\n\nSELECT hashset_symmetric_difference('{1}', '{1}');\n hashset_symmetric_difference\n------------------------------\n {}\n(1 row)\n\nSELECT hashset_symmetric_difference('{1}', NULL);\n hashset_symmetric_difference\n------------------------------\n\n(1 row)\n\nSELECT hashset_symmetric_difference(NULL, '{1}');\n hashset_symmetric_difference\n------------------------------\n\n(1 row)\n\nSELECT hashset_symmetric_difference('{1}', '{NULL}');\n hashset_symmetric_difference\n------------------------------\n {1,NULL}\n(1 row)\n\nSELECT hashset_symmetric_difference('{NULL}', '{1}');\n hashset_symmetric_difference\n------------------------------\n {1,NULL}\n(1 row)\n\nSELECT hashset_symmetric_difference('{1}', '{2}');\n hashset_symmetric_difference\n------------------------------\n {1,2}\n(1 row)\n\nSELECT hashset_symmetric_difference('{1,2}', '{2,3}');\n hashset_symmetric_difference\n------------------------------\n {1,3}\n(1 row)\n\nSELECT hashset_eq(NULL, NULL);\n 
hashset_eq\n------------\n\n(1 row)\n\nSELECT hashset_eq(NULL, '{}');\n hashset_eq\n------------\n\n(1 row)\n\nSELECT hashset_eq('{}', NULL);\n hashset_eq\n------------\n\n(1 row)\n\nSELECT hashset_eq('{}', '{}');\n hashset_eq\n------------\n t\n(1 row)\n\nSELECT hashset_eq('{}', '{NULL}');\n hashset_eq\n------------\n f\n(1 row)\n\nSELECT hashset_eq('{NULL}', '{}');\n hashset_eq\n------------\n f\n(1 row)\n\nSELECT hashset_eq('{NULL}', '{NULL}');\n hashset_eq\n------------\n t\n(1 row)\n\nSELECT hashset_eq('{}', '{1}');\n hashset_eq\n------------\n f\n(1 row)\n\nSELECT hashset_eq('{1}', '{}');\n hashset_eq\n------------\n f\n(1 row)\n\nSELECT hashset_eq('{1}', '{1}');\n hashset_eq\n------------\n t\n(1 row)\n\nSELECT hashset_eq('{1}', NULL);\n hashset_eq\n------------\n\n(1 row)\n\nSELECT hashset_eq(NULL, '{1}');\n hashset_eq\n------------\n\n(1 row)\n\nSELECT hashset_eq('{1}', '{NULL}');\n hashset_eq\n------------\n f\n(1 row)\n\nSELECT hashset_eq('{NULL}', '{1}');\n hashset_eq\n------------\n f\n(1 row)\n\nSELECT hashset_eq('{1}', '{2}');\n hashset_eq\n------------\n f\n(1 row)\n\nSELECT hashset_eq('{1,2}', '{2,3}');\n hashset_eq\n------------\n f\n(1 row)\n\nSELECT hashset_ne(NULL, NULL);\n hashset_ne\n------------\n\n(1 row)\n\nSELECT hashset_ne(NULL, '{}');\n hashset_ne\n------------\n\n(1 row)\n\nSELECT hashset_ne('{}', NULL);\n hashset_ne\n------------\n\n(1 row)\n\nSELECT hashset_ne('{}', '{}');\n hashset_ne\n------------\n f\n(1 row)\n\nSELECT hashset_ne('{}', '{NULL}');\n hashset_ne\n------------\n t\n(1 row)\n\nSELECT hashset_ne('{NULL}', '{}');\n hashset_ne\n------------\n t\n(1 row)\n\nSELECT hashset_ne('{NULL}', '{NULL}');\n hashset_ne\n------------\n f\n(1 row)\n\nSELECT hashset_ne('{}', '{1}');\n hashset_ne\n------------\n t\n(1 row)\n\nSELECT hashset_ne('{1}', '{}');\n hashset_ne\n------------\n t\n(1 row)\n\nSELECT hashset_ne('{1}', '{1}');\n hashset_ne\n------------\n f\n(1 row)\n\nSELECT hashset_ne('{1}', NULL);\n hashset_ne\n------------\n\n(1 row)\n\nSELECT hashset_ne(NULL, '{1}');\n hashset_ne\n------------\n\n(1 row)\n\nSELECT hashset_ne('{1}', '{NULL}');\n hashset_ne\n------------\n t\n(1 row)\n\nSELECT hashset_ne('{NULL}', '{1}');\n hashset_ne\n------------\n t\n(1 row)\n\nSELECT hashset_ne('{1}', '{2}');\n hashset_ne\n------------\n t\n(1 row)\n\nSELECT hashset_ne('{1,2}', '{2,3}');\n hashset_ne\n------------\n t\n(1 row)\n\n/Joel\n\n\n",
"msg_date": "Tue, 27 Jun 2023 22:25:52 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Tue, Jun 27, 2023 at 4:27 PM Joel Jacobson <[email protected]> wrote:\n>\n> On Tue, Jun 27, 2023, at 04:35, jian he wrote:\n> > in SQLMultiSets.pdf(previously thread) I found a related explanation\n> > on page 45, 46.\n> > /home/jian/hashset/0001-make-int4hashset_contains-strict-and-header-file-change.patch\n> > (CASE WHEN OP1 IS NULL OR OP2 IS NULL THEN NULL ELSE MULTISET ( SELECT\n> > T1.V FROM UNNEST (OP1) AS T1 (V) INTERSECT SQ SELECT T2.V FROM UNNEST\n> > (OP2) AS T2 (V) ) END)\n> >\n> > CASE WHEN OP1 IS NULL OR OP2 IS NULL THEN NULL ELSE MULTISET ( SELECT\n> > T1.V FROM UNNEST (OP1) AS T1 (V) UNION SQ SELECT T2.V FROM UNNEST\n> > (OP2) AS T2 (V) ) END\n> >\n> > (CASE WHEN OP1 IS NULL OR OP2 IS NULL THEN NULL ELSE MULTISET ( SELECT\n> > T1.V FROM UNNEST (OP1) AS T1 (V) EXCEPT SQ SELECT T2.V FROM UNNEST\n> > (OP2) AS T2 (V) ) END)\n>\n> Thanks! This was exactly what I was looking for, I knew I've seen it but failed to find it.\n>\n> Attached is a new incremental patch as well as a full patch, since this is a substantial change:\n>\n> Align null semantics with SQL:2023 array and multiset standards\n>\n> * Introduced a new boolean field, null_element, in the int4hashset_t type.\n>\n> * Rename hashset_count() to hashset_cardinality().\n>\n> * Rename hashset_merge() to hashset_union().\n>\n> * Rename hashset_equals() to hashset_eq().\n>\n> * Rename hashset_neq() to hashset_ne().\n>\n> * Add hashset_to_sorted_array().\n>\n> * Handle null semantics to work as in arrays and multisets.\n>\n> * Update int4hashset_add() to allow creating a new set if none exists.\n>\n> * Use more portable int32 typedef instead of int32_t.\n>\n> This also adds a thorough test suite in array-and-multiset-semantics.sql,\n> which aims to test all relevant combinations of operations and values.\n>\n> Makefile | 2 +-\n> README.md | 6 ++--\n> hashset--0.0.1.sql | 37 +++++++++++---------\n> hashset-api.c | 208 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--------------------------\n> hashset.c | 12 ++++++-\n> hashset.h | 11 +++---\n> test/expected/array-and-multiset-semantics.out | 365 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n> test/expected/basic.out | 12 +++----\n> test/expected/reported_bugs.out | 6 ++--\n> test/expected/strict.out | 114 ------------------------------------------------------------\n> test/expected/table.out | 8 ++---\n> test/sql/array-and-multiset-semantics.sql | 232 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n> test/sql/basic.sql | 4 +--\n> test/sql/benchmark.sql | 14 ++++----\n> test/sql/reported_bugs.sql | 6 ++--\n> test/sql/strict.sql | 32 -----------------\n> test/sql/table.sql | 2 +-\n> 17 files changed, 823 insertions(+), 248 deletions(-)\n>\n> /Joel\n\nHi there.\nI changed the function hashset_contains to strict.\nalso change the way to return an empty array.\n\nin benchmark.sql, would it be ok to use EXPLAIN to demonstrate that\nint4hashset can speed distinct aggregate and distinct counts?\nlike the following:\n\nexplain(analyze, costs off, timing off, buffers)\nSELECT array_agg(DISTINCT i) FROM benchmark_input_100k \\watch c=3\n\nexplain(analyze, costs off, timing off, buffers)\nSELECT hashset_agg(i) FROM benchmark_input_100k \\watch c=3\n\nexplain(costs off,timing off, analyze,buffers)\nselect count(distinct 
rnd) from benchmark_input_100k \\watch c=3\n\nexplain(costs off,timing off, analyze,buffers)\nSELECT hashset_cardinality(x) FROM (SELECT hashset_agg(rnd) FROM\nbenchmark_input_100k) sub(x) \\watch c=3",
"msg_date": "Wed, 28 Jun 2023 14:26:52 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Wed, Jun 28, 2023, at 08:26, jian he wrote:\n\n> Hi there.\n> I changed the function hashset_contains to strict.\n\nChanging hashset_contains to STRICT would cause it to return NULL\nif any of the operands are NULL, which I don't believe is correct, since:\n\nSELECT NULL = ANY('{}'::int4[]);\n ?column?\n----------\n f\n(1 row)\n\nHence, `hashset_contains('{}'::int4hashset, NULL)` should also return FALSE,\nto mimic the semantics of arrays and MULTISET's MEMBER OF predicate in SQL:2023.\n\nDid you try running `make installcheck` after your change?\nYou would then have seen one of the tests failing:\n\ntest array-and-multiset-semantics ... FAILED 21 ms\n\nCheck the content of `regression.diffs` to see why:\n\n% cat regression.diffs\ndiff -U3 /Users/joel/src/hashset/test/expected/array-and-multiset-semantics.out /Users/joel/src/hashset/results/array-and-multiset-semantics.out\n--- /Users/joel/src/hashset/test/expected/array-and-multiset-semantics.out\t2023-06-27 10:07:38\n+++ /Users/joel/src/hashset/results/array-and-multiset-semantics.out\t2023-06-28 10:13:27\n@@ -158,7 +158,7 @@\n | | {NULL} | {NULL} | |\n | 1 | {1} | {1} | |\n | 4 | {4} | {4} | |\n- {} | | {NULL} | {NULL} | f | f\n+ {} | | {NULL} | {NULL} | | f\n {} | 1 | {1} | {1} | f | f\n {} | 4 | {4} | {4} | f | f\n {NULL} | | {NULL} | {NULL,NULL} | |\n@@ -284,7 +284,8 @@\n \"= ANY(...)\";\n arg1 | arg2 | hashset_add | array_append | hashset_contains | = ANY(...)\n ------+------+-------------+--------------+------------------+------------\n-(0 rows)\n+ {} | | {NULL} | {NULL} | | f\n+(1 row)\n\n\n> also change the way to return an empty array.\n\nNice.\nI agree the `Datum d` variable was unnecessary.\nI also removed the unused includes.\n\n> in benchmark.sql, would it be ok to use EXPLAIN to demonstrate that\n> int4hashset can speed distinct aggregate and distinct counts?\n> like the following:\n>\n> explain(analyze, costs off, timing off, buffers)\n> SELECT array_agg(DISTINCT i) FROM benchmark_input_100k \\watch c=3\n>\n> explain(analyze, costs off, timing off, buffers)\n> SELECT hashset_agg(i) FROM benchmark_input_100k \\watch c=3\n\nThe 100k tables seems to be too small to give any meaningful results,\nwhen trying to measure individual queries:\n\nEXPLAIN(analyze, costs off, timing off, buffers)\nSELECT array_agg(DISTINCT i) FROM benchmark_input_100k;\n Execution Time: 26.790 ms\n Execution Time: 30.616 ms\n Execution Time: 33.253 ms\n\nEXPLAIN(analyze, costs off, timing off, buffers)\nSELECT hashset_agg(i) FROM benchmark_input_100k;\n Execution Time: 32.797 ms\n Execution Time: 27.605 ms\n Execution Time: 26.228 ms\n\nIf we instead try the 10M tables, it looks like array_agg(DISTINCT ...)\nis actually faster for the `i` column where all input integers are unique:\n\nEXPLAIN(analyze, costs off, timing off, buffers)\nSELECT array_agg(DISTINCT i) FROM benchmark_input_10M;\n Execution Time: 799.017 ms\n Execution Time: 796.008 ms\n Execution Time: 799.121 ms\n\nEXPLAIN(analyze, costs off, timing off, buffers)\nSELECT hashset_agg(i) FROM benchmark_input_10M;\n Execution Time: 1204.873 ms\n Execution Time: 1221.822 ms\n Execution Time: 1216.340 ms\n\nFor random integers, hashset is a win though:\n\nEXPLAIN(analyze, costs off, timing off, buffers)\nSELECT array_agg(DISTINCT rnd) FROM benchmark_input_10M;\n Execution Time: 1874.722 ms\n Execution Time: 1878.760 ms\n Execution Time: 1861.640 ms\n\nEXPLAIN(analyze, costs off, timing off, buffers)\nSELECT hashset_agg(rnd) FROM benchmark_input_10M;\n Execution Time: 1253.709 
ms\n Execution Time: 1222.651 ms\n Execution Time: 1237.849 ms\n\n> explain(costs off,timing off, analyze,buffers)\n> select count(distinct rnd) from benchmark_input_100k \\watch c=3\n>\n> explain(costs off,timing off, analyze,buffers)\n> SELECT hashset_cardinality(x) FROM (SELECT hashset_agg(rnd) FROM\n> benchmark_input_100k) sub(x) \\watch c=3\n\nI tried these with 10M:\n\nEXPLAIN(costs off,timing off, analyze,buffers)\nSELECT COUNT(DISTINCT rnd) FROM benchmark_input_10M;\n Execution Time: 1733.320 ms\n Execution Time: 1725.214 ms\n Execution Time: 1716.636 ms\n\nEXPLAIN(costs off,timing off, analyze,buffers)\nSELECT hashset_cardinality(x) FROM (SELECT hashset_agg(rnd) FROM benchmark_input_10M) sub(x);\n Execution Time: 1249.612 ms\n Execution Time: 1240.558 ms\n Execution Time: 1252.103 ms\n\nNot sure what I think of the current benchmark suite.\n\nI think it would be better to only include some realistic examples from\nreal-life, such as the graph query which was the reason I personally started\nworking on this. Otherwise there is a risk we optimise for some hypothetical\nscenario that is not relevant in practise.\n\nWould be good with more examples of typical work loads for when the hashset\ntype would be useful.\n\n/Joel\n\n\n",
"msg_date": "Wed, 28 Jun 2023 10:50:20 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 4:50 PM Joel Jacobson <[email protected]> wrote:\n>\n> On Wed, Jun 28, 2023, at 08:26, jian he wrote:\n>\n> > Hi there.\n> > I changed the function hashset_contains to strict.\n>\n> Changing hashset_contains to STRICT would cause it to return NULL\n> if any of the operands are NULL, which I don't believe is correct, since:\n>\n> SELECT NULL = ANY('{}'::int4[]);\n> ?column?\n> ----------\n> f\n> (1 row)\n>\n> Hence, `hashset_contains('{}'::int4hashset, NULL)` should also return FALSE,\n> to mimic the semantics of arrays and MULTISET's MEMBER OF predicate in SQL:2023.\n>\n> Did you try running `make installcheck` after your change?\n> You would then have seen one of the tests failing:\n>\n> test array-and-multiset-semantics ... FAILED 21 ms\n>\n> Check the content of `regression.diffs` to see why:\n>\n> % cat regression.diffs\n> diff -U3 /Users/joel/src/hashset/test/expected/array-and-multiset-semantics.out /Users/joel/src/hashset/results/array-and-multiset-semantics.out\n> --- /Users/joel/src/hashset/test/expected/array-and-multiset-semantics.out 2023-06-27 10:07:38\n> +++ /Users/joel/src/hashset/results/array-and-multiset-semantics.out 2023-06-28 10:13:27\n> @@ -158,7 +158,7 @@\n> | | {NULL} | {NULL} | |\n> | 1 | {1} | {1} | |\n> | 4 | {4} | {4} | |\n> - {} | | {NULL} | {NULL} | f | f\n> + {} | | {NULL} | {NULL} | | f\n> {} | 1 | {1} | {1} | f | f\n> {} | 4 | {4} | {4} | f | f\n> {NULL} | | {NULL} | {NULL,NULL} | |\n> @@ -284,7 +284,8 @@\n> \"= ANY(...)\";\n> arg1 | arg2 | hashset_add | array_append | hashset_contains | = ANY(...)\n> ------+------+-------------+--------------+------------------+------------\n> -(0 rows)\n> + {} | | {NULL} | {NULL} | | f\n> +(1 row)\n>\n>\n> > also change the way to return an empty array.\n>\n> Nice.\n> I agree the `Datum d` variable was unnecessary.\n> I also removed the unused includes.\n>\n> > in benchmark.sql, would it be ok to use EXPLAIN to demonstrate that\n> > int4hashset can speed distinct aggregate and distinct counts?\n> > like the following:\n> >\n> > explain(analyze, costs off, timing off, buffers)\n> > SELECT array_agg(DISTINCT i) FROM benchmark_input_100k \\watch c=3\n> >\n> > explain(analyze, costs off, timing off, buffers)\n> > SELECT hashset_agg(i) FROM benchmark_input_100k \\watch c=3\n>\n> The 100k tables seems to be too small to give any meaningful results,\n> when trying to measure individual queries:\n>\n> EXPLAIN(analyze, costs off, timing off, buffers)\n> SELECT array_agg(DISTINCT i) FROM benchmark_input_100k;\n> Execution Time: 26.790 ms\n> Execution Time: 30.616 ms\n> Execution Time: 33.253 ms\n>\n> EXPLAIN(analyze, costs off, timing off, buffers)\n> SELECT hashset_agg(i) FROM benchmark_input_100k;\n> Execution Time: 32.797 ms\n> Execution Time: 27.605 ms\n> Execution Time: 26.228 ms\n>\n> If we instead try the 10M tables, it looks like array_agg(DISTINCT ...)\n> is actually faster for the `i` column where all input integers are unique:\n>\n> EXPLAIN(analyze, costs off, timing off, buffers)\n> SELECT array_agg(DISTINCT i) FROM benchmark_input_10M;\n> Execution Time: 799.017 ms\n> Execution Time: 796.008 ms\n> Execution Time: 799.121 ms\n>\n> EXPLAIN(analyze, costs off, timing off, buffers)\n> SELECT hashset_agg(i) FROM benchmark_input_10M;\n> Execution Time: 1204.873 ms\n> Execution Time: 1221.822 ms\n> Execution Time: 1216.340 ms\n>\n> For random integers, hashset is a win though:\n>\n> EXPLAIN(analyze, costs off, timing off, buffers)\n> SELECT array_agg(DISTINCT rnd) FROM 
benchmark_input_10M;\n> Execution Time: 1874.722 ms\n> Execution Time: 1878.760 ms\n> Execution Time: 1861.640 ms\n>\n> EXPLAIN(analyze, costs off, timing off, buffers)\n> SELECT hashset_agg(rnd) FROM benchmark_input_10M;\n> Execution Time: 1253.709 ms\n> Execution Time: 1222.651 ms\n> Execution Time: 1237.849 ms\n>\n> > explain(costs off,timing off, analyze,buffers)\n> > select count(distinct rnd) from benchmark_input_100k \\watch c=3\n> >\n> > explain(costs off,timing off, analyze,buffers)\n> > SELECT hashset_cardinality(x) FROM (SELECT hashset_agg(rnd) FROM\n> > benchmark_input_100k) sub(x) \\watch c=3\n>\n> I tried these with 10M:\n>\n> EXPLAIN(costs off,timing off, analyze,buffers)\n> SELECT COUNT(DISTINCT rnd) FROM benchmark_input_10M;\n> Execution Time: 1733.320 ms\n> Execution Time: 1725.214 ms\n> Execution Time: 1716.636 ms\n>\n> EXPLAIN(costs off,timing off, analyze,buffers)\n> SELECT hashset_cardinality(x) FROM (SELECT hashset_agg(rnd) FROM benchmark_input_10M) sub(x);\n> Execution Time: 1249.612 ms\n> Execution Time: 1240.558 ms\n> Execution Time: 1252.103 ms\n>\n> Not sure what I think of the current benchmark suite.\n>\n> I think it would be better to only include some realistic examples from\n> real-life, such as the graph query which was the reason I personally started\n> working on this. Otherwise there is a risk we optimise for some hypothetical\n> scenario that is not relevant in practise.\n>\n> Would be good with more examples of typical work loads for when the hashset\n> type would be useful.\n>\n> /Joel\n\n> Did you try running `make installcheck` after your change?\n\nFirst I use make installcheck\nPG_CONFIG=/home/jian/postgres/2023_05_25_beta5421/bin/pg_config\nI found out it uses another active cluster.\nSo I killed another active cluster.\nlater i found another database port so I took me sometime to found out\nI need use following:\nmake installcheck\nPG_CONFIG=/home/jian/postgres/2023_05_25_beta5421/bin/pg_config\nPGPORT=5421\n\nAnyway, this time, I added another macro,which seems to simplify the code.\n\n#define SET_DATA_PTR(a) \\\n(((char *) (a->data)) + CEIL_DIV(a->capacity, 8))\n\nit passed all the tests on my local machine.\nI should have only made a patch, but when I was committed, I forgot to\nmention one file, so I needed 2 commits.\n\n\n> Not sure what I think of the current benchmark suite.\nyour result is so different from mine. I use the default config. I\nsee a big difference. yech, I agree, the performance test should be\nmore careful.",
"msg_date": "Thu, 29 Jun 2023 14:54:44 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Thu, Jun 29, 2023, at 08:54, jian he wrote:\n> Anyway, this time, I added another macro,which seems to simplify the code.\n>\n> #define SET_DATA_PTR(a) \\\n> (((char *) (a->data)) + CEIL_DIV(a->capacity, 8))\n>\n> it passed all the tests on my local machine.\n\nHmm, this is interesting. There is a bug in your second patch,\nthat the tests catch, so it's really surprising if they pass on your machine.\n\nCan you try to run `make clean && make && make install && make installcheck`?\n\nI would guess you forgot to recompile or reinstall.\n\nThis is the bug in 0002-marco-SET_DATA_PTR-to-quicly-access-hashset-data-reg.patch:\n\n@@ -411,7 +411,7 @@ int4hashset_union(PG_FUNCTION_ARGS)\n \tint4hashset_t *seta = PG_GETARG_INT4HASHSET_COPY(0);\n \tint4hashset_t *setb = PG_GETARG_INT4HASHSET(1);\n \tchar\t\t *bitmap = setb->data;\n-\tint32\t\t *values = (int32 *) (bitmap + CEIL_DIV(setb->capacity, 8));\n+\tint32\t\t *values = (int32 *) SET_DATA_PTR(seta);\n\nYou accidentally replaced `setb` with `seta`.\n\nI renamed the macro to HASHSET_GET_VALUES and changed it slightly,\nalso added a HASHSET_GET_BITMAP for completeness:\n\n#define HASHSET_GET_BITMAP(set) ((set)->data)\n#define HASHSET_GET_VALUES(set) ((int32 *) ((set)->data + CEIL_DIV((set)->capacity, 8)))\n\nInstead of your version:\n\n#define SET_DATA_PTR(a) \\\n\t\t(((char *) (a->data)) + CEIL_DIV(a->capacity, 8))\n\nChanges:\n* Parenthesize macro parameters.\n* Prefix the macro names with \"HASHSET_\" to avoid potential conflicts.\n* \"GET_VALUES\" more clearly communicates that it's the values we're extracting.\n\nNew patch attached.\n\nOther changes in same commit:\n\n* Add original friends-of-friends graph query to new benchmark/ directory\n* Add table of content to README\n* Update docs: Explain null semantics and add function examples\n* Simplify empty hashset handling, remove unused includes\n\n/Joel",
"msg_date": "Thu, 29 Jun 2023 10:43:12 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Thu, Jun 29, 2023 at 4:43 PM Joel Jacobson <[email protected]> wrote:\n>\n> On Thu, Jun 29, 2023, at 08:54, jian he wrote:\n> > Anyway, this time, I added another macro,which seems to simplify the code.\n> >\n> > #define SET_DATA_PTR(a) \\\n> > (((char *) (a->data)) + CEIL_DIV(a->capacity, 8))\n> >\n> > it passed all the tests on my local machine.\n>\n> Hmm, this is interesting. There is a bug in your second patch,\n> that the tests catch, so it's really surprising if they pass on your machine.\n>\n> Can you try to run `make clean && make && make install && make installcheck`?\n>\n> I would guess you forgot to recompile or reinstall.\n>\n> This is the bug in 0002-marco-SET_DATA_PTR-to-quicly-access-hashset-data-reg.patch:\n>\n> @@ -411,7 +411,7 @@ int4hashset_union(PG_FUNCTION_ARGS)\n> int4hashset_t *seta = PG_GETARG_INT4HASHSET_COPY(0);\n> int4hashset_t *setb = PG_GETARG_INT4HASHSET(1);\n> char *bitmap = setb->data;\n> - int32 *values = (int32 *) (bitmap + CEIL_DIV(setb->capacity, 8));\n> + int32 *values = (int32 *) SET_DATA_PTR(seta);\n>\n> You accidentally replaced `setb` with `seta`.\n>\n> I renamed the macro to HASHSET_GET_VALUES and changed it slightly,\n> also added a HASHSET_GET_BITMAP for completeness:\n>\n> #define HASHSET_GET_BITMAP(set) ((set)->data)\n> #define HASHSET_GET_VALUES(set) ((int32 *) ((set)->data + CEIL_DIV((set)->capacity, 8)))\n>\n> Instead of your version:\n>\n> #define SET_DATA_PTR(a) \\\n> (((char *) (a->data)) + CEIL_DIV(a->capacity, 8))\n>\n> Changes:\n> * Parenthesize macro parameters.\n> * Prefix the macro names with \"HASHSET_\" to avoid potential conflicts.\n> * \"GET_VALUES\" more clearly communicates that it's the values we're extracting.\n>\n> New patch attached.\n>\n> Other changes in same commit:\n>\n> * Add original friends-of-friends graph query to new benchmark/ directory\n> * Add table of content to README\n> * Update docs: Explain null semantics and add function examples\n> * Simplify empty hashset handling, remove unused includes\n>\n> /Joel\n\nmore like a C questions\nin this context does\n#define HASHSET_GET_VALUES(set) ((int32 *) ((set)->data +\nCEIL_DIV((set)->capacity, 8)))\ndefine first, then define struct int4hashset_t. Is this normally ok?\n\nAlso does\n#define HASHSET_GET_VALUES(set) ((int32 *) ((set)->data +\nCEIL_DIV((set)->capacity, 8)))\n\nremove (int32 *) will make it generic? then when you use it, you can\ncast whatever type you like?\n\n\n",
"msg_date": "Fri, 30 Jun 2023 12:50:25 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "On Fri, Jun 30, 2023, at 06:50, jian he wrote:\n> more like a C questions\n> in this context does\n> #define HASHSET_GET_VALUES(set) ((int32 *) ((set)->data +\n> CEIL_DIV((set)->capacity, 8)))\n> define first, then define struct int4hashset_t. Is this normally ok?\n\nYes, it's fine. Macros are just text substitutions done pre-compilation.\n\n> Also does\n> #define HASHSET_GET_VALUES(set) ((int32 *) ((set)->data +\n> CEIL_DIV((set)->capacity, 8)))\n>\n> remove (int32 *) will make it generic? then when you use it, you can\n> cast whatever type you like?\n\nMaybe, but might be less error-prone more descriptive with different\nmacros for each type, e.g. INT4HASHSET_GET_VALUES,\nsimilar to the existing PG_GETARG_INT4HASHSET\n\nCurious to hear what everybody thinks about the interface, documentation,\nsemantics and implementation?\n\nIs there anything missing or something that you think should be changed/improved?\n\n/Joel\n\n\n",
"msg_date": "Sat, 01 Jul 2023 11:04:05 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "Has anyone put this in a git repo / extension package or similar ? \n\nI’d like to try it out outside the core pg tree. \n\n> On 1 Jul 2023, at 12:04 PM, Joel Jacobson <[email protected]> wrote:\n> \n> On Fri, Jun 30, 2023, at 06:50, jian he wrote:\n>> more like a C questions\n>> in this context does\n>> #define HASHSET_GET_VALUES(set) ((int32 *) ((set)->data +\n>> CEIL_DIV((set)->capacity, 8)))\n>> define first, then define struct int4hashset_t. Is this normally ok?\n> \n> Yes, it's fine. Macros are just text substitutions done pre-compilation.\n> \n>> Also does\n>> #define HASHSET_GET_VALUES(set) ((int32 *) ((set)->data +\n>> CEIL_DIV((set)->capacity, 8)))\n>> \n>> remove (int32 *) will make it generic? then when you use it, you can\n>> cast whatever type you like?\n> \n> Maybe, but might be less error-prone more descriptive with different\n> macros for each type, e.g. INT4HASHSET_GET_VALUES,\n> similar to the existing PG_GETARG_INT4HASHSET\n> \n> Curious to hear what everybody thinks about the interface, documentation,\n> semantics and implementation?\n> \n> Is there anything missing or something that you think should be changed/improved?\n> \n> /Joel\n> \n> \n\n\n\n",
"msg_date": "Mon, 14 Aug 2023 18:23:04 +0300",
"msg_from": "Florents Tselai <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
},
{
"msg_contents": "https://github.com/tvondra/hashset\n\nOn Mon, Aug 14, 2023 at 11:23 PM Florents Tselai\n<[email protected]> wrote:\n>\n> Has anyone put this in a git repo / extension package or similar ?\n>\n> I’d like to try it out outside the core pg tree.\n>\n> > On 1 Jul 2023, at 12:04 PM, Joel Jacobson <[email protected]> wrote:\n> >\n> > On Fri, Jun 30, 2023, at 06:50, jian he wrote:\n> >> more like a C questions\n> >> in this context does\n> >> #define HASHSET_GET_VALUES(set) ((int32 *) ((set)->data +\n> >> CEIL_DIV((set)->capacity, 8)))\n> >> define first, then define struct int4hashset_t. Is this normally ok?\n> >\n> > Yes, it's fine. Macros are just text substitutions done pre-compilation.\n> >\n> >> Also does\n> >> #define HASHSET_GET_VALUES(set) ((int32 *) ((set)->data +\n> >> CEIL_DIV((set)->capacity, 8)))\n> >>\n> >> remove (int32 *) will make it generic? then when you use it, you can\n> >> cast whatever type you like?\n> >\n> > Maybe, but might be less error-prone more descriptive with different\n> > macros for each type, e.g. INT4HASHSET_GET_VALUES,\n> > similar to the existing PG_GETARG_INT4HASHSET\n> >\n> > Curious to hear what everybody thinks about the interface, documentation,\n> > semantics and implementation?\n> >\n> > Is there anything missing or something that you think should be changed/improved?\n> >\n> > /Joel\n> >\n> >\n>\n\n\n-- \n I recommend David Deutsch's <<The Beginning of Infinity>>\n\n Jian\n\n\n",
"msg_date": "Tue, 15 Aug 2023 10:37:37 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Do we want a hashset type?"
}
]
[
{
"msg_contents": "Hello hackers,\n\nHere's a rebased version of the patch-set adding Incremental View\nMaintenance support for PostgreSQL. That was discussed in [1].\n\nThe patch-set consists of the following eleven patches. \n\n- 0001: Add a syntax to create Incrementally Maintainable Materialized Views\n- 0002: Add relisivm column to pg_class system catalog\n- 0003: Allow to prolong life span of transition tables until transaction end\n- 0004: Add Incremental View Maintenance support to pg_dum\n- 0005: Add Incremental View Maintenance support to psql\n- 0006: Add Incremental View Maintenance support\n- 0007: Add DISTINCT support for IVM\n- 0008: Add aggregates support in IVM\n- 0009: Add support for min/max aggregates for IVM\n- 0010: regression tests\n- 0011: documentation\n\n[1] https://www.postgresql.org/message-id/flat/20181227215726.4d166b4874f8983a641123f5%40sraoss.co.jp\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Thu, 1 Jun 2023 23:59:09 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Thu, 1 Jun 2023 23:59:09 +0900\nYugo NAGATA <[email protected]> wrote:\n\n> Hello hackers,\n> \n> Here's a rebased version of the patch-set adding Incremental View\n> Maintenance support for PostgreSQL. That was discussed in [1].\n\n> [1] https://www.postgresql.org/message-id/flat/20181227215726.4d166b4874f8983a641123f5%40sraoss.co.jp\n\n---------------------------------------------------------------------------------------\n* Overview\n\nIncremental View Maintenance (IVM) is a way to make materialized views\nup-to-date by computing only incremental changes and applying them on\nviews. IVM is more efficient than REFRESH MATERIALIZED VIEW when\nonly small parts of the view are changed.\n\n** Feature\n\nThe attached patchset provides a feature that allows materialized views\nto be updated automatically and incrementally just after a underlying\ntable is modified. \n\nYou can create an incementally maintainable materialized view (IMMV)\nby using CREATE INCREMENTAL MATERIALIZED VIEW command.\n\nThe followings are supported in view definition queries:\n- SELECT ... FROM ... WHERE ..., joins (inner joins, self-joins)\n- some built-in aggregate functions (count, sum, avg, min, max)\n- GROUP BY clause\n- DISTINCT clause\n\nViews can contain multiple tuples with the same content (duplicate tuples).\n\n** Restriction\n\nThe following are not supported in a view definition:\n- Outer joins\n- Aggregates otehr than above, window functions, HAVING\n- Sub-queries, CTEs\n- Set operations (UNION, INTERSECT, EXCEPT)\n- DISTINCT ON, ORDER BY, LIMIT, OFFSET\n\nAlso, a view definition query cannot contain other views, materialized views,\nforeign tables, partitioned tables, partitions, VALUES, non-immutable functions,\nsystem columns, or expressions that contains aggregates.\n\n---------------------------------------------------------------------------------------\n* Design\n\nAn IMMV is maintained using statement-level AFTER triggers. \nWhen an IMMV is created, triggers are automatically created on all base\ntables contained in the view definition query. \n\nWhen a table is modified, changes that occurred in the table are extracted\nas transition tables in the AFTER triggers. Then, changes that will occur in\nthe view are calculated by a rewritten view dequery in which the modified table\nis replaced with the transition table. \n\nFor example, if the view is defined as \"SELECT * FROM R, S\", and tuples inserted\ninto R are stored in a transiton table dR, the tuples that will be inserted into\nthe view are calculated as the result of \"SELECT * FROM dR, S\".\n\n** Multiple Tables Modification\n\nMultiple tables can be modified in a statement when using triggers, foreign key\nconstraint, or modifying CTEs. When multiple tables are modified, we need\nthe state of tables before the modification.\n\nFor example, when some tuples, dR and dS, are inserted into R and S respectively,\nthe tuples that will be inserted into the view are calculated by the following\ntwo queries:\n\n \"SELECT * FROM dR, S_pre\"\n \"SELECT * FROM R, dS\"\n\nwhere S_pre is the table before the modification, R is the current state of\ntable, that is, after the modification. This pre-update states of table \nis calculated by filtering inserted tuples and appending deleted tuples.\nThe subquery that represents pre-update state is generated in get_prestate_rte(). \nSpecifically, the insterted tuples are filtered by calling IVM_visible_in_prestate()\nin WHERE clause. 
This function checks the visibility of tuples by using\nthe snapshot taken before table modification. The deleted tuples are contained\nin the old transition table, and this table is appended using UNION ALL.\n\nTransition tables for each modification are collected in each AFTER trigger\nfunction call. Then, the view maintenance is performed in the last call of\nthe trigger. \n\nIn the original PostgreSQL, tuplestores of transition tables are freed at the\nend of each nested query. However, their lifespan needs to be prolonged to\nthe end of the out-most query in order to maintain the view in the last AFTER\ntrigger. For this purpose, SetTransitionTablePreserved is added in trigger.c. \n\n** Duplicate Tulpes\n\nWhen calculating changes that will occur in the view (= delta tables),\nmultiplicity of tuples are calculated by using count(*). \n\nWhen deleting tuples from the view, tuples to be deleted are identified by\njoining the delta table with the view, and tuples are deleted as many as\nspecified multiplicity by numbered using row_number() function. \nThis is implemented in apply_old_delta().\n\nWhen inserting tuples into the view, each tuple is duplicated to the\nspecified multiplicity using generate_series() function. This is implemented\nin apply_new_delta().\n\n** DISTINCT clause\n\nWhen DISTINCT is used, the view has a hidden column __ivm_count__ that\nstores multiplicity for tuples. When tuples are deleted from or inserted into\nthe view, the values of __ivm_count__ column is decreased or increased as many\nas specified multiplicity. Eventually, when the values becomes zero, the\ncorresponding tuple is deleted from the view. This is implemented in\napply_old_delta_with_count() and apply_new_delta_with_count().\n\n** Aggregates\n\nBuilt-in count sum, avg, min, and max are supported. Whether a given\naggregate function can be used or not is checked by using its OID in\ncheck_aggregate_supports_ivm().\n\nWhen creating a materialized view containing aggregates, in addition\nto __ivm_count__, more than one hidden columns for each aggregate are\nadded to the target list. For example, columns for storing sum(x),\ncount(x) are added if we have avg(x). When the view is maintained,\naggregated values are updated using these hidden columns, also hidden\ncolumns are updated at the same time.\n\nThe maintenance of aggregated view is performed in\napply_old_delta_with_count() and apply_new_delta_with_count(). The SET\nclauses for updating columns are generated by append_set_clause_*(). \n\nIf the view has min(x) or max(x) and the minimum or maximal value is\ndeleted from a table, we need to update the value to the new min/max\nrecalculated from the tables rather than incremental computation. This\nis performed in recalc_and_set_values().\n\n---------------------------------------------------------------------------------------\n* Details of the patch-set (v28)\n\n> The patch-set consists of the following eleven patches. \n\nIn the previous version, the number of patches were nine. 
\nIn the latest patch-set, the patches are divided more finely\naiming to make the review easier.\n\n> - 0001: Add a syntax to create Incrementally Maintainable Materialized Views\n\nThe prposed syntax to create an incrementally maintainable materialized\nview (IMMV) is;\n\n CREATE INCREMENTAL MATERIALIZED VIEW AS SELECT .....;\n\nHowever, this syntax is tentative, so any suggestions are welcomed.\n\n> - 0002: Add relisivm column to pg_class system catalog\n\nWe add a new field in pg_class to indicate a relation is IMMV.\nAnother alternative is to add a new catalog for managing materialized\nviews including IMMV, but I am not sure if we want this.\n\n> - 0003: Allow to prolong life span of transition tables until transaction end\n\nThis patch fixes the trigger system to allow to prolong lifespan of\ntuple stores for transition tables until the transaction end. We need\nthis because multiple transition tables have to be preserved until the\nend of the out-most query when multiple tables are modified by nested\ntriggers. (as explained above in Design - Multiple Tables Modification)\n\nIf we don't want to change the trigger system in such way, the alternative\nis to copy the contents of transition tables to other tuplestores, although\nit needs more time and memory.\n\n> - 0004: Add Incremental View Maintenance support to pg_dump\n\nThis patch enables pg_dump to output IMMV using the new syntax.\n\n> - 0005: Add Incremental View Maintenance support to psql\n\nThis patch implements tab-completion for the new syntax and adds\ninformation of IMMV to \\d meta-command results.\n\n> - 0006: Add Incremental View Maintenance support\n\nThis patch implements the basic IVM feature. \nDISTINCT and aggregate are not supported here.\n\nWhen an IMMV is created, the view query is checked, and if any\nnon-supported feature is used, it raises an error. If it is ok,\ntriggers are created on base tables and an unique index is\ncreated on the view if possible.\n\nIn BEFORE trigger, an entry is created for each IMMV and the number\nof trigger firing is counted. Also, the snapshot just before the\ntable modification is stored.\n\nIn AFTER triggers, each transition tables are preserved. The number\nof trigger firing is counted also here, and when the firing number of\nBEFORE and AFTER trigger reach the same, it is deemed the final AFTER\ntrigger call.\n\nIn the final AFTER trigger, the IMMV is maintained. Rewritten view\nquery is executed to generate delta tables, and deltas are applied\nto the view. If multiple tables are modified simultaneously, this\nprocess is iterated for each modified table. Tables before processed\nare represented in \"pre-update-state\", processed tables are\n\"post-update-state\" in the rewritten query.\n\n> - 0007: Add DISTINCT support for IVM\n\nThis patch adds DISTINCT clause support.\n\nWhen an IMMV including DISTINCT is created, a hidden column\n\"__ivm_count__\" is added to the target list. This column has the\nnumber of duplicity of the same tuples. The duplicity is calculated\nby adding \"count(*)\" and GROUP BY to the view query.\n\nWhen an IMMV is maintained, the duplicity in __ivm_count__ is updated, \nand a tuples whose duplicity becomes zero can be deleted from the view.\nThis logic is implemented by SQL in apply_old_delta_with_count and\napply_new_delta_with_count. \n\nColumns starting with \"__ivm_\" are deemed hidden columns that doesn't\nappear when a view is accessed by \"SELECT * FROM ....\". This is\nimplemented by fixing parse_relation.c. 
\n\n> - 0008: Add aggregates support in IVM\n\nThis patch provides codes for aggregates support, specifically\nfor builtin count, sum, and avg.\n\nWhen an IMMV containing an aggregate is created, it is checked if this\naggregate function is supported, and if it is ok, some hidden columns\nare added to the target list.\n\nWhen the IMMV is maintained, the aggregated value is updated as well as\nrelated hidden columns. The way of update depends the type of aggregate\nfunctions, and SET clause string is generated for each aggregate.\n\n> - 0009: Add support for min/max aggregates for IVM\n\nThis patch adds min/max aggregates support.\n\nThis is separated from #0008 because min/max needs more complicated\nwork than count, sum, and avg.\n\nIf the view has min(x) or max(x) and the minimum or maximal value is\ndeleted from a table, we need to update the value to the new min/max\nrecalculated from the tables rather than incremental computation.\nThis is performed in recalc_and_set_values().\n\nTIDs and keys of tuples that need re-calculation are returned as a\nresult of the query that deleted min/max values from the view using\nRETURNING clause. The plan to recalculate and set the new min/max value\nare stored and reused.\n\n> - 0010: regression tests\n\nThis patch provides regression tests for IVM.\n\n> - 0011: documentation\n\nThis patch provides documantation for IVM.\n\n---------------------------------------------------------------------------------------\n* Changes from the Previous Version (v27)\n\n- Allow TRUNCATE on base tables\n \nWhen a base table is truncated, the view content will be empty if the\nview definition query does not contain an aggregate without a GROUP clause.\nTherefore, such views can be truncated.\n \nAggregate views without a GROUP clause always have one row. Therefore,\nif a base table is truncated, the view will not be empty and will contain\na row with NULL value (or 0 for count()). So, in this case, we refresh the\nview instead of truncating it.\n\n- Fix bugs reported by huyajun [1]\n\n[1] https://www.postgresql.org/message-id/tencent_FCAF11BCA5003FD16BDDFDDA5D6A19587809%40qq.com\n\n---------------------------------------------------------------------------------------\n* Discussion\n\n** Aggregate support\n\nThere were a few suggestions that general aggregate functions should be\nsupported [2][3], which may be possible by extending pg_aggregate catalog.\nHowever, we decided to leave supporting general aggregates to the future work [4]\nbecause it would need substantial works and make the patch more complex and\nbigger. \n\nThere has been no opposite opinion on this. However, if we need more discussion\non the design of aggregate support, we can omit aggregate support for the first\nrelease of IVM.\n\n[2] https://www.postgresql.org/message-id/20191128140333.GA25947%40alvherre.pgsql\n[3] https://www.postgresql.org/message-id/CAM-w4HOvDrL4ou6m%3D592zUiKGVzTcOpNj-d_cJqzL00fdsS5kg%40mail.gmail.com\n[4] https://www.postgresql.org/message-id/20201016193034.9a4c44c79fc1eca7babe093e%40sraoss.co.jp\n\n** Hidden columns\n\nIn order to support DISTINCT or aggregates, our implementation uses hidden columns. \n\nColumns starting with \"__ivm_\" are hidden columns that doesn't appear when a\nview is accessed by \"SELECT * FROM ....\". For this aim, parse_relation.c is\nfixed. 
There was a proposal to enable hidden columns by adding a new flag to\npg_attribute [5], but this thread is no longer active, so we decided to check\nthe hidden column by its name [6].\n\n[5] https://www.postgresql.org/message-id/flat/CAEepm%3D3ZHh%3Dp0nEEnVbs1Dig_UShPzHUcMNAqvDQUgYgcDo-pA%40mail.gmail.com\n[6] https://www.postgresql.org/message-id/20201016193034.9a4c44c79fc1eca7babe093e%40sraoss.co.jp\n\n** Concurrent Transactions\n\nWhen the view definition has more than one table, we acquire an exclusive\nlock before the view maintenance in order to avoid inconsistent results.\nThis behavior was explained in [7]. The lock was improved to use weaker lock\nwhen the view has only one table based on a suggestion from Konstantin Knizhnik [8].\nHowever, due to the implementation that uses ctid for identifying target tuples, \nwe still have to use an exclusive lock for DELETE and UPDATE.\n\n[7] https://www.postgresql.org/message-id/20200909092752.c91758a1bec3479668e82643%40sraoss.co.jp\n[8] https://www.postgresql.org/message-id/5663f5f0-48af-686c-bf3c-62d279567e2a%40postgrespro.ru\n\n** Automatic Index Creation\n\nWhen a view is created, a unique index is automatically created if\npossible, that is, if the view definition query has a GROUP BY or\nDISTINCT, or if the view contains all primary key attributes of\nits base tables in the target list. It is necessary for efficient\nview maintenance. This feature is based on a suggestion from\nKonstantin Knizhnik [9].\n\n[9] https://www.postgresql.org/message-id/89729da8-9042-7ea0-95af-e415df6da14d%40postgrespro.ru\n\n\n** Trigger and Transition Tables\n\nWe implemented IVM based on triggers. This is because we want to use\ntransition tables to extract changes on base tables. Also, there are\nother constraint that are using triggers in its implementation, like\nforeign references. However, if we can use transition table like feature\nwithout relying triggers, we don't have to insist to use triggers and we\nmight implement IVM in the executor directly as similar as declarative\npartitioning.\n\n** Feature to be Supported in the First Release\n\nThe current patch-set supports DISTINCT and aggregates for built-in count,\nsum, avg, min and max. Do we need all these feature for the first IVM release? \nSupporting DISTINCT and aggregates needs discussion on hidden columns, and\nfor supporting min/max we need to discuss on re-calculation method. Before\nhandling such relatively advanced feature, maybe, should we focus to design\nand implement of the basic feature of IVM? \n\n\nAny suggestion and discussion are welcomed!\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Thu, 1 Jun 2023 03:47:03 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Thu, Jun 1, 2023 at 2:47 AM Yugo NAGATA <[email protected]> wrote:\n>\n> On Thu, 1 Jun 2023 23:59:09 +0900\n> Yugo NAGATA <[email protected]> wrote:\n>\n> > Hello hackers,\n> >\n> > Here's a rebased version of the patch-set adding Incremental View\n> > Maintenance support for PostgreSQL. That was discussed in [1].\n>\n> > [1] https://www.postgresql.org/message-id/flat/20181227215726.4d166b4874f8983a641123f5%40sraoss.co.jp\n>\n> ---------------------------------------------------------------------------------------\n> * Overview\n>\n> Incremental View Maintenance (IVM) is a way to make materialized views\n> up-to-date by computing only incremental changes and applying them on\n> views. IVM is more efficient than REFRESH MATERIALIZED VIEW when\n> only small parts of the view are changed.\n>\n> ** Feature\n>\n> The attached patchset provides a feature that allows materialized views\n> to be updated automatically and incrementally just after a underlying\n> table is modified.\n>\n> You can create an incementally maintainable materialized view (IMMV)\n> by using CREATE INCREMENTAL MATERIALIZED VIEW command.\n>\n> The followings are supported in view definition queries:\n> - SELECT ... FROM ... WHERE ..., joins (inner joins, self-joins)\n> - some built-in aggregate functions (count, sum, avg, min, max)\n> - GROUP BY clause\n> - DISTINCT clause\n>\n> Views can contain multiple tuples with the same content (duplicate tuples).\n>\n> ** Restriction\n>\n> The following are not supported in a view definition:\n> - Outer joins\n> - Aggregates otehr than above, window functions, HAVING\n> - Sub-queries, CTEs\n> - Set operations (UNION, INTERSECT, EXCEPT)\n> - DISTINCT ON, ORDER BY, LIMIT, OFFSET\n>\n> Also, a view definition query cannot contain other views, materialized views,\n> foreign tables, partitioned tables, partitions, VALUES, non-immutable functions,\n> system columns, or expressions that contains aggregates.\n>\n> ---------------------------------------------------------------------------------------\n> * Design\n>\n> An IMMV is maintained using statement-level AFTER triggers.\n> When an IMMV is created, triggers are automatically created on all base\n> tables contained in the view definition query.\n>\n> When a table is modified, changes that occurred in the table are extracted\n> as transition tables in the AFTER triggers. Then, changes that will occur in\n> the view are calculated by a rewritten view dequery in which the modified table\n> is replaced with the transition table.\n>\n> For example, if the view is defined as \"SELECT * FROM R, S\", and tuples inserted\n> into R are stored in a transiton table dR, the tuples that will be inserted into\n> the view are calculated as the result of \"SELECT * FROM dR, S\".\n>\n> ** Multiple Tables Modification\n>\n> Multiple tables can be modified in a statement when using triggers, foreign key\n> constraint, or modifying CTEs. When multiple tables are modified, we need\n> the state of tables before the modification.\n>\n> For example, when some tuples, dR and dS, are inserted into R and S respectively,\n> the tuples that will be inserted into the view are calculated by the following\n> two queries:\n>\n> \"SELECT * FROM dR, S_pre\"\n> \"SELECT * FROM R, dS\"\n>\n> where S_pre is the table before the modification, R is the current state of\n> table, that is, after the modification. 
This pre-update states of table\n> is calculated by filtering inserted tuples and appending deleted tuples.\n> The subquery that represents pre-update state is generated in get_prestate_rte().\n> Specifically, the insterted tuples are filtered by calling IVM_visible_in_prestate()\n> in WHERE clause. This function checks the visibility of tuples by using\n> the snapshot taken before table modification. The deleted tuples are contained\n> in the old transition table, and this table is appended using UNION ALL.\n>\n> Transition tables for each modification are collected in each AFTER trigger\n> function call. Then, the view maintenance is performed in the last call of\n> the trigger.\n>\n> In the original PostgreSQL, tuplestores of transition tables are freed at the\n> end of each nested query. However, their lifespan needs to be prolonged to\n> the end of the out-most query in order to maintain the view in the last AFTER\n> trigger. For this purpose, SetTransitionTablePreserved is added in trigger.c.\n>\n> ** Duplicate Tulpes\n>\n> When calculating changes that will occur in the view (= delta tables),\n> multiplicity of tuples are calculated by using count(*).\n>\n> When deleting tuples from the view, tuples to be deleted are identified by\n> joining the delta table with the view, and tuples are deleted as many as\n> specified multiplicity by numbered using row_number() function.\n> This is implemented in apply_old_delta().\n>\n> When inserting tuples into the view, each tuple is duplicated to the\n> specified multiplicity using generate_series() function. This is implemented\n> in apply_new_delta().\n>\n> ** DISTINCT clause\n>\n> When DISTINCT is used, the view has a hidden column __ivm_count__ that\n> stores multiplicity for tuples. When tuples are deleted from or inserted into\n> the view, the values of __ivm_count__ column is decreased or increased as many\n> as specified multiplicity. Eventually, when the values becomes zero, the\n> corresponding tuple is deleted from the view. This is implemented in\n> apply_old_delta_with_count() and apply_new_delta_with_count().\n>\n> ** Aggregates\n>\n> Built-in count sum, avg, min, and max are supported. Whether a given\n> aggregate function can be used or not is checked by using its OID in\n> check_aggregate_supports_ivm().\n>\n> When creating a materialized view containing aggregates, in addition\n> to __ivm_count__, more than one hidden columns for each aggregate are\n> added to the target list. For example, columns for storing sum(x),\n> count(x) are added if we have avg(x). When the view is maintained,\n> aggregated values are updated using these hidden columns, also hidden\n> columns are updated at the same time.\n>\n> The maintenance of aggregated view is performed in\n> apply_old_delta_with_count() and apply_new_delta_with_count(). The SET\n> clauses for updating columns are generated by append_set_clause_*().\n>\n> If the view has min(x) or max(x) and the minimum or maximal value is\n> deleted from a table, we need to update the value to the new min/max\n> recalculated from the tables rather than incremental computation. 
This\n> is performed in recalc_and_set_values().\n>\n> ---------------------------------------------------------------------------------------\n> * Details of the patch-set (v28)\n>\n> > The patch-set consists of the following eleven patches.\n>\n> In the previous version, the number of patches were nine.\n> In the latest patch-set, the patches are divided more finely\n> aiming to make the review easier.\n>\n> > - 0001: Add a syntax to create Incrementally Maintainable Materialized Views\n>\n> The prposed syntax to create an incrementally maintainable materialized\n> view (IMMV) is;\n>\n> CREATE INCREMENTAL MATERIALIZED VIEW AS SELECT .....;\n>\n> However, this syntax is tentative, so any suggestions are welcomed.\n>\n> > - 0002: Add relisivm column to pg_class system catalog\n>\n> We add a new field in pg_class to indicate a relation is IMMV.\n> Another alternative is to add a new catalog for managing materialized\n> views including IMMV, but I am not sure if we want this.\n>\n> > - 0003: Allow to prolong life span of transition tables until transaction end\n>\n> This patch fixes the trigger system to allow to prolong lifespan of\n> tuple stores for transition tables until the transaction end. We need\n> this because multiple transition tables have to be preserved until the\n> end of the out-most query when multiple tables are modified by nested\n> triggers. (as explained above in Design - Multiple Tables Modification)\n>\n> If we don't want to change the trigger system in such way, the alternative\n> is to copy the contents of transition tables to other tuplestores, although\n> it needs more time and memory.\n>\n> > - 0004: Add Incremental View Maintenance support to pg_dump\n>\n> This patch enables pg_dump to output IMMV using the new syntax.\n>\n> > - 0005: Add Incremental View Maintenance support to psql\n>\n> This patch implements tab-completion for the new syntax and adds\n> information of IMMV to \\d meta-command results.\n>\n> > - 0006: Add Incremental View Maintenance support\n>\n> This patch implements the basic IVM feature.\n> DISTINCT and aggregate are not supported here.\n>\n> When an IMMV is created, the view query is checked, and if any\n> non-supported feature is used, it raises an error. If it is ok,\n> triggers are created on base tables and an unique index is\n> created on the view if possible.\n>\n> In BEFORE trigger, an entry is created for each IMMV and the number\n> of trigger firing is counted. Also, the snapshot just before the\n> table modification is stored.\n>\n> In AFTER triggers, each transition tables are preserved. The number\n> of trigger firing is counted also here, and when the firing number of\n> BEFORE and AFTER trigger reach the same, it is deemed the final AFTER\n> trigger call.\n>\n> In the final AFTER trigger, the IMMV is maintained. Rewritten view\n> query is executed to generate delta tables, and deltas are applied\n> to the view. If multiple tables are modified simultaneously, this\n> process is iterated for each modified table. Tables before processed\n> are represented in \"pre-update-state\", processed tables are\n> \"post-update-state\" in the rewritten query.\n>\n> > - 0007: Add DISTINCT support for IVM\n>\n> This patch adds DISTINCT clause support.\n>\n> When an IMMV including DISTINCT is created, a hidden column\n> \"__ivm_count__\" is added to the target list. This column has the\n> number of duplicity of the same tuples. 
The duplicity is calculated\n> by adding \"count(*)\" and GROUP BY to the view query.\n>\n> When an IMMV is maintained, the duplicity in __ivm_count__ is updated,\n> and a tuples whose duplicity becomes zero can be deleted from the view.\n> This logic is implemented by SQL in apply_old_delta_with_count and\n> apply_new_delta_with_count.\n>\n> Columns starting with \"__ivm_\" are deemed hidden columns that doesn't\n> appear when a view is accessed by \"SELECT * FROM ....\". This is\n> implemented by fixing parse_relation.c.\n>\n> > - 0008: Add aggregates support in IVM\n>\n> This patch provides codes for aggregates support, specifically\n> for builtin count, sum, and avg.\n>\n> When an IMMV containing an aggregate is created, it is checked if this\n> aggregate function is supported, and if it is ok, some hidden columns\n> are added to the target list.\n>\n> When the IMMV is maintained, the aggregated value is updated as well as\n> related hidden columns. The way of update depends the type of aggregate\n> functions, and SET clause string is generated for each aggregate.\n>\n> > - 0009: Add support for min/max aggregates for IVM\n>\n> This patch adds min/max aggregates support.\n>\n> This is separated from #0008 because min/max needs more complicated\n> work than count, sum, and avg.\n>\n> If the view has min(x) or max(x) and the minimum or maximal value is\n> deleted from a table, we need to update the value to the new min/max\n> recalculated from the tables rather than incremental computation.\n> This is performed in recalc_and_set_values().\n>\n> TIDs and keys of tuples that need re-calculation are returned as a\n> result of the query that deleted min/max values from the view using\n> RETURNING clause. The plan to recalculate and set the new min/max value\n> are stored and reused.\n>\n> > - 0010: regression tests\n>\n> This patch provides regression tests for IVM.\n>\n> > - 0011: documentation\n>\n> This patch provides documantation for IVM.\n>\n> ---------------------------------------------------------------------------------------\n> * Changes from the Previous Version (v27)\n>\n> - Allow TRUNCATE on base tables\n>\n> When a base table is truncated, the view content will be empty if the\n> view definition query does not contain an aggregate without a GROUP clause.\n> Therefore, such views can be truncated.\n>\n> Aggregate views without a GROUP clause always have one row. Therefore,\n> if a base table is truncated, the view will not be empty and will contain\n> a row with NULL value (or 0 for count()). So, in this case, we refresh the\n> view instead of truncating it.\n>\n> - Fix bugs reported by huyajun [1]\n>\n> [1] https://www.postgresql.org/message-id/tencent_FCAF11BCA5003FD16BDDFDDA5D6A19587809%40qq.com\n>\n> ---------------------------------------------------------------------------------------\n> * Discussion\n>\n> ** Aggregate support\n>\n> There were a few suggestions that general aggregate functions should be\n> supported [2][3], which may be possible by extending pg_aggregate catalog.\n> However, we decided to leave supporting general aggregates to the future work [4]\n> because it would need substantial works and make the patch more complex and\n> bigger.\n>\n> There has been no opposite opinion on this. 
However, if we need more discussion\n> on the design of aggregate support, we can omit aggregate support for the first\n> release of IVM.\n>\n> [2] https://www.postgresql.org/message-id/20191128140333.GA25947%40alvherre.pgsql\n> [3] https://www.postgresql.org/message-id/CAM-w4HOvDrL4ou6m%3D592zUiKGVzTcOpNj-d_cJqzL00fdsS5kg%40mail.gmail.com\n> [4] https://www.postgresql.org/message-id/20201016193034.9a4c44c79fc1eca7babe093e%40sraoss.co.jp\n>\n> ** Hidden columns\n>\n> In order to support DISTINCT or aggregates, our implementation uses hidden columns.\n>\n> Columns starting with \"__ivm_\" are hidden columns that doesn't appear when a\n> view is accessed by \"SELECT * FROM ....\". For this aim, parse_relation.c is\n> fixed. There was a proposal to enable hidden columns by adding a new flag to\n> pg_attribute [5], but this thread is no longer active, so we decided to check\n> the hidden column by its name [6].\n>\n> [5] https://www.postgresql.org/message-id/flat/CAEepm%3D3ZHh%3Dp0nEEnVbs1Dig_UShPzHUcMNAqvDQUgYgcDo-pA%40mail.gmail.com\n> [6] https://www.postgresql.org/message-id/20201016193034.9a4c44c79fc1eca7babe093e%40sraoss.co.jp\n>\n> ** Concurrent Transactions\n>\n> When the view definition has more than one table, we acquire an exclusive\n> lock before the view maintenance in order to avoid inconsistent results.\n> This behavior was explained in [7]. The lock was improved to use weaker lock\n> when the view has only one table based on a suggestion from Konstantin Knizhnik [8].\n> However, due to the implementation that uses ctid for identifying target tuples,\n> we still have to use an exclusive lock for DELETE and UPDATE.\n>\n> [7] https://www.postgresql.org/message-id/20200909092752.c91758a1bec3479668e82643%40sraoss.co.jp\n> [8] https://www.postgresql.org/message-id/5663f5f0-48af-686c-bf3c-62d279567e2a%40postgrespro.ru\n>\n> ** Automatic Index Creation\n>\n> When a view is created, a unique index is automatically created if\n> possible, that is, if the view definition query has a GROUP BY or\n> DISTINCT, or if the view contains all primary key attributes of\n> its base tables in the target list. It is necessary for efficient\n> view maintenance. This feature is based on a suggestion from\n> Konstantin Knizhnik [9].\n>\n> [9] https://www.postgresql.org/message-id/89729da8-9042-7ea0-95af-e415df6da14d%40postgrespro.ru\n>\n>\n> ** Trigger and Transition Tables\n>\n> We implemented IVM based on triggers. This is because we want to use\n> transition tables to extract changes on base tables. Also, there are\n> other constraint that are using triggers in its implementation, like\n> foreign references. However, if we can use transition table like feature\n> without relying triggers, we don't have to insist to use triggers and we\n> might implement IVM in the executor directly as similar as declarative\n> partitioning.\n>\n> ** Feature to be Supported in the First Release\n>\n> The current patch-set supports DISTINCT and aggregates for built-in count,\n> sum, avg, min and max. Do we need all these feature for the first IVM release?\n> Supporting DISTINCT and aggregates needs discussion on hidden columns, and\n> for supporting min/max we need to discuss on re-calculation method. 
Before\n> handling such relatively advanced feature, maybe, should we focus to design\n> and implement of the basic feature of IVM?\n>\n>\n> Any suggestion and discussion are welcomed!\n>\n> Regards,\n> Yugo Nagata\n>\n> --\n> Yugo NAGATA <[email protected]>\n>\n>\n\n\n> The followings are supported in view definition queries:\n> - SELECT ... FROM ... WHERE ..., joins (inner joins, self-joins)\n\n\n> Also, a view definition query cannot contain other views, materialized views,\n> foreign tables, partitioned tables, partitions, VALUES, non-immutable functions,\n> system columns, or expressions that contains aggregates.\n\nDoes this also apply to tableoid? but tableoid is a constant, so it\nshould be fine?\ncan following two queries apply to this feature.\nselect tableoid, unique1 from tenk1;\nselect 1 as constant, unique1 from tenk1;\n\nI didn't apply the patch.(will do later, for someone to test, it would\nbe a better idea to dump a whole file separately....).\n\n\n",
"msg_date": "Wed, 28 Jun 2023 00:01:02 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
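For readers following the design recap quoted above, here is a minimal end-to-end sketch of the proposed feature. The table, column, and view names are illustrative, the CREATE INCREMENTAL MATERIALIZED VIEW syntax is the tentative one from patch 0001, and the expected behaviour is only what the design text above describes (statement-level AFTER triggers maintaining the view); nothing here is verified against the patch itself.

    -- two base tables joined in the view definition (names are illustrative)
    CREATE TABLE r (rid int PRIMARY KEY, v text);
    CREATE TABLE s (sid int PRIMARY KEY, rid int REFERENCES r, w text);

    -- tentative syntax proposed in patch 0001
    CREATE INCREMENTAL MATERIALIZED VIEW immv_rs AS
        SELECT r.rid, s.sid, r.v, s.w
        FROM r JOIN s ON r.rid = s.rid;

    -- ordinary DML; the AFTER triggers created on r and s are expected to
    -- apply the delta to immv_rs automatically, so no REFRESH is needed
    INSERT INTO r VALUES (1, 'one');
    INSERT INTO s VALUES (10, 1, 'ten');
    SELECT * FROM immv_rs;

Because rid and sid (the base tables' primary keys) both appear in the target list, the design above says a unique index would be created on the view automatically.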
{
"msg_contents": "On Wed, 28 Jun 2023 00:01:02 +0800\njian he <[email protected]> wrote:\n\n> On Thu, Jun 1, 2023 at 2:47 AM Yugo NAGATA <[email protected]> wrote:\n> >\n> > On Thu, 1 Jun 2023 23:59:09 +0900\n> > Yugo NAGATA <[email protected]> wrote:\n> >\n> > > Hello hackers,\n> > >\n> > > Here's a rebased version of the patch-set adding Incremental View\n> > > Maintenance support for PostgreSQL. That was discussed in [1].\n> >\n> > > [1] https://www.postgresql.org/message-id/flat/20181227215726.4d166b4874f8983a641123f5%40sraoss.co.jp\n> >\n> > ---------------------------------------------------------------------------------------\n> > * Overview\n> >\n> > Incremental View Maintenance (IVM) is a way to make materialized views\n> > up-to-date by computing only incremental changes and applying them on\n> > views. IVM is more efficient than REFRESH MATERIALIZED VIEW when\n> > only small parts of the view are changed.\n> >\n> > ** Feature\n> >\n> > The attached patchset provides a feature that allows materialized views\n> > to be updated automatically and incrementally just after a underlying\n> > table is modified.\n> >\n> > You can create an incementally maintainable materialized view (IMMV)\n> > by using CREATE INCREMENTAL MATERIALIZED VIEW command.\n> >\n> > The followings are supported in view definition queries:\n> > - SELECT ... FROM ... WHERE ..., joins (inner joins, self-joins)\n> > - some built-in aggregate functions (count, sum, avg, min, max)\n> > - GROUP BY clause\n> > - DISTINCT clause\n> >\n> > Views can contain multiple tuples with the same content (duplicate tuples).\n> >\n> > ** Restriction\n> >\n> > The following are not supported in a view definition:\n> > - Outer joins\n> > - Aggregates otehr than above, window functions, HAVING\n> > - Sub-queries, CTEs\n> > - Set operations (UNION, INTERSECT, EXCEPT)\n> > - DISTINCT ON, ORDER BY, LIMIT, OFFSET\n> >\n> > Also, a view definition query cannot contain other views, materialized views,\n> > foreign tables, partitioned tables, partitions, VALUES, non-immutable functions,\n> > system columns, or expressions that contains aggregates.\n> >\n> > ---------------------------------------------------------------------------------------\n> > * Design\n> >\n> > An IMMV is maintained using statement-level AFTER triggers.\n> > When an IMMV is created, triggers are automatically created on all base\n> > tables contained in the view definition query.\n> >\n> > When a table is modified, changes that occurred in the table are extracted\n> > as transition tables in the AFTER triggers. Then, changes that will occur in\n> > the view are calculated by a rewritten view dequery in which the modified table\n> > is replaced with the transition table.\n> >\n> > For example, if the view is defined as \"SELECT * FROM R, S\", and tuples inserted\n> > into R are stored in a transiton table dR, the tuples that will be inserted into\n> > the view are calculated as the result of \"SELECT * FROM dR, S\".\n> >\n> > ** Multiple Tables Modification\n> >\n> > Multiple tables can be modified in a statement when using triggers, foreign key\n> > constraint, or modifying CTEs. 
When multiple tables are modified, we need\n> > the state of tables before the modification.\n> >\n> > For example, when some tuples, dR and dS, are inserted into R and S respectively,\n> > the tuples that will be inserted into the view are calculated by the following\n> > two queries:\n> >\n> > \"SELECT * FROM dR, S_pre\"\n> > \"SELECT * FROM R, dS\"\n> >\n> > where S_pre is the table before the modification, R is the current state of\n> > table, that is, after the modification. This pre-update states of table\n> > is calculated by filtering inserted tuples and appending deleted tuples.\n> > The subquery that represents pre-update state is generated in get_prestate_rte().\n> > Specifically, the insterted tuples are filtered by calling IVM_visible_in_prestate()\n> > in WHERE clause. This function checks the visibility of tuples by using\n> > the snapshot taken before table modification. The deleted tuples are contained\n> > in the old transition table, and this table is appended using UNION ALL.\n> >\n> > Transition tables for each modification are collected in each AFTER trigger\n> > function call. Then, the view maintenance is performed in the last call of\n> > the trigger.\n> >\n> > In the original PostgreSQL, tuplestores of transition tables are freed at the\n> > end of each nested query. However, their lifespan needs to be prolonged to\n> > the end of the out-most query in order to maintain the view in the last AFTER\n> > trigger. For this purpose, SetTransitionTablePreserved is added in trigger.c.\n> >\n> > ** Duplicate Tulpes\n> >\n> > When calculating changes that will occur in the view (= delta tables),\n> > multiplicity of tuples are calculated by using count(*).\n> >\n> > When deleting tuples from the view, tuples to be deleted are identified by\n> > joining the delta table with the view, and tuples are deleted as many as\n> > specified multiplicity by numbered using row_number() function.\n> > This is implemented in apply_old_delta().\n> >\n> > When inserting tuples into the view, each tuple is duplicated to the\n> > specified multiplicity using generate_series() function. This is implemented\n> > in apply_new_delta().\n> >\n> > ** DISTINCT clause\n> >\n> > When DISTINCT is used, the view has a hidden column __ivm_count__ that\n> > stores multiplicity for tuples. When tuples are deleted from or inserted into\n> > the view, the values of __ivm_count__ column is decreased or increased as many\n> > as specified multiplicity. Eventually, when the values becomes zero, the\n> > corresponding tuple is deleted from the view. This is implemented in\n> > apply_old_delta_with_count() and apply_new_delta_with_count().\n> >\n> > ** Aggregates\n> >\n> > Built-in count sum, avg, min, and max are supported. Whether a given\n> > aggregate function can be used or not is checked by using its OID in\n> > check_aggregate_supports_ivm().\n> >\n> > When creating a materialized view containing aggregates, in addition\n> > to __ivm_count__, more than one hidden columns for each aggregate are\n> > added to the target list. For example, columns for storing sum(x),\n> > count(x) are added if we have avg(x). When the view is maintained,\n> > aggregated values are updated using these hidden columns, also hidden\n> > columns are updated at the same time.\n> >\n> > The maintenance of aggregated view is performed in\n> > apply_old_delta_with_count() and apply_new_delta_with_count(). 
The SET\n> > clauses for updating columns are generated by append_set_clause_*().\n> >\n> > If the view has min(x) or max(x) and the minimum or maximal value is\n> > deleted from a table, we need to update the value to the new min/max\n> > recalculated from the tables rather than incremental computation. This\n> > is performed in recalc_and_set_values().\n> >\n> > ---------------------------------------------------------------------------------------\n> > * Details of the patch-set (v28)\n> >\n> > > The patch-set consists of the following eleven patches.\n> >\n> > In the previous version, the number of patches were nine.\n> > In the latest patch-set, the patches are divided more finely\n> > aiming to make the review easier.\n> >\n> > > - 0001: Add a syntax to create Incrementally Maintainable Materialized Views\n> >\n> > The prposed syntax to create an incrementally maintainable materialized\n> > view (IMMV) is;\n> >\n> > CREATE INCREMENTAL MATERIALIZED VIEW AS SELECT .....;\n> >\n> > However, this syntax is tentative, so any suggestions are welcomed.\n> >\n> > > - 0002: Add relisivm column to pg_class system catalog\n> >\n> > We add a new field in pg_class to indicate a relation is IMMV.\n> > Another alternative is to add a new catalog for managing materialized\n> > views including IMMV, but I am not sure if we want this.\n> >\n> > > - 0003: Allow to prolong life span of transition tables until transaction end\n> >\n> > This patch fixes the trigger system to allow to prolong lifespan of\n> > tuple stores for transition tables until the transaction end. We need\n> > this because multiple transition tables have to be preserved until the\n> > end of the out-most query when multiple tables are modified by nested\n> > triggers. (as explained above in Design - Multiple Tables Modification)\n> >\n> > If we don't want to change the trigger system in such way, the alternative\n> > is to copy the contents of transition tables to other tuplestores, although\n> > it needs more time and memory.\n> >\n> > > - 0004: Add Incremental View Maintenance support to pg_dump\n> >\n> > This patch enables pg_dump to output IMMV using the new syntax.\n> >\n> > > - 0005: Add Incremental View Maintenance support to psql\n> >\n> > This patch implements tab-completion for the new syntax and adds\n> > information of IMMV to \\d meta-command results.\n> >\n> > > - 0006: Add Incremental View Maintenance support\n> >\n> > This patch implements the basic IVM feature.\n> > DISTINCT and aggregate are not supported here.\n> >\n> > When an IMMV is created, the view query is checked, and if any\n> > non-supported feature is used, it raises an error. If it is ok,\n> > triggers are created on base tables and an unique index is\n> > created on the view if possible.\n> >\n> > In BEFORE trigger, an entry is created for each IMMV and the number\n> > of trigger firing is counted. Also, the snapshot just before the\n> > table modification is stored.\n> >\n> > In AFTER triggers, each transition tables are preserved. The number\n> > of trigger firing is counted also here, and when the firing number of\n> > BEFORE and AFTER trigger reach the same, it is deemed the final AFTER\n> > trigger call.\n> >\n> > In the final AFTER trigger, the IMMV is maintained. Rewritten view\n> > query is executed to generate delta tables, and deltas are applied\n> > to the view. If multiple tables are modified simultaneously, this\n> > process is iterated for each modified table. 
Tables before processed\n> > are represented in \"pre-update-state\", processed tables are\n> > \"post-update-state\" in the rewritten query.\n> >\n> > > - 0007: Add DISTINCT support for IVM\n> >\n> > This patch adds DISTINCT clause support.\n> >\n> > When an IMMV including DISTINCT is created, a hidden column\n> > \"__ivm_count__\" is added to the target list. This column has the\n> > number of duplicity of the same tuples. The duplicity is calculated\n> > by adding \"count(*)\" and GROUP BY to the view query.\n> >\n> > When an IMMV is maintained, the duplicity in __ivm_count__ is updated,\n> > and a tuples whose duplicity becomes zero can be deleted from the view.\n> > This logic is implemented by SQL in apply_old_delta_with_count and\n> > apply_new_delta_with_count.\n> >\n> > Columns starting with \"__ivm_\" are deemed hidden columns that doesn't\n> > appear when a view is accessed by \"SELECT * FROM ....\". This is\n> > implemented by fixing parse_relation.c.\n> >\n> > > - 0008: Add aggregates support in IVM\n> >\n> > This patch provides codes for aggregates support, specifically\n> > for builtin count, sum, and avg.\n> >\n> > When an IMMV containing an aggregate is created, it is checked if this\n> > aggregate function is supported, and if it is ok, some hidden columns\n> > are added to the target list.\n> >\n> > When the IMMV is maintained, the aggregated value is updated as well as\n> > related hidden columns. The way of update depends the type of aggregate\n> > functions, and SET clause string is generated for each aggregate.\n> >\n> > > - 0009: Add support for min/max aggregates for IVM\n> >\n> > This patch adds min/max aggregates support.\n> >\n> > This is separated from #0008 because min/max needs more complicated\n> > work than count, sum, and avg.\n> >\n> > If the view has min(x) or max(x) and the minimum or maximal value is\n> > deleted from a table, we need to update the value to the new min/max\n> > recalculated from the tables rather than incremental computation.\n> > This is performed in recalc_and_set_values().\n> >\n> > TIDs and keys of tuples that need re-calculation are returned as a\n> > result of the query that deleted min/max values from the view using\n> > RETURNING clause. The plan to recalculate and set the new min/max value\n> > are stored and reused.\n> >\n> > > - 0010: regression tests\n> >\n> > This patch provides regression tests for IVM.\n> >\n> > > - 0011: documentation\n> >\n> > This patch provides documantation for IVM.\n> >\n> > ---------------------------------------------------------------------------------------\n> > * Changes from the Previous Version (v27)\n> >\n> > - Allow TRUNCATE on base tables\n> >\n> > When a base table is truncated, the view content will be empty if the\n> > view definition query does not contain an aggregate without a GROUP clause.\n> > Therefore, such views can be truncated.\n> >\n> > Aggregate views without a GROUP clause always have one row. Therefore,\n> > if a base table is truncated, the view will not be empty and will contain\n> > a row with NULL value (or 0 for count()). 
So, in this case, we refresh the\n> > view instead of truncating it.\n> >\n> > - Fix bugs reported by huyajun [1]\n> >\n> > [1] https://www.postgresql.org/message-id/tencent_FCAF11BCA5003FD16BDDFDDA5D6A19587809%40qq.com\n> >\n> > ---------------------------------------------------------------------------------------\n> > * Discussion\n> >\n> > ** Aggregate support\n> >\n> > There were a few suggestions that general aggregate functions should be\n> > supported [2][3], which may be possible by extending pg_aggregate catalog.\n> > However, we decided to leave supporting general aggregates to the future work [4]\n> > because it would need substantial works and make the patch more complex and\n> > bigger.\n> >\n> > There has been no opposite opinion on this. However, if we need more discussion\n> > on the design of aggregate support, we can omit aggregate support for the first\n> > release of IVM.\n> >\n> > [2] https://www.postgresql.org/message-id/20191128140333.GA25947%40alvherre.pgsql\n> > [3] https://www.postgresql.org/message-id/CAM-w4HOvDrL4ou6m%3D592zUiKGVzTcOpNj-d_cJqzL00fdsS5kg%40mail.gmail.com\n> > [4] https://www.postgresql.org/message-id/20201016193034.9a4c44c79fc1eca7babe093e%40sraoss.co.jp\n> >\n> > ** Hidden columns\n> >\n> > In order to support DISTINCT or aggregates, our implementation uses hidden columns.\n> >\n> > Columns starting with \"__ivm_\" are hidden columns that doesn't appear when a\n> > view is accessed by \"SELECT * FROM ....\". For this aim, parse_relation.c is\n> > fixed. There was a proposal to enable hidden columns by adding a new flag to\n> > pg_attribute [5], but this thread is no longer active, so we decided to check\n> > the hidden column by its name [6].\n> >\n> > [5] https://www.postgresql.org/message-id/flat/CAEepm%3D3ZHh%3Dp0nEEnVbs1Dig_UShPzHUcMNAqvDQUgYgcDo-pA%40mail.gmail.com\n> > [6] https://www.postgresql.org/message-id/20201016193034.9a4c44c79fc1eca7babe093e%40sraoss.co.jp\n> >\n> > ** Concurrent Transactions\n> >\n> > When the view definition has more than one table, we acquire an exclusive\n> > lock before the view maintenance in order to avoid inconsistent results.\n> > This behavior was explained in [7]. The lock was improved to use weaker lock\n> > when the view has only one table based on a suggestion from Konstantin Knizhnik [8].\n> > However, due to the implementation that uses ctid for identifying target tuples,\n> > we still have to use an exclusive lock for DELETE and UPDATE.\n> >\n> > [7] https://www.postgresql.org/message-id/20200909092752.c91758a1bec3479668e82643%40sraoss.co.jp\n> > [8] https://www.postgresql.org/message-id/5663f5f0-48af-686c-bf3c-62d279567e2a%40postgrespro.ru\n> >\n> > ** Automatic Index Creation\n> >\n> > When a view is created, a unique index is automatically created if\n> > possible, that is, if the view definition query has a GROUP BY or\n> > DISTINCT, or if the view contains all primary key attributes of\n> > its base tables in the target list. It is necessary for efficient\n> > view maintenance. This feature is based on a suggestion from\n> > Konstantin Knizhnik [9].\n> >\n> > [9] https://www.postgresql.org/message-id/89729da8-9042-7ea0-95af-e415df6da14d%40postgrespro.ru\n> >\n> >\n> > ** Trigger and Transition Tables\n> >\n> > We implemented IVM based on triggers. This is because we want to use\n> > transition tables to extract changes on base tables. Also, there are\n> > other constraint that are using triggers in its implementation, like\n> > foreign references. 
However, if we can use transition table like feature\n> > without relying triggers, we don't have to insist to use triggers and we\n> > might implement IVM in the executor directly as similar as declarative\n> > partitioning.\n> >\n> > ** Feature to be Supported in the First Release\n> >\n> > The current patch-set supports DISTINCT and aggregates for built-in count,\n> > sum, avg, min and max. Do we need all these feature for the first IVM release?\n> > Supporting DISTINCT and aggregates needs discussion on hidden columns, and\n> > for supporting min/max we need to discuss on re-calculation method. Before\n> > handling such relatively advanced feature, maybe, should we focus to design\n> > and implement of the basic feature of IVM?\n> >\n> >\n> > Any suggestion and discussion are welcomed!\n> >\n> > Regards,\n> > Yugo Nagata\n> >\n> > --\n> > Yugo NAGATA <[email protected]>\n> >\n> >\n> \n> \n> > The followings are supported in view definition queries:\n> > - SELECT ... FROM ... WHERE ..., joins (inner joins, self-joins)\n> \n> \n> > Also, a view definition query cannot contain other views, materialized views,\n> > foreign tables, partitioned tables, partitions, VALUES, non-immutable functions,\n> > system columns, or expressions that contains aggregates.\n> \n> Does this also apply to tableoid? but tableoid is a constant, so it\n> should be fine?\n> can following two queries apply to this feature.\n> select tableoid, unique1 from tenk1;\n\nCurrently, this is not allowed because tableoid is a system column.\nAs you say, tableoid is a constant, so we can allow. Should we do this?\n\n> select 1 as constant, unique1 from tenk1;\n\nThis is allowed, of course.\n\n> I didn't apply the patch.(will do later, for someone to test, it would\n> be a better idea to dump a whole file separately....).\n\nThank you! I'm looking forward to your feedback.\n(I didn't attach a whole patch separately because I wouldn't like\ncfbot to be unhappy...)\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Wed, 28 Jun 2023 17:06:04 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
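To make the duplicate-tuple handling described in the quoted design concrete, below is a standalone illustration in plain SQL that runs on stock PostgreSQL, independent of the patch. It shows only the two underlying building blocks the design mentions: collapsing a delta into distinct tuples plus a count(*) multiplicity, and re-expanding a multiplicity into that many rows with generate_series(). The actual SQL generated by apply_old_delta()/apply_new_delta() is more involved (it also joins against the view and numbers rows with row_number() for deletes); this is just the core idea.

    -- a "delta" of inserted rows, possibly containing duplicates
    CREATE TEMP TABLE d_r (x int, y text);
    INSERT INTO d_r VALUES (1, 'a'), (1, 'a'), (2, 'b');

    -- 1) collapse the delta into distinct tuples plus a multiplicity
    SELECT x, y, count(*) AS cnt
    FROM d_r
    GROUP BY x, y
    ORDER BY x;
    -- (1, 'a', 2) and (2, 'b', 1)

    -- 2) re-expand each multiplicity into that many physical rows, the same
    --    trick used when applying the new delta to a view with duplicates
    SELECT x, y
    FROM (SELECT x, y, count(*) AS cnt FROM d_r GROUP BY x, y) AS delta,
         generate_series(1, delta.cnt);
    -- yields (1, 'a') twice and (2, 'b') once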
{
"msg_contents": "On Wed, Jun 28, 2023 at 4:06 PM Yugo NAGATA <[email protected]> wrote:\n>\n> On Wed, 28 Jun 2023 00:01:02 +0800\n> jian he <[email protected]> wrote:\n>\n> > On Thu, Jun 1, 2023 at 2:47 AM Yugo NAGATA <[email protected]> wrote:\n> > >\n> > > On Thu, 1 Jun 2023 23:59:09 +0900\n> > > Yugo NAGATA <[email protected]> wrote:\n> > >\n> > > > Hello hackers,\n> > > >\n> > > > Here's a rebased version of the patch-set adding Incremental View\n> > > > Maintenance support for PostgreSQL. That was discussed in [1].\n> > >\n> > > > [1] https://www.postgresql.org/message-id/flat/20181227215726.4d166b4874f8983a641123f5%40sraoss.co.jp\n> > >\n> > > ---------------------------------------------------------------------------------------\n> > > * Overview\n> > >\n> > > Incremental View Maintenance (IVM) is a way to make materialized views\n> > > up-to-date by computing only incremental changes and applying them on\n> > > views. IVM is more efficient than REFRESH MATERIALIZED VIEW when\n> > > only small parts of the view are changed.\n> > >\n> > > ** Feature\n> > >\n> > > The attached patchset provides a feature that allows materialized views\n> > > to be updated automatically and incrementally just after a underlying\n> > > table is modified.\n> > >\n> > > You can create an incementally maintainable materialized view (IMMV)\n> > > by using CREATE INCREMENTAL MATERIALIZED VIEW command.\n> > >\n> > > The followings are supported in view definition queries:\n> > > - SELECT ... FROM ... WHERE ..., joins (inner joins, self-joins)\n> > > - some built-in aggregate functions (count, sum, avg, min, max)\n> > > - GROUP BY clause\n> > > - DISTINCT clause\n> > >\n> > > Views can contain multiple tuples with the same content (duplicate tuples).\n> > >\n> > > ** Restriction\n> > >\n> > > The following are not supported in a view definition:\n> > > - Outer joins\n> > > - Aggregates otehr than above, window functions, HAVING\n> > > - Sub-queries, CTEs\n> > > - Set operations (UNION, INTERSECT, EXCEPT)\n> > > - DISTINCT ON, ORDER BY, LIMIT, OFFSET\n> > >\n> > > Also, a view definition query cannot contain other views, materialized views,\n> > > foreign tables, partitioned tables, partitions, VALUES, non-immutable functions,\n> > > system columns, or expressions that contains aggregates.\n> > >\n> > > ---------------------------------------------------------------------------------------\n> > > * Design\n> > >\n> > > An IMMV is maintained using statement-level AFTER triggers.\n> > > When an IMMV is created, triggers are automatically created on all base\n> > > tables contained in the view definition query.\n> > >\n> > > When a table is modified, changes that occurred in the table are extracted\n> > > as transition tables in the AFTER triggers. Then, changes that will occur in\n> > > the view are calculated by a rewritten view dequery in which the modified table\n> > > is replaced with the transition table.\n> > >\n> > > For example, if the view is defined as \"SELECT * FROM R, S\", and tuples inserted\n> > > into R are stored in a transiton table dR, the tuples that will be inserted into\n> > > the view are calculated as the result of \"SELECT * FROM dR, S\".\n> > >\n> > > ** Multiple Tables Modification\n> > >\n> > > Multiple tables can be modified in a statement when using triggers, foreign key\n> > > constraint, or modifying CTEs. 
When multiple tables are modified, we need\n> > > the state of tables before the modification.\n> > >\n> > > For example, when some tuples, dR and dS, are inserted into R and S respectively,\n> > > the tuples that will be inserted into the view are calculated by the following\n> > > two queries:\n> > >\n> > > \"SELECT * FROM dR, S_pre\"\n> > > \"SELECT * FROM R, dS\"\n> > >\n> > > where S_pre is the table before the modification, R is the current state of\n> > > table, that is, after the modification. This pre-update states of table\n> > > is calculated by filtering inserted tuples and appending deleted tuples.\n> > > The subquery that represents pre-update state is generated in get_prestate_rte().\n> > > Specifically, the insterted tuples are filtered by calling IVM_visible_in_prestate()\n> > > in WHERE clause. This function checks the visibility of tuples by using\n> > > the snapshot taken before table modification. The deleted tuples are contained\n> > > in the old transition table, and this table is appended using UNION ALL.\n> > >\n> > > Transition tables for each modification are collected in each AFTER trigger\n> > > function call. Then, the view maintenance is performed in the last call of\n> > > the trigger.\n> > >\n> > > In the original PostgreSQL, tuplestores of transition tables are freed at the\n> > > end of each nested query. However, their lifespan needs to be prolonged to\n> > > the end of the out-most query in order to maintain the view in the last AFTER\n> > > trigger. For this purpose, SetTransitionTablePreserved is added in trigger.c.\n> > >\n> > > ** Duplicate Tulpes\n> > >\n> > > When calculating changes that will occur in the view (= delta tables),\n> > > multiplicity of tuples are calculated by using count(*).\n> > >\n> > > When deleting tuples from the view, tuples to be deleted are identified by\n> > > joining the delta table with the view, and tuples are deleted as many as\n> > > specified multiplicity by numbered using row_number() function.\n> > > This is implemented in apply_old_delta().\n> > >\n> > > When inserting tuples into the view, each tuple is duplicated to the\n> > > specified multiplicity using generate_series() function. This is implemented\n> > > in apply_new_delta().\n> > >\n> > > ** DISTINCT clause\n> > >\n> > > When DISTINCT is used, the view has a hidden column __ivm_count__ that\n> > > stores multiplicity for tuples. When tuples are deleted from or inserted into\n> > > the view, the values of __ivm_count__ column is decreased or increased as many\n> > > as specified multiplicity. Eventually, when the values becomes zero, the\n> > > corresponding tuple is deleted from the view. This is implemented in\n> > > apply_old_delta_with_count() and apply_new_delta_with_count().\n> > >\n> > > ** Aggregates\n> > >\n> > > Built-in count sum, avg, min, and max are supported. Whether a given\n> > > aggregate function can be used or not is checked by using its OID in\n> > > check_aggregate_supports_ivm().\n> > >\n> > > When creating a materialized view containing aggregates, in addition\n> > > to __ivm_count__, more than one hidden columns for each aggregate are\n> > > added to the target list. For example, columns for storing sum(x),\n> > > count(x) are added if we have avg(x). When the view is maintained,\n> > > aggregated values are updated using these hidden columns, also hidden\n> > > columns are updated at the same time.\n> > >\n> > > The maintenance of aggregated view is performed in\n> > > apply_old_delta_with_count() and apply_new_delta_with_count(). 
The SET\n> > > clauses for updating columns are generated by append_set_clause_*().\n> > >\n> > > If the view has min(x) or max(x) and the minimum or maximal value is\n> > > deleted from a table, we need to update the value to the new min/max\n> > > recalculated from the tables rather than incremental computation. This\n> > > is performed in recalc_and_set_values().\n> > >\n> > > ---------------------------------------------------------------------------------------\n> > > * Details of the patch-set (v28)\n> > >\n> > > > The patch-set consists of the following eleven patches.\n> > >\n> > > In the previous version, the number of patches were nine.\n> > > In the latest patch-set, the patches are divided more finely\n> > > aiming to make the review easier.\n> > >\n> > > > - 0001: Add a syntax to create Incrementally Maintainable Materialized Views\n> > >\n> > > The prposed syntax to create an incrementally maintainable materialized\n> > > view (IMMV) is;\n> > >\n> > > CREATE INCREMENTAL MATERIALIZED VIEW AS SELECT .....;\n> > >\n> > > However, this syntax is tentative, so any suggestions are welcomed.\n> > >\n> > > > - 0002: Add relisivm column to pg_class system catalog\n> > >\n> > > We add a new field in pg_class to indicate a relation is IMMV.\n> > > Another alternative is to add a new catalog for managing materialized\n> > > views including IMMV, but I am not sure if we want this.\n> > >\n> > > > - 0003: Allow to prolong life span of transition tables until transaction end\n> > >\n> > > This patch fixes the trigger system to allow to prolong lifespan of\n> > > tuple stores for transition tables until the transaction end. We need\n> > > this because multiple transition tables have to be preserved until the\n> > > end of the out-most query when multiple tables are modified by nested\n> > > triggers. (as explained above in Design - Multiple Tables Modification)\n> > >\n> > > If we don't want to change the trigger system in such way, the alternative\n> > > is to copy the contents of transition tables to other tuplestores, although\n> > > it needs more time and memory.\n> > >\n> > > > - 0004: Add Incremental View Maintenance support to pg_dump\n> > >\n> > > This patch enables pg_dump to output IMMV using the new syntax.\n> > >\n> > > > - 0005: Add Incremental View Maintenance support to psql\n> > >\n> > > This patch implements tab-completion for the new syntax and adds\n> > > information of IMMV to \\d meta-command results.\n> > >\n> > > > - 0006: Add Incremental View Maintenance support\n> > >\n> > > This patch implements the basic IVM feature.\n> > > DISTINCT and aggregate are not supported here.\n> > >\n> > > When an IMMV is created, the view query is checked, and if any\n> > > non-supported feature is used, it raises an error. If it is ok,\n> > > triggers are created on base tables and an unique index is\n> > > created on the view if possible.\n> > >\n> > > In BEFORE trigger, an entry is created for each IMMV and the number\n> > > of trigger firing is counted. Also, the snapshot just before the\n> > > table modification is stored.\n> > >\n> > > In AFTER triggers, each transition tables are preserved. The number\n> > > of trigger firing is counted also here, and when the firing number of\n> > > BEFORE and AFTER trigger reach the same, it is deemed the final AFTER\n> > > trigger call.\n> > >\n> > > In the final AFTER trigger, the IMMV is maintained. Rewritten view\n> > > query is executed to generate delta tables, and deltas are applied\n> > > to the view. 
If multiple tables are modified simultaneously, this\n> > > process is iterated for each modified table. Tables before processed\n> > > are represented in \"pre-update-state\", processed tables are\n> > > \"post-update-state\" in the rewritten query.\n> > >\n> > > > - 0007: Add DISTINCT support for IVM\n> > >\n> > > This patch adds DISTINCT clause support.\n> > >\n> > > When an IMMV including DISTINCT is created, a hidden column\n> > > \"__ivm_count__\" is added to the target list. This column has the\n> > > number of duplicity of the same tuples. The duplicity is calculated\n> > > by adding \"count(*)\" and GROUP BY to the view query.\n> > >\n> > > When an IMMV is maintained, the duplicity in __ivm_count__ is updated,\n> > > and a tuples whose duplicity becomes zero can be deleted from the view.\n> > > This logic is implemented by SQL in apply_old_delta_with_count and\n> > > apply_new_delta_with_count.\n> > >\n> > > Columns starting with \"__ivm_\" are deemed hidden columns that doesn't\n> > > appear when a view is accessed by \"SELECT * FROM ....\". This is\n> > > implemented by fixing parse_relation.c.\n> > >\n> > > > - 0008: Add aggregates support in IVM\n> > >\n> > > This patch provides codes for aggregates support, specifically\n> > > for builtin count, sum, and avg.\n> > >\n> > > When an IMMV containing an aggregate is created, it is checked if this\n> > > aggregate function is supported, and if it is ok, some hidden columns\n> > > are added to the target list.\n> > >\n> > > When the IMMV is maintained, the aggregated value is updated as well as\n> > > related hidden columns. The way of update depends the type of aggregate\n> > > functions, and SET clause string is generated for each aggregate.\n> > >\n> > > > - 0009: Add support for min/max aggregates for IVM\n> > >\n> > > This patch adds min/max aggregates support.\n> > >\n> > > This is separated from #0008 because min/max needs more complicated\n> > > work than count, sum, and avg.\n> > >\n> > > If the view has min(x) or max(x) and the minimum or maximal value is\n> > > deleted from a table, we need to update the value to the new min/max\n> > > recalculated from the tables rather than incremental computation.\n> > > This is performed in recalc_and_set_values().\n> > >\n> > > TIDs and keys of tuples that need re-calculation are returned as a\n> > > result of the query that deleted min/max values from the view using\n> > > RETURNING clause. The plan to recalculate and set the new min/max value\n> > > are stored and reused.\n> > >\n> > > > - 0010: regression tests\n> > >\n> > > This patch provides regression tests for IVM.\n> > >\n> > > > - 0011: documentation\n> > >\n> > > This patch provides documantation for IVM.\n> > >\n> > > ---------------------------------------------------------------------------------------\n> > > * Changes from the Previous Version (v27)\n> > >\n> > > - Allow TRUNCATE on base tables\n> > >\n> > > When a base table is truncated, the view content will be empty if the\n> > > view definition query does not contain an aggregate without a GROUP clause.\n> > > Therefore, such views can be truncated.\n> > >\n> > > Aggregate views without a GROUP clause always have one row. Therefore,\n> > > if a base table is truncated, the view will not be empty and will contain\n> > > a row with NULL value (or 0 for count()). 
So, in this case, we refresh the\n> > > view instead of truncating it.\n> > >\n> > > - Fix bugs reported by huyajun [1]\n> > >\n> > > [1] https://www.postgresql.org/message-id/tencent_FCAF11BCA5003FD16BDDFDDA5D6A19587809%40qq.com\n> > >\n> > > ---------------------------------------------------------------------------------------\n> > > * Discussion\n> > >\n> > > ** Aggregate support\n> > >\n> > > There were a few suggestions that general aggregate functions should be\n> > > supported [2][3], which may be possible by extending pg_aggregate catalog.\n> > > However, we decided to leave supporting general aggregates to the future work [4]\n> > > because it would need substantial works and make the patch more complex and\n> > > bigger.\n> > >\n> > > There has been no opposite opinion on this. However, if we need more discussion\n> > > on the design of aggregate support, we can omit aggregate support for the first\n> > > release of IVM.\n> > >\n> > > [2] https://www.postgresql.org/message-id/20191128140333.GA25947%40alvherre.pgsql\n> > > [3] https://www.postgresql.org/message-id/CAM-w4HOvDrL4ou6m%3D592zUiKGVzTcOpNj-d_cJqzL00fdsS5kg%40mail.gmail.com\n> > > [4] https://www.postgresql.org/message-id/20201016193034.9a4c44c79fc1eca7babe093e%40sraoss.co.jp\n> > >\n> > > ** Hidden columns\n> > >\n> > > In order to support DISTINCT or aggregates, our implementation uses hidden columns.\n> > >\n> > > Columns starting with \"__ivm_\" are hidden columns that doesn't appear when a\n> > > view is accessed by \"SELECT * FROM ....\". For this aim, parse_relation.c is\n> > > fixed. There was a proposal to enable hidden columns by adding a new flag to\n> > > pg_attribute [5], but this thread is no longer active, so we decided to check\n> > > the hidden column by its name [6].\n> > >\n> > > [5] https://www.postgresql.org/message-id/flat/CAEepm%3D3ZHh%3Dp0nEEnVbs1Dig_UShPzHUcMNAqvDQUgYgcDo-pA%40mail.gmail.com\n> > > [6] https://www.postgresql.org/message-id/20201016193034.9a4c44c79fc1eca7babe093e%40sraoss.co.jp\n> > >\n> > > ** Concurrent Transactions\n> > >\n> > > When the view definition has more than one table, we acquire an exclusive\n> > > lock before the view maintenance in order to avoid inconsistent results.\n> > > This behavior was explained in [7]. The lock was improved to use weaker lock\n> > > when the view has only one table based on a suggestion from Konstantin Knizhnik [8].\n> > > However, due to the implementation that uses ctid for identifying target tuples,\n> > > we still have to use an exclusive lock for DELETE and UPDATE.\n> > >\n> > > [7] https://www.postgresql.org/message-id/20200909092752.c91758a1bec3479668e82643%40sraoss.co.jp\n> > > [8] https://www.postgresql.org/message-id/5663f5f0-48af-686c-bf3c-62d279567e2a%40postgrespro.ru\n> > >\n> > > ** Automatic Index Creation\n> > >\n> > > When a view is created, a unique index is automatically created if\n> > > possible, that is, if the view definition query has a GROUP BY or\n> > > DISTINCT, or if the view contains all primary key attributes of\n> > > its base tables in the target list. It is necessary for efficient\n> > > view maintenance. This feature is based on a suggestion from\n> > > Konstantin Knizhnik [9].\n> > >\n> > > [9] https://www.postgresql.org/message-id/89729da8-9042-7ea0-95af-e415df6da14d%40postgrespro.ru\n> > >\n> > >\n> > > ** Trigger and Transition Tables\n> > >\n> > > We implemented IVM based on triggers. This is because we want to use\n> > > transition tables to extract changes on base tables. 
Also, there are\n> > > other constraint that are using triggers in its implementation, like\n> > > foreign references. However, if we can use transition table like feature\n> > > without relying triggers, we don't have to insist to use triggers and we\n> > > might implement IVM in the executor directly as similar as declarative\n> > > partitioning.\n> > >\n> > > ** Feature to be Supported in the First Release\n> > >\n> > > The current patch-set supports DISTINCT and aggregates for built-in count,\n> > > sum, avg, min and max. Do we need all these feature for the first IVM release?\n> > > Supporting DISTINCT and aggregates needs discussion on hidden columns, and\n> > > for supporting min/max we need to discuss on re-calculation method. Before\n> > > handling such relatively advanced feature, maybe, should we focus to design\n> > > and implement of the basic feature of IVM?\n> > >\n> > >\n> > > Any suggestion and discussion are welcomed!\n> > >\n> > > Regards,\n> > > Yugo Nagata\n> > >\n> > > --\n> > > Yugo NAGATA <[email protected]>\n> > >\n> > >\n> >\n> >\n> > > The followings are supported in view definition queries:\n> > > - SELECT ... FROM ... WHERE ..., joins (inner joins, self-joins)\n> >\n> >\n> > > Also, a view definition query cannot contain other views, materialized views,\n> > > foreign tables, partitioned tables, partitions, VALUES, non-immutable functions,\n> > > system columns, or expressions that contains aggregates.\n> >\n> > Does this also apply to tableoid? but tableoid is a constant, so it\n> > should be fine?\n> > can following two queries apply to this feature.\n> > select tableoid, unique1 from tenk1;\n>\n> Currently, this is not allowed because tableoid is a system column.\n> As you say, tableoid is a constant, so we can allow. Should we do this?\n>\n> > select 1 as constant, unique1 from tenk1;\n>\n> This is allowed, of course.\n>\n> > I didn't apply the patch.(will do later, for someone to test, it would\n> > be a better idea to dump a whole file separately....).\n>\n> Thank you! I'm looking forward to your feedback.\n> (I didn't attach a whole patch separately because I wouldn't like\n> cfbot to be unhappy...)\n>\n> Regards,\n> Yugo Nagata\n>\n> --\n> Yugo NAGATA <[email protected]>\n\nI played around first half of regress patch.\nthese all following queries fails.\n\nCREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS\n SELECT DISTINCT * , 1 as \"__ivm_count__\" FROM mv_base_a;\n\nCREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS\n SELECT DISTINCT * , 1 as \"__ivm_countblablabla\" FROM mv_base_a;\n\nCREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS\n SELECT DISTINCT * , 1 as \"__ivm_count\" FROM mv_base_a;\n\nCREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS\n SELECT DISTINCT * , 1 as \"__ivm_count_____\" FROM mv_base_a;\n\nCREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS\n SELECT DISTINCT * , 1 as \"__ivm_countblabla\" FROM mv_base_a;\n\nso the hidden column reserved pattern \"__ivm_count.*\"? that would be a lot....\n\nselect * from pg_matviews where matviewname = 'mv_ivm_1';\ndon't have relisivm option. it's reasonable to make it in view pg_matviews?\n\n\n",
"msg_date": "Thu, 29 Jun 2023 00:40:45 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
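Following up on the failures reported above: they look consistent with the reserved "__ivm_" prefix described for hidden columns in patch 0007, i.e. user-supplied column names starting with __ivm_ are apparently rejected, not just __ivm_count__ itself. The sketch below is an assumption-laden illustration rather than verified output: mv_base_a is the table from the thread's regression tests, the rejection is inferred from the reports above, and relisivm is the pg_class column added by patch 0002 (it is not exposed through pg_matviews, which matches the last question above).

    -- expected to be rejected: the user column name collides with the
    -- reserved "__ivm_" hidden-column prefix
    CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS
        SELECT DISTINCT *, 1 AS "__ivm_count__" FROM mv_base_a;

    -- with patch 0002 applied, IMMV status lives in pg_class.relisivm;
    -- pg_matviews does not show it, so query pg_class directly
    SELECT relname, relisivm
    FROM pg_class
    WHERE relkind = 'm';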
{
"msg_contents": "On Thu, Jun 29, 2023 at 12:40 AM jian he <[email protected]> wrote:\n>\n> On Wed, Jun 28, 2023 at 4:06 PM Yugo NAGATA <[email protected]> wrote:\n> >\n> > On Wed, 28 Jun 2023 00:01:02 +0800\n> > jian he <[email protected]> wrote:\n> >\n> > > On Thu, Jun 1, 2023 at 2:47 AM Yugo NAGATA <[email protected]> wrote:\n> > > >\n> > > > On Thu, 1 Jun 2023 23:59:09 +0900\n> > > > Yugo NAGATA <[email protected]> wrote:\n> > > >\n> > > > > Hello hackers,\n> > > > >\n> > > > > Here's a rebased version of the patch-set adding Incremental View\n> > > > > Maintenance support for PostgreSQL. That was discussed in [1].\n> > > >\n> > > > > [1] https://www.postgresql.org/message-id/flat/20181227215726.4d166b4874f8983a641123f5%40sraoss.co.jp\n> > > >\n> > > > ---------------------------------------------------------------------------------------\n> > > > * Overview\n> > > >\n> > > > Incremental View Maintenance (IVM) is a way to make materialized views\n> > > > up-to-date by computing only incremental changes and applying them on\n> > > > views. IVM is more efficient than REFRESH MATERIALIZED VIEW when\n> > > > only small parts of the view are changed.\n> > > >\n> > > > ** Feature\n> > > >\n> > > > The attached patchset provides a feature that allows materialized views\n> > > > to be updated automatically and incrementally just after a underlying\n> > > > table is modified.\n> > > >\n> > > > You can create an incementally maintainable materialized view (IMMV)\n> > > > by using CREATE INCREMENTAL MATERIALIZED VIEW command.\n> > > >\n> > > > The followings are supported in view definition queries:\n> > > > - SELECT ... FROM ... WHERE ..., joins (inner joins, self-joins)\n> > > > - some built-in aggregate functions (count, sum, avg, min, max)\n> > > > - GROUP BY clause\n> > > > - DISTINCT clause\n> > > >\n> > > > Views can contain multiple tuples with the same content (duplicate tuples).\n> > > >\n> > > > ** Restriction\n> > > >\n> > > > The following are not supported in a view definition:\n> > > > - Outer joins\n> > > > - Aggregates otehr than above, window functions, HAVING\n> > > > - Sub-queries, CTEs\n> > > > - Set operations (UNION, INTERSECT, EXCEPT)\n> > > > - DISTINCT ON, ORDER BY, LIMIT, OFFSET\n> > > >\n> > > > Also, a view definition query cannot contain other views, materialized views,\n> > > > foreign tables, partitioned tables, partitions, VALUES, non-immutable functions,\n> > > > system columns, or expressions that contains aggregates.\n> > > >\n> > > > ---------------------------------------------------------------------------------------\n> > > > * Design\n> > > >\n> > > > An IMMV is maintained using statement-level AFTER triggers.\n> > > > When an IMMV is created, triggers are automatically created on all base\n> > > > tables contained in the view definition query.\n> > > >\n> > > > When a table is modified, changes that occurred in the table are extracted\n> > > > as transition tables in the AFTER triggers. 
Then, changes that will occur in\n> > > > the view are calculated by a rewritten view dequery in which the modified table\n> > > > is replaced with the transition table.\n> > > >\n> > > > For example, if the view is defined as \"SELECT * FROM R, S\", and tuples inserted\n> > > > into R are stored in a transiton table dR, the tuples that will be inserted into\n> > > > the view are calculated as the result of \"SELECT * FROM dR, S\".\n> > > >\n> > > > ** Multiple Tables Modification\n> > > >\n> > > > Multiple tables can be modified in a statement when using triggers, foreign key\n> > > > constraint, or modifying CTEs. When multiple tables are modified, we need\n> > > > the state of tables before the modification.\n> > > >\n> > > > For example, when some tuples, dR and dS, are inserted into R and S respectively,\n> > > > the tuples that will be inserted into the view are calculated by the following\n> > > > two queries:\n> > > >\n> > > > \"SELECT * FROM dR, S_pre\"\n> > > > \"SELECT * FROM R, dS\"\n> > > >\n> > > > where S_pre is the table before the modification, R is the current state of\n> > > > table, that is, after the modification. This pre-update states of table\n> > > > is calculated by filtering inserted tuples and appending deleted tuples.\n> > > > The subquery that represents pre-update state is generated in get_prestate_rte().\n> > > > Specifically, the insterted tuples are filtered by calling IVM_visible_in_prestate()\n> > > > in WHERE clause. This function checks the visibility of tuples by using\n> > > > the snapshot taken before table modification. The deleted tuples are contained\n> > > > in the old transition table, and this table is appended using UNION ALL.\n> > > >\n> > > > Transition tables for each modification are collected in each AFTER trigger\n> > > > function call. Then, the view maintenance is performed in the last call of\n> > > > the trigger.\n> > > >\n> > > > In the original PostgreSQL, tuplestores of transition tables are freed at the\n> > > > end of each nested query. However, their lifespan needs to be prolonged to\n> > > > the end of the out-most query in order to maintain the view in the last AFTER\n> > > > trigger. For this purpose, SetTransitionTablePreserved is added in trigger.c.\n> > > >\n> > > > ** Duplicate Tulpes\n> > > >\n> > > > When calculating changes that will occur in the view (= delta tables),\n> > > > multiplicity of tuples are calculated by using count(*).\n> > > >\n> > > > When deleting tuples from the view, tuples to be deleted are identified by\n> > > > joining the delta table with the view, and tuples are deleted as many as\n> > > > specified multiplicity by numbered using row_number() function.\n> > > > This is implemented in apply_old_delta().\n> > > >\n> > > > When inserting tuples into the view, each tuple is duplicated to the\n> > > > specified multiplicity using generate_series() function. This is implemented\n> > > > in apply_new_delta().\n> > > >\n> > > > ** DISTINCT clause\n> > > >\n> > > > When DISTINCT is used, the view has a hidden column __ivm_count__ that\n> > > > stores multiplicity for tuples. When tuples are deleted from or inserted into\n> > > > the view, the values of __ivm_count__ column is decreased or increased as many\n> > > > as specified multiplicity. Eventually, when the values becomes zero, the\n> > > > corresponding tuple is deleted from the view. 
This is implemented in\n> > > > apply_old_delta_with_count() and apply_new_delta_with_count().\n> > > >\n> > > > ** Aggregates\n> > > >\n> > > > Built-in count sum, avg, min, and max are supported. Whether a given\n> > > > aggregate function can be used or not is checked by using its OID in\n> > > > check_aggregate_supports_ivm().\n> > > >\n> > > > When creating a materialized view containing aggregates, in addition\n> > > > to __ivm_count__, more than one hidden columns for each aggregate are\n> > > > added to the target list. For example, columns for storing sum(x),\n> > > > count(x) are added if we have avg(x). When the view is maintained,\n> > > > aggregated values are updated using these hidden columns, also hidden\n> > > > columns are updated at the same time.\n> > > >\n> > > > The maintenance of aggregated view is performed in\n> > > > apply_old_delta_with_count() and apply_new_delta_with_count(). The SET\n> > > > clauses for updating columns are generated by append_set_clause_*().\n> > > >\n> > > > If the view has min(x) or max(x) and the minimum or maximal value is\n> > > > deleted from a table, we need to update the value to the new min/max\n> > > > recalculated from the tables rather than incremental computation. This\n> > > > is performed in recalc_and_set_values().\n> > > >\n> > > > ---------------------------------------------------------------------------------------\n> > > > * Details of the patch-set (v28)\n> > > >\n> > > > > The patch-set consists of the following eleven patches.\n> > > >\n> > > > In the previous version, the number of patches were nine.\n> > > > In the latest patch-set, the patches are divided more finely\n> > > > aiming to make the review easier.\n> > > >\n> > > > > - 0001: Add a syntax to create Incrementally Maintainable Materialized Views\n> > > >\n> > > > The prposed syntax to create an incrementally maintainable materialized\n> > > > view (IMMV) is;\n> > > >\n> > > > CREATE INCREMENTAL MATERIALIZED VIEW AS SELECT .....;\n> > > >\n> > > > However, this syntax is tentative, so any suggestions are welcomed.\n> > > >\n> > > > > - 0002: Add relisivm column to pg_class system catalog\n> > > >\n> > > > We add a new field in pg_class to indicate a relation is IMMV.\n> > > > Another alternative is to add a new catalog for managing materialized\n> > > > views including IMMV, but I am not sure if we want this.\n> > > >\n> > > > > - 0003: Allow to prolong life span of transition tables until transaction end\n> > > >\n> > > > This patch fixes the trigger system to allow to prolong lifespan of\n> > > > tuple stores for transition tables until the transaction end. We need\n> > > > this because multiple transition tables have to be preserved until the\n> > > > end of the out-most query when multiple tables are modified by nested\n> > > > triggers. 
(as explained above in Design - Multiple Tables Modification)\n> > > >\n> > > > If we don't want to change the trigger system in such way, the alternative\n> > > > is to copy the contents of transition tables to other tuplestores, although\n> > > > it needs more time and memory.\n> > > >\n> > > > > - 0004: Add Incremental View Maintenance support to pg_dump\n> > > >\n> > > > This patch enables pg_dump to output IMMV using the new syntax.\n> > > >\n> > > > > - 0005: Add Incremental View Maintenance support to psql\n> > > >\n> > > > This patch implements tab-completion for the new syntax and adds\n> > > > information of IMMV to \\d meta-command results.\n> > > >\n> > > > > - 0006: Add Incremental View Maintenance support\n> > > >\n> > > > This patch implements the basic IVM feature.\n> > > > DISTINCT and aggregate are not supported here.\n> > > >\n> > > > When an IMMV is created, the view query is checked, and if any\n> > > > non-supported feature is used, it raises an error. If it is ok,\n> > > > triggers are created on base tables and an unique index is\n> > > > created on the view if possible.\n> > > >\n> > > > In BEFORE trigger, an entry is created for each IMMV and the number\n> > > > of trigger firing is counted. Also, the snapshot just before the\n> > > > table modification is stored.\n> > > >\n> > > > In AFTER triggers, each transition tables are preserved. The number\n> > > > of trigger firing is counted also here, and when the firing number of\n> > > > BEFORE and AFTER trigger reach the same, it is deemed the final AFTER\n> > > > trigger call.\n> > > >\n> > > > In the final AFTER trigger, the IMMV is maintained. Rewritten view\n> > > > query is executed to generate delta tables, and deltas are applied\n> > > > to the view. If multiple tables are modified simultaneously, this\n> > > > process is iterated for each modified table. Tables before processed\n> > > > are represented in \"pre-update-state\", processed tables are\n> > > > \"post-update-state\" in the rewritten query.\n> > > >\n> > > > > - 0007: Add DISTINCT support for IVM\n> > > >\n> > > > This patch adds DISTINCT clause support.\n> > > >\n> > > > When an IMMV including DISTINCT is created, a hidden column\n> > > > \"__ivm_count__\" is added to the target list. This column has the\n> > > > number of duplicity of the same tuples. The duplicity is calculated\n> > > > by adding \"count(*)\" and GROUP BY to the view query.\n> > > >\n> > > > When an IMMV is maintained, the duplicity in __ivm_count__ is updated,\n> > > > and a tuples whose duplicity becomes zero can be deleted from the view.\n> > > > This logic is implemented by SQL in apply_old_delta_with_count and\n> > > > apply_new_delta_with_count.\n> > > >\n> > > > Columns starting with \"__ivm_\" are deemed hidden columns that doesn't\n> > > > appear when a view is accessed by \"SELECT * FROM ....\". This is\n> > > > implemented by fixing parse_relation.c.\n> > > >\n> > > > > - 0008: Add aggregates support in IVM\n> > > >\n> > > > This patch provides codes for aggregates support, specifically\n> > > > for builtin count, sum, and avg.\n> > > >\n> > > > When an IMMV containing an aggregate is created, it is checked if this\n> > > > aggregate function is supported, and if it is ok, some hidden columns\n> > > > are added to the target list.\n> > > >\n> > > > When the IMMV is maintained, the aggregated value is updated as well as\n> > > > related hidden columns. 
The way of update depends the type of aggregate\n> > > > functions, and SET clause string is generated for each aggregate.\n> > > >\n> > > > > - 0009: Add support for min/max aggregates for IVM\n> > > >\n> > > > This patch adds min/max aggregates support.\n> > > >\n> > > > This is separated from #0008 because min/max needs more complicated\n> > > > work than count, sum, and avg.\n> > > >\n> > > > If the view has min(x) or max(x) and the minimum or maximal value is\n> > > > deleted from a table, we need to update the value to the new min/max\n> > > > recalculated from the tables rather than incremental computation.\n> > > > This is performed in recalc_and_set_values().\n> > > >\n> > > > TIDs and keys of tuples that need re-calculation are returned as a\n> > > > result of the query that deleted min/max values from the view using\n> > > > RETURNING clause. The plan to recalculate and set the new min/max value\n> > > > are stored and reused.\n> > > >\n> > > > > - 0010: regression tests\n> > > >\n> > > > This patch provides regression tests for IVM.\n> > > >\n> > > > > - 0011: documentation\n> > > >\n> > > > This patch provides documantation for IVM.\n> > > >\n> > > > ---------------------------------------------------------------------------------------\n> > > > * Changes from the Previous Version (v27)\n> > > >\n> > > > - Allow TRUNCATE on base tables\n> > > >\n> > > > When a base table is truncated, the view content will be empty if the\n> > > > view definition query does not contain an aggregate without a GROUP clause.\n> > > > Therefore, such views can be truncated.\n> > > >\n> > > > Aggregate views without a GROUP clause always have one row. Therefore,\n> > > > if a base table is truncated, the view will not be empty and will contain\n> > > > a row with NULL value (or 0 for count()). So, in this case, we refresh the\n> > > > view instead of truncating it.\n> > > >\n> > > > - Fix bugs reported by huyajun [1]\n> > > >\n> > > > [1] https://www.postgresql.org/message-id/tencent_FCAF11BCA5003FD16BDDFDDA5D6A19587809%40qq.com\n> > > >\n> > > > ---------------------------------------------------------------------------------------\n> > > > * Discussion\n> > > >\n> > > > ** Aggregate support\n> > > >\n> > > > There were a few suggestions that general aggregate functions should be\n> > > > supported [2][3], which may be possible by extending pg_aggregate catalog.\n> > > > However, we decided to leave supporting general aggregates to the future work [4]\n> > > > because it would need substantial works and make the patch more complex and\n> > > > bigger.\n> > > >\n> > > > There has been no opposite opinion on this. However, if we need more discussion\n> > > > on the design of aggregate support, we can omit aggregate support for the first\n> > > > release of IVM.\n> > > >\n> > > > [2] https://www.postgresql.org/message-id/20191128140333.GA25947%40alvherre.pgsql\n> > > > [3] https://www.postgresql.org/message-id/CAM-w4HOvDrL4ou6m%3D592zUiKGVzTcOpNj-d_cJqzL00fdsS5kg%40mail.gmail.com\n> > > > [4] https://www.postgresql.org/message-id/20201016193034.9a4c44c79fc1eca7babe093e%40sraoss.co.jp\n> > > >\n> > > > ** Hidden columns\n> > > >\n> > > > In order to support DISTINCT or aggregates, our implementation uses hidden columns.\n> > > >\n> > > > Columns starting with \"__ivm_\" are hidden columns that doesn't appear when a\n> > > > view is accessed by \"SELECT * FROM ....\". For this aim, parse_relation.c is\n> > > > fixed. 
There was a proposal to enable hidden columns by adding a new flag to\n> > > > pg_attribute [5], but this thread is no longer active, so we decided to check\n> > > > the hidden column by its name [6].\n> > > >\n> > > > [5] https://www.postgresql.org/message-id/flat/CAEepm%3D3ZHh%3Dp0nEEnVbs1Dig_UShPzHUcMNAqvDQUgYgcDo-pA%40mail.gmail.com\n> > > > [6] https://www.postgresql.org/message-id/20201016193034.9a4c44c79fc1eca7babe093e%40sraoss.co.jp\n> > > >\n> > > > ** Concurrent Transactions\n> > > >\n> > > > When the view definition has more than one table, we acquire an exclusive\n> > > > lock before the view maintenance in order to avoid inconsistent results.\n> > > > This behavior was explained in [7]. The lock was improved to use weaker lock\n> > > > when the view has only one table based on a suggestion from Konstantin Knizhnik [8].\n> > > > However, due to the implementation that uses ctid for identifying target tuples,\n> > > > we still have to use an exclusive lock for DELETE and UPDATE.\n> > > >\n> > > > [7] https://www.postgresql.org/message-id/20200909092752.c91758a1bec3479668e82643%40sraoss.co.jp\n> > > > [8] https://www.postgresql.org/message-id/5663f5f0-48af-686c-bf3c-62d279567e2a%40postgrespro.ru\n> > > >\n> > > > ** Automatic Index Creation\n> > > >\n> > > > When a view is created, a unique index is automatically created if\n> > > > possible, that is, if the view definition query has a GROUP BY or\n> > > > DISTINCT, or if the view contains all primary key attributes of\n> > > > its base tables in the target list. It is necessary for efficient\n> > > > view maintenance. This feature is based on a suggestion from\n> > > > Konstantin Knizhnik [9].\n> > > >\n> > > > [9] https://www.postgresql.org/message-id/89729da8-9042-7ea0-95af-e415df6da14d%40postgrespro.ru\n> > > >\n> > > >\n> > > > ** Trigger and Transition Tables\n> > > >\n> > > > We implemented IVM based on triggers. This is because we want to use\n> > > > transition tables to extract changes on base tables. Also, there are\n> > > > other constraint that are using triggers in its implementation, like\n> > > > foreign references. However, if we can use transition table like feature\n> > > > without relying triggers, we don't have to insist to use triggers and we\n> > > > might implement IVM in the executor directly as similar as declarative\n> > > > partitioning.\n> > > >\n> > > > ** Feature to be Supported in the First Release\n> > > >\n> > > > The current patch-set supports DISTINCT and aggregates for built-in count,\n> > > > sum, avg, min and max. Do we need all these feature for the first IVM release?\n> > > > Supporting DISTINCT and aggregates needs discussion on hidden columns, and\n> > > > for supporting min/max we need to discuss on re-calculation method. Before\n> > > > handling such relatively advanced feature, maybe, should we focus to design\n> > > > and implement of the basic feature of IVM?\n> > > >\n> > > >\n> > > > Any suggestion and discussion are welcomed!\n> > > >\n> > > > Regards,\n> > > > Yugo Nagata\n> > > >\n> > > > --\n> > > > Yugo NAGATA <[email protected]>\n> > > >\n> > > >\n> > >\n> > >\n> > > > The followings are supported in view definition queries:\n> > > > - SELECT ... FROM ... 
WHERE ..., joins (inner joins, self-joins)\n> > >\n> > >\n> > > > Also, a view definition query cannot contain other views, materialized views,\n> > > > foreign tables, partitioned tables, partitions, VALUES, non-immutable functions,\n> > > > system columns, or expressions that contains aggregates.\n> > >\n> > > Does this also apply to tableoid? but tableoid is a constant, so it\n> > > should be fine?\n> > > can following two queries apply to this feature.\n> > > select tableoid, unique1 from tenk1;\n> >\n> > Currently, this is not allowed because tableoid is a system column.\n> > As you say, tableoid is a constant, so we can allow. Should we do this?\n> >\n> > > select 1 as constant, unique1 from tenk1;\n> >\n> > This is allowed, of course.\n> >\n> > > I didn't apply the patch.(will do later, for someone to test, it would\n> > > be a better idea to dump a whole file separately....).\n> >\n> > Thank you! I'm looking forward to your feedback.\n> > (I didn't attach a whole patch separately because I wouldn't like\n> > cfbot to be unhappy...)\n> >\n> > Regards,\n> > Yugo Nagata\n> >\n> > --\n> > Yugo NAGATA <[email protected]>\n>\n> I played around first half of regress patch.\n> these all following queries fails.\n>\n> CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS\n> SELECT DISTINCT * , 1 as \"__ivm_count__\" FROM mv_base_a;\n>\n> CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS\n> SELECT DISTINCT * , 1 as \"__ivm_countblablabla\" FROM mv_base_a;\n>\n> CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS\n> SELECT DISTINCT * , 1 as \"__ivm_count\" FROM mv_base_a;\n>\n> CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS\n> SELECT DISTINCT * , 1 as \"__ivm_count_____\" FROM mv_base_a;\n>\n> CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS\n> SELECT DISTINCT * , 1 as \"__ivm_countblabla\" FROM mv_base_a;\n>\n> so the hidden column reserved pattern \"__ivm_count.*\"? that would be a lot....\n>\n> select * from pg_matviews where matviewname = 'mv_ivm_1';\n> don't have relisivm option. it's reasonable to make it in view pg_matviews?\n\nanother trivial:\nincremental_matview.out (last few lines) last transaction seems to\nneed COMMIT command.\n\n\n",
"msg_date": "Thu, 29 Jun 2023 18:20:32 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "I cannot build the doc.\ngit clean -fdx\ngit am ~/Desktop/tmp/*.patch\n\nApplying: Add a syntax to create Incrementally Maintainable Materialized Views\nApplying: Add relisivm column to pg_class system catalog\nApplying: Allow to prolong life span of transition tables until transaction end\nApplying: Add Incremental View Maintenance support to pg_dump\nApplying: Add Incremental View Maintenance support to psql\nApplying: Add Incremental View Maintenance support\nApplying: Add DISTINCT support for IVM\nApplying: Add aggregates support in IVM\nApplying: Add support for min/max aggregates for IVM\nApplying: Add regression tests for Incremental View Maintenance\nApplying: Add documentations about Incremental View Maintenance\n.git/rebase-apply/patch:79: trailing whitespace.\n clause.\nwarning: 1 line adds whitespace errors.\n\nBecause of this, the {ninja docs} command failed. ERROR message:\n\n[6/6] Generating doc/src/sgml/html with a custom command\nFAILED: doc/src/sgml/html\n/usr/bin/python3\n../../Desktop/pg_sources/main/postgres/doc/src/sgml/xmltools_dep_wrapper\n--targetname doc/src/sgml/html --depfile doc/src/sgml/html.d --tool\n/usr/bin/xsltproc -- -o doc/src/sgml/ --nonet --stringparam pg.version\n16beta2 --path doc/src/sgml --path\n../../Desktop/pg_sources/main/postgres/doc/src/sgml\n../../Desktop/pg_sources/main/postgres/doc/src/sgml/stylesheet.xsl\ndoc/src/sgml/postgres-full.xml\nERROR: id attribute missing on <sect2> element under /book[@id =\n'postgres']/part[@id = 'server-programming']/chapter[@id =\n'rules']/sect1[@id = 'rules-ivm']\nerror: file doc/src/sgml/postgres-full.xml\nxsltRunStylesheet : run failed\nninja: build stopped: subcommand failed.\n\n\n",
"msg_date": "Thu, 29 Jun 2023 18:51:06 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Thu, Jun 29, 2023 at 6:51 PM jian he <[email protected]> wrote:\n>\n> I cannot build the doc.\n> git clean -fdx\n> git am ~/Desktop/tmp/*.patch\n>\n> Applying: Add a syntax to create Incrementally Maintainable Materialized Views\n> Applying: Add relisivm column to pg_class system catalog\n> Applying: Allow to prolong life span of transition tables until transaction end\n> Applying: Add Incremental View Maintenance support to pg_dump\n> Applying: Add Incremental View Maintenance support to psql\n> Applying: Add Incremental View Maintenance support\n> Applying: Add DISTINCT support for IVM\n> Applying: Add aggregates support in IVM\n> Applying: Add support for min/max aggregates for IVM\n> Applying: Add regression tests for Incremental View Maintenance\n> Applying: Add documentations about Incremental View Maintenance\n> .git/rebase-apply/patch:79: trailing whitespace.\n> clause.\n> warning: 1 line adds whitespace errors.\n>\n> Because of this, the {ninja docs} command failed. ERROR message:\n>\n> [6/6] Generating doc/src/sgml/html with a custom command\n> FAILED: doc/src/sgml/html\n> /usr/bin/python3\n> ../../Desktop/pg_sources/main/postgres/doc/src/sgml/xmltools_dep_wrapper\n> --targetname doc/src/sgml/html --depfile doc/src/sgml/html.d --tool\n> /usr/bin/xsltproc -- -o doc/src/sgml/ --nonet --stringparam pg.version\n> 16beta2 --path doc/src/sgml --path\n> ../../Desktop/pg_sources/main/postgres/doc/src/sgml\n> ../../Desktop/pg_sources/main/postgres/doc/src/sgml/stylesheet.xsl\n> doc/src/sgml/postgres-full.xml\n> ERROR: id attribute missing on <sect2> element under /book[@id =\n> 'postgres']/part[@id = 'server-programming']/chapter[@id =\n> 'rules']/sect1[@id = 'rules-ivm']\n> error: file doc/src/sgml/postgres-full.xml\n> xsltRunStylesheet : run failed\n> ninja: build stopped: subcommand failed.\n\n\nso far what I tried:\ngit am --ignore-whitespace --whitespace=nowarn ~/Desktop/tmp/*.patch\ngit am --whitespace=fix ~/Desktop/tmp/*.patch\ngit am --whitespace=error ~/Desktop/tmp/*.patch\n\nI still cannot generate html docs.\n\n\n",
"msg_date": "Thu, 29 Jun 2023 20:21:34 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "Hi there.\nin v28-0005-Add-Incremental-View-Maintenance-support-to-psql.patch\nI don't know how to set psql to get the output\n\"Incremental view maintenance: yes\"\n\n\n",
"msg_date": "Fri, 30 Jun 2023 08:00:00 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "This is probably not trivial.\nIn function apply_new_delta_with_count.\n\n appendStringInfo(&querybuf,\n\"WITH updt AS (\" /* update a tuple if this exists in the view */\n\"UPDATE %s AS mv SET %s = mv.%s OPERATOR(pg_catalog.+) diff.%s \"\n\"%s \" /* SET clauses for aggregates */\n\"FROM %s AS diff \"\n\"WHERE %s \" /* tuple matching condition */\n\"RETURNING %s\" /* returning keys of updated tuples */\n\") INSERT INTO %s (%s)\" /* insert a new tuple if this doesn't existw */\n\"SELECT %s FROM %s AS diff \"\n\"WHERE NOT EXISTS (SELECT 1 FROM updt AS mv WHERE %s);\",\n\n---------------------\n\") INSERT INTO %s (%s)\" /* insert a new tuple if this doesn't existw */\n\"SELECT %s FROM %s AS diff \"\n\nthe INSERT INTO line, should have one white space in the end?\nalso \"existw\" should be \"exists\"\n\nThis is probably not trivial. In function apply_new_delta_with_count. appendStringInfo(&querybuf,\t\t\t\t\t\"WITH updt AS (\"\t\t/* update a tuple if this exists in the view */\t\t\t\t\t\t\"UPDATE %s AS mv SET %s = mv.%s OPERATOR(pg_catalog.+) diff.%s \"\t\t\t\t\t\t\t\t\t\t\t\"%s \"\t/* SET clauses for aggregates */\t\t\t\t\t\t\"FROM %s AS diff \"\t\t\t\t\t\t\"WHERE %s \"\t\t\t\t\t/* tuple matching condition */\t\t\t\t\t\t\"RETURNING %s\"\t\t\t\t/* returning keys of updated tuples */\t\t\t\t\t\") INSERT INTO %s (%s)\"\t/* insert a new tuple if this doesn't existw */\t\t\t\t\t\t\"SELECT %s FROM %s AS diff \"\t\t\t\t\t\t\"WHERE NOT EXISTS (SELECT 1 FROM updt AS mv WHERE %s);\",---------------------\") INSERT INTO %s (%s)\"\t/* insert a new tuple if this doesn't existw */\t\t\t\t\t\t\"SELECT %s FROM %s AS diff \"the INSERT INTO line, should have one white space in the end? also \"existw\" should be \"exists\"",
"msg_date": "Sun, 2 Jul 2023 08:25:12 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "ok. Now I really found a small bug.\n\nthis works as intended:\nBEGIN;\nCREATE INCREMENTAL MATERIALIZED VIEW test_ivm AS SELECT i, MIN(j) as\nmin_j FROM mv_base_a group by 1;\nINSERT INTO mv_base_a select 1,-2 where false;\nrollback;\n\nhowever the following one:\nBEGIN;\nCREATE INCREMENTAL MATERIALIZED VIEW test_ivm1 AS SELECT MIN(j) as\nmin_j FROM mv_base_a;\nINSERT INTO mv_base_a select 1, -2 where false;\nrollback;\n\nwill evaluate\ntuplestore_tuple_count(new_tuplestores) to 1, it will walk through\nIVM_immediate_maintenance function to apply_delta.\nbut should it be zero?\n\n\n",
"msg_date": "Sun, 2 Jul 2023 10:38:20 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Thu, 29 Jun 2023 00:40:45 +0800\njian he <[email protected]> wrote:\n\n> On Wed, Jun 28, 2023 at 4:06 PM Yugo NAGATA <[email protected]> wrote:\n> >\n> > On Wed, 28 Jun 2023 00:01:02 +0800\n> > jian he <[email protected]> wrote:\n> >\n> > > On Thu, Jun 1, 2023 at 2:47 AM Yugo NAGATA <[email protected]> wrote:\n> > > >\n> > > > On Thu, 1 Jun 2023 23:59:09 +0900\n> > > > Yugo NAGATA <[email protected]> wrote:\n> > > >\n> > > > > Hello hackers,\n> > > > >\n> > > > > Here's a rebased version of the patch-set adding Incremental View\n> > > > > Maintenance support for PostgreSQL. That was discussed in [1].\n> > > >\n> > > > > [1] https://www.postgresql.org/message-id/flat/20181227215726.4d166b4874f8983a641123f5%40sraoss.co.jp\n> > > >\n> > > > ---------------------------------------------------------------------------------------\n> > > > * Overview\n> > > >\n> > > > Incremental View Maintenance (IVM) is a way to make materialized views\n> > > > up-to-date by computing only incremental changes and applying them on\n> > > > views. IVM is more efficient than REFRESH MATERIALIZED VIEW when\n> > > > only small parts of the view are changed.\n> > > >\n> > > > ** Feature\n> > > >\n> > > > The attached patchset provides a feature that allows materialized views\n> > > > to be updated automatically and incrementally just after a underlying\n> > > > table is modified.\n> > > >\n> > > > You can create an incementally maintainable materialized view (IMMV)\n> > > > by using CREATE INCREMENTAL MATERIALIZED VIEW command.\n> > > >\n> > > > The followings are supported in view definition queries:\n> > > > - SELECT ... FROM ... WHERE ..., joins (inner joins, self-joins)\n> > > > - some built-in aggregate functions (count, sum, avg, min, max)\n> > > > - GROUP BY clause\n> > > > - DISTINCT clause\n> > > >\n> > > > Views can contain multiple tuples with the same content (duplicate tuples).\n> > > >\n> > > > ** Restriction\n> > > >\n> > > > The following are not supported in a view definition:\n> > > > - Outer joins\n> > > > - Aggregates otehr than above, window functions, HAVING\n> > > > - Sub-queries, CTEs\n> > > > - Set operations (UNION, INTERSECT, EXCEPT)\n> > > > - DISTINCT ON, ORDER BY, LIMIT, OFFSET\n> > > >\n> > > > Also, a view definition query cannot contain other views, materialized views,\n> > > > foreign tables, partitioned tables, partitions, VALUES, non-immutable functions,\n> > > > system columns, or expressions that contains aggregates.\n> > > >\n> > > > ---------------------------------------------------------------------------------------\n> > > > * Design\n> > > >\n> > > > An IMMV is maintained using statement-level AFTER triggers.\n> > > > When an IMMV is created, triggers are automatically created on all base\n> > > > tables contained in the view definition query.\n> > > >\n> > > > When a table is modified, changes that occurred in the table are extracted\n> > > > as transition tables in the AFTER triggers. 
Then, changes that will occur in\n> > > > the view are calculated by a rewritten view dequery in which the modified table\n> > > > is replaced with the transition table.\n> > > >\n> > > > For example, if the view is defined as \"SELECT * FROM R, S\", and tuples inserted\n> > > > into R are stored in a transiton table dR, the tuples that will be inserted into\n> > > > the view are calculated as the result of \"SELECT * FROM dR, S\".\n> > > >\n> > > > ** Multiple Tables Modification\n> > > >\n> > > > Multiple tables can be modified in a statement when using triggers, foreign key\n> > > > constraint, or modifying CTEs. When multiple tables are modified, we need\n> > > > the state of tables before the modification.\n> > > >\n> > > > For example, when some tuples, dR and dS, are inserted into R and S respectively,\n> > > > the tuples that will be inserted into the view are calculated by the following\n> > > > two queries:\n> > > >\n> > > > \"SELECT * FROM dR, S_pre\"\n> > > > \"SELECT * FROM R, dS\"\n> > > >\n> > > > where S_pre is the table before the modification, R is the current state of\n> > > > table, that is, after the modification. This pre-update states of table\n> > > > is calculated by filtering inserted tuples and appending deleted tuples.\n> > > > The subquery that represents pre-update state is generated in get_prestate_rte().\n> > > > Specifically, the insterted tuples are filtered by calling IVM_visible_in_prestate()\n> > > > in WHERE clause. This function checks the visibility of tuples by using\n> > > > the snapshot taken before table modification. The deleted tuples are contained\n> > > > in the old transition table, and this table is appended using UNION ALL.\n> > > >\n> > > > Transition tables for each modification are collected in each AFTER trigger\n> > > > function call. Then, the view maintenance is performed in the last call of\n> > > > the trigger.\n> > > >\n> > > > In the original PostgreSQL, tuplestores of transition tables are freed at the\n> > > > end of each nested query. However, their lifespan needs to be prolonged to\n> > > > the end of the out-most query in order to maintain the view in the last AFTER\n> > > > trigger. For this purpose, SetTransitionTablePreserved is added in trigger.c.\n> > > >\n> > > > ** Duplicate Tulpes\n> > > >\n> > > > When calculating changes that will occur in the view (= delta tables),\n> > > > multiplicity of tuples are calculated by using count(*).\n> > > >\n> > > > When deleting tuples from the view, tuples to be deleted are identified by\n> > > > joining the delta table with the view, and tuples are deleted as many as\n> > > > specified multiplicity by numbered using row_number() function.\n> > > > This is implemented in apply_old_delta().\n> > > >\n> > > > When inserting tuples into the view, each tuple is duplicated to the\n> > > > specified multiplicity using generate_series() function. This is implemented\n> > > > in apply_new_delta().\n> > > >\n> > > > ** DISTINCT clause\n> > > >\n> > > > When DISTINCT is used, the view has a hidden column __ivm_count__ that\n> > > > stores multiplicity for tuples. When tuples are deleted from or inserted into\n> > > > the view, the values of __ivm_count__ column is decreased or increased as many\n> > > > as specified multiplicity. Eventually, when the values becomes zero, the\n> > > > corresponding tuple is deleted from the view. 
This is implemented in\n> > > > apply_old_delta_with_count() and apply_new_delta_with_count().\n> > > >\n> > > > ** Aggregates\n> > > >\n> > > > Built-in count sum, avg, min, and max are supported. Whether a given\n> > > > aggregate function can be used or not is checked by using its OID in\n> > > > check_aggregate_supports_ivm().\n> > > >\n> > > > When creating a materialized view containing aggregates, in addition\n> > > > to __ivm_count__, more than one hidden columns for each aggregate are\n> > > > added to the target list. For example, columns for storing sum(x),\n> > > > count(x) are added if we have avg(x). When the view is maintained,\n> > > > aggregated values are updated using these hidden columns, also hidden\n> > > > columns are updated at the same time.\n> > > >\n> > > > The maintenance of aggregated view is performed in\n> > > > apply_old_delta_with_count() and apply_new_delta_with_count(). The SET\n> > > > clauses for updating columns are generated by append_set_clause_*().\n> > > >\n> > > > If the view has min(x) or max(x) and the minimum or maximal value is\n> > > > deleted from a table, we need to update the value to the new min/max\n> > > > recalculated from the tables rather than incremental computation. This\n> > > > is performed in recalc_and_set_values().\n> > > >\n> > > > ---------------------------------------------------------------------------------------\n> > > > * Details of the patch-set (v28)\n> > > >\n> > > > > The patch-set consists of the following eleven patches.\n> > > >\n> > > > In the previous version, the number of patches were nine.\n> > > > In the latest patch-set, the patches are divided more finely\n> > > > aiming to make the review easier.\n> > > >\n> > > > > - 0001: Add a syntax to create Incrementally Maintainable Materialized Views\n> > > >\n> > > > The prposed syntax to create an incrementally maintainable materialized\n> > > > view (IMMV) is;\n> > > >\n> > > > CREATE INCREMENTAL MATERIALIZED VIEW AS SELECT .....;\n> > > >\n> > > > However, this syntax is tentative, so any suggestions are welcomed.\n> > > >\n> > > > > - 0002: Add relisivm column to pg_class system catalog\n> > > >\n> > > > We add a new field in pg_class to indicate a relation is IMMV.\n> > > > Another alternative is to add a new catalog for managing materialized\n> > > > views including IMMV, but I am not sure if we want this.\n> > > >\n> > > > > - 0003: Allow to prolong life span of transition tables until transaction end\n> > > >\n> > > > This patch fixes the trigger system to allow to prolong lifespan of\n> > > > tuple stores for transition tables until the transaction end. We need\n> > > > this because multiple transition tables have to be preserved until the\n> > > > end of the out-most query when multiple tables are modified by nested\n> > > > triggers. 
(as explained above in Design - Multiple Tables Modification)\n> > > >\n> > > > If we don't want to change the trigger system in such way, the alternative\n> > > > is to copy the contents of transition tables to other tuplestores, although\n> > > > it needs more time and memory.\n> > > >\n> > > > > - 0004: Add Incremental View Maintenance support to pg_dump\n> > > >\n> > > > This patch enables pg_dump to output IMMV using the new syntax.\n> > > >\n> > > > > - 0005: Add Incremental View Maintenance support to psql\n> > > >\n> > > > This patch implements tab-completion for the new syntax and adds\n> > > > information of IMMV to \\d meta-command results.\n> > > >\n> > > > > - 0006: Add Incremental View Maintenance support\n> > > >\n> > > > This patch implements the basic IVM feature.\n> > > > DISTINCT and aggregate are not supported here.\n> > > >\n> > > > When an IMMV is created, the view query is checked, and if any\n> > > > non-supported feature is used, it raises an error. If it is ok,\n> > > > triggers are created on base tables and an unique index is\n> > > > created on the view if possible.\n> > > >\n> > > > In BEFORE trigger, an entry is created for each IMMV and the number\n> > > > of trigger firing is counted. Also, the snapshot just before the\n> > > > table modification is stored.\n> > > >\n> > > > In AFTER triggers, each transition tables are preserved. The number\n> > > > of trigger firing is counted also here, and when the firing number of\n> > > > BEFORE and AFTER trigger reach the same, it is deemed the final AFTER\n> > > > trigger call.\n> > > >\n> > > > In the final AFTER trigger, the IMMV is maintained. Rewritten view\n> > > > query is executed to generate delta tables, and deltas are applied\n> > > > to the view. If multiple tables are modified simultaneously, this\n> > > > process is iterated for each modified table. Tables before processed\n> > > > are represented in \"pre-update-state\", processed tables are\n> > > > \"post-update-state\" in the rewritten query.\n> > > >\n> > > > > - 0007: Add DISTINCT support for IVM\n> > > >\n> > > > This patch adds DISTINCT clause support.\n> > > >\n> > > > When an IMMV including DISTINCT is created, a hidden column\n> > > > \"__ivm_count__\" is added to the target list. This column has the\n> > > > number of duplicity of the same tuples. The duplicity is calculated\n> > > > by adding \"count(*)\" and GROUP BY to the view query.\n> > > >\n> > > > When an IMMV is maintained, the duplicity in __ivm_count__ is updated,\n> > > > and a tuples whose duplicity becomes zero can be deleted from the view.\n> > > > This logic is implemented by SQL in apply_old_delta_with_count and\n> > > > apply_new_delta_with_count.\n> > > >\n> > > > Columns starting with \"__ivm_\" are deemed hidden columns that doesn't\n> > > > appear when a view is accessed by \"SELECT * FROM ....\". This is\n> > > > implemented by fixing parse_relation.c.\n> > > >\n> > > > > - 0008: Add aggregates support in IVM\n> > > >\n> > > > This patch provides codes for aggregates support, specifically\n> > > > for builtin count, sum, and avg.\n> > > >\n> > > > When an IMMV containing an aggregate is created, it is checked if this\n> > > > aggregate function is supported, and if it is ok, some hidden columns\n> > > > are added to the target list.\n> > > >\n> > > > When the IMMV is maintained, the aggregated value is updated as well as\n> > > > related hidden columns. 
The way of update depends the type of aggregate\n> > > > functions, and SET clause string is generated for each aggregate.\n> > > >\n> > > > > - 0009: Add support for min/max aggregates for IVM\n> > > >\n> > > > This patch adds min/max aggregates support.\n> > > >\n> > > > This is separated from #0008 because min/max needs more complicated\n> > > > work than count, sum, and avg.\n> > > >\n> > > > If the view has min(x) or max(x) and the minimum or maximal value is\n> > > > deleted from a table, we need to update the value to the new min/max\n> > > > recalculated from the tables rather than incremental computation.\n> > > > This is performed in recalc_and_set_values().\n> > > >\n> > > > TIDs and keys of tuples that need re-calculation are returned as a\n> > > > result of the query that deleted min/max values from the view using\n> > > > RETURNING clause. The plan to recalculate and set the new min/max value\n> > > > are stored and reused.\n> > > >\n> > > > > - 0010: regression tests\n> > > >\n> > > > This patch provides regression tests for IVM.\n> > > >\n> > > > > - 0011: documentation\n> > > >\n> > > > This patch provides documantation for IVM.\n> > > >\n> > > > ---------------------------------------------------------------------------------------\n> > > > * Changes from the Previous Version (v27)\n> > > >\n> > > > - Allow TRUNCATE on base tables\n> > > >\n> > > > When a base table is truncated, the view content will be empty if the\n> > > > view definition query does not contain an aggregate without a GROUP clause.\n> > > > Therefore, such views can be truncated.\n> > > >\n> > > > Aggregate views without a GROUP clause always have one row. Therefore,\n> > > > if a base table is truncated, the view will not be empty and will contain\n> > > > a row with NULL value (or 0 for count()). So, in this case, we refresh the\n> > > > view instead of truncating it.\n> > > >\n> > > > - Fix bugs reported by huyajun [1]\n> > > >\n> > > > [1] https://www.postgresql.org/message-id/tencent_FCAF11BCA5003FD16BDDFDDA5D6A19587809%40qq.com\n> > > >\n> > > > ---------------------------------------------------------------------------------------\n> > > > * Discussion\n> > > >\n> > > > ** Aggregate support\n> > > >\n> > > > There were a few suggestions that general aggregate functions should be\n> > > > supported [2][3], which may be possible by extending pg_aggregate catalog.\n> > > > However, we decided to leave supporting general aggregates to the future work [4]\n> > > > because it would need substantial works and make the patch more complex and\n> > > > bigger.\n> > > >\n> > > > There has been no opposite opinion on this. However, if we need more discussion\n> > > > on the design of aggregate support, we can omit aggregate support for the first\n> > > > release of IVM.\n> > > >\n> > > > [2] https://www.postgresql.org/message-id/20191128140333.GA25947%40alvherre.pgsql\n> > > > [3] https://www.postgresql.org/message-id/CAM-w4HOvDrL4ou6m%3D592zUiKGVzTcOpNj-d_cJqzL00fdsS5kg%40mail.gmail.com\n> > > > [4] https://www.postgresql.org/message-id/20201016193034.9a4c44c79fc1eca7babe093e%40sraoss.co.jp\n> > > >\n> > > > ** Hidden columns\n> > > >\n> > > > In order to support DISTINCT or aggregates, our implementation uses hidden columns.\n> > > >\n> > > > Columns starting with \"__ivm_\" are hidden columns that doesn't appear when a\n> > > > view is accessed by \"SELECT * FROM ....\". For this aim, parse_relation.c is\n> > > > fixed. 
There was a proposal to enable hidden columns by adding a new flag to\n> > > > pg_attribute [5], but this thread is no longer active, so we decided to check\n> > > > the hidden column by its name [6].\n> > > >\n> > > > [5] https://www.postgresql.org/message-id/flat/CAEepm%3D3ZHh%3Dp0nEEnVbs1Dig_UShPzHUcMNAqvDQUgYgcDo-pA%40mail.gmail.com\n> > > > [6] https://www.postgresql.org/message-id/20201016193034.9a4c44c79fc1eca7babe093e%40sraoss.co.jp\n> > > >\n> > > > ** Concurrent Transactions\n> > > >\n> > > > When the view definition has more than one table, we acquire an exclusive\n> > > > lock before the view maintenance in order to avoid inconsistent results.\n> > > > This behavior was explained in [7]. The lock was improved to use weaker lock\n> > > > when the view has only one table based on a suggestion from Konstantin Knizhnik [8].\n> > > > However, due to the implementation that uses ctid for identifying target tuples,\n> > > > we still have to use an exclusive lock for DELETE and UPDATE.\n> > > >\n> > > > [7] https://www.postgresql.org/message-id/20200909092752.c91758a1bec3479668e82643%40sraoss.co.jp\n> > > > [8] https://www.postgresql.org/message-id/5663f5f0-48af-686c-bf3c-62d279567e2a%40postgrespro.ru\n> > > >\n> > > > ** Automatic Index Creation\n> > > >\n> > > > When a view is created, a unique index is automatically created if\n> > > > possible, that is, if the view definition query has a GROUP BY or\n> > > > DISTINCT, or if the view contains all primary key attributes of\n> > > > its base tables in the target list. It is necessary for efficient\n> > > > view maintenance. This feature is based on a suggestion from\n> > > > Konstantin Knizhnik [9].\n> > > >\n> > > > [9] https://www.postgresql.org/message-id/89729da8-9042-7ea0-95af-e415df6da14d%40postgrespro.ru\n> > > >\n> > > >\n> > > > ** Trigger and Transition Tables\n> > > >\n> > > > We implemented IVM based on triggers. This is because we want to use\n> > > > transition tables to extract changes on base tables. Also, there are\n> > > > other constraint that are using triggers in its implementation, like\n> > > > foreign references. However, if we can use transition table like feature\n> > > > without relying triggers, we don't have to insist to use triggers and we\n> > > > might implement IVM in the executor directly as similar as declarative\n> > > > partitioning.\n> > > >\n> > > > ** Feature to be Supported in the First Release\n> > > >\n> > > > The current patch-set supports DISTINCT and aggregates for built-in count,\n> > > > sum, avg, min and max. Do we need all these feature for the first IVM release?\n> > > > Supporting DISTINCT and aggregates needs discussion on hidden columns, and\n> > > > for supporting min/max we need to discuss on re-calculation method. Before\n> > > > handling such relatively advanced feature, maybe, should we focus to design\n> > > > and implement of the basic feature of IVM?\n> > > >\n> > > >\n> > > > Any suggestion and discussion are welcomed!\n> > > >\n> > > > Regards,\n> > > > Yugo Nagata\n> > > >\n> > > > --\n> > > > Yugo NAGATA <[email protected]>\n> > > >\n> > > >\n> > >\n> > >\n> > > > The followings are supported in view definition queries:\n> > > > - SELECT ... FROM ... 
WHERE ..., joins (inner joins, self-joins)\n> > >\n> > >\n> > > > Also, a view definition query cannot contain other views, materialized views,\n> > > > foreign tables, partitioned tables, partitions, VALUES, non-immutable functions,\n> > > > system columns, or expressions that contains aggregates.\n> > >\n> > > Does this also apply to tableoid? but tableoid is a constant, so it\n> > > should be fine?\n> > > can following two queries apply to this feature.\n> > > select tableoid, unique1 from tenk1;\n> >\n> > Currently, this is not allowed because tableoid is a system column.\n> > As you say, tableoid is a constant, so we can allow. Should we do this?\n> >\n> > > select 1 as constant, unique1 from tenk1;\n> >\n> > This is allowed, of course.\n> >\n> > > I didn't apply the patch.(will do later, for someone to test, it would\n> > > be a better idea to dump a whole file separately....).\n> >\n> > Thank you! I'm looking forward to your feedback.\n> > (I didn't attach a whole patch separately because I wouldn't like\n> > cfbot to be unhappy...)\n> >\n> > Regards,\n> > Yugo Nagata\n> >\n> > --\n> > Yugo NAGATA <[email protected]>\n> \n> I played around first half of regress patch.\n\nI'm so sorry for the late reply.\n\n> these all following queries fails.\n> \n> CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS\n> SELECT DISTINCT * , 1 as \"__ivm_count__\" FROM mv_base_a;\n> \n> CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS\n> SELECT DISTINCT * , 1 as \"__ivm_countblablabla\" FROM mv_base_a;\n> \n> CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS\n> SELECT DISTINCT * , 1 as \"__ivm_count\" FROM mv_base_a;\n> \n> CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS\n> SELECT DISTINCT * , 1 as \"__ivm_count_____\" FROM mv_base_a;\n> \n> CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS\n> SELECT DISTINCT * , 1 as \"__ivm_countblabla\" FROM mv_base_a;\n> \n> so the hidden column reserved pattern \"__ivm_count.*\"? that would be a lot....\n\nColumn names which start with \"__ivm_\" are prohibited because hidden columns\nusing this pattern are used for handling views with aggregates or DISTINCT.\nEven when neither aggregates nor DISTINCT are used, such a column name is used\nfor handling tuple duplicates in views. So, if we choose not to allow\ntuple duplicates in the initial version of IVM, we could remove this\nrestriction for now....\n\n> \n> select * from pg_matviews where matviewname = 'mv_ivm_1';\n> don't have relisivm option. it's reasonable to make it in view pg_matviews?\n\nMakes sense. I'll do it.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Sun, 27 Aug 2023 22:35:51 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Thu, 29 Jun 2023 18:20:32 +0800\njian he <[email protected]> wrote:\n\n> On Thu, Jun 29, 2023 at 12:40 AM jian he <[email protected]> wrote:\n> >\n> > On Wed, Jun 28, 2023 at 4:06 PM Yugo NAGATA <[email protected]> wrote:\n> > >\n> > > On Wed, 28 Jun 2023 00:01:02 +0800\n> > > jian he <[email protected]> wrote:\n> > >\n> > > > On Thu, Jun 1, 2023 at 2:47 AM Yugo NAGATA <[email protected]> wrote:\n> > > > >\n> > > > > On Thu, 1 Jun 2023 23:59:09 +0900\n> > > > > Yugo NAGATA <[email protected]> wrote:\n> > > > >\n> > > > > > Hello hackers,\n> > > > > >\n> > > > > > Here's a rebased version of the patch-set adding Incremental View\n> > > > > > Maintenance support for PostgreSQL. That was discussed in [1].\n> > > > >\n> > > > > > [1] https://www.postgresql.org/message-id/flat/20181227215726.4d166b4874f8983a641123f5%40sraoss.co.jp\n> > > > >\n> > > > > ---------------------------------------------------------------------------------------\n> > > > > * Overview\n> > > > >\n> > > > > Incremental View Maintenance (IVM) is a way to make materialized views\n> > > > > up-to-date by computing only incremental changes and applying them on\n> > > > > views. IVM is more efficient than REFRESH MATERIALIZED VIEW when\n> > > > > only small parts of the view are changed.\n> > > > >\n> > > > > ** Feature\n> > > > >\n> > > > > The attached patchset provides a feature that allows materialized views\n> > > > > to be updated automatically and incrementally just after a underlying\n> > > > > table is modified.\n> > > > >\n> > > > > You can create an incementally maintainable materialized view (IMMV)\n> > > > > by using CREATE INCREMENTAL MATERIALIZED VIEW command.\n> > > > >\n> > > > > The followings are supported in view definition queries:\n> > > > > - SELECT ... FROM ... WHERE ..., joins (inner joins, self-joins)\n> > > > > - some built-in aggregate functions (count, sum, avg, min, max)\n> > > > > - GROUP BY clause\n> > > > > - DISTINCT clause\n> > > > >\n> > > > > Views can contain multiple tuples with the same content (duplicate tuples).\n> > > > >\n> > > > > ** Restriction\n> > > > >\n> > > > > The following are not supported in a view definition:\n> > > > > - Outer joins\n> > > > > - Aggregates otehr than above, window functions, HAVING\n> > > > > - Sub-queries, CTEs\n> > > > > - Set operations (UNION, INTERSECT, EXCEPT)\n> > > > > - DISTINCT ON, ORDER BY, LIMIT, OFFSET\n> > > > >\n> > > > > Also, a view definition query cannot contain other views, materialized views,\n> > > > > foreign tables, partitioned tables, partitions, VALUES, non-immutable functions,\n> > > > > system columns, or expressions that contains aggregates.\n> > > > >\n> > > > > ---------------------------------------------------------------------------------------\n> > > > > * Design\n> > > > >\n> > > > > An IMMV is maintained using statement-level AFTER triggers.\n> > > > > When an IMMV is created, triggers are automatically created on all base\n> > > > > tables contained in the view definition query.\n> > > > >\n> > > > > When a table is modified, changes that occurred in the table are extracted\n> > > > > as transition tables in the AFTER triggers. 
Then, changes that will occur in\n> > > > > the view are calculated by a rewritten view dequery in which the modified table\n> > > > > is replaced with the transition table.\n> > > > >\n> > > > > For example, if the view is defined as \"SELECT * FROM R, S\", and tuples inserted\n> > > > > into R are stored in a transiton table dR, the tuples that will be inserted into\n> > > > > the view are calculated as the result of \"SELECT * FROM dR, S\".\n> > > > >\n> > > > > ** Multiple Tables Modification\n> > > > >\n> > > > > Multiple tables can be modified in a statement when using triggers, foreign key\n> > > > > constraint, or modifying CTEs. When multiple tables are modified, we need\n> > > > > the state of tables before the modification.\n> > > > >\n> > > > > For example, when some tuples, dR and dS, are inserted into R and S respectively,\n> > > > > the tuples that will be inserted into the view are calculated by the following\n> > > > > two queries:\n> > > > >\n> > > > > \"SELECT * FROM dR, S_pre\"\n> > > > > \"SELECT * FROM R, dS\"\n> > > > >\n> > > > > where S_pre is the table before the modification, R is the current state of\n> > > > > table, that is, after the modification. This pre-update states of table\n> > > > > is calculated by filtering inserted tuples and appending deleted tuples.\n> > > > > The subquery that represents pre-update state is generated in get_prestate_rte().\n> > > > > Specifically, the insterted tuples are filtered by calling IVM_visible_in_prestate()\n> > > > > in WHERE clause. This function checks the visibility of tuples by using\n> > > > > the snapshot taken before table modification. The deleted tuples are contained\n> > > > > in the old transition table, and this table is appended using UNION ALL.\n> > > > >\n> > > > > Transition tables for each modification are collected in each AFTER trigger\n> > > > > function call. Then, the view maintenance is performed in the last call of\n> > > > > the trigger.\n> > > > >\n> > > > > In the original PostgreSQL, tuplestores of transition tables are freed at the\n> > > > > end of each nested query. However, their lifespan needs to be prolonged to\n> > > > > the end of the out-most query in order to maintain the view in the last AFTER\n> > > > > trigger. For this purpose, SetTransitionTablePreserved is added in trigger.c.\n> > > > >\n> > > > > ** Duplicate Tulpes\n> > > > >\n> > > > > When calculating changes that will occur in the view (= delta tables),\n> > > > > multiplicity of tuples are calculated by using count(*).\n> > > > >\n> > > > > When deleting tuples from the view, tuples to be deleted are identified by\n> > > > > joining the delta table with the view, and tuples are deleted as many as\n> > > > > specified multiplicity by numbered using row_number() function.\n> > > > > This is implemented in apply_old_delta().\n> > > > >\n> > > > > When inserting tuples into the view, each tuple is duplicated to the\n> > > > > specified multiplicity using generate_series() function. This is implemented\n> > > > > in apply_new_delta().\n> > > > >\n> > > > > ** DISTINCT clause\n> > > > >\n> > > > > When DISTINCT is used, the view has a hidden column __ivm_count__ that\n> > > > > stores multiplicity for tuples. When tuples are deleted from or inserted into\n> > > > > the view, the values of __ivm_count__ column is decreased or increased as many\n> > > > > as specified multiplicity. Eventually, when the values becomes zero, the\n> > > > > corresponding tuple is deleted from the view. 
This is implemented in\n> > > > > apply_old_delta_with_count() and apply_new_delta_with_count().\n> > > > >\n> > > > > ** Aggregates\n> > > > >\n> > > > > Built-in count sum, avg, min, and max are supported. Whether a given\n> > > > > aggregate function can be used or not is checked by using its OID in\n> > > > > check_aggregate_supports_ivm().\n> > > > >\n> > > > > When creating a materialized view containing aggregates, in addition\n> > > > > to __ivm_count__, more than one hidden columns for each aggregate are\n> > > > > added to the target list. For example, columns for storing sum(x),\n> > > > > count(x) are added if we have avg(x). When the view is maintained,\n> > > > > aggregated values are updated using these hidden columns, also hidden\n> > > > > columns are updated at the same time.\n> > > > >\n> > > > > The maintenance of aggregated view is performed in\n> > > > > apply_old_delta_with_count() and apply_new_delta_with_count(). The SET\n> > > > > clauses for updating columns are generated by append_set_clause_*().\n> > > > >\n> > > > > If the view has min(x) or max(x) and the minimum or maximal value is\n> > > > > deleted from a table, we need to update the value to the new min/max\n> > > > > recalculated from the tables rather than incremental computation. This\n> > > > > is performed in recalc_and_set_values().\n> > > > >\n> > > > > ---------------------------------------------------------------------------------------\n> > > > > * Details of the patch-set (v28)\n> > > > >\n> > > > > > The patch-set consists of the following eleven patches.\n> > > > >\n> > > > > In the previous version, the number of patches were nine.\n> > > > > In the latest patch-set, the patches are divided more finely\n> > > > > aiming to make the review easier.\n> > > > >\n> > > > > > - 0001: Add a syntax to create Incrementally Maintainable Materialized Views\n> > > > >\n> > > > > The prposed syntax to create an incrementally maintainable materialized\n> > > > > view (IMMV) is;\n> > > > >\n> > > > > CREATE INCREMENTAL MATERIALIZED VIEW AS SELECT .....;\n> > > > >\n> > > > > However, this syntax is tentative, so any suggestions are welcomed.\n> > > > >\n> > > > > > - 0002: Add relisivm column to pg_class system catalog\n> > > > >\n> > > > > We add a new field in pg_class to indicate a relation is IMMV.\n> > > > > Another alternative is to add a new catalog for managing materialized\n> > > > > views including IMMV, but I am not sure if we want this.\n> > > > >\n> > > > > > - 0003: Allow to prolong life span of transition tables until transaction end\n> > > > >\n> > > > > This patch fixes the trigger system to allow to prolong lifespan of\n> > > > > tuple stores for transition tables until the transaction end. We need\n> > > > > this because multiple transition tables have to be preserved until the\n> > > > > end of the out-most query when multiple tables are modified by nested\n> > > > > triggers. 
(as explained above in Design - Multiple Tables Modification)\n> > > > >\n> > > > > If we don't want to change the trigger system in such way, the alternative\n> > > > > is to copy the contents of transition tables to other tuplestores, although\n> > > > > it needs more time and memory.\n> > > > >\n> > > > > > - 0004: Add Incremental View Maintenance support to pg_dump\n> > > > >\n> > > > > This patch enables pg_dump to output IMMV using the new syntax.\n> > > > >\n> > > > > > - 0005: Add Incremental View Maintenance support to psql\n> > > > >\n> > > > > This patch implements tab-completion for the new syntax and adds\n> > > > > information of IMMV to \\d meta-command results.\n> > > > >\n> > > > > > - 0006: Add Incremental View Maintenance support\n> > > > >\n> > > > > This patch implements the basic IVM feature.\n> > > > > DISTINCT and aggregate are not supported here.\n> > > > >\n> > > > > When an IMMV is created, the view query is checked, and if any\n> > > > > non-supported feature is used, it raises an error. If it is ok,\n> > > > > triggers are created on base tables and an unique index is\n> > > > > created on the view if possible.\n> > > > >\n> > > > > In BEFORE trigger, an entry is created for each IMMV and the number\n> > > > > of trigger firing is counted. Also, the snapshot just before the\n> > > > > table modification is stored.\n> > > > >\n> > > > > In AFTER triggers, each transition tables are preserved. The number\n> > > > > of trigger firing is counted also here, and when the firing number of\n> > > > > BEFORE and AFTER trigger reach the same, it is deemed the final AFTER\n> > > > > trigger call.\n> > > > >\n> > > > > In the final AFTER trigger, the IMMV is maintained. Rewritten view\n> > > > > query is executed to generate delta tables, and deltas are applied\n> > > > > to the view. If multiple tables are modified simultaneously, this\n> > > > > process is iterated for each modified table. Tables before processed\n> > > > > are represented in \"pre-update-state\", processed tables are\n> > > > > \"post-update-state\" in the rewritten query.\n> > > > >\n> > > > > > - 0007: Add DISTINCT support for IVM\n> > > > >\n> > > > > This patch adds DISTINCT clause support.\n> > > > >\n> > > > > When an IMMV including DISTINCT is created, a hidden column\n> > > > > \"__ivm_count__\" is added to the target list. This column has the\n> > > > > number of duplicity of the same tuples. The duplicity is calculated\n> > > > > by adding \"count(*)\" and GROUP BY to the view query.\n> > > > >\n> > > > > When an IMMV is maintained, the duplicity in __ivm_count__ is updated,\n> > > > > and a tuples whose duplicity becomes zero can be deleted from the view.\n> > > > > This logic is implemented by SQL in apply_old_delta_with_count and\n> > > > > apply_new_delta_with_count.\n> > > > >\n> > > > > Columns starting with \"__ivm_\" are deemed hidden columns that doesn't\n> > > > > appear when a view is accessed by \"SELECT * FROM ....\". 
This is\n> > > > > implemented by fixing parse_relation.c.\n> > > > >\n> > > > > > - 0008: Add aggregates support in IVM\n> > > > >\n> > > > > This patch provides codes for aggregates support, specifically\n> > > > > for builtin count, sum, and avg.\n> > > > >\n> > > > > When an IMMV containing an aggregate is created, it is checked if this\n> > > > > aggregate function is supported, and if it is ok, some hidden columns\n> > > > > are added to the target list.\n> > > > >\n> > > > > When the IMMV is maintained, the aggregated value is updated as well as\n> > > > > related hidden columns. The way of update depends the type of aggregate\n> > > > > functions, and SET clause string is generated for each aggregate.\n> > > > >\n> > > > > > - 0009: Add support for min/max aggregates for IVM\n> > > > >\n> > > > > This patch adds min/max aggregates support.\n> > > > >\n> > > > > This is separated from #0008 because min/max needs more complicated\n> > > > > work than count, sum, and avg.\n> > > > >\n> > > > > If the view has min(x) or max(x) and the minimum or maximal value is\n> > > > > deleted from a table, we need to update the value to the new min/max\n> > > > > recalculated from the tables rather than incremental computation.\n> > > > > This is performed in recalc_and_set_values().\n> > > > >\n> > > > > TIDs and keys of tuples that need re-calculation are returned as a\n> > > > > result of the query that deleted min/max values from the view using\n> > > > > RETURNING clause. The plan to recalculate and set the new min/max value\n> > > > > are stored and reused.\n> > > > >\n> > > > > > - 0010: regression tests\n> > > > >\n> > > > > This patch provides regression tests for IVM.\n> > > > >\n> > > > > > - 0011: documentation\n> > > > >\n> > > > > This patch provides documantation for IVM.\n> > > > >\n> > > > > ---------------------------------------------------------------------------------------\n> > > > > * Changes from the Previous Version (v27)\n> > > > >\n> > > > > - Allow TRUNCATE on base tables\n> > > > >\n> > > > > When a base table is truncated, the view content will be empty if the\n> > > > > view definition query does not contain an aggregate without a GROUP clause.\n> > > > > Therefore, such views can be truncated.\n> > > > >\n> > > > > Aggregate views without a GROUP clause always have one row. Therefore,\n> > > > > if a base table is truncated, the view will not be empty and will contain\n> > > > > a row with NULL value (or 0 for count()). So, in this case, we refresh the\n> > > > > view instead of truncating it.\n> > > > >\n> > > > > - Fix bugs reported by huyajun [1]\n> > > > >\n> > > > > [1] https://www.postgresql.org/message-id/tencent_FCAF11BCA5003FD16BDDFDDA5D6A19587809%40qq.com\n> > > > >\n> > > > > ---------------------------------------------------------------------------------------\n> > > > > * Discussion\n> > > > >\n> > > > > ** Aggregate support\n> > > > >\n> > > > > There were a few suggestions that general aggregate functions should be\n> > > > > supported [2][3], which may be possible by extending pg_aggregate catalog.\n> > > > > However, we decided to leave supporting general aggregates to the future work [4]\n> > > > > because it would need substantial works and make the patch more complex and\n> > > > > bigger.\n> > > > >\n> > > > > There has been no opposite opinion on this. 
However, if we need more discussion\n> > > > > on the design of aggregate support, we can omit aggregate support for the first\n> > > > > release of IVM.\n> > > > >\n> > > > > [2] https://www.postgresql.org/message-id/20191128140333.GA25947%40alvherre.pgsql\n> > > > > [3] https://www.postgresql.org/message-id/CAM-w4HOvDrL4ou6m%3D592zUiKGVzTcOpNj-d_cJqzL00fdsS5kg%40mail.gmail.com\n> > > > > [4] https://www.postgresql.org/message-id/20201016193034.9a4c44c79fc1eca7babe093e%40sraoss.co.jp\n> > > > >\n> > > > > ** Hidden columns\n> > > > >\n> > > > > In order to support DISTINCT or aggregates, our implementation uses hidden columns.\n> > > > >\n> > > > > Columns starting with \"__ivm_\" are hidden columns that doesn't appear when a\n> > > > > view is accessed by \"SELECT * FROM ....\". For this aim, parse_relation.c is\n> > > > > fixed. There was a proposal to enable hidden columns by adding a new flag to\n> > > > > pg_attribute [5], but this thread is no longer active, so we decided to check\n> > > > > the hidden column by its name [6].\n> > > > >\n> > > > > [5] https://www.postgresql.org/message-id/flat/CAEepm%3D3ZHh%3Dp0nEEnVbs1Dig_UShPzHUcMNAqvDQUgYgcDo-pA%40mail.gmail.com\n> > > > > [6] https://www.postgresql.org/message-id/20201016193034.9a4c44c79fc1eca7babe093e%40sraoss.co.jp\n> > > > >\n> > > > > ** Concurrent Transactions\n> > > > >\n> > > > > When the view definition has more than one table, we acquire an exclusive\n> > > > > lock before the view maintenance in order to avoid inconsistent results.\n> > > > > This behavior was explained in [7]. The lock was improved to use weaker lock\n> > > > > when the view has only one table based on a suggestion from Konstantin Knizhnik [8].\n> > > > > However, due to the implementation that uses ctid for identifying target tuples,\n> > > > > we still have to use an exclusive lock for DELETE and UPDATE.\n> > > > >\n> > > > > [7] https://www.postgresql.org/message-id/20200909092752.c91758a1bec3479668e82643%40sraoss.co.jp\n> > > > > [8] https://www.postgresql.org/message-id/5663f5f0-48af-686c-bf3c-62d279567e2a%40postgrespro.ru\n> > > > >\n> > > > > ** Automatic Index Creation\n> > > > >\n> > > > > When a view is created, a unique index is automatically created if\n> > > > > possible, that is, if the view definition query has a GROUP BY or\n> > > > > DISTINCT, or if the view contains all primary key attributes of\n> > > > > its base tables in the target list. It is necessary for efficient\n> > > > > view maintenance. This feature is based on a suggestion from\n> > > > > Konstantin Knizhnik [9].\n> > > > >\n> > > > > [9] https://www.postgresql.org/message-id/89729da8-9042-7ea0-95af-e415df6da14d%40postgrespro.ru\n> > > > >\n> > > > >\n> > > > > ** Trigger and Transition Tables\n> > > > >\n> > > > > We implemented IVM based on triggers. This is because we want to use\n> > > > > transition tables to extract changes on base tables. Also, there are\n> > > > > other constraint that are using triggers in its implementation, like\n> > > > > foreign references. However, if we can use transition table like feature\n> > > > > without relying triggers, we don't have to insist to use triggers and we\n> > > > > might implement IVM in the executor directly as similar as declarative\n> > > > > partitioning.\n> > > > >\n> > > > > ** Feature to be Supported in the First Release\n> > > > >\n> > > > > The current patch-set supports DISTINCT and aggregates for built-in count,\n> > > > > sum, avg, min and max. 
Do we need all these feature for the first IVM release?\n> > > > > Supporting DISTINCT and aggregates needs discussion on hidden columns, and\n> > > > > for supporting min/max we need to discuss on re-calculation method. Before\n> > > > > handling such relatively advanced feature, maybe, should we focus to design\n> > > > > and implement of the basic feature of IVM?\n> > > > >\n> > > > >\n> > > > > Any suggestion and discussion are welcomed!\n> > > > >\n> > > > > Regards,\n> > > > > Yugo Nagata\n> > > > >\n> > > > > --\n> > > > > Yugo NAGATA <[email protected]>\n> > > > >\n> > > > >\n> > > >\n> > > >\n> > > > > The followings are supported in view definition queries:\n> > > > > - SELECT ... FROM ... WHERE ..., joins (inner joins, self-joins)\n> > > >\n> > > >\n> > > > > Also, a view definition query cannot contain other views, materialized views,\n> > > > > foreign tables, partitioned tables, partitions, VALUES, non-immutable functions,\n> > > > > system columns, or expressions that contains aggregates.\n> > > >\n> > > > Does this also apply to tableoid? but tableoid is a constant, so it\n> > > > should be fine?\n> > > > can following two queries apply to this feature.\n> > > > select tableoid, unique1 from tenk1;\n> > >\n> > > Currently, this is not allowed because tableoid is a system column.\n> > > As you say, tableoid is a constant, so we can allow. Should we do this?\n> > >\n> > > > select 1 as constant, unique1 from tenk1;\n> > >\n> > > This is allowed, of course.\n> > >\n> > > > I didn't apply the patch.(will do later, for someone to test, it would\n> > > > be a better idea to dump a whole file separately....).\n> > >\n> > > Thank you! I'm looking forward to your feedback.\n> > > (I didn't attach a whole patch separately because I wouldn't like\n> > > cfbot to be unhappy...)\n> > >\n> > > Regards,\n> > > Yugo Nagata\n> > >\n> > > --\n> > > Yugo NAGATA <[email protected]>\n> >\n> > I played around first half of regress patch.\n> > these all following queries fails.\n> >\n> > CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS\n> > SELECT DISTINCT * , 1 as \"__ivm_count__\" FROM mv_base_a;\n> >\n> > CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS\n> > SELECT DISTINCT * , 1 as \"__ivm_countblablabla\" FROM mv_base_a;\n> >\n> > CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS\n> > SELECT DISTINCT * , 1 as \"__ivm_count\" FROM mv_base_a;\n> >\n> > CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS\n> > SELECT DISTINCT * , 1 as \"__ivm_count_____\" FROM mv_base_a;\n> >\n> > CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_rename AS\n> > SELECT DISTINCT * , 1 as \"__ivm_countblabla\" FROM mv_base_a;\n> >\n> > so the hidden column reserved pattern \"__ivm_count.*\"? that would be a lot....\n> >\n> > select * from pg_matviews where matviewname = 'mv_ivm_1';\n> > don't have relisivm option. it's reasonable to make it in view pg_matviews?\n> \n> another trivial:\n> incremental_matview.out (last few lines) last transaction seems to\n> need COMMIT command.\n\nThank you for pointing out it.\nThere is a unnecessary BEGIN, so I'll remove it.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Sun, 27 Aug 2023 22:41:53 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
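The DISTINCT handling described in the message above is easier to follow with a concrete query in front of you. The sketch below is hand-written for illustration; only the commented-out statement needs the IVM patch set, and the plain query merely shows the count(*)/GROUP BY rewrite that the thread says backs the hidden __ivm_count__ column (table and view names here are made up).

```
-- Proposed syntax (patch 0001); requires the patch set, so shown as a comment:
--   CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_distinct AS
--       SELECT DISTINCT i, j FROM ivm_demo_a;

-- Works on a stock server: the duplicity bookkeeping the view keeps in its
-- hidden __ivm_count__ column is conceptually this query.
CREATE TABLE ivm_demo_a (i int, j int);
INSERT INTO ivm_demo_a VALUES (1, 10), (1, 10), (2, 20);

SELECT i, j, count(*) AS __ivm_count__
FROM ivm_demo_a
GROUP BY i, j;
-- Deleting a base row decrements the stored count for the matching view row;
-- the view row itself is removed only when the count reaches zero.
```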
{
"msg_contents": "On Thu, 29 Jun 2023 18:51:06 +0800\njian he <[email protected]> wrote:\n\n> I cannot build the doc.\n> git clean -fdx\n> git am ~/Desktop/tmp/*.patch\n> \n> Applying: Add a syntax to create Incrementally Maintainable Materialized Views\n> Applying: Add relisivm column to pg_class system catalog\n> Applying: Allow to prolong life span of transition tables until transaction end\n> Applying: Add Incremental View Maintenance support to pg_dump\n> Applying: Add Incremental View Maintenance support to psql\n> Applying: Add Incremental View Maintenance support\n> Applying: Add DISTINCT support for IVM\n> Applying: Add aggregates support in IVM\n> Applying: Add support for min/max aggregates for IVM\n> Applying: Add regression tests for Incremental View Maintenance\n> Applying: Add documentations about Incremental View Maintenance\n> .git/rebase-apply/patch:79: trailing whitespace.\n> clause.\n> warning: 1 line adds whitespace errors.\n> \n> Because of this, the {ninja docs} command failed. ERROR message:\n> \n> [6/6] Generating doc/src/sgml/html with a custom command\n> FAILED: doc/src/sgml/html\n> /usr/bin/python3\n> ../../Desktop/pg_sources/main/postgres/doc/src/sgml/xmltools_dep_wrapper\n> --targetname doc/src/sgml/html --depfile doc/src/sgml/html.d --tool\n> /usr/bin/xsltproc -- -o doc/src/sgml/ --nonet --stringparam pg.version\n> 16beta2 --path doc/src/sgml --path\n> ../../Desktop/pg_sources/main/postgres/doc/src/sgml\n> ../../Desktop/pg_sources/main/postgres/doc/src/sgml/stylesheet.xsl\n> doc/src/sgml/postgres-full.xml\n> ERROR: id attribute missing on <sect2> element under /book[@id =\n> 'postgres']/part[@id = 'server-programming']/chapter[@id =\n> 'rules']/sect1[@id = 'rules-ivm']\n> error: file doc/src/sgml/postgres-full.xml\n> xsltRunStylesheet : run failed\n> ninja: build stopped: subcommand failed.\n\nThank your for pointing out this.\n\nI'll add ids for all sections to suppress the errors.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Mon, 28 Aug 2023 00:15:05 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Fri, 30 Jun 2023 08:00:00 +0800\njian he <[email protected]> wrote:\n\n> Hi there.\n> in v28-0005-Add-Incremental-View-Maintenance-support-to-psql.patch\n> I don't know how to set psql to get the output\n> \"Incremental view maintenance: yes\"\n\nThis information will appear when you use \"d+\" command for an \nincrementally maintained materialized view.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Mon, 28 Aug 2023 01:05:15 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Sun, 2 Jul 2023 08:25:12 +0800\njian he <[email protected]> wrote:\n\n> This is probably not trivial.\n> In function apply_new_delta_with_count.\n> \n> appendStringInfo(&querybuf,\n> \"WITH updt AS (\" /* update a tuple if this exists in the view */\n> \"UPDATE %s AS mv SET %s = mv.%s OPERATOR(pg_catalog.+) diff.%s \"\n> \"%s \" /* SET clauses for aggregates */\n> \"FROM %s AS diff \"\n> \"WHERE %s \" /* tuple matching condition */\n> \"RETURNING %s\" /* returning keys of updated tuples */\n> \") INSERT INTO %s (%s)\" /* insert a new tuple if this doesn't existw */\n> \"SELECT %s FROM %s AS diff \"\n> \"WHERE NOT EXISTS (SELECT 1 FROM updt AS mv WHERE %s);\",\n> \n> ---------------------\n> \") INSERT INTO %s (%s)\" /* insert a new tuple if this doesn't existw */\n> \"SELECT %s FROM %s AS diff \"\n> \n> the INSERT INTO line, should have one white space in the end?\n> also \"existw\" should be \"exists\"\n\nYes, we should need a space although it works. I'll fix as well as the typo.\nThank you.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Mon, 28 Aug 2023 01:12:41 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
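For readers trying to picture the statement that apply_new_delta_with_count() builds from the format string quoted above, here is a simplified, hand-written version of the same "update if the tuple exists, otherwise insert" shape. The table and column names are illustrative stand-ins; the generated query additionally schema-qualifies operators with OPERATOR(pg_catalog.+) and uses the view's real tuple-matching condition, including NULL handling.

```
-- Illustrative stand-ins for the materialized view and the new delta table.
CREATE TABLE ivm_demo_mv    (i int, __ivm_count__ bigint);
CREATE TABLE ivm_demo_delta (i int, __ivm_count__ bigint);

WITH updt AS (
    -- update a tuple if it already exists in the view
    UPDATE ivm_demo_mv AS mv
       SET __ivm_count__ = mv.__ivm_count__ + diff.__ivm_count__
      FROM ivm_demo_delta AS diff
     WHERE mv.i = diff.i               -- tuple matching condition
 RETURNING mv.i                        -- keys of updated tuples
)
-- insert a new tuple if it doesn't exist in the view yet
INSERT INTO ivm_demo_mv (i, __ivm_count__)
SELECT i, __ivm_count__
  FROM ivm_demo_delta AS diff
 WHERE NOT EXISTS (SELECT 1 FROM updt AS mv WHERE mv.i = diff.i);
```

The data-modifying CTE lets one statement both merge counts for existing view rows and insert the genuinely new ones, which is exactly the split the quoted format string encodes.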
{
"msg_contents": "On Sun, 2 Jul 2023 10:38:20 +0800\njian he <[email protected]> wrote:\n\n> ok. Now I really found a small bug.\n> \n> this works as intended:\n> BEGIN;\n> CREATE INCREMENTAL MATERIALIZED VIEW test_ivm AS SELECT i, MIN(j) as\n> min_j FROM mv_base_a group by 1;\n> INSERT INTO mv_base_a select 1,-2 where false;\n> rollback;\n> \n> however the following one:\n> BEGIN;\n> CREATE INCREMENTAL MATERIALIZED VIEW test_ivm1 AS SELECT MIN(j) as\n> min_j FROM mv_base_a;\n> INSERT INTO mv_base_a select 1, -2 where false;\n> rollback;\n> \n> will evaluate\n> tuplestore_tuple_count(new_tuplestores) to 1, it will walk through\n> IVM_immediate_maintenance function to apply_delta.\n> but should it be zero?\n\nThis is not a bug because an aggregate without GROUP BY always\nresults one row whose value is NULL. \n\nThe contents of test_imv1 would be always same as \" SELECT MIN(j) as min_j \nFROM mv_base_a;\", isn't it?\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Mon, 28 Aug 2023 02:49:08 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
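The behaviour discussed here can be reproduced on a stock server without the patch: an aggregate query with no GROUP BY always returns exactly one row, so the computed delta legitimately contains one tuple even though the INSERT added nothing. A fresh table name is used below so the snippet stands alone.

```
CREATE TABLE ivm_demo_empty (i int, j int);

-- No rows in the table, yet the aggregate query yields one row with NULL:
SELECT min(j) AS min_j FROM ivm_demo_empty;
--  min_j
-- -------
--
-- (1 row)

-- With a GROUP BY, the same query yields zero rows instead,
-- which is why the grouped variant of the view saw an empty delta:
SELECT i, min(j) AS min_j FROM ivm_demo_empty GROUP BY i;
-- (0 rows)
```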
{
"msg_contents": "On Mon, 28 Aug 2023 02:49:08 +0900\nYugo NAGATA <[email protected]> wrote:\n\n> On Sun, 2 Jul 2023 10:38:20 +0800\n> jian he <[email protected]> wrote:\n\nI attahed the patches v29 updated to comments from jian he.\nThe changes from the previous includes:\n\n- errors in documentations is fixed.\n- remove unnecessary BEGIN from the test\n- add isimmv column to pg_matviews system view\n- fix a typo\n- rebase to the master branch\n\n> \n> > ok. Now I really found a small bug.\n> > \n> > this works as intended:\n> > BEGIN;\n> > CREATE INCREMENTAL MATERIALIZED VIEW test_ivm AS SELECT i, MIN(j) as\n> > min_j FROM mv_base_a group by 1;\n> > INSERT INTO mv_base_a select 1,-2 where false;\n> > rollback;\n> > \n> > however the following one:\n> > BEGIN;\n> > CREATE INCREMENTAL MATERIALIZED VIEW test_ivm1 AS SELECT MIN(j) as\n> > min_j FROM mv_base_a;\n> > INSERT INTO mv_base_a select 1, -2 where false;\n> > rollback;\n> > \n> > will evaluate\n> > tuplestore_tuple_count(new_tuplestores) to 1, it will walk through\n> > IVM_immediate_maintenance function to apply_delta.\n> > but should it be zero?\n> \n> This is not a bug because an aggregate without GROUP BY always\n> results one row whose value is NULL. \n> \n> The contents of test_imv1 would be always same as \" SELECT MIN(j) as min_j \n> FROM mv_base_a;\", isn't it?\n> \n> \n> Regards,\n> Yugo Nagata\n> \n> -- \n> Yugo NAGATA <[email protected]>\n> \n> \n\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Mon, 28 Aug 2023 11:52:52 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Mon, 28 Aug 2023 11:52:52 +0900\nYugo NAGATA <[email protected]> wrote:\n\n> On Mon, 28 Aug 2023 02:49:08 +0900\n> Yugo NAGATA <[email protected]> wrote:\n> \n> > On Sun, 2 Jul 2023 10:38:20 +0800\n> > jian he <[email protected]> wrote:\n> \n> I attahed the patches v29 updated to comments from jian he.\n> The changes from the previous includes:\n> \n> - errors in documentations is fixed.\n> - remove unnecessary BEGIN from the test\n> - add isimmv column to pg_matviews system view\n> - fix a typo\n> - rebase to the master branch\n\nI found pg_dump test was broken, so attached the fixed version.\n\nRegards,\nYugo Nagata\n\n> \n> > \n> > > ok. Now I really found a small bug.\n> > > \n> > > this works as intended:\n> > > BEGIN;\n> > > CREATE INCREMENTAL MATERIALIZED VIEW test_ivm AS SELECT i, MIN(j) as\n> > > min_j FROM mv_base_a group by 1;\n> > > INSERT INTO mv_base_a select 1,-2 where false;\n> > > rollback;\n> > > \n> > > however the following one:\n> > > BEGIN;\n> > > CREATE INCREMENTAL MATERIALIZED VIEW test_ivm1 AS SELECT MIN(j) as\n> > > min_j FROM mv_base_a;\n> > > INSERT INTO mv_base_a select 1, -2 where false;\n> > > rollback;\n> > > \n> > > will evaluate\n> > > tuplestore_tuple_count(new_tuplestores) to 1, it will walk through\n> > > IVM_immediate_maintenance function to apply_delta.\n> > > but should it be zero?\n> > \n> > This is not a bug because an aggregate without GROUP BY always\n> > results one row whose value is NULL. \n> > \n> > The contents of test_imv1 would be always same as \" SELECT MIN(j) as min_j \n> > FROM mv_base_a;\", isn't it?\n> > \n> > \n> > Regards,\n> > Yugo Nagata\n> > \n> > -- \n> > Yugo NAGATA <[email protected]>\n> > \n> > \n> \n> \n> -- \n> Yugo NAGATA <[email protected]>\n\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Mon, 28 Aug 2023 16:05:30 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "hi\nbased on v29.\nbased on https://stackoverflow.com/a/4014981/1560347:\nI added a new function append_update_set_caluse, and deleted\nfunctions: {append_set_clause_for_count, append_set_clause_for_sum,\nappend_set_clause_for_avg, append_set_clause_for_minmax}\n\nI guess this way is more extensible/generic than yours.\n\nreplaced the following code with the generic function: append_update_set_caluse.\n+ /* For views with aggregates, we need to build SET clause for\nupdating aggregate\n+ * values. */\n+ if (query->hasAggs && IsA(tle->expr, Aggref))\n+ {\n+ Aggref *aggref = (Aggref *) tle->expr;\n+ const char *aggname = get_func_name(aggref->aggfnoid);\n+\n+ /*\n+ * We can use function names here because it is already checked if these\n+ * can be used in IMMV by its OID at the definition time.\n+ */\n+\n+ /* count */\n+ if (!strcmp(aggname, \"count\"))\n+ append_set_clause_for_count(resname, aggs_set_old, aggs_set_new,\naggs_list_buf);\n+\n+ /* sum */\n+ else if (!strcmp(aggname, \"sum\"))\n+ append_set_clause_for_sum(resname, aggs_set_old, aggs_set_new, aggs_list_buf);\n+\n+ /* avg */\n+ else if (!strcmp(aggname, \"avg\"))\n+ append_set_clause_for_avg(resname, aggs_set_old, aggs_set_new, aggs_list_buf,\n+ format_type_be(aggref->aggtype));\n+\n+ else\n+ elog(ERROR, \"unsupported aggregate function: %s\", aggname);\n+ }\n----------------------<<<\nattached is my refactor. there is some whitespace errors in the\npatches, you need use\ngit apply --reject --whitespace=fix\nbasedon_v29_matview_c_refactor_update_set_clause.patch\n\nAlso you patch cannot use git apply, i finally found out bulk apply\nusing gnu patch from\nhttps://serverfault.com/questions/102324/apply-multiple-patch-files.\npreviously I just did it manually one by one.\n\nI think if you use { for i in $PATCHES/v29*.patch; do patch -p1 < $i;\ndone } GNU patch, it will generate an .orig file for every modified\nfile?\n-----------------<<<<<\nsrc/backend/commands/matview.c\n2268: /* For tuple deletion */\nmaybe \"/* For tuple deletion and update*/\" is more accurate?\n-----------------<<<<<\ncurrently at here: src/test/regress/sql/incremental_matview.sql\n98: -- support SUM(), COUNT() and AVG() aggregate functions\n99: BEGIN;\n100: CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_agg AS SELECT i,\nSUM(j), COUNT(i), AVG(j) FROM mv_base_a GROUP BY i;\n101: SELECT * FROM mv_ivm_agg ORDER BY 1,2,3,4;\n102: INSERT INTO mv_base_a VALUES(2,100);\n\nsrc/backend/commands/matview.c\n2858: if (SPI_exec(querybuf.data, 0) != SPI_OK_INSERT)\n2859: elog(ERROR, \"SPI_exec failed: %s\", querybuf.data);\n\nthen I debug, print out querybuf.data:\nWITH updt AS (UPDATE public.mv_ivm_agg AS mv SET __ivm_count__ =\nmv.__ivm_count__ OPERATOR(pg_catalog.+) diff.__ivm_count__ , sum =\n(CASE WHEN mv.__ivm_count_sum__ OPERATOR(pg_catalog.=) 0 AND\ndiff.__ivm_count_sum__ OPERATOR(pg_catalog.=) 0 THEN NULL WHEN mv.sum\nIS NULL THEN diff.sum WHEN diff.sum IS NULL THEN mv.sum ELSE (mv.sum\nOPERATOR(pg_catalog.+) diff.sum) END), __ivm_count_sum__ =\n(mv.__ivm_count_sum__ OPERATOR(pg_catalog.+) diff.__ivm_count_sum__),\ncount = (mv.count OPERATOR(pg_catalog.+) diff.count), avg = (CASE WHEN\nmv.__ivm_count_avg__ OPERATOR(pg_catalog.=) 0 AND\ndiff.__ivm_count_avg__ OPERATOR(pg_catalog.=) 0 THEN NULL WHEN\nmv.__ivm_sum_avg__ IS NULL THEN diff.__ivm_sum_avg__ WHEN\ndiff.__ivm_sum_avg__ IS NULL THEN mv.__ivm_sum_avg__ ELSE\n(mv.__ivm_sum_avg__ OPERATOR(pg_catalog.+)\ndiff.__ivm_sum_avg__)::numeric END) OPERATOR(pg_catalog./)\n(mv.__ivm_count_avg__ 
OPERATOR(pg_catalog.+) diff.__ivm_count_avg__),\n__ivm_sum_avg__ = (CASE WHEN mv.__ivm_count_avg__\nOPERATOR(pg_catalog.=) 0 AND diff.__ivm_count_avg__\nOPERATOR(pg_catalog.=) 0 THEN NULL WHEN mv.__ivm_sum_avg__ IS NULL\nTHEN diff.__ivm_sum_avg__ WHEN diff.__ivm_sum_avg__ IS NULL THEN\nmv.__ivm_sum_avg__ ELSE (mv.__ivm_sum_avg__ OPERATOR(pg_catalog.+)\ndiff.__ivm_sum_avg__) END), __ivm_count_avg__ = (mv.__ivm_count_avg__\nOPERATOR(pg_catalog.+) diff.__ivm_count_avg__) FROM new_delta AS diff\nWHERE (mv.i OPERATOR(pg_catalog.=) diff.i OR (mv.i IS NULL AND diff.i\nIS NULL)) RETURNING mv.i) INSERT INTO public.mv_ivm_agg (i, sum,\ncount, avg, __ivm_count_sum__, __ivm_count_avg__, __ivm_sum_avg__,\n__ivm_count__) SELECT i, sum, count, avg, __ivm_count_sum__,\n__ivm_count_avg__, __ivm_sum_avg__, __ivm_count__ FROM new_delta AS\ndiff WHERE NOT EXISTS (SELECT 1 FROM updt AS mv WHERE (mv.i\nOPERATOR(pg_catalog.=) diff.i OR (mv.i IS NULL AND diff.i IS NULL)));\n\nAt this final SPI_exec, we have a update statement with related\ncolumns { __ivm_count_sum__, sum, __ivm_count__, count, avg,\n__ivm_sum_avg__, __ivm_count_avg__}. At this time, my mind stops\nworking, querybuf.data is way too big, but I still feel like there is\nsome logic associated with these columns, maybe we can use it as an\nassertion to prove that this query (querybuf.len = 1834) is indeed\ncorrect.\n\nSince the apply delta query is quite complex, I feel like adding some\n\"if debug then print out the final querybuf.data end if\" would be a\ngood idea.\n\nwe add hidden columns somewhere, also to avoid corner cases, so maybe\nsomewhere we should assert total attribute number is sane.",
"msg_date": "Fri, 1 Sep 2023 15:42:17 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
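The generated UPDATE printed above is hard to read as a single line. As a reading aid, each aggregate column is maintained by a CASE of the shape below, written out by hand for sum(j) with illustrative table names; the real query qualifies every operator with pg_catalog and maintains the avg helper columns (__ivm_sum_avg__, __ivm_count_avg__) with the same pattern.

```
-- Illustrative stand-ins for the view and the delta of an IMMV with sum(j):
CREATE TABLE ivm_demo_agg_mv    (i int, sum bigint, __ivm_count_sum__ bigint);
CREATE TABLE ivm_demo_agg_delta (i int, sum bigint, __ivm_count_sum__ bigint);

UPDATE ivm_demo_agg_mv AS mv
   SET sum = CASE
                 -- neither side has any non-NULL input: the aggregate is NULL
                 WHEN mv.__ivm_count_sum__ = 0
                      AND diff.__ivm_count_sum__ = 0 THEN NULL
                 -- only one side has a value so far: take it as-is
                 WHEN mv.sum   IS NULL THEN diff.sum
                 WHEN diff.sum IS NULL THEN mv.sum
                 -- otherwise merge the two partial sums by addition
                 ELSE mv.sum + diff.sum
             END,
       __ivm_count_sum__ = mv.__ivm_count_sum__ + diff.__ivm_count_sum__
  FROM ivm_demo_agg_delta AS diff
 WHERE mv.i = diff.i;
```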
{
"msg_contents": "> attached is my refactor. there is some whitespace errors in the\n> patches, you need use\n> git apply --reject --whitespace=fix\n> basedon_v29_matview_c_refactor_update_set_clause.patch\n> \n> Also you patch cannot use git apply, i finally found out bulk apply\n\nI have no problem with applying Yugo's v29 patches using git apply, no\nwhite space errors.\n\n$ git apply ~/v29*\n\n(the patches are saved under my home directory).\n\nI suggest you to check your email application whether it correctly\nsaved the patch files for you.\n\nFYI, here are results from sha256sum:\n\nffac37cb455788c1105ffc01c6b606de75f53321c2f235f7efa19f3f52d12b9e v29-0001-Add-a-syntax-to-create-Incrementally-Maintainabl.patch\nf684485e7c9ac1b2990943a3c73fa49a9091a268917547d9e116baef5118cca7 v29-0002-Add-relisivm-column-to-pg_class-system-catalog.patch\nfcf5bc8ae562ed1c2ab397b499544ddab03ad2c3acb2263d0195a3ec799b131c v29-0003-Allow-to-prolong-life-span-of-transition-tables-.patch\na7a13ef8e73c4717166db079d5607f78d21199379de341a0e8175beef5ea1c1a v29-0004-Add-Incremental-View-Maintenance-support-to-pg_d.patch\na2aa51d035774867bfab1580ef14143998dc71c1b941bd1a3721dc019bc62649 v29-0005-Add-Incremental-View-Maintenance-support-to-psql.patch\nfe0225d761a08eb80082f1a2c039b9b8b20626169b03abaf649db9c74fe99194 v29-0006-Add-Incremental-View-Maintenance-support.patch\n68b007befedcf92fc83ab8c3347ac047a50816f061c77b69281e12d52944db82 v29-0007-Add-DISTINCT-support-for-IVM.patch\n2201241a22095f736a17383fc8b26d48a459ebf1c2f5cf120896cfc0ce5e03e4 v29-0008-Add-aggregates-support-in-IVM.patch\n6390117c559bf1585349c5a09b77b784e086ccc22eb530cd364ce78371c66741 v29-0009-Add-support-for-min-max-aggregates-for-IVM.patch\n7019a116c64127783bd9c682ddf1ee3792286d0e41c91a33010111e7be2c9459 v29-0010-Add-regression-tests-for-Incremental-View-Mainte.patch\n189afdc7da866bd958e2d554ba12adf93d7e6d0acb581290a48d72fcf640e243 v29-0011-Add-documentations-about-Incremental-View-Mainte.patch\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sat, 02 Sep 2023 20:46:34 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Sat, Sep 2, 2023 at 7:46 PM Tatsuo Ishii <[email protected]> wrote:\n>\n> > attached is my refactor. there is some whitespace errors in the\n> > patches, you need use\n> > git apply --reject --whitespace=fix\n> > basedon_v29_matview_c_refactor_update_set_clause.patch\n> >\n> > Also you patch cannot use git apply, i finally found out bulk apply\n>\n> I have no problem with applying Yugo's v29 patches using git apply, no\n> white space errors.\n>\n\nthanks. I downloaded the patches from the postgres website, then the\nproblem was solved.\n\nother ideas based on v29.\n\nsrc/include/utils/rel.h\n680: #define RelationIsIVM(relation) ((relation)->rd_rel->relisivm)\nI guess it would be better to add some comments to address the usage.\nSince all peer macros all have some comments.\n\npg_class change, I guess we need bump CATALOG_VERSION_NO?\n\nsmall issue. makeIvmAggColumn and calc_delta need to add an empty\nreturn statement?\n\nstyle issue. in gram.y, \"incremental\" upper case?\n+ CREATE OptNoLog incremental MATERIALIZED VIEW\ncreate_mv_target AS SelectStmt opt_with_data\n\nI don't know how pgident works, do you need to add some keywords to\nsrc/tools/pgindent/typedefs.list to make indentation work?\n\nin\n /* If this is not the last AFTER trigger call, immediately exit. */\n Assert (entry->before_trig_count >= entry->after_trig_count);\n if (entry->before_trig_count != entry->after_trig_count)\n return PointerGetDatum(NULL);\n\nbefore returning NULL, do you also need clean_up_IVM_hash_entry? (I\ndon't know when this case will happen)\n\nin\n /* Replace the modified table with the new delta table and\ncalculate the new view delta*/\n replace_rte_with_delta(rte, table, true, queryEnv);\n refresh_matview_datafill(dest_new, query, queryEnv, tupdesc_new, \"\");\n\nreplace_rte_with_delta does not change the argument: table, argument:\nqueryEnv. refresh_matview_datafill just uses the partial argument of\nthe function calc_delta. So I guess, I am confused by the usage of\nreplace_rte_with_delta. also I think it should return void, since you\njust modify the input argument. Here refresh_matview_datafill is just\npersisting new delta content to dest_new?\n\n\n",
"msg_date": "Mon, 4 Sep 2023 16:48:02 +0800",
"msg_from": "jian he <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nlike there was some CFbot test failure last time it was run [2].\nPlease have a look and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4337/\n[2] https://cirrus-ci.com/task/6607979311529984\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 13:51:08 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Mon, 22 Jan 2024 13:51:08 +1100\nPeter Smith <[email protected]> wrote:\n\n> 2024-01 Commitfest.\n> \n> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> like there was some CFbot test failure last time it was run [2].\n> Please have a look and post an updated version if necessary.\n\nThank you for pointing out it. The CFbot failure is caused by\na post [1] not by my patch-set, but regardless of it, I will \nheck if we need rebase and send the new version if necessary soon.\n\n[1] https://www.postgresql.org/message-id/CACJufxEoCCJE1vntJp1SWjen8vBUa3vZLgL%3DswPwar4zim976g%40mail.gmail.com\n\nRegards,\nYugo Nagata\n\n> ======\n> [1] https://commitfest.postgresql.org/46/4337/\n> [2] https://cirrus-ci.com/task/6607979311529984\n> \n> Kind Regards,\n> Peter Smith.\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Tue, 23 Jan 2024 16:23:27 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Fri, 1 Sep 2023 15:42:17 +0800\njian he <[email protected]> wrote:\n\nI apologize for this late reply. \n\n> I added a new function append_update_set_caluse, and deleted\n> functions: {append_set_clause_for_count, append_set_clause_for_sum,\n> append_set_clause_for_avg, append_set_clause_for_minmax}\n> \n> I guess this way is more extensible/generic than yours.\n\nDo you mean that consolidating such functions to a general function\nmake easier to support a new aggregate function in future? I'm not\nconvinced completely yet it because your suggestion seems that every\nfunctions' logic are just put into a new function, but providing a\ncommon interface might make a sense a bit.\n\nBy the way, when you attach files other than updated patches that\ncan be applied to master branch, using \".patch\" or \".diff\" as the\nfile extension help to avoid to confuse cfbot (for example, like\nbasedon_v29_matview_c_refactor_update_set_clause.patch.txt).\n\n> src/backend/commands/matview.c\n> 2268: /* For tuple deletion */\n> maybe \"/* For tuple deletion and update*/\" is more accurate?\n\nThis \"deletion\" means deletion of tuple from the view rather \nthan DELETE statement, so I think this is ok. \n\n> Since the apply delta query is quite complex, I feel like adding some\n> \"if debug then print out the final querybuf.data end if\" would be a\n> good idea.\n\nAgreed, it would be helpful for debugging. I think it would be good\nto add a debug macro that works if DEBUG_IVM is defined rather than\nadding GUC like debug_print_..., how about it?\n\n> we add hidden columns somewhere, also to avoid corner cases, so maybe\n> somewhere we should assert total attribute number is sane.\n\nThe number of hidden columns to be added depends on the view definition\nquery, so I wonder the Assert condition would be a bit complex. Could\nyou explain what are you assume about like for example? \n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Mon, 4 Mar 2024 11:53:44 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Mon, 4 Sep 2023 16:48:02 +0800\njian he <[email protected]> wrote:\n> other ideas based on v29.\n> \n> src/include/utils/rel.h\n> 680: #define RelationIsIVM(relation) ((relation)->rd_rel->relisivm)\n> I guess it would be better to add some comments to address the usage.\n> Since all peer macros all have some comments.\n\nOK. I will add comments on this macro.\n\n> pg_class change, I guess we need bump CATALOG_VERSION_NO?\n\nCATALOG_VERSION_NO is frequently bumped up when new features are\ncommitted, so including it in the patch causes frequent needs for\nrebase during the review of the patch even if no meaningful change\nis made. Therefore, I wonder we don't have to included it in the\npatch at this time. \n\n> small issue. makeIvmAggColumn and calc_delta need to add an empty\n> return statement?\n\nI'm sorry but I could not understand what you suggested, so could\nyou give me more explanation?\n\n> style issue. in gram.y, \"incremental\" upper case?\n> + CREATE OptNoLog incremental MATERIALIZED VIEW\n> create_mv_target AS SelectStmt opt_with_data\n\nThis \"incremental\" is defined as INCREMENTAL or empty, as below.\n\nincremental: INCREMENTAL { $$ = true; }\n | /*EMPTY*/ { $$ = false; }\n\n\n> I don't know how pgident works, do you need to add some keywords to\n> src/tools/pgindent/typedefs.list to make indentation work?\n\nI'm not sure typedefs.list should be updated in each patch, because\ntools/pgindent/README said that the latest typedef file is downloaded\nfrom the buildfarm when pgindent is run.\n\n> in\n> /* If this is not the last AFTER trigger call, immediately exit. */\n> Assert (entry->before_trig_count >= entry->after_trig_count);\n> if (entry->before_trig_count != entry->after_trig_count)\n> return PointerGetDatum(NULL);\n> \n> before returning NULL, do you also need clean_up_IVM_hash_entry? (I\n> don't know when this case will happen)\n\nNo, clean_up_IVM_hash_entry is not necessary in this case.\nWhen multiple tables are updated in a statement, statement-level AFTER\ntriggers collects every information of the tables, and the last AFTER\ntrigger have to perform the actual maintenance of the view. To make sure\nthis, the number that BEFORE and AFTER trigger is fired is counted\nrespectively, and when they match it is regarded the last AFTER trigger\ncall performing the maintenance. Until this, collected information have\nto keep, so we cannot call clean_up_IVM_hash_entry. \n\n> in\n> /* Replace the modified table with the new delta table and\n> calculate the new view delta*/\n> replace_rte_with_delta(rte, table, true, queryEnv);\n> refresh_matview_datafill(dest_new, query, queryEnv, tupdesc_new, \"\");\n> \n> replace_rte_with_delta does not change the argument: table, argument:\n> queryEnv. refresh_matview_datafill just uses the partial argument of\n> the function calc_delta. So I guess, I am confused by the usage of\n> replace_rte_with_delta. also I think it should return void, since you\n> just modify the input argument. Here refresh_matview_datafill is just\n> persisting new delta content to dest_new?\n\nYes, refresh_matview_datafill executes the query and the result rows to\n\"dest_new\". And, replace_rte_with_delta updates the input argument \"rte\"\nand returns the result to it, so it may be better that this returns void,\nas you suggested.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Mon, 4 Mar 2024 11:53:50 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
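The multiple-tables case that this BEFORE/AFTER counting protects can be produced with a single data-modifying CTE, as sketched below. The table names are illustrative, and the IMMV creation is shown as a comment because it needs the patch set.

```
CREATE TABLE ivm_demo_t1 (i int);
CREATE TABLE ivm_demo_t2 (i int);

-- With the patch applied:
--   CREATE INCREMENTAL MATERIALIZED VIEW ivm_demo_v12 AS
--       SELECT t1.i FROM ivm_demo_t1 t1 JOIN ivm_demo_t2 t2 ON t1.i = t2.i;

-- One statement that modifies both base tables: the IVM BEFORE/AFTER
-- statement triggers fire once per modified table, and only the call in
-- which the two counters finally match applies the accumulated deltas.
WITH ins AS (
    INSERT INTO ivm_demo_t1 VALUES (1) RETURNING i
)
INSERT INTO ivm_demo_t2 SELECT i FROM ins;
```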
{
"msg_contents": "On Tue, 23 Jan 2024 16:23:27 +0900\nYugo NAGATA <[email protected]> wrote:\n\n> On Mon, 22 Jan 2024 13:51:08 +1100\n> Peter Smith <[email protected]> wrote:\n> \n> > 2024-01 Commitfest.\n> > \n> > Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> > like there was some CFbot test failure last time it was run [2].\n> > Please have a look and post an updated version if necessary.\n\nI attached a rebased patch-set, v30.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Mon, 4 Mar 2024 11:58:46 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Mon, 4 Mar 2024 11:58:46 +0900\nYugo NAGATA <[email protected]> wrote:\n\n> On Tue, 23 Jan 2024 16:23:27 +0900\n> Yugo NAGATA <[email protected]> wrote:\n> \n> > On Mon, 22 Jan 2024 13:51:08 +1100\n> > Peter Smith <[email protected]> wrote:\n> > \n> > > 2024-01 Commitfest.\n> > > \n> > > Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> > > like there was some CFbot test failure last time it was run [2].\n> > > Please have a look and post an updated version if necessary.\n> \n> I attached a rebased patch-set, v30.\n\nI attached a rebased patch-set, v31.\n\nAlso, I added a comment on RelationIsIVM() macro persuggestion from jian he.\nIn addition, I fixed a failure reported from cfbot on FreeBSD build caused by;\n\n WARNING: outfuncs/readfuncs failed to produce an equal rewritten parse tree\n\nThis warning was raised since I missed to modify outfuncs.c for a new field.\n\nRegards,\nYugo Nagata\n\n> \n> Regards,\n> Yugo Nagata\n> \n> -- \n> Yugo NAGATA <[email protected]>\n\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Fri, 29 Mar 2024 23:47:00 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Fri, 29 Mar 2024 23:47:00 +0900\nYugo NAGATA <[email protected]> wrote:\n\n> On Mon, 4 Mar 2024 11:58:46 +0900\n> Yugo NAGATA <[email protected]> wrote:\n> \n> > On Tue, 23 Jan 2024 16:23:27 +0900\n> > Yugo NAGATA <[email protected]> wrote:\n> > \n> > > On Mon, 22 Jan 2024 13:51:08 +1100\n> > > Peter Smith <[email protected]> wrote:\n> > > \n> > > > 2024-01 Commitfest.\n> > > > \n> > > > Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> > > > like there was some CFbot test failure last time it was run [2].\n> > > > Please have a look and post an updated version if necessary.\n> > \n> > I attached a rebased patch-set, v30.\n> \n> I attached a rebased patch-set, v31.\n> \n> Also, I added a comment on RelationIsIVM() macro persuggestion from jian he.\n> In addition, I fixed a failure reported from cfbot on FreeBSD build caused by;\n> \n> WARNING: outfuncs/readfuncs failed to produce an equal rewritten parse tree\n> \n> This warning was raised since I missed to modify outfuncs.c for a new field.\n\nI found cfbot on FreeBSD still reported a failure due to\nENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS because the regression test used\nwrong role names. Attached is a fixed version, v32.\n\nRegards,\nYugo Nagata\n\n> \n> Regards,\n> Yugo Nagata\n> \n> > \n> > Regards,\n> > Yugo Nagata\n> > \n> > -- \n> > Yugo NAGATA <[email protected]>\n> \n> \n> -- \n> Yugo NAGATA <[email protected]>\n\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Sun, 31 Mar 2024 22:59:31 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Sun, 31 Mar 2024 22:59:31 +0900\nYugo NAGATA <[email protected]> wrote:\n> > \n> > Also, I added a comment on RelationIsIVM() macro persuggestion from jian he.\n> > In addition, I fixed a failure reported from cfbot on FreeBSD build caused by;\n> > \n> > WARNING: outfuncs/readfuncs failed to produce an equal rewritten parse tree\n> > \n> > This warning was raised since I missed to modify outfuncs.c for a new field.\n> \n> I found cfbot on FreeBSD still reported a failure due to\n> ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS because the regression test used\n> wrong role names. Attached is a fixed version, v32.\n\nAttached is a rebased version, v33.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Tue, 2 Jul 2024 17:03:11 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Tue, 2 Jul 2024 17:03:11 +0900\nYugo NAGATA <[email protected]> wrote:\n\n> On Sun, 31 Mar 2024 22:59:31 +0900\n> Yugo NAGATA <[email protected]> wrote:\n> > > \n> > > Also, I added a comment on RelationIsIVM() macro persuggestion from jian he.\n> > > In addition, I fixed a failure reported from cfbot on FreeBSD build caused by;\n> > > \n> > > WARNING: outfuncs/readfuncs failed to produce an equal rewritten parse tree\n> > > \n> > > This warning was raised since I missed to modify outfuncs.c for a new field.\n> > \n> > I found cfbot on FreeBSD still reported a failure due to\n> > ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS because the regression test used\n> > wrong role names. Attached is a fixed version, v32.\n> \n> Attached is a rebased version, v33.\n\nI updated the patch to bump up the version numbers in psql and pg_dump codes\nfrom 17 to 18.\n\nRegards,\nYugo Nagata\n\n> \n> Regards,\n> Yugo Nagata\n> \n> \n> -- \n> Yugo NAGATA <[email protected]>\n\n\n-- \nYugo NAGATA <[email protected]>",
"msg_date": "Thu, 11 Jul 2024 13:23:57 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "Hi!\nCloudberry DB (Greenplum fork) uses IMMV feature for AQUMV (auto query\nuse matview) feature, so i got interested in how it is implemented.\n\nOn Thu, 11 Jul 2024 at 09:24, Yugo NAGATA <[email protected]> wrote:\n>\n> I updated the patch to bump up the version numbers in psql and pg_dump codes\n> from 17 to 18.\n\nFew suggestions:\n\n1) `Add-relisivm-column-to-pg_class-system-catalog` commit message\nshould be fixed, there is \"isimmv\" in the last line.\n2) I dont get why `Add-Incremental-View-Maintenance-support.patch`\ngoes after 0005 & 0004. Shoulndt we first implement feature server\nside, only when client (psql & pg_dump) side?\n3) Can we provide regression tests for each function separately? Test\nfor main feature in main patch, test for DISTINCT support in\nv34-0007-Add-DISTINCT-support-for-IVM.patch etc? This way the patchset\nwill be easier to review, and can be committed separelety.\n4) v34-0006-Add-Incremental-View-Maintenance-support.patch no longer\napplies due to 4b74ebf726d444ba820830cad986a1f92f724649. After\nresolving issues manually, it does not compile, because\n4b74ebf726d444ba820830cad986a1f92f724649 also removes\nsave_userid/save_sec_context fields from ExecCreateTableAs.\n\n> if (RelationIsIVM(matviewRel) && stmt->skipData)\nNow this function accepts skipData param.\n\n5) For DISTINCT support patch uses hidden __ivm* columns. Is this\ndesign discussed anywhere? I wonder if this is a necessity (only\nsolution) or if there are alternatives.\n6)\nWhat are the caveats of supporting some simple cases for aggregation\nfuncs like in example?\n```\nregress=# CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_2 AS SELECT\nsum(j) + sum(i) from mv_base_a;\nERROR: expression containing an aggregate in it is not supported on\nincrementally maintainable materialized view\n```\nI can see some difficulties with division CREATE IMMV .... AS SELECT\n1/sum(i) from mv_base_a; (sum(i) == 0 case), but adding &\nmultiplication should be ok, aren't they?\n\n\nOverall, patchset looks mature, however it is far from being\ncommittable due to lack of testing/feedback/discussion. There is only\none way to fix this... Test and discuss it!\n\n\n[1] https://github.com/cloudberrydb/cloudberrydb\n\n\n",
"msg_date": "Sat, 27 Jul 2024 13:26:46 +0500",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
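On item 6 above: a workaround that stays within what the quoted error message allows is to keep the aggregates as plain columns in the IMMV and do the arithmetic when reading the view. This is a hand-written sketch, not something the patch documents; the IMMV statements are commented out because they need the patch set, and the names are made up.

```
CREATE TABLE ivm_demo_base (i int, j int);

-- Rejected by the patch (an expression containing an aggregate):
--   CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_expr AS
--       SELECT sum(j) + sum(i) FROM ivm_demo_base;

-- Accepted shape: store the plain aggregates in the view...
--   CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_sums AS
--       SELECT sum(i) AS sum_i, sum(j) AS sum_j FROM ivm_demo_base;

-- ...and compute the expression when querying the view; the division
-- case can guard against zero at read time as well:
--   SELECT sum_i + sum_j          FROM mv_ivm_sums;
--   SELECT 1.0 / NULLIF(sum_i, 0) FROM mv_ivm_sums;
```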
{
"msg_contents": "On Sat, 27 Jul 2024 at 13:26, Kirill Reshke <[email protected]> wrote:\n>\n> Hi!\n> Cloudberry DB (Greenplum fork) uses IMMV feature for AQUMV (auto query\n> use matview) feature, so i got interested in how it is implemented.\n>\n> On Thu, 11 Jul 2024 at 09:24, Yugo NAGATA <[email protected]> wrote:\n> >\n> > I updated the patch to bump up the version numbers in psql and pg_dump codes\n> > from 17 to 18.\n>\n> Few suggestions:\n>\n> 1) `Add-relisivm-column-to-pg_class-system-catalog` commit message\n> should be fixed, there is \"isimmv\" in the last line.\n> 2) I dont get why `Add-Incremental-View-Maintenance-support.patch`\n> goes after 0005 & 0004. Shoulndt we first implement feature server\n> side, only when client (psql & pg_dump) side?\n> 3) Can we provide regression tests for each function separately? Test\n> for main feature in main patch, test for DISTINCT support in\n> v34-0007-Add-DISTINCT-support-for-IVM.patch etc? This way the patchset\n> will be easier to review, and can be committed separelety.\n> 4) v34-0006-Add-Incremental-View-Maintenance-support.patch no longer\n> applies due to 4b74ebf726d444ba820830cad986a1f92f724649. After\n> resolving issues manually, it does not compile, because\n> 4b74ebf726d444ba820830cad986a1f92f724649 also removes\n> save_userid/save_sec_context fields from ExecCreateTableAs.\n>\n> > if (RelationIsIVM(matviewRel) && stmt->skipData)\n> Now this function accepts skipData param.\n>\n> 5) For DISTINCT support patch uses hidden __ivm* columns. Is this\n> design discussed anywhere? I wonder if this is a necessity (only\n> solution) or if there are alternatives.\n> 6)\n> What are the caveats of supporting some simple cases for aggregation\n> funcs like in example?\n> ```\n> regress=# CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_2 AS SELECT\n> sum(j) + sum(i) from mv_base_a;\n> ERROR: expression containing an aggregate in it is not supported on\n> incrementally maintainable materialized view\n> ```\n> I can see some difficulties with division CREATE IMMV .... AS SELECT\n> 1/sum(i) from mv_base_a; (sum(i) == 0 case), but adding &\n> multiplication should be ok, aren't they?\n>\n>\n> Overall, patchset looks mature, however it is far from being\n> committable due to lack of testing/feedback/discussion. There is only\n> one way to fix this... Test and discuss it!\n>\n>\n> [1] https://github.com/cloudberrydb/cloudberrydb\n\nHi! Small update: I tried to run a regression test and all\nIMMV-related tests failed on my vm. Maybe I'm doing something wrong, I\nwill try to investigate.\n\nAnother suggestion: support for \\d and \\d+ commands in psql. With v34\npatchset applied, psql does not show anything IMMV-related in \\d mode.\n\n```\nreshke=# \\d m1\n Materialized view \"public.m1\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n i | integer | | |\nDistributed by: (i)\n\n\nreshke=# \\d+ m1\n Materialized view \"public.m1\"\n Column | Type | Collation | Nullable | Default | Storage |\nCompression | Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n i | integer | | | | plain |\n | |\nView definition:\n SELECT t1.i\n FROM t1;\nDistributed by: (i)\nAccess method: heap\n\n```\n\nOutput should be 'Incrementally materialized view \"public.m1\"' IMO.\n\n\n",
"msg_date": "Tue, 30 Jul 2024 03:32:19 +0500",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "Hi,\n\nOn Tue, 30 Jul 2024 03:32:19 +0500\nKirill Reshke <[email protected]> wrote:\n\n> On Sat, 27 Jul 2024 at 13:26, Kirill Reshke <[email protected]> wrote:\n> >\n> > Hi!\n> > Cloudberry DB (Greenplum fork) uses IMMV feature for AQUMV (auto query\n> > use matview) feature, so i got interested in how it is implemented.\n\nThank you so much for a lot of comments!\nI will respond to the comments soon.\n\n> >\n> > On Thu, 11 Jul 2024 at 09:24, Yugo NAGATA <[email protected]> wrote:\n> > >\n> > > I updated the patch to bump up the version numbers in psql and pg_dump codes\n> > > from 17 to 18.\n> >\n> > Few suggestions:\n> >\n> > 1) `Add-relisivm-column-to-pg_class-system-catalog` commit message\n> > should be fixed, there is \"isimmv\" in the last line.\n> > 2) I dont get why `Add-Incremental-View-Maintenance-support.patch`\n> > goes after 0005 & 0004. Shoulndt we first implement feature server\n> > side, only when client (psql & pg_dump) side?\n> > 3) Can we provide regression tests for each function separately? Test\n> > for main feature in main patch, test for DISTINCT support in\n> > v34-0007-Add-DISTINCT-support-for-IVM.patch etc? This way the patchset\n> > will be easier to review, and can be committed separelety.\n> > 4) v34-0006-Add-Incremental-View-Maintenance-support.patch no longer\n> > applies due to 4b74ebf726d444ba820830cad986a1f92f724649. After\n> > resolving issues manually, it does not compile, because\n> > 4b74ebf726d444ba820830cad986a1f92f724649 also removes\n> > save_userid/save_sec_context fields from ExecCreateTableAs.\n> >\n> > > if (RelationIsIVM(matviewRel) && stmt->skipData)\n> > Now this function accepts skipData param.\n> >\n> > 5) For DISTINCT support patch uses hidden __ivm* columns. Is this\n> > design discussed anywhere? I wonder if this is a necessity (only\n> > solution) or if there are alternatives.\n> > 6)\n> > What are the caveats of supporting some simple cases for aggregation\n> > funcs like in example?\n> > ```\n> > regress=# CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_2 AS SELECT\n> > sum(j) + sum(i) from mv_base_a;\n> > ERROR: expression containing an aggregate in it is not supported on\n> > incrementally maintainable materialized view\n> > ```\n> > I can see some difficulties with division CREATE IMMV .... AS SELECT\n> > 1/sum(i) from mv_base_a; (sum(i) == 0 case), but adding &\n> > multiplication should be ok, aren't they?\n> >\n> >\n> > Overall, patchset looks mature, however it is far from being\n> > committable due to lack of testing/feedback/discussion. There is only\n> > one way to fix this... Test and discuss it!\n> >\n> >\n> > [1] https://github.com/cloudberrydb/cloudberrydb\n> \n> Hi! Small update: I tried to run a regression test and all\n> IMMV-related tests failed on my vm. Maybe I'm doing something wrong, I\n> will try to investigate.\n> \n> Another suggestion: support for \\d and \\d+ commands in psql. 
With v34\n> patchset applied, psql does not show anything IMMV-related in \\d mode.\n> \n> ```\n> reshke=# \\d m1\n> Materialized view \"public.m1\"\n> Column | Type | Collation | Nullable | Default\n> --------+---------+-----------+----------+---------\n> i | integer | | |\n> Distributed by: (i)\n> \n> \n> reshke=# \\d+ m1\n> Materialized view \"public.m1\"\n> Column | Type | Collation | Nullable | Default | Storage |\n> Compression | Stats target | Description\n> --------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n> i | integer | | | | plain |\n> | |\n> View definition:\n> SELECT t1.i\n> FROM t1;\n> Distributed by: (i)\n> Access method: heap\n> \n> ```\n> \n> Output should be 'Incrementally materialized view \"public.m1\"' IMO.\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Tue, 30 Jul 2024 14:24:20 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Tue, 30 Jul 2024 at 03:32, Kirill Reshke <[email protected]> wrote:\n>\n> On Sat, 27 Jul 2024 at 13:26, Kirill Reshke <[email protected]> wrote:\n> >\n> > Hi!\n> > Cloudberry DB (Greenplum fork) uses IMMV feature for AQUMV (auto query\n> > use matview) feature, so i got interested in how it is implemented.\n> >\n> > On Thu, 11 Jul 2024 at 09:24, Yugo NAGATA <[email protected]> wrote:\n> > >\n> > > I updated the patch to bump up the version numbers in psql and pg_dump codes\n> > > from 17 to 18.\n> >\n> > Few suggestions:\n> >\n> > 1) `Add-relisivm-column-to-pg_class-system-catalog` commit message\n> > should be fixed, there is \"isimmv\" in the last line.\n> > 2) I dont get why `Add-Incremental-View-Maintenance-support.patch`\n> > goes after 0005 & 0004. Shoulndt we first implement feature server\n> > side, only when client (psql & pg_dump) side?\n> > 3) Can we provide regression tests for each function separately? Test\n> > for main feature in main patch, test for DISTINCT support in\n> > v34-0007-Add-DISTINCT-support-for-IVM.patch etc? This way the patchset\n> > will be easier to review, and can be committed separelety.\n> > 4) v34-0006-Add-Incremental-View-Maintenance-support.patch no longer\n> > applies due to 4b74ebf726d444ba820830cad986a1f92f724649. After\n> > resolving issues manually, it does not compile, because\n> > 4b74ebf726d444ba820830cad986a1f92f724649 also removes\n> > save_userid/save_sec_context fields from ExecCreateTableAs.\n> >\n> > > if (RelationIsIVM(matviewRel) && stmt->skipData)\n> > Now this function accepts skipData param.\n> >\n> > 5) For DISTINCT support patch uses hidden __ivm* columns. Is this\n> > design discussed anywhere? I wonder if this is a necessity (only\n> > solution) or if there are alternatives.\n> > 6)\n> > What are the caveats of supporting some simple cases for aggregation\n> > funcs like in example?\n> > ```\n> > regress=# CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_2 AS SELECT\n> > sum(j) + sum(i) from mv_base_a;\n> > ERROR: expression containing an aggregate in it is not supported on\n> > incrementally maintainable materialized view\n> > ```\n> > I can see some difficulties with division CREATE IMMV .... AS SELECT\n> > 1/sum(i) from mv_base_a; (sum(i) == 0 case), but adding &\n> > multiplication should be ok, aren't they?\n> >\n> >\n> > Overall, patchset looks mature, however it is far from being\n> > committable due to lack of testing/feedback/discussion. There is only\n> > one way to fix this... Test and discuss it!\n> >\n> >\n> > [1] https://github.com/cloudberrydb/cloudberrydb\n>\n> Hi! Small update: I tried to run a regression test and all\n> IMMV-related tests failed on my vm. Maybe I'm doing something wrong, I\n> will try to investigate.\n>\n> Another suggestion: support for \\d and \\d+ commands in psql. 
With v34\n> patchset applied, psql does not show anything IMMV-related in \\d mode.\n>\n> ```\n> reshke=# \\d m1\n> Materialized view \"public.m1\"\n> Column | Type | Collation | Nullable | Default\n> --------+---------+-----------+----------+---------\n> i | integer | | |\n> Distributed by: (i)\n>\n>\n> reshke=# \\d+ m1\n> Materialized view \"public.m1\"\n> Column | Type | Collation | Nullable | Default | Storage |\n> Compression | Stats target | Description\n> --------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n> i | integer | | | | plain |\n> | |\n> View definition:\n> SELECT t1.i\n> FROM t1;\n> Distributed by: (i)\n> Access method: heap\n>\n> ```\n>\n> Output should be 'Incrementally materialized view \"public.m1\"' IMO.\n\n\n\nAnd one more thing, noticed today while playing with patchset:\nI believe non-terminal incremental should be OptIncremental\n\nIm talking about this:\n```\nincremental: INCREMENTAL { $$ = true; }\n| /*EMPTY*/ { $$ = false; }\n;\n```\n\n\n",
"msg_date": "Wed, 31 Jul 2024 11:39:37 +0500",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Thu, 11 Jul 2024 at 09:24, Yugo NAGATA <[email protected]> wrote:\n>\n> On Tue, 2 Jul 2024 17:03:11 +0900\n> Yugo NAGATA <[email protected]> wrote:\n>\n> > On Sun, 31 Mar 2024 22:59:31 +0900\n> > Yugo NAGATA <[email protected]> wrote:\n> > > >\n> > > > Also, I added a comment on RelationIsIVM() macro persuggestion from jian he.\n> > > > In addition, I fixed a failure reported from cfbot on FreeBSD build caused by;\n> > > >\n> > > > WARNING: outfuncs/readfuncs failed to produce an equal rewritten parse tree\n> > > >\n> > > > This warning was raised since I missed to modify outfuncs.c for a new field.\n> > >\n> > > I found cfbot on FreeBSD still reported a failure due to\n> > > ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS because the regression test used\n> > > wrong role names. Attached is a fixed version, v32.\n> >\n> > Attached is a rebased version, v33.\n>\n> I updated the patch to bump up the version numbers in psql and pg_dump codes\n> from 17 to 18.\n>\n> Regards,\n> Yugo Nagata\n>\n> >\n> > Regards,\n> > Yugo Nagata\n> >\n> >\n> > --\n> > Yugo NAGATA <[email protected]>\n>\n>\n> --\n> Yugo NAGATA <[email protected]>\n\nSmall updates with something o found recent days:\n\n```\ndb2=# create incremental materialized view v2 as select * from v1;\nERROR: VIEW or MATERIALIZED VIEW is not supported on incrementally\nmaintainable materialized view\n```\nError messaging is not true, create view v2 as select * from v1; works fine.\n\n\n```\ndb2=# create incremental materialized view vv2 as select i,j2, i / j2\nfrom t1 join t2 on true;\ndb2=# insert into t2 values(1,0);\nERROR: division by zero\n```\nIt is very strange to receive `division by zero` while inserting into\nrelation, isn't it? Can we add some hints/CONTEXT here?\nRegular triggers do it:\n```\ndb2=# insert into ttt values(100000,0);\nERROR: division by zero\nCONTEXT: PL/pgSQL function f1() line 3 at IF\n```\n\n\n-- \nBest regards,\nKirill Reshke\n\n\n",
"msg_date": "Tue, 6 Aug 2024 19:29:09 +0500",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "I am really sorry for splitting my review comments into multiple\nemails. I'll try to do a better review in a future, all-in-one.\n\nOn Thu, 11 Jul 2024 at 09:24, Yugo NAGATA <[email protected]> wrote:\n>\n> On Tue, 2 Jul 2024 17:03:11 +0900\n> Yugo NAGATA <[email protected]> wrote:\n>\n> > On Sun, 31 Mar 2024 22:59:31 +0900\n> > Yugo NAGATA <[email protected]> wrote:\n> > > >\n> > > > Also, I added a comment on RelationIsIVM() macro persuggestion from jian he.\n> > > > In addition, I fixed a failure reported from cfbot on FreeBSD build caused by;\n> > > >\n> > > > WARNING: outfuncs/readfuncs failed to produce an equal rewritten parse tree\n> > > >\n> > > > This warning was raised since I missed to modify outfuncs.c for a new field.\n> > >\n> > > I found cfbot on FreeBSD still reported a failure due to\n> > > ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS because the regression test used\n> > > wrong role names. Attached is a fixed version, v32.\n> >\n> > Attached is a rebased version, v33.\n>\n> I updated the patch to bump up the version numbers in psql and pg_dump codes\n> from 17 to 18.\n>\n> Regards,\n> Yugo Nagata\n>\n> >\n> > Regards,\n> > Yugo Nagata\n> >\n> >\n> > --\n> > Yugo NAGATA <[email protected]>\n>\n>\n> --\n> Yugo NAGATA <[email protected]>\n\n1) Provided patches do not set process title correctly:\n```\nreshke 2602433 18.7 0.1 203012 39760 ? Rs 20:41 1:58\npostgres: reshke ivm [local] CREATE MATERIALIZED VIEW\n```\n2) We allow to REFRESH IMMV. Why? IMMV should be always up to date.\nWell, I can see that this utility command may be useful in case of\ncorruption of some base relation/view itself, so there will be a need\nto rebuild the whole from scratch.\nBut we already have VACUUM FULL for this, aren't we?\n\n3) Triggers created for IMMV are not listed via \\dS [tablename]\n\n4) apply_old_delta_with_count executes non-trivial SQL statements for\nIMMV. It would be really helpful to see this in EXPLAIN ANALYZE.\n\n5)\n> + \"DELETE FROM %s WHERE ctid IN (\"\n> + \"SELECT tid FROM (SELECT pg_catalog.row_number() over (partition by %s) AS \\\"__ivm_row_number__\\\",\"\n> + \"mv.ctid AS tid,\"\n> + \"diff.\\\"__ivm_count__\\\"\"\n> + \"FROM %s AS mv, %s AS diff \"\n> + \"WHERE %s) v \"\n> + \"WHERE v.\\\"__ivm_row_number__\\\" OPERATOR(pg_catalog.<=) v.\\\"__ivm_count__\\\")\",\n> + matviewname,\n> + keysbuf.data,\n> + matviewname, deltaname_old,\n> + match_cond);\n\n`SELECT pg_catalog.row_number()` is too generic to my taste. Maybe\npg_catalog.immv_row_number() / pg_catalog.get_immv_row_number() ?\n\n6)\n\n> +static void\n> +apply_new_delta(const char *matviewname, const char *deltaname_new,\n> + StringInfo target_list)\n> +{\n> + StringInfoData querybuf;\n>+\n> + /* Search for matching tuples from the view and update or delete if found. */\n\nIs this comment correct? 
we only insert tuples here?\n\n7)\n\n During patch development, one should pick OIDs from range 8000-9999\n> +# IVM\n> +{ oid => '786', descr => 'ivm trigger (before)',\n> + proname => 'IVM_immediate_before', provolatile => 'v', prorettype => 'trigger',\n> + proargtypes => '', prosrc => 'IVM_immediate_before' },\n> +{ oid => '787', descr => 'ivm trigger (after)',\n> + proname => 'IVM_immediate_maintenance', provolatile => 'v', prorettype => 'trigger',\n> + proargtypes => '', prosrc => 'IVM_immediate_maintenance' },\n> +{ oid => '788', descr => 'ivm filetring ',\n> + proname => 'ivm_visible_in_prestate', provolatile => 's', prorettype => 'bool',\n> + proargtypes => 'oid tid oid', prosrc => 'ivm_visible_in_prestate' },\n> ]\n\n\n--\nBest regards,\nKirill Reshke\n\n\n",
"msg_date": "Thu, 8 Aug 2024 14:37:54 +0500",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Wed, 31 May 2023 at 20:14, Yugo NAGATA <[email protected]> wrote:\n>\n> Hello hackers,\n>\n> Here's a rebased version of the patch-set adding Incremental View\n> Maintenance support for PostgreSQL. That was discussed in [1].\n>\n> The patch-set consists of the following eleven patches.\n>\n> - 0001: Add a syntax to create Incrementally Maintainable Materialized Views\n> - 0002: Add relisivm column to pg_class system catalog\n> - 0003: Allow to prolong life span of transition tables until transaction end\n> - 0004: Add Incremental View Maintenance support to pg_dum\n> - 0005: Add Incremental View Maintenance support to psql\n> - 0006: Add Incremental View Maintenance support\n> - 0007: Add DISTINCT support for IVM\n> - 0008: Add aggregates support in IVM\n> - 0009: Add support for min/max aggregates for IVM\n> - 0010: regression tests\n> - 0011: documentation\n>\n> [1] https://www.postgresql.org/message-id/flat/20181227215726.4d166b4874f8983a641123f5%40sraoss.co.jp\n>\n>\n> Regards,\n> Yugo Nagata\n>\n> --\n> Yugo NAGATA <[email protected]>\n\nActually, this new MV delta-table calculation can be used to make\nfaster REFRESH MATERIALIZED VIEW even for non-IMMV. Specifically, we\ncan use our cost-based Optimizer to decide which way is cheaper:\nregular query execution, or delta-table approach (if it is\napplicable).\n\nIs it worth another thread?\n\n-- \nBest regards,\nKirill Reshke\n\n\n",
"msg_date": "Thu, 8 Aug 2024 15:03:08 +0500",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Tue, 30 Jul 2024 at 10:24, Yugo NAGATA <[email protected]> wrote:\n>\n> Hi,\n>\n> On Tue, 30 Jul 2024 03:32:19 +0500\n> Kirill Reshke <[email protected]> wrote:\n>\n> > On Sat, 27 Jul 2024 at 13:26, Kirill Reshke <[email protected]> wrote:\n> > >\n> > > Hi!\n> > > Cloudberry DB (Greenplum fork) uses IMMV feature for AQUMV (auto query\n> > > use matview) feature, so i got interested in how it is implemented.\n>\n> Thank you so much for a lot of comments!\n> I will respond to the comments soon.\n>\n> > >\n> > > On Thu, 11 Jul 2024 at 09:24, Yugo NAGATA <[email protected]> wrote:\n> > > >\n> > > > I updated the patch to bump up the version numbers in psql and pg_dump codes\n> > > > from 17 to 18.\n> > >\n> > > Few suggestions:\n> > >\n> > > 1) `Add-relisivm-column-to-pg_class-system-catalog` commit message\n> > > should be fixed, there is \"isimmv\" in the last line.\n> > > 2) I dont get why `Add-Incremental-View-Maintenance-support.patch`\n> > > goes after 0005 & 0004. Shoulndt we first implement feature server\n> > > side, only when client (psql & pg_dump) side?\n> > > 3) Can we provide regression tests for each function separately? Test\n> > > for main feature in main patch, test for DISTINCT support in\n> > > v34-0007-Add-DISTINCT-support-for-IVM.patch etc? This way the patchset\n> > > will be easier to review, and can be committed separelety.\n> > > 4) v34-0006-Add-Incremental-View-Maintenance-support.patch no longer\n> > > applies due to 4b74ebf726d444ba820830cad986a1f92f724649. After\n> > > resolving issues manually, it does not compile, because\n> > > 4b74ebf726d444ba820830cad986a1f92f724649 also removes\n> > > save_userid/save_sec_context fields from ExecCreateTableAs.\n> > >\n> > > > if (RelationIsIVM(matviewRel) && stmt->skipData)\n> > > Now this function accepts skipData param.\n> > >\n> > > 5) For DISTINCT support patch uses hidden __ivm* columns. Is this\n> > > design discussed anywhere? I wonder if this is a necessity (only\n> > > solution) or if there are alternatives.\n> > > 6)\n> > > What are the caveats of supporting some simple cases for aggregation\n> > > funcs like in example?\n> > > ```\n> > > regress=# CREATE INCREMENTAL MATERIALIZED VIEW mv_ivm_2 AS SELECT\n> > > sum(j) + sum(i) from mv_base_a;\n> > > ERROR: expression containing an aggregate in it is not supported on\n> > > incrementally maintainable materialized view\n> > > ```\n> > > I can see some difficulties with division CREATE IMMV .... AS SELECT\n> > > 1/sum(i) from mv_base_a; (sum(i) == 0 case), but adding &\n> > > multiplication should be ok, aren't they?\n> > >\n> > >\n> > > Overall, patchset looks mature, however it is far from being\n> > > committable due to lack of testing/feedback/discussion. There is only\n> > > one way to fix this... Test and discuss it!\n> > >\n> > >\n> > > [1] https://github.com/cloudberrydb/cloudberrydb\n> >\n> > Hi! Small update: I tried to run a regression test and all\n> > IMMV-related tests failed on my vm. Maybe I'm doing something wrong, I\n> > will try to investigate.\n> >\n> > Another suggestion: support for \\d and \\d+ commands in psql. 
With v34\n> > patchset applied, psql does not show anything IMMV-related in \\d mode.\n> >\n> > ```\n> > reshke=# \\d m1\n> > Materialized view \"public.m1\"\n> > Column | Type | Collation | Nullable | Default\n> > --------+---------+-----------+----------+---------\n> > i | integer | | |\n> > Distributed by: (i)\n> >\n> >\n> > reshke=# \\d+ m1\n> > Materialized view \"public.m1\"\n> > Column | Type | Collation | Nullable | Default | Storage |\n> > Compression | Stats target | Description\n> > --------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n> > i | integer | | | | plain |\n> > | |\n> > View definition:\n> > SELECT t1.i\n> > FROM t1;\n> > Distributed by: (i)\n> > Access method: heap\n> >\n> > ```\n> >\n> > Output should be 'Incrementally materialized view \"public.m1\"' IMO.\n>\n>\n> --\n> Yugo NAGATA <[email protected]>\n\n\nSo, I spent another 2 weeks on this patch. I have read the whole\n'Incremental View Maintenance' thread (from 2018), this thread, some\nrelated threads. Have studied some papers on this topic. I got a\nbetter understanding of the theory this work is backed up with.\nHowever, I still can add my 2c.\n\n\n== Major suggestions.\n\n1) At first glance, working with this IVM/IMMV infrastructure feels\nreally unintuitive about what servers actually do for query execution.\nI do think It will be much better for user experience to add more\nEXPLAIN about IVM work done inside IVM triggers. This way it is much\nclearer which part is working slow, so which index should be created,\netc.\n\n2) The kernel code for IVM lacks possibility to be extended for\nfurther IVM optimizations. The one example is foreign key optimization\ndescribed here[1]. I'm not saying we should implement this within this\npatchset, but we surely should pave the way for this. I don't have any\ngood suggestions for how to do this though.\n\n3) I don't really think SQL design is good. CREATE [INCREMENTAL] M.V.\nis too ad-hoc. I would prefer CREATE M.V. with (maintain_incr=true).\n(reloption name is just an example).\nThis way we can change regular M.V. to IVM and vice versa via ALTER\nM.V. SET *reloptions* - a type of syntax that is already present in\nPostgreSQL core.\n\n\n== Other thoughts\n\nIn OLAP databases (see [2]), IVM opens the door for 'view\nexploitation' feature. That is, use IVM (which is always up-to-date)\nfor query execution. But current IVM implementation is not compatible\nwith Cloudberry Append-optimized Table Access Method. The problem is\nthe 'table_tuple_fetch_row_version' call, which is used by\nivm_visible_in_prestate to check tuple visibility within a snapshot. I\nam trying to solve this somehow. My current idea is the following:\nmultiple base table modification via single statement along with tuple\ndeletion from base tables are features. We can error-out these cases\n(at M.V. creation time) all for some TAMs, and support only insert &\ntruncate. However, I don't know how to check if TAM supports\n'tuple_fetch_row_version' other than calling it and receiving\nERROR[3].\n\n== Minor nitpicks and typos.\n\nreshke=# insert into tt select * from generate_series(1, 1000090);\n^CCancel request sent\nERROR: canceling statement due to user request\nCONTEXT: SQL statement \"INSERT INTO public.mv1 (i, j) SELECT i, j\nFROM (SELECT diff.*, pg_catalog.generate_series(1,\ndiff.\"__ivm_count__\") AS __ivm_generate_series__ FROM new_delta AS\ndiff) AS v\"\nTime: 18883.883 ms (00:18.884)\n\nThis is very surprising, isn't it? 
We can set HINT here, to indicate\nwhere this query comes from.\n\n2)\ndeleted/deleted -> updated/deleted\n+ /*\n+ * XXX: When using DELETE or UPDATE, we must use exclusive lock for now\n+ * because apply_old_delta(_with_count) uses ctid to identify the tuple\n+ * to be deleted/deleted, but doesn't work in concurrent situations.\n\n3) Typo in rewrite_query_for_postupdate_state:\n/* Retore the original RTE */\n\n\n4) in apply_delta function has exactly one usage, so the 'use_count'\nparam is redundant, because we already pass the 'query' param, and\n'use_count' is calculated from the 'query'.\n\n5) in calc_delta:\n\n> ListCell *lc = list_nth_cell(query->rtable, rte_index - 1);\n> RangeTblEntry *rte = (RangeTblEntry *) lfirst(lc);\nShould we add Assert(list_lenght(lc) == 1) here? Can there be multiple\nitems in this list?\n\n6) In get_prestate_rte:\n> appendStringInfo(&str,\n> \"SELECT t.* FROM %s t\"\n> \" WHERE pg_catalog.ivm_visible_in_prestate(t.tableoid, t.ctid ,%d::pg_catalog.oid)\",\n> relname, matviewid);\n\nIdentitation issue. This will not be fixed via pg_ident run, because\nthis is str contant, so better so fix it by-hand.\n\n7) in apply_new_delta_with_count:\n\n> appendStringInfo(&querybuf,\n> \"WITH updt AS (\" /* update a tuple if this exists in the view */\n> \"UPDATE %s AS mv SET %s = mv.%s OPERATOR(pg_catalog.+) diff.%s \"\n\nSET % OPERATOR(pg_catalog.=) mv.%s ?\n\nsame for append_set_clause_for_count, append_set_clause_for_sum,\nappend_set_clause_for_minmax\n\n> /* avg = (mv.sum - t.sum)::aggtype / (mv.count - t.count) */\n> appendStringInfo(buf_old,\n> \", %s = %s OPERATOR(pg_catalog./) %s\",\n\nshould be\n/* avg OPERATOR(pg_catalog.=) (mv.sum - t.sum)::aggtype / (mv.count -\nt.count) */\nappendStringInfo(buf_old,\n\", %s = %s OPERATOR(pg_catalog./) %s\",\n\n\n\n[1] https://assets.amazon.science/a2/57/a00ebcfc446a9d0bf827bb51c15a/foreign-keys-open-the-door-for-faster-incremental-view-maintenance.pdf\n[2] https://github.com/cloudberrydb/cloudberrydb\n[3] https://github.com/cloudberrydb/cloudberrydb/blob/b9aec75154d5bbecce7ce3a33e8bb2272ff61511/src/backend/access/appendonly/appendonlyam_handler.c#L828\n\n\n\n\n\n\n--\nBest regards,\nKirill Reshke\n\n\n",
"msg_date": "Tue, 20 Aug 2024 02:14:08 +0500",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
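The review above proposes replacing the dedicated CREATE INCREMENTAL MATERIALIZED VIEW grammar with a storage-parameter style option. A rough sketch of what that could look like, assuming a hypothetical reloption named maintain_incr (the reviewer's example name, not an option PostgreSQL currently accepts):

```
-- Hypothetical reloption-style syntax from the review above; "maintain_incr"
-- is an illustrative name, not an existing PostgreSQL storage parameter.
CREATE MATERIALIZED VIEW mv_totals WITH (maintain_incr = true) AS
    SELECT customer_id, sum(amount) AS total
    FROM orders
    GROUP BY customer_id;

-- Switching maintenance mode later would then reuse the existing ALTER syntax.
ALTER MATERIALIZED VIEW mv_totals SET (maintain_incr = false);
```

The appeal of this design is that ALTER MATERIALIZED VIEW ... SET ( ... ) already exists, so toggling a view between regular and incremental maintenance would not require new grammar.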
{
"msg_contents": "On Tue, 20 Aug 2024 at 02:14, Kirill Reshke <[email protected]> wrote:\n\n> == Other thoughts\n>\n> In OLAP databases (see [2]), IVM opens the door for 'view\n> exploitation' feature. That is, use IVM (which is always up-to-date)\n> for query execution. But current IVM implementation is not compatible\n> with Cloudberry Append-optimized Table Access Method. The problem is\n> the 'table_tuple_fetch_row_version' call, which is used by\n> ivm_visible_in_prestate to check tuple visibility within a snapshot. I\n> am trying to solve this somehow. My current idea is the following:\n> multiple base table modification via single statement along with tuple\n> deletion from base tables are features. We can error-out these cases\n> (at M.V. creation time) all for some TAMs, and support only insert &\n> truncate. However, I don't know how to check if TAM supports\n> 'tuple_fetch_row_version' other than calling it and receiving\n> ERROR[3].\n>\n\nI reread this and I find this a little bit unclear. What I'm proposing\nhere is specifying the type of operations IVM supports on creation\ntime. So, one can run\n\nCREATE IVM immv1 WITH (support_deletion = true/false,\nsupport_multiple_relation_change = true/false). Then, in the query\nexecution time, we just ERROR if the query leads to deletion from IVM\nand support_deletion if false.\n\n\n-- \nBest regards,\nKirill Reshke\n\n\n",
"msg_date": "Tue, 20 Aug 2024 11:09:49 +0500",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
},
{
"msg_contents": "On Tue, 20 Aug 2024 at 02:14, Kirill Reshke <[email protected]> wrote:\n>\n>\n> == Major suggestions.\n>\n> 1) At first glance, working with this IVM/IMMV infrastructure feels\n> really unintuitive about what servers actually do for query execution.\n> I do think It will be much better for user experience to add more\n> EXPLAIN about IVM work done inside IVM triggers. This way it is much\n> clearer which part is working slow, so which index should be created,\n> etc.\n>\n> 2) The kernel code for IVM lacks possibility to be extended for\n> further IVM optimizations. The one example is foreign key optimization\n> described here[1]. I'm not saying we should implement this within this\n> patchset, but we surely should pave the way for this. I don't have any\n> good suggestions for how to do this though.\n>\n> 3) I don't really think SQL design is good. CREATE [INCREMENTAL] M.V.\n> is too ad-hoc. I would prefer CREATE M.V. with (maintain_incr=true).\n> (reloption name is just an example).\n> This way we can change regular M.V. to IVM and vice versa via ALTER\n> M.V. SET *reloptions* - a type of syntax that is already present in\n> PostgreSQL core.\n>\n\nOne little follow-up here. Why do we do prepstate visibility the way\nit is done? Can we instead export the snapshot in BEFORE trigger, save\nit somewhere and use it after?\n\n-- \nBest regards,\nKirill Reshke\n\n\n",
"msg_date": "Tue, 20 Aug 2024 12:06:24 +0500",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance, take 2"
}
] |
[
{
"msg_contents": "Hi Postgres community: I think support editing order of the fields in \ntable is a useful feature. I have known that the order of fields will \neffect the data structure of rows data, but I think we could add a extra \ninformation to identify the display order of fields but not effect the \nrows data, and the order identification is only used to display in order \nwhile execute `SELECT * FROM [table_name]` and display the table \nstructure on GUI tools like pgAdmin.\n\nNow, we must create a new view and define the order of fields if we need \nto display the fields of table in a order of our demand, it is not a \ngood way.\n\n\nMany Thanks\n\nChang Wei\n\n\n\n",
"msg_date": "Thu, 1 Jun 2023 00:31:45 +0800",
"msg_from": "=?UTF-8?B?Q2hhbmcgV2VpIOaYjOe2rQ==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Support edit order of the fields in table"
},
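For context, the workaround mentioned in the message above (defining a view that fixes the display order) looks roughly like this; the table and column names are made up for illustration:

```
-- A view presents the columns in the desired order for SELECT * and GUI
-- tools, without changing the table's physical column order.
CREATE TABLE customer (
    id      bigint PRIMARY KEY,
    created timestamptz NOT NULL DEFAULT now(),
    name    text NOT NULL
);

CREATE VIEW customer_display AS
    SELECT name, id, created
    FROM customer;
```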
{
"msg_contents": "On Thu, 2023-06-01 at 00:31 +0800, Chang Wei 昌維 wrote:\n> Hi Postgres community: I think support editing order of the fields in \n> table is a useful feature. I have known that the order of fields will \n> effect the data structure of rows data, but I think we could add a extra \n> information to identify the display order of fields but not effect the \n> rows data, and the order identification is only used to display in order \n> while execute `SELECT * FROM [table_name]` and display the table \n> structure on GUI tools like pgAdmin.\n> \n> Now, we must create a new view and define the order of fields if we need \n> to display the fields of table in a order of our demand, it is not a \n> good way.\n\nBut PostgreSQL tables are not spreadsheets. When, except in the display of\nthe result of interactive queries, would the order matter?\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 31 May 2023 21:03:33 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support edit order of the fields in table"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nWhat appears to be a pg_dump/pg_restore bug was observed with the new\r\nBEGIN ATOMIC function body syntax introduced in Postgres 14.\r\n\r\nDependencies inside a BEGIN ATOMIC function cannot be resolved\r\nif those dependencies are dumped after the function body. The repro\r\ncase is when a primary key constraint is used in a ON CONFLICT ON CONSTRAINT\r\nused within the function.\r\n\r\nWith the attached repro, pg_restore fails with\r\n\r\npg_restore: error: could not execute query: ERROR: constraint \"a_pkey\" for table \"a\" does not exist\r\nCommand was: CREATE FUNCTION public.a_f(c1_in text, c2 integer DEFAULT 60) RETURNS void\r\n\r\n\r\nI am not sure if the answer if to dump functions later on in the process.\r\n\r\nWould appreciate some feedback on this issue.\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)",
"msg_date": "Wed, 31 May 2023 21:51:25 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[BUG] pg_dump does not properly deal with BEGIN ATOMIC function"
},
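The repro attached to the report is not included here, but the object names in the error output (table "a", constraint "a_pkey", function a_f) suggest a schema along these lines. Only the function signature and object names come from the error message; the function body below is an assumption for illustration:

```
-- Sketch of the reported failure mode. Only "a", "a_pkey" and the a_f
-- signature are taken from the error output above; the rest is illustrative.
CREATE TABLE a (
    c1 text PRIMARY KEY,    -- implicitly creates constraint a_pkey
    c2 integer
);

CREATE FUNCTION a_f(c1_in text, c2 integer DEFAULT 60) RETURNS void
LANGUAGE SQL
BEGIN ATOMIC
    INSERT INTO a VALUES (c1_in, c2)
    ON CONFLICT ON CONSTRAINT a_pkey    -- direct dependency on the constraint
    DO UPDATE SET c2 = EXCLUDED.c2;
END;
```

Because the stored function body references a_pkey by name, while pg_dump normally emits functions in the pre-data section and primary key constraints in post-data, the restore ends up referencing a constraint that does not exist yet.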
{
"msg_contents": "\"Imseih (AWS), Sami\" <[email protected]> writes:\n> With the attached repro, pg_restore fails with\n> pg_restore: error: could not execute query: ERROR: constraint \"a_pkey\" for table \"a\" does not exist\n> Command was: CREATE FUNCTION public.a_f(c1_in text, c2 integer DEFAULT 60) RETURNS void\n\nHmph. The other thing worth noticing is that pg_dump prints\na warning:\n\npg_dump: warning: could not resolve dependency loop among these items:\n\nor with -v:\n\npg_dump: warning: could not resolve dependency loop among these items:\npg_dump: FUNCTION a_f (ID 218 OID 40664)\npg_dump: CONSTRAINT a_pkey (ID 4131 OID 40663)\npg_dump: POST-DATA BOUNDARY (ID 4281)\npg_dump: TABLE DATA a (ID 4278 OID 40657)\npg_dump: PRE-DATA BOUNDARY (ID 4280)\n\nSo it's lacking a rule to tell it what to do in this case, and the\ndefault is the wrong way around. I think we need to fix it in\nabout the same way as the equivalent case for matviews, which\nleads to the attached barely-tested patch.\n\nBTW, now that I see a case the default printout here seems\ncompletely ridiculous. I think we need to do\n\n pg_log_warning(\"could not resolve dependency loop among these items:\");\n for (i = 0; i < nLoop; i++)\n {\n char buf[1024];\n\n describeDumpableObject(loop[i], buf, sizeof(buf));\n- pg_log_info(\" %s\", buf);\n+ pg_log_warning(\" %s\", buf);\n }\n\nbut I didn't actually change that in the attached.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 02 Jun 2023 08:16:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_dump does not properly deal with BEGIN ATOMIC function"
},
{
"msg_contents": "On Fri, Jun 2, 2023 at 8:16 AM Tom Lane <[email protected]> wrote:\n\n> or with -v:\n>\n> pg_dump: warning: could not resolve dependency loop among these items:\n> pg_dump: FUNCTION a_f (ID 218 OID 40664)\n> pg_dump: CONSTRAINT a_pkey (ID 4131 OID 40663)\n> pg_dump: POST-DATA BOUNDARY (ID 4281)\n> pg_dump: TABLE DATA a (ID 4278 OID 40657)\n> pg_dump: PRE-DATA BOUNDARY (ID 4280)\n>\n> ...\n> BTW, now that I see a case the default printout here seems\n> completely ridiculous. I think we need to do\n>\n> pg_log_warning(\"could not resolve dependency loop among these items:\");\n> for (i = 0; i < nLoop; i++)\n> {\n> char buf[1024];\n>\n> describeDumpableObject(loop[i], buf, sizeof(buf));\n> - pg_log_info(\" %s\", buf);\n> + pg_log_warning(\" %s\", buf);\n> }\n>\n\n-1\n Not that I matter, but as a \"consumer\" the current output tells me:\n- You have a Warning...\n+ Here are the supporting details (visually, very clearly)\n\n If I comprehend the suggestion, it will label each line with a warning.\nWhich implies I have 6 Warnings.\nIt feels \"off\" to do it that way, especially since the only way we get the\nadditional details is with \"-v\"?\n\nKirk...\n\nOn Fri, Jun 2, 2023 at 8:16 AM Tom Lane <[email protected]> wrote:or with -v:\n\npg_dump: warning: could not resolve dependency loop among these items:\npg_dump: FUNCTION a_f (ID 218 OID 40664)\npg_dump: CONSTRAINT a_pkey (ID 4131 OID 40663)\npg_dump: POST-DATA BOUNDARY (ID 4281)\npg_dump: TABLE DATA a (ID 4278 OID 40657)\npg_dump: PRE-DATA BOUNDARY (ID 4280)\n...\nBTW, now that I see a case the default printout here seems\ncompletely ridiculous. I think we need to do\n\n pg_log_warning(\"could not resolve dependency loop among these items:\");\n for (i = 0; i < nLoop; i++)\n {\n char buf[1024];\n\n describeDumpableObject(loop[i], buf, sizeof(buf));\n- pg_log_info(\" %s\", buf);\n+ pg_log_warning(\" %s\", buf);\n }-1 Not that I matter, but as a \"consumer\" the current output tells me:- You have a Warning...+ Here are the supporting details (visually, very clearly) If I comprehend the suggestion, it will label each line with a warning. Which implies I have 6 Warnings.It feels \"off\" to do it that way, especially since the only way we get the additional details is with \"-v\"?Kirk...",
"msg_date": "Fri, 2 Jun 2023 11:08:08 -0400",
"msg_from": "Kirk Wolak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_dump does not properly deal with BEGIN ATOMIC function"
},
{
"msg_contents": "> So it's lacking a rule to tell it what to do in this case, and the\r\n> default is the wrong way around. I think we need to fix it in\r\n> about the same way as the equivalent case for matviews, which\r\n> leads to the attached barely-tested patch.\r\n\r\nThanks for the patch! A test on the initially reported use case\r\nand some other cases show it does the expected.\r\n\r\nSome minor comments I have:\r\n\r\n1/\r\n\r\n+ agginfo[i].aggfn.postponed_def = false;\t/* might get set during sort */\r\n\r\nThis is probably not needed as it seems that we can only\r\nget into this situation when function dependencies are tracked.\r\nThis is either the argument or results types of a function which\r\nare already handled correctly, or when the function body is examined\r\nas is the case with BEGIN ATOMIC.\r\n\r\n\r\n2/\r\n\r\nInstead of\r\n\r\n+ * section. This is sufficient to handle cases where a function depends on\r\n+ * some unique index, as can happen if it has a GROUP BY for example.\r\n+ */\r\n\r\nThe following description makes more sense. \r\n\r\n+ * section. This is sufficient to handle cases where a function depends on\r\n+ * some constraints, as can happen if a BEGIN ATOMIC function \r\n+ * references a constraint directly.\r\n\r\n\r\n3/\r\n\r\nThe docs in https://www.postgresql.org/docs/current/ddl-depend.html\r\nshould be updated. The entire section after \"For user-defined functions.... \"\r\nthere is no mention of BEGIN ATOMIC as one way that the function body\r\ncan be examined for dependencies.\r\n\r\nThis could be tracked in a separate doc update patch. What do you think?\r\n\r\n\r\n> BTW, now that I see a case the default printout here seems\r\n> completely ridiculous. I think we need to do\r\n\r\n\r\n> describeDumpableObject(loop[i], buf, sizeof(buf));\r\n> - pg_log_info(\" %s\", buf);\r\n> + pg_log_warning(\" %s\", buf);\r\n> }\r\n\r\nNot sure I like this more than what is there now.\r\n\r\nThe current indentation in \" pg_dump: \" makes it obvious \r\nthat these lines are details for the warning message. Additional\r\n\"warning\" messages will be confusing.\r\n\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n\r\n",
"msg_date": "Fri, 2 Jun 2023 21:26:50 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] pg_dump does not properly deal with BEGIN ATOMIC function"
},
{
"msg_contents": "Kirk Wolak <[email protected]> writes:\n> On Fri, Jun 2, 2023 at 8:16 AM Tom Lane <[email protected]> wrote:\n>> BTW, now that I see a case the default printout here seems\n>> completely ridiculous. I think we need to do\n>> - pg_log_info(\" %s\", buf);\n>> + pg_log_warning(\" %s\", buf);\n\n> If I comprehend the suggestion, it will label each line with a warning.\n> Which implies I have 6 Warnings.\n\nRight, I'd forgotten that pg_log_warning() will interpose \"warning:\".\nAttached are two more-carefully-thought-out suggestions. The easy\nway is to use pg_log_warning_detail(), which produces output like\n\npg_dump: warning: could not resolve dependency loop among these items:\npg_dump: detail: FUNCTION a_f (ID 216 OID 40532)\npg_dump: detail: CONSTRAINT a_pkey (ID 3466 OID 40531)\npg_dump: detail: POST-DATA BOUNDARY (ID 3612)\npg_dump: detail: TABLE DATA a (ID 3610 OID 40525)\npg_dump: detail: PRE-DATA BOUNDARY (ID 3611)\n\nAlternatively, we could assemble the details by hand, as in the\nsecond patch, producing\n\npg_dump: warning: could not resolve dependency loop among these items:\n FUNCTION a_f (ID 216 OID 40532)\n CONSTRAINT a_pkey (ID 3466 OID 40531)\n POST-DATA BOUNDARY (ID 3612)\n TABLE DATA a (ID 3610 OID 40525)\n PRE-DATA BOUNDARY (ID 3611)\n\nI'm not really sure which of these I like better. The first one\nis a much simpler code change, and there is some value in labeling\nthe output like that. The second patch's output seems less cluttered,\nbut it's committing a modularity sin by embedding formatting knowledge\nat the caller level. Thoughts?\n\nBTW, there is a similar abuse of pg_log_info just a few lines\nabove this, and probably others elsewhere. I won't bother\nwriting patches for other places till we have agreement on what\nthe output ought to look like.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 03 Jun 2023 14:28:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_dump does not properly deal with BEGIN ATOMIC function"
},
{
"msg_contents": "On Sat, Jun 3, 2023 at 2:28 PM Tom Lane <[email protected]> wrote:\n\n> Kirk Wolak <[email protected]> writes:\n> > On Fri, Jun 2, 2023 at 8:16 AM Tom Lane <[email protected]> wrote:\n> > If I comprehend the suggestion, it will label each line with a warning.\n> > Which implies I have 6 Warnings.\n>\n> Right, I'd forgotten that pg_log_warning() will interpose \"warning:\".\n> Attached are two more-carefully-thought-out suggestions. The easy\n> way is to use pg_log_warning_detail(), which produces output like\n>\n> pg_dump: warning: could not resolve dependency loop among these items:\n> pg_dump: detail: FUNCTION a_f (ID 216 OID 40532)\n> pg_dump: detail: CONSTRAINT a_pkey (ID 3466 OID 40531)\n> pg_dump: detail: POST-DATA BOUNDARY (ID 3612)\n> pg_dump: detail: TABLE DATA a (ID 3610 OID 40525)\n> pg_dump: detail: PRE-DATA BOUNDARY (ID 3611)\n>\n> Alternatively, we could assemble the details by hand, as in the\n> second patch, producing\n>\n> pg_dump: warning: could not resolve dependency loop among these items:\n> FUNCTION a_f (ID 216 OID 40532)\n> CONSTRAINT a_pkey (ID 3466 OID 40531)\n> POST-DATA BOUNDARY (ID 3612)\n> TABLE DATA a (ID 3610 OID 40525)\n> PRE-DATA BOUNDARY (ID 3611)\n>\n> I'm not really sure which of these I like better. The first one\n> is a much simpler code change, and there is some value in labeling\n> the output like that. The second patch's output seems less cluttered,\n> but it's committing a modularity sin by embedding formatting knowledge\n> at the caller level. Thoughts?\n>\n\nHonestly the double space in front of the strings with either the Original\nversion,\nor the \"detail:\" version is great.\n\nWhile I get the \"Less Cluttered\" version.. It \"detaches\" it a bit too much\nfrom the lead in, for me.\n\nKirk...\n\nOn Sat, Jun 3, 2023 at 2:28 PM Tom Lane <[email protected]> wrote:Kirk Wolak <[email protected]> writes:\n> On Fri, Jun 2, 2023 at 8:16 AM Tom Lane <[email protected]> wrote:> If I comprehend the suggestion, it will label each line with a warning.\n> Which implies I have 6 Warnings.\n\nRight, I'd forgotten that pg_log_warning() will interpose \"warning:\".\nAttached are two more-carefully-thought-out suggestions. The easy\nway is to use pg_log_warning_detail(), which produces output like\n\npg_dump: warning: could not resolve dependency loop among these items:\npg_dump: detail: FUNCTION a_f (ID 216 OID 40532)\npg_dump: detail: CONSTRAINT a_pkey (ID 3466 OID 40531)\npg_dump: detail: POST-DATA BOUNDARY (ID 3612)\npg_dump: detail: TABLE DATA a (ID 3610 OID 40525)\npg_dump: detail: PRE-DATA BOUNDARY (ID 3611)\n\nAlternatively, we could assemble the details by hand, as in the\nsecond patch, producing\n\npg_dump: warning: could not resolve dependency loop among these items:\n FUNCTION a_f (ID 216 OID 40532)\n CONSTRAINT a_pkey (ID 3466 OID 40531)\n POST-DATA BOUNDARY (ID 3612)\n TABLE DATA a (ID 3610 OID 40525)\n PRE-DATA BOUNDARY (ID 3611)\n\nI'm not really sure which of these I like better. The first one\nis a much simpler code change, and there is some value in labeling\nthe output like that. The second patch's output seems less cluttered,\nbut it's committing a modularity sin by embedding formatting knowledge\nat the caller level. Thoughts?Honestly the double space in front of the strings with either the Original version,or the \"detail:\" version is great.While I get the \"Less Cluttered\" version.. It \"detaches\" it a bit too much from the lead in, for me. Kirk...",
"msg_date": "Sat, 3 Jun 2023 23:10:19 -0400",
"msg_from": "Kirk Wolak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_dump does not properly deal with BEGIN ATOMIC function"
},
{
"msg_contents": "\"Imseih (AWS), Sami\" <[email protected]> writes:\n> Some minor comments I have:\n\n> 1/\n\n> + agginfo[i].aggfn.postponed_def = false;\t/* might get set during sort */\n\n> This is probably not needed as it seems that we can only\n> get into this situation when function dependencies are tracked.\n\nThat's just to keep from leaving an undefined field in the\nDumpableObject struct. You are very likely right that nothing\nwould examine that field today, but that seems like a poor\nexcuse for not initializing it.\n\n> 2/\n\n> The following description makes more sense. \n\n> + * section. This is sufficient to handle cases where a function depends on\n> + * some constraints, as can happen if a BEGIN ATOMIC function \n> + * references a constraint directly.\n\nI did not think this was an improvement. For one thing, I doubt\nthat BEGIN ATOMIC is essential to cause the problem. I didn't\nprove the point by making a test case, but a \"RETURN expression\"\nfunction could probably trip over it too by putting a sub-SELECT\nwith GROUP BY into the expression. Also, your wording promises\nmore about what cases we can handle than I think is justified.\n\n> 3/\n\n> The docs in https://www.postgresql.org/docs/current/ddl-depend.html\n> should be updated.\n\nRight. In general, I thought that the new-style-SQL-functions patch\nwas very slipshod about updating the docs, and omitted touching\nmany places where it would be appropriate to mention the new style,\nor even flat-out replace examples with new syntax. I did something\nabout this particular point, but perhaps someone would like to look\naround and work on that topic?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Jun 2023 13:36:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_dump does not properly deal with BEGIN ATOMIC function"
},
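A sketch of the untested case hypothesized in the message above: a new-style "RETURN expression" function whose sub-SELECT leans on a primary key via GROUP BY, so the stored body depends on the constraint. The function name and column names are illustrative; the table reuses "a" from the reported example:

```
-- Untested illustration of the hypothesized case. GROUP BY c1 lets c2 be
-- referenced ungrouped only because c1 is the primary key, so the parsed
-- body records a dependency on the PK constraint a_pkey.
CREATE FUNCTION a_grouped_count() RETURNS bigint
LANGUAGE SQL
RETURN (SELECT count(*) FROM (SELECT c1, c2 FROM a GROUP BY c1) AS s);
```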
{
"msg_contents": "Kirk Wolak <[email protected]> writes:\n> On Sat, Jun 3, 2023 at 2:28 PM Tom Lane <[email protected]> wrote:\n>> I'm not really sure which of these I like better. The first one\n>> is a much simpler code change, and there is some value in labeling\n>> the output like that. The second patch's output seems less cluttered,\n>> but it's committing a modularity sin by embedding formatting knowledge\n>> at the caller level. Thoughts?\n\n> Honestly the double space in front of the strings with either the Original\n> version,\n> or the \"detail:\" version is great.\n> While I get the \"Less Cluttered\" version.. It \"detaches\" it a bit too much\n> from the lead in, for me.\n\nDone with pg_log_warning_detail. I ended up taking out the two spaces,\nas that still felt like a modularity violation. Also, although the\nextra space looks alright in English, I'm less sure about how it'd\nlook in another language where \"warning:\" and \"detail:\" get translated\nto strings of other lengths. So the new output (before 016107478\nfixed it) is\n\npg_dump: warning: could not resolve dependency loop among these items:\npg_dump: detail: FUNCTION a_f (ID 216 OID 40532)\npg_dump: detail: CONSTRAINT a_pkey (ID 3466 OID 40531)\npg_dump: detail: POST-DATA BOUNDARY (ID 3612)\npg_dump: detail: TABLE DATA a (ID 3610 OID 40525)\npg_dump: detail: PRE-DATA BOUNDARY (ID 3611)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Jun 2023 13:41:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_dump does not properly deal with BEGIN ATOMIC function"
},
{
"msg_contents": "Thanks to Imseih and Sami at AWS for reporting this. The original case\ncomes from an upgrade I've been trying to complete for a couple of months\nnow, since RDS started supporting 15 with a 15.2 release.\n\nThe process has been slow and painful because, originally, there was a bug\non the RDS side that stopped any of the upgrade logs from appearing in RDS\nor CloudWatch. Now, minimal log errors are shown, but not much detail.\n\nI think that BEGIN ATOMIC is the sleeper feature of Postgres 14. It is\na *fantastic\n*addition to the dependency-tracking system. However, it does not seem to\nwork.\n\nI just found this thread now looking for the string warning: could not\nresolve dependency loop among these items. I got that far by setting up a\nnew database, and a simple test case that reproduces the problem. I called\nthe test database ba for Begin Atomic:\n\n------------------------------------\n-- New database\n------------------------------------\nCREATE DATABASE ba;\n\n------------------------------------\n-- Connect\n------------------------------------\n-- Connect to new database...\n\n------------------------------------\n-- Setup schemas\n------------------------------------\nCREATE SCHEMA data; -- I don't have public running, so create a schema.\n\n-- All of my UPSERT view and function plumbing is tucked away here:\nCREATE SCHEMA types_plus;\n\n------------------------------------\n-- Define table\n------------------------------------\nDROP TABLE IF EXISTS data.test_event;\n\nCREATE TABLE IF NOT EXISTS data.test_event (\n id uuid NOT NULL DEFAULT NULL PRIMARY KEY,\n\n ts_dts timestamp NOT NULL DEFAULT 'epoch',\n\n who text NOT NULL DEFAULT NULL,\n what text NOT NULL DEFAULT NULL\n);\n\n-- PK is created by default as test_event_pkey, used in ON CONFLICT later.\n\n------------------------------------\n-- Create view, get type for free\n------------------------------------\nCREATE VIEW types_plus.test_event_v1 AS\n\nSELECT\n id,\n ts_dts,\n who,\n what\n\n FROM data.test_event;\n\n-- Create a function to accept an array of rows formatted as test_event_v1\nfor UPSERT into test_event.\nDROP FUNCTION IF EXISTS types_plus.insert_test_event_v1\n(types_plus.test_event_v1[]);\n\nCREATE OR REPLACE FUNCTION types_plus.insert_test_event_v1 (data_in\ntypes_plus.test_event_v1[])\n\nRETURNS void\n\nLANGUAGE SQL\n\nBEGIN ATOMIC\n\nINSERT INTO data.test_event (\n id,\n ts_dts,\n who,\n what)\n\nSELECT\n rows_in.id,\n rows_in.ts_dts,\n rows_in.who,\n rows_in.what\n\nFROM unnest(data_in) as rows_in\n\nON CONFLICT ON CONSTRAINT test_event_pkey DO UPDATE SET\n id = EXCLUDED.id,\n ts_dts = EXCLUDED.ts_dts,\n who = EXCLUDED.who,\n what = EXCLUDED.what;\n\nEND;\n\nI've tested pg_dump with the plain, custom, directory, and tar options. All\nreport the same problem:\n\n\nNote that I'm using Postgres and pg_dump 14.8 locally, RDS is still at 14.7\nof Postgres and presumably 14.7 of pg_dump*. *\n\npg_dump: warning: could not resolve dependency loop among these items:\npg_dump: FUNCTION insert_test_event_v1 (ID 224 OID 1061258)\npg_dump: CONSTRAINT test_event_pkey (ID 3441 OID 1061253)\npg_dump: POST-DATA BOUNDARY (ID 3584)\npg_dump: TABLE DATA test_event (ID 3582 OID 1061246)\npg_dump: PRE-DATA BOUNDARY (ID 3583)\n\n\nHunting around earlier, I found a thread here from 2020 that mentioned\nthat BEGIN\nATOMIC was going to make dependency resolution tougher for pg_dump. Makes\nsense, it can become circular or ambiguous in a hurry. 
However, in my case,\nI don't see that the dependencies are any kind of crazy spaghetti. I have\nhundreds of tables with the same pattern of dependencies for UPSERT work:\n\n1. CREATE TABLE foo\n2. CREATE PK foo_pk and other constraints.\n3. CREATE VIEW foo_v1 (I could use CREATE TYPE here, for my purposes, but\nprefer CREATE VIEW.)\n4. CREATE FUNCTION insert_foo_v1 (foo_v1[])\n\n>\nThe example I listed earlier is a simplified version of this. I didn't even\ncheck that the new database works, that's not important....I am only trying\nto check out pg_dump/pg_restore.\n\nCan anyone suggest a path forward for me with the upgrade to PG 15? I'm\nwaiting on that as we need to use MERGE and I'd like other PG 15\nimprovements, like the sort optimizations. As far as I can see it, my best\nbet is to\n\n1. Delete all of my routines with BEGIN ATOMIC. That's roughly 250 routines.\n2. Upgrade.\n3. Add back in the routines in PG 15.\n\nThat likely would work for me as my dependencies are shallow and not\ncircular. They simply require a specific order. I avoid chaining views of\nviews and functions off functions as a deliberate practice in Postgres.\n\nDown the track, does my sort of dependency problem seem resolvable by\npg_dump? I've got my own build-the-system-from-scratch system that use for\nlocal testing out of the source files, and I had to resort to hinting files\nto inject some things in the correct order. So, I'm not assuming that it\n*is* possible for pg_dump to resolve all sequences. Then again, all of this\ncould go away if DDL dependency checking were deferrable. But, I'm just a\nPostgres user, not a C-coder.\n\nThanks for looking at this bug, thanks again for the AWS staff for posting\nit, and thanks for any suggestions on my day-to-day problem of upgrading.\n\nThanks to Imseih and Sami at AWS for reporting this. The original case comes from an upgrade I've been trying to complete for a couple of months now, since RDS started supporting 15 with a 15.2 release.The process has been slow and painful because, originally, there was a bug on the RDS side that stopped any of the upgrade logs from appearing in RDS or CloudWatch. Now, minimal log errors are shown, but not much detail.I think that BEGIN ATOMIC is the sleeper feature of Postgres 14. It is a fantastic addition to the dependency-tracking system. However, it does not seem to work. I just found this thread now looking for the string warning: could not resolve dependency loop among these items. I got that far by setting up a new database, and a simple test case that reproduces the problem. 
I called the test database ba for Begin Atomic:-------------------------------------- New database------------------------------------CREATE DATABASE ba;-------------------------------------- Connect-------------------------------------- Connect to new database...-------------------------------------- Setup schemas------------------------------------CREATE SCHEMA data; -- I don't have public running, so create a schema.-- All of my UPSERT view and function plumbing is tucked away here:CREATE SCHEMA types_plus;-------------------------------------- Define table------------------------------------ DROP TABLE IF EXISTS data.test_event;CREATE TABLE IF NOT EXISTS data.test_event ( id uuid NOT NULL DEFAULT NULL PRIMARY KEY, ts_dts timestamp NOT NULL DEFAULT 'epoch', who text NOT NULL DEFAULT NULL, what text NOT NULL DEFAULT NULL);-- PK is created by default as test_event_pkey, used in ON CONFLICT later.-------------------------------------- Create view, get type for free------------------------------------ CREATE VIEW types_plus.test_event_v1 ASSELECT id, ts_dts, who, what FROM data.test_event;-- Create a function to accept an array of rows formatted as test_event_v1 for UPSERT into test_event.DROP FUNCTION IF EXISTS types_plus.insert_test_event_v1 (types_plus.test_event_v1[]);CREATE OR REPLACE FUNCTION types_plus.insert_test_event_v1 (data_in types_plus.test_event_v1[])RETURNS voidLANGUAGE SQLBEGIN ATOMIC\tINSERT INTO data.test_event (\t id,\t ts_dts,\t who,\t what)\tSELECT\t rows_in.id,\t rows_in.ts_dts,\t rows_in.who,\t rows_in.what\tFROM unnest(data_in) as rows_in\tON CONFLICT ON CONSTRAINT test_event_pkey DO UPDATE SET\t id = EXCLUDED.id,\t ts_dts = EXCLUDED.ts_dts,\t who = EXCLUDED.who,\t what = EXCLUDED.what;END;I've tested pg_dump with the plain, custom, directory, and tar options. All report the same problem:Note that I'm using Postgres and pg_dump 14.8 locally, RDS is still at 14.7 of Postgres and presumably 14.7 of pg_dump. pg_dump: warning: could not resolve dependency loop among these items:pg_dump: FUNCTION insert_test_event_v1 (ID 224 OID 1061258)pg_dump: CONSTRAINT test_event_pkey (ID 3441 OID 1061253)pg_dump: POST-DATA BOUNDARY (ID 3584)pg_dump: TABLE DATA test_event (ID 3582 OID 1061246)pg_dump: PRE-DATA BOUNDARY (ID 3583)Hunting around earlier, I found a thread here from 2020 that mentioned that BEGIN ATOMIC was going to make dependency resolution tougher for pg_dump. Makes sense, it can become circular or ambiguous in a hurry. However, in my case, I don't see that the dependencies are any kind of crazy spaghetti. I have hundreds of tables with the same pattern of dependencies for UPSERT work:1. CREATE TABLE foo2. CREATE PK foo_pk and other constraints.3. CREATE VIEW foo_v1 (I could use CREATE TYPE here, for my purposes, but prefer CREATE VIEW.)4. CREATE FUNCTION insert_foo_v1 (foo_v1[])\n\n\n\nThe example I listed earlier is a simplified version of this. I didn't even check that the new database works, that's not important....I am only trying to check out pg_dump/pg_restore.Can anyone suggest a path forward for me with the upgrade to PG 15? I'm waiting on that as we need to use MERGE and I'd like other PG 15 improvements, like the sort optimizations. As far as I can see it, my best bet is to 1. Delete all of my routines with BEGIN ATOMIC. That's roughly 250 routines.2. Upgrade.3. Add back in the routines in PG 15.That likely would work for me as my dependencies are shallow and not circular. They simply require a specific order. 
I avoid chaining views of views and functions off functions as a deliberate practice in Postgres.Down the track, does my sort of dependency problem seem resolvable by pg_dump? I've got my own build-the-system-from-scratch system that use for local testing out of the source files, and I had to resort to hinting files to inject some things in the correct order. So, I'm not assuming that it is possible for pg_dump to resolve all sequences. Then again, all of this could go away if DDL dependency checking were deferrable. But, I'm just a Postgres user, not a C-coder.Thanks for looking at this bug, thanks again for the AWS staff for posting it, and thanks for any suggestions on my day-to-day problem of upgrading.",
"msg_date": "Mon, 5 Jun 2023 13:58:20 +0200",
"msg_from": "Morris de Oryx <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_dump does not properly deal with BEGIN ATOMIC function"
},
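For the drop-and-recreate route described above, the new-style SQL functions can at least be enumerated from the catalogs, since their parsed bodies are stored in pg_proc.prosqlbody (PostgreSQL 14 and later). A rough sketch, not part of the original message:

```
-- List user functions written with BEGIN ATOMIC / RETURN expression bodies,
-- i.e. the ones whose parsed body is stored in prosqlbody.
SELECT p.oid::regprocedure AS function_signature
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE p.prosqlbody IS NOT NULL
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```

Running pg_get_functiondef() on those OIDs can then capture the CREATE statements needed to put the routines back after the upgrade.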
{
"msg_contents": "Edit error above, I said that dependency tracking \"does not seem to work.\"\nNot what I mean, it works great...It just does not seem to work for me with\nany of the upgrade options.\n\n>\n\nEdit error above, I said that dependency tracking \"does not seem to work.\" Not what I mean, it works great...It just does not seem to work for me with any of the upgrade options.",
"msg_date": "Mon, 5 Jun 2023 14:13:16 +0200",
"msg_from": "Morris de Oryx <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_dump does not properly deal with BEGIN ATOMIC function"
},
{
"msg_contents": "Morris de Oryx <[email protected]> writes:\n> Can anyone suggest a path forward for me with the upgrade to PG 15?\n\nApply this patch:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ca9e79274938d8ede07d9990c2f6f5107553b524\n\nor more likely, pester RDS to do so sooner than the next quarterly\nreleases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Jun 2023 09:01:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_dump does not properly deal with BEGIN ATOMIC function"
},
{
"msg_contents": "Well *that *was quick. Thank you!\n\nImseih, Sami what are the chances of getting RDS to apply this patch?\nPostgres 15 was released nearly 8 months ago, and it would be great to get\nonto it.\n\nThanks\n\nOn Mon, Jun 5, 2023 at 3:01 PM Tom Lane <[email protected]> wrote:\n\n> Morris de Oryx <[email protected]> writes:\n> > Can anyone suggest a path forward for me with the upgrade to PG 15?\n>\n> Apply this patch:\n>\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ca9e79274938d8ede07d9990c2f6f5107553b524\n>\n> or more likely, pester RDS to do so sooner than the next quarterly\n> releases.\n>\n> regards, tom lane\n>\n\nWell that was quick. Thank you!Imseih, Sami what are the chances of getting RDS to apply this patch? Postgres 15 was released nearly 8 months ago, and it would be great to get onto it.ThanksOn Mon, Jun 5, 2023 at 3:01 PM Tom Lane <[email protected]> wrote:Morris de Oryx <[email protected]> writes:\n> Can anyone suggest a path forward for me with the upgrade to PG 15?\n\nApply this patch:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ca9e79274938d8ede07d9990c2f6f5107553b524\n\nor more likely, pester RDS to do so sooner than the next quarterly\nreleases.\n\n regards, tom lane",
"msg_date": "Mon, 5 Jun 2023 15:21:55 +0200",
"msg_from": "Morris de Oryx <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_dump does not properly deal with BEGIN ATOMIC function"
},
{
"msg_contents": "On Sun, Jun 4, 2023 at 1:41 PM Tom Lane <[email protected]> wrote:\n\n> Kirk Wolak <[email protected]> writes:\n> .. to strings of other lengths. So the new output (before 016107478\n> fixed it) is\n>\n> pg_dump: warning: could not resolve dependency loop among these items:\n> pg_dump: detail: FUNCTION a_f (ID 216 OID 40532)\n> pg_dump: detail: CONSTRAINT a_pkey (ID 3466 OID 40531)\n> pg_dump: detail: POST-DATA BOUNDARY (ID 3612)\n> pg_dump: detail: TABLE DATA a (ID 3610 OID 40525)\n> pg_dump: detail: PRE-DATA BOUNDARY (ID 3611)\n>\n> regards, tom lane\n>\n+1\n\nOn Sun, Jun 4, 2023 at 1:41 PM Tom Lane <[email protected]> wrote:Kirk Wolak <[email protected]> writes:.. to strings of other lengths. So the new output (before 016107478\nfixed it) is\n\npg_dump: warning: could not resolve dependency loop among these items:\npg_dump: detail: FUNCTION a_f (ID 216 OID 40532)\npg_dump: detail: CONSTRAINT a_pkey (ID 3466 OID 40531)\npg_dump: detail: POST-DATA BOUNDARY (ID 3612)\npg_dump: detail: TABLE DATA a (ID 3610 OID 40525)\npg_dump: detail: PRE-DATA BOUNDARY (ID 3611)\n\n regards, tom lane+1",
"msg_date": "Mon, 5 Jun 2023 11:18:54 -0400",
"msg_from": "Kirk Wolak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_dump does not properly deal with BEGIN ATOMIC function"
},
{
"msg_contents": "Another suggestion for AWS/RDS: Expose *all of the logs in the upgrade tool\nchain*. If I'd had all of the logs at the start of this, I'd have been able\nto track down the issue myself quite quickly. Setting up that simple case\ndatabase took me less than an hour today. Without the logs, it's been\nimpossible (until the RDS patch a month ago) and difficult (now) to get a\nsense of what's happening.\n\nThank you\n\nOn Mon, Jun 5, 2023 at 5:19 PM Kirk Wolak <[email protected]> wrote:\n\n> On Sun, Jun 4, 2023 at 1:41 PM Tom Lane <[email protected]> wrote:\n>\n>> Kirk Wolak <[email protected]> writes:\n>> .. to strings of other lengths. So the new output (before 016107478\n>> fixed it) is\n>>\n>> pg_dump: warning: could not resolve dependency loop among these items:\n>> pg_dump: detail: FUNCTION a_f (ID 216 OID 40532)\n>> pg_dump: detail: CONSTRAINT a_pkey (ID 3466 OID 40531)\n>> pg_dump: detail: POST-DATA BOUNDARY (ID 3612)\n>> pg_dump: detail: TABLE DATA a (ID 3610 OID 40525)\n>> pg_dump: detail: PRE-DATA BOUNDARY (ID 3611)\n>>\n>> regards, tom lane\n>>\n> +1\n>\n\nAnother suggestion for AWS/RDS: Expose all of the logs in the upgrade tool chain. If I'd had all of the logs at the start of this, I'd have been able to track down the issue myself quite quickly. Setting up that simple case database took me less than an hour today. Without the logs, it's been impossible (until the RDS patch a month ago) and difficult (now) to get a sense of what's happening.Thank youOn Mon, Jun 5, 2023 at 5:19 PM Kirk Wolak <[email protected]> wrote:On Sun, Jun 4, 2023 at 1:41 PM Tom Lane <[email protected]> wrote:Kirk Wolak <[email protected]> writes:.. to strings of other lengths. So the new output (before 016107478\nfixed it) is\n\npg_dump: warning: could not resolve dependency loop among these items:\npg_dump: detail: FUNCTION a_f (ID 216 OID 40532)\npg_dump: detail: CONSTRAINT a_pkey (ID 3466 OID 40531)\npg_dump: detail: POST-DATA BOUNDARY (ID 3612)\npg_dump: detail: TABLE DATA a (ID 3610 OID 40525)\npg_dump: detail: PRE-DATA BOUNDARY (ID 3611)\n\n regards, tom lane+1",
"msg_date": "Mon, 5 Jun 2023 18:03:46 +0200",
"msg_from": "Morris de Oryx <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_dump does not properly deal with BEGIN ATOMIC function"
},
{
"msg_contents": "Reminds me to say a *big* *thank you* to everyone involved in and\ncontributing to Postgres development for making error messages which are so\ngood. For a programmer, error text is a primary UI. Most Postgres errors\nand log messages are clear and sufficient. Even when they're a bit obscure,\nthey alway seem to be *on topic*, and enough to get you on the right\ntrack.I assume that we've all used programs and operating systems that emit\nmore....runic...errors.\n\nOn Mon, Jun 5, 2023 at 6:03 PM Morris de Oryx <[email protected]>\nwrote:\n\n> Another suggestion for AWS/RDS: Expose *all of the logs in the upgrade\n> tool chain*. If I'd had all of the logs at the start of this, I'd have\n> been able to track down the issue myself quite quickly. Setting up that\n> simple case database took me less than an hour today. Without the logs,\n> it's been impossible (until the RDS patch a month ago) and difficult (now)\n> to get a sense of what's happening.\n>\n> Thank you\n>\n> On Mon, Jun 5, 2023 at 5:19 PM Kirk Wolak <[email protected]> wrote:\n>\n>> On Sun, Jun 4, 2023 at 1:41 PM Tom Lane <[email protected]> wrote:\n>>\n>>> Kirk Wolak <[email protected]> writes:\n>>> .. to strings of other lengths. So the new output (before 016107478\n>>> fixed it) is\n>>>\n>>> pg_dump: warning: could not resolve dependency loop among these items:\n>>> pg_dump: detail: FUNCTION a_f (ID 216 OID 40532)\n>>> pg_dump: detail: CONSTRAINT a_pkey (ID 3466 OID 40531)\n>>> pg_dump: detail: POST-DATA BOUNDARY (ID 3612)\n>>> pg_dump: detail: TABLE DATA a (ID 3610 OID 40525)\n>>> pg_dump: detail: PRE-DATA BOUNDARY (ID 3611)\n>>>\n>>> regards, tom lane\n>>>\n>> +1\n>>\n>\n\nReminds me to say a big thank you to everyone involved in and contributing to Postgres development for making error messages which are so good. For a programmer, error text is a primary UI. Most Postgres errors and log messages are clear and sufficient. Even when they're a bit obscure, they alway seem to be on topic, and enough to get you on the right track.I assume that we've all used programs and operating systems that emit more....runic...errors.On Mon, Jun 5, 2023 at 6:03 PM Morris de Oryx <[email protected]> wrote:Another suggestion for AWS/RDS: Expose all of the logs in the upgrade tool chain. If I'd had all of the logs at the start of this, I'd have been able to track down the issue myself quite quickly. Setting up that simple case database took me less than an hour today. Without the logs, it's been impossible (until the RDS patch a month ago) and difficult (now) to get a sense of what's happening.Thank youOn Mon, Jun 5, 2023 at 5:19 PM Kirk Wolak <[email protected]> wrote:On Sun, Jun 4, 2023 at 1:41 PM Tom Lane <[email protected]> wrote:Kirk Wolak <[email protected]> writes:.. to strings of other lengths. So the new output (before 016107478\nfixed it) is\n\npg_dump: warning: could not resolve dependency loop among these items:\npg_dump: detail: FUNCTION a_f (ID 216 OID 40532)\npg_dump: detail: CONSTRAINT a_pkey (ID 3466 OID 40531)\npg_dump: detail: POST-DATA BOUNDARY (ID 3612)\npg_dump: detail: TABLE DATA a (ID 3610 OID 40525)\npg_dump: detail: PRE-DATA BOUNDARY (ID 3611)\n\n regards, tom lane+1",
"msg_date": "Mon, 5 Jun 2023 18:20:34 +0200",
"msg_from": "Morris de Oryx <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_dump does not properly deal with BEGIN ATOMIC function"
},
{
"msg_contents": "Quick follow-up: I've heard back from AWS regarding applying Tom Lane's\npatch. Nope. RDS releases numbered versions, nothing else. As Postgres is\nnow at 15.8/15.3 in the wild and on 15.7/15.3 on RDS, I'm guessing that the\npatch won't be available until 14.9/15.4.\n\nAm I right in thinking that this patch will be integrated into 14.9/15.4,\nif they are released?\n\nThank you\n\nOn Mon, Jun 5, 2023 at 6:20 PM Morris de Oryx <[email protected]>\nwrote:\n\n> Reminds me to say a *big* *thank you* to everyone involved in and\n> contributing to Postgres development for making error messages which are so\n> good. For a programmer, error text is a primary UI. Most Postgres errors\n> and log messages are clear and sufficient. Even when they're a bit obscure,\n> they alway seem to be *on topic*, and enough to get you on the right\n> track.I assume that we've all used programs and operating systems that emit\n> more....runic...errors.\n>\n> On Mon, Jun 5, 2023 at 6:03 PM Morris de Oryx <[email protected]>\n> wrote:\n>\n>> Another suggestion for AWS/RDS: Expose *all of the logs in the upgrade\n>> tool chain*. If I'd had all of the logs at the start of this, I'd have\n>> been able to track down the issue myself quite quickly. Setting up that\n>> simple case database took me less than an hour today. Without the logs,\n>> it's been impossible (until the RDS patch a month ago) and difficult (now)\n>> to get a sense of what's happening.\n>>\n>> Thank you\n>>\n>> On Mon, Jun 5, 2023 at 5:19 PM Kirk Wolak <[email protected]> wrote:\n>>\n>>> On Sun, Jun 4, 2023 at 1:41 PM Tom Lane <[email protected]> wrote:\n>>>\n>>>> Kirk Wolak <[email protected]> writes:\n>>>> .. to strings of other lengths. So the new output (before 016107478\n>>>> fixed it) is\n>>>>\n>>>> pg_dump: warning: could not resolve dependency loop among these items:\n>>>> pg_dump: detail: FUNCTION a_f (ID 216 OID 40532)\n>>>> pg_dump: detail: CONSTRAINT a_pkey (ID 3466 OID 40531)\n>>>> pg_dump: detail: POST-DATA BOUNDARY (ID 3612)\n>>>> pg_dump: detail: TABLE DATA a (ID 3610 OID 40525)\n>>>> pg_dump: detail: PRE-DATA BOUNDARY (ID 3611)\n>>>>\n>>>> regards, tom lane\n>>>>\n>>> +1\n>>>\n>>\n\nQuick follow-up: I've heard back from AWS regarding applying Tom Lane's patch. Nope. RDS releases numbered versions, nothing else. As Postgres is now at 15.8/15.3 in the wild and on 15.7/15.3 on RDS, I'm guessing that the patch won't be available until 14.9/15.4.Am I right in thinking that this patch will be integrated into 14.9/15.4, if they are released?Thank youOn Mon, Jun 5, 2023 at 6:20 PM Morris de Oryx <[email protected]> wrote:Reminds me to say a big thank you to everyone involved in and contributing to Postgres development for making error messages which are so good. For a programmer, error text is a primary UI. Most Postgres errors and log messages are clear and sufficient. Even when they're a bit obscure, they alway seem to be on topic, and enough to get you on the right track.I assume that we've all used programs and operating systems that emit more....runic...errors.On Mon, Jun 5, 2023 at 6:03 PM Morris de Oryx <[email protected]> wrote:Another suggestion for AWS/RDS: Expose all of the logs in the upgrade tool chain. If I'd had all of the logs at the start of this, I'd have been able to track down the issue myself quite quickly. Setting up that simple case database took me less than an hour today. 
Without the logs, it's been impossible (until the RDS patch a month ago) and difficult (now) to get a sense of what's happening.Thank youOn Mon, Jun 5, 2023 at 5:19 PM Kirk Wolak <[email protected]> wrote:On Sun, Jun 4, 2023 at 1:41 PM Tom Lane <[email protected]> wrote:Kirk Wolak <[email protected]> writes:.. to strings of other lengths. So the new output (before 016107478\nfixed it) is\n\npg_dump: warning: could not resolve dependency loop among these items:\npg_dump: detail: FUNCTION a_f (ID 216 OID 40532)\npg_dump: detail: CONSTRAINT a_pkey (ID 3466 OID 40531)\npg_dump: detail: POST-DATA BOUNDARY (ID 3612)\npg_dump: detail: TABLE DATA a (ID 3610 OID 40525)\npg_dump: detail: PRE-DATA BOUNDARY (ID 3611)\n\n regards, tom lane+1",
"msg_date": "Tue, 13 Jun 2023 20:09:49 +0200",
"msg_from": "Morris de Oryx <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_dump does not properly deal with BEGIN ATOMIC function"
},
{
"msg_contents": "On 2023-Jun-13, Morris de Oryx wrote:\n\n> Quick follow-up: I've heard back from AWS regarding applying Tom Lane's\n> patch. Nope. RDS releases numbered versions, nothing else.\n\nSounds like a reasonable policy to me.\n\n> As Postgres is now at 15.8/15.3 in the wild and on 15.7/15.3 on RDS,\n> I'm guessing that the patch won't be available until 14.9/15.4.\n> \n> Am I right in thinking that this patch will be integrated into 14.9/15.4,\n\nYes. The commits got into Postgres on June 4th, and 14.8 and 15.3 where\nstamped on May 8th. So the fixes will be in 14.9 and 15.4 in August,\nper https://www.postgresql.org/developer/roadmap/\n\n> if they are released?\n\nNo \"if\" about this, unless everybody here is hit by ICBMs.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 13 Jun 2023 12:27:56 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_dump does not properly deal with BEGIN ATOMIC function"
},
{
"msg_contents": "Thanks or the confirmation, and here's hoping no ICBMs!\n\nThanks or the confirmation, and here's hoping no ICBMs!",
"msg_date": "Tue, 13 Jun 2023 21:15:44 +0200",
"msg_from": "Morris de Oryx <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_dump does not properly deal with BEGIN ATOMIC function"
}
] |
[
{
"msg_contents": "LoongArch is a new architecture that is already supported by linux-6.1,\ngcc-12, and I want to add LoongArch spinlock support in s_lock.h.",
"msg_date": "Thu, 1 Jun 2023 15:53:45 +0800",
"msg_from": "zang ruochen <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add LoongArch spinlock support in s_lock.h."
},
{
"msg_contents": "> On 1 Jun 2023, at 09:53, zang ruochen <[email protected]> wrote:\n\n> LoongArch is a new architecture that is already supported by linux-6.1, gcc-12, and I want to add LoongArch spinlock support in s_lock.h.\n\nThis has been discussed a number of times, see the thread linked below for a\ngood starting point.\n\nhttps://postgr.es/m/[email protected]\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 1 Jun 2023 09:57:56 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add LoongArch spinlock support in s_lock.h."
},
{
"msg_contents": "> On 1 Jun 2023, at 10:05, zang ruochen <[email protected]> wrote:\n> \n> Thanks, due to my mistake, I didn't notice that someone had already submitted a patch, so please ignore my patch submission this time.\n\nNo worries, thanks for the submission and support of postgres!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 1 Jun 2023 10:06:33 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add LoongArch spinlock support in s_lock.h."
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nPeter's patch set for autogenerating syscache info\n(https://postgr.es/m/75ae5875-3abc-dafc-8aec-73247ed41cde%40eisentraut.org)\ntouched on one of my least favourite parts of Catalog.pm: the\nparenthesis-counting nightmare that is the parsing of catalog header\ndirectives.\n\nHowever, now that we require Perl 5.14, we can use the named capture\nfeature (introduced in Perl 5.10) to make that a lot clearer, as in the\nattached patch.\n\nWhile I was rewriting the regexes I noticed that they were inconsistent\nabout whether they accepted whitespace in the parameter lists, so I took\nthe liberty to make them consistently allow whitespace after the opening\nparen and the commas, which is what most of them already did.\n\nI've verified that the generated postgres.bki is identical to before,\nand all tests pass.\n\n- ilmari",
"msg_date": "Thu, 01 Jun 2023 13:12:22 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] Using named captures in Catalog::ParseHeader()"
},
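For readers unfamiliar with the feature discussed above, the following is a minimal, self-contained Perl sketch of the difference between counting positional captures and using named captures (available since Perl 5.10). The DECLARE_INDEX directive and its argument layout here are simplified for illustration and are not the actual Catalog.pm parsing code.

    #!/usr/bin/perl
    # Illustrative sketch only -- not the actual Catalog.pm code.
    use strict;
    use warnings;

    my $line =
      'DECLARE_INDEX(pg_class_oid_index, 2662, on pg_class using btree(oid oid_ops))';

    # Positional captures: what $1, $2 and $3 mean depends on counting parentheses.
    if ($line =~ /^DECLARE_INDEX\(\s*(\w+)\s*,\s*(\d+)\s*,\s*(.+)\)$/)
    {
        print "positional: name=$1 oid=$2 decl=$3\n";
    }

    # Named captures: each group is labelled where it is written and the results
    # land in %+, so adding or reordering groups cannot silently shift the
    # numbered variables.  The /x modifier lets the regex be laid out readably,
    # and the \s* after the opening paren and the commas mirrors the whitespace
    # consistency described in the patch.
    if ($line =~ m/^DECLARE_INDEX \( \s* (?<index_name>\w+) \s* , \s*
                                         (?<index_oid>\d+)  \s* , \s*
                                         (?<index_decl>.+) \) $/x)
    {
        print "named: name=$+{index_name} oid=$+{index_oid} decl=$+{index_decl}\n";
    }

Run against a sample line, both branches print the same fields; the named version just makes it obvious which capture is which when a directive grows a new argument.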
{
"msg_contents": "On Thu, Jun 1, 2023 at 7:12 PM Dagfinn Ilmari Mannsåker <[email protected]>\nwrote:\n>\n> Hi Hackers,\n>\n> Peter's patch set for autogenerating syscache info\n> (https://postgr.es/m/75ae5875-3abc-dafc-8aec-73247ed41cde%40eisentraut.org\n)\n> touched on one of my least favourite parts of Catalog.pm: the\n> parenthesis-counting nightmare that is the parsing of catalog header\n> directives.\n>\n> However, now that we require Perl 5.14, we can use the named capture\n> feature (introduced in Perl 5.10) to make that a lot clearer, as in the\n> attached patch.\n>\n> While I was rewriting the regexes I noticed that they were inconsistent\n> about whether they accepted whitespace in the parameter lists, so I took\n> the liberty to make them consistently allow whitespace after the opening\n> paren and the commas, which is what most of them already did.\n\nLGTM\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Jun 1, 2023 at 7:12 PM Dagfinn Ilmari Mannsåker <[email protected]> wrote:>> Hi Hackers,>> Peter's patch set for autogenerating syscache info> (https://postgr.es/m/75ae5875-3abc-dafc-8aec-73247ed41cde%40eisentraut.org)> touched on one of my least favourite parts of Catalog.pm: the> parenthesis-counting nightmare that is the parsing of catalog header> directives.>> However, now that we require Perl 5.14, we can use the named capture> feature (introduced in Perl 5.10) to make that a lot clearer, as in the> attached patch.>> While I was rewriting the regexes I noticed that they were inconsistent> about whether they accepted whitespace in the parameter lists, so I took> the liberty to make them consistently allow whitespace after the opening> paren and the commas, which is what most of them already did.LGTM--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 13 Jun 2023 17:50:10 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Using named captures in Catalog::ParseHeader()"
},
{
"msg_contents": "Dagfinn Ilmari Mannsåker <[email protected]> writes:\n\n> However, now that we require Perl 5.14, we can use the named capture\n> feature (introduced in Perl 5.10) to make that a lot clearer, as in the\n> attached patch.\n\nAdded to the open commitfest: https://commitfest.postgresql.org/43/4361/\n\n- ilmari\n\n\n",
"msg_date": "Tue, 13 Jun 2023 16:14:27 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Using named captures in Catalog::ParseHeader()"
},
{
"msg_contents": "On Thu, Jun 01, 2023 at 01:12:22PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> While I was rewriting the regexes I noticed that they were inconsistent\n> about whether they accepted whitespace in the parameter lists, so I took\n> the liberty to make them consistently allow whitespace after the opening\n> paren and the commas, which is what most of them already did.\n\nThat's the business with \\s* in CATALOG. Is that right? Indeed,\nthat's more consistent.\n\n> I've verified that the generated postgres.bki is identical to before,\n> and all tests pass.\n\nI find that pretty cool. Nice. Patch looks OK from here.\n--\nMichael",
"msg_date": "Wed, 14 Jun 2023 10:03:24 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Using named captures in Catalog::ParseHeader()"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n\n> On Thu, Jun 01, 2023 at 01:12:22PM +0100, Dagfinn Ilmari Mannsåker wrote:\n>> While I was rewriting the regexes I noticed that they were inconsistent\n>> about whether they accepted whitespace in the parameter lists, so I took\n>> the liberty to make them consistently allow whitespace after the opening\n>> paren and the commas, which is what most of them already did.\n>\n> That's the business with \\s* in CATALOG. Is that right? Indeed,\n> that's more consistent.\n\nYes, \\s* means \"zero or more whitespace characters\".\n\n>> I've verified that the generated postgres.bki is identical to before,\n>> and all tests pass.\n>\n> I find that pretty cool. Nice. Patch looks OK from here.\n\nThanks for the review!\n\n- ilmari\n\n\n",
"msg_date": "Wed, 14 Jun 2023 10:30:23 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Using named captures in Catalog::ParseHeader()"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 10:30:23AM +0100, Dagfinn Ilmari Mannsåker wrote:\n> Thanks for the review!\n\nv17 is now open, so applied this one.\n--\nMichael",
"msg_date": "Fri, 30 Jun 2023 09:26:51 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Using named captures in Catalog::ParseHeader()"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n\n> On Wed, Jun 14, 2023 at 10:30:23AM +0100, Dagfinn Ilmari Mannsåker wrote:\n>> Thanks for the review!\n>\n> v17 is now open, so applied this one.\n\nThanks for committing!\n\n- ilmari\n\n\n",
"msg_date": "Fri, 30 Jun 2023 10:34:08 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Using named captures in Catalog::ParseHeader()"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nWhile hacking on Catalog.pm (over in\nhttps://postgr.es/m/87y1l3s7o9.fsf%40wibble.ilmari.org) I noticed that\nninja wouldn't rebuild postgres.bki on changes to the module. Here's a\npatch that adds it to depend_files for the targets I culd find that\ninvoke scripts that use it.\n\n- ilmari",
"msg_date": "Thu, 01 Jun 2023 13:41:40 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] Missing dep on Catalog.pm in meson rules"
},
{
"msg_contents": "On Thu, Jun 01, 2023 at 01:41:40PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> While hacking on Catalog.pm (over in\n> https://postgr.es/m/87y1l3s7o9.fsf%40wibble.ilmari.org) I noticed that\n> ninja wouldn't rebuild postgres.bki on changes to the module. Here's a\n> patch that adds it to depend_files for the targets I culd find that\n> invoke scripts that use it.\n\nNice catch! Indeed, we would need to track the dependency in the\nthree areas that use this module.\n--\nMichael",
"msg_date": "Thu, 1 Jun 2023 09:11:49 -0400",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Missing dep on Catalog.pm in meson rules"
},
{
"msg_contents": "Could you create a variable for the file instead of calling files() 3\ntimes?\n\n> catalog_pm = files('path/to/Catalog.pm')\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 01 Jun 2023 16:16:27 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Missing dep on Catalog.pm in meson rules"
},
{
"msg_contents": "On Thu, 1 Jun 2023, at 22:16, Tristan Partin wrote:\n> Could you create a variable for the file instead of calling files() 3\n> times?\n>\n>> catalog_pm = files('path/to/Catalog.pm')\n\nSure, but which meson.build file should it go in? I know nothing about meson variable scoping.\n\n> -- \n> Tristan Partin\n> Neon (https://neon.tech)\n\n-- \n- ilmari\n\n\n",
"msg_date": "Thu, 01 Jun 2023 22:22:12 +0100",
"msg_from": "=?UTF-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Missing dep on Catalog.pm in meson rules"
},
{
"msg_contents": "On Thu Jun 1, 2023 at 4:22 PM CDT, Dagfinn Ilmari Mannsåker wrote:\n> On Thu, 1 Jun 2023, at 22:16, Tristan Partin wrote:\n> > Could you create a variable for the file instead of calling files() 3\n> > times?\n> >\n> >> catalog_pm = files('path/to/Catalog.pm')\n>\n> Sure, but which meson.build file should it go in? I know nothing about meson variable scoping.\n\nNot a problem. In Meson, variables are globally-scoped. You can use\nunset_variable() however to unset it.\n\nIn our case, we should add the ^line to src/backend/catalog/meson.build.\nI would say just throw the line after the copyright comment. Hopefully\nthere isn't a problem with the ordering of the Meson file tree traversal\n(ie the targets you are changing are configured after we get through\nsrc/backend/catalog/meson.build).\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 01 Jun 2023 23:06:04 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Missing dep on Catalog.pm in meson rules"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-01 23:06:04 -0500, Tristan Partin wrote:\n> In our case, we should add the ^line to src/backend/catalog/meson.build.\n\nsrc/backend is only reached well after src/include, due to needing\ndependencies on files generated in src/include.\n\nI wonder if we instead could just make perl output the files it loads and\nhandle dependencies automatically that way? But that's more work, so it's\nprobably the right thing to go for the manual path for now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Jun 2023 06:00:28 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Missing dep on Catalog.pm in meson rules"
},
{
"msg_contents": "On Fri Jun 2, 2023 at 8:00 AM CDT, Andres Freund wrote:\n> Hi,\n>\n> On 2023-06-01 23:06:04 -0500, Tristan Partin wrote:\n> > In our case, we should add the ^line to src/backend/catalog/meson.build.\n>\n> src/backend is only reached well after src/include, due to needing\n> dependencies on files generated in src/include.\n\nI was worried about this :(.\n\n> I wonder if we instead could just make perl output the files it loads and\n> handle dependencies automatically that way? But that's more work, so it's\n> probably the right thing to go for the manual path for now.\n\nI am not familar with Perl enough (at all haha) to know if that is\npossible. I don't know exactly what these Perl files do, but perhaps it\nmight make sense to have some global lookup table that is setup near the\nbeginning of the script.\n\nperl_files = {\n 'Catalog.pm': files('path/to/Catalog.pm'),\n ...\n}\n\nOtherwise, manual as it is in the original patch seems like an alright\ncompromise for now.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 02 Jun 2023 08:10:43 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Missing dep on Catalog.pm in meson rules"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-02 08:10:43 -0500, Tristan Partin wrote:\n> > I wonder if we instead could just make perl output the files it loads and\n> > handle dependencies automatically that way? But that's more work, so it's\n> > probably the right thing to go for the manual path for now.\n> \n> I am not familar with Perl enough (at all haha) to know if that is\n> possible. I don't know exactly what these Perl files do, but perhaps it\n> might make sense to have some global lookup table that is setup near the\n> beginning of the script.\n\nIt'd be nice to have something more general - there are other perl modules we\nload, e.g.\n./src/backend/catalog/Catalog.pm\n./src/backend/utils/mb/Unicode/convutils.pm\n./src/tools/PerfectHash.pm\n\n\n> perl_files = {\n> 'Catalog.pm': files('path/to/Catalog.pm'),\n> ...\n> }\n\nI think you got it, but just to make sure: I was thinking of generating a\ndepfile from within perl. Something like what you propose doesn't quite seems\nlike a sufficient improvement.\n\n\n> Otherwise, manual as it is in the original patch seems like an alright\n> compromise for now.\n\nYea. I'm working on a more complete version, also dealing with dependencies on\nPerfectHash.pm.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Jun 2023 06:47:04 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Missing dep on Catalog.pm in meson rules"
},
{
"msg_contents": "On Fri Jun 2, 2023 at 8:47 AM CDT, Andres Freund wrote:\n> Hi,\n>\n> On 2023-06-02 08:10:43 -0500, Tristan Partin wrote:\n> > > I wonder if we instead could just make perl output the files it loads and\n> > > handle dependencies automatically that way? But that's more work, so it's\n> > > probably the right thing to go for the manual path for now.\n> > \n> > I am not familar with Perl enough (at all haha) to know if that is\n> > possible. I don't know exactly what these Perl files do, but perhaps it\n> > might make sense to have some global lookup table that is setup near the\n> > beginning of the script.\n>\n> It'd be nice to have something more general - there are other perl modules we\n> load, e.g.\n> ./src/backend/catalog/Catalog.pm\n> ./src/backend/utils/mb/Unicode/convutils.pm\n> ./src/tools/PerfectHash.pm\n>\n>\n> > perl_files = {\n> > 'Catalog.pm': files('path/to/Catalog.pm'),\n> > ...\n> > }\n>\n> I think you got it, but just to make sure: I was thinking of generating a\n> depfile from within perl. Something like what you propose doesn't quite seems\n> like a sufficient improvement.\n\nWhatever I am proposing is definitely subpar to generating a depfile. So\nif that can be done, that is the best option!\n\n> > Otherwise, manual as it is in the original patch seems like an alright\n> > compromise for now.\n>\n> Yea. I'm working on a more complete version, also dealing with dependencies on\n> PerfectHash.pm.\n\nGood to hear. Happy to review any patches :).\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 02 Jun 2023 10:13:44 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Missing dep on Catalog.pm in meson rules"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-02 10:13:44 -0500, Tristan Partin wrote:\n> On Fri Jun 2, 2023 at 8:47 AM CDT, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2023-06-02 08:10:43 -0500, Tristan Partin wrote:\n> > > > I wonder if we instead could just make perl output the files it loads and\n> > > > handle dependencies automatically that way? But that's more work, so it's\n> > > > probably the right thing to go for the manual path for now.\n> > > \n> > > I am not familar with Perl enough (at all haha) to know if that is\n> > > possible. I don't know exactly what these Perl files do, but perhaps it\n> > > might make sense to have some global lookup table that is setup near the\n> > > beginning of the script.\n> >\n> > It'd be nice to have something more general - there are other perl modules we\n> > load, e.g.\n> > ./src/backend/catalog/Catalog.pm\n> > ./src/backend/utils/mb/Unicode/convutils.pm\n> > ./src/tools/PerfectHash.pm\n> >\n> >\n> > > perl_files = {\n> > > 'Catalog.pm': files('path/to/Catalog.pm'),\n> > > ...\n> > > }\n> >\n> > I think you got it, but just to make sure: I was thinking of generating a\n> > depfile from within perl. Something like what you propose doesn't quite seems\n> > like a sufficient improvement.\n> \n> Whatever I am proposing is definitely subpar to generating a depfile. So\n> if that can be done, that is the best option!\n\nI looked for a bit, but couldn't find an easy way to do so. I would still like\nto pursue going towards dep files for the perl scripts, even if that requires\nexplicit support in the perl scripts, but that's a change for later.\n\n\n> > > Otherwise, manual as it is in the original patch seems like an alright\n> > > compromise for now.\n> >\n> > Yea. I'm working on a more complete version, also dealing with dependencies on\n> > PerfectHash.pm.\n> \n> Good to hear. Happy to review any patches :).\n\nAttached.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 9 Jun 2023 11:43:54 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Missing dep on Catalog.pm in meson rules"
},
{
"msg_contents": "Patch looks good to me!\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 09 Jun 2023 13:58:46 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Missing dep on Catalog.pm in meson rules"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-09 13:58:46 -0500, Tristan Partin wrote:\n> Patch looks good to me!\n\nThanks for the report Ilmari and the review Tristan! Pushed the fix.\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Fri, 9 Jun 2023 20:16:33 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Missing dep on Catalog.pm in meson rules"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-09 11:43:54 -0700, Andres Freund wrote:\n> On 2023-06-02 10:13:44 -0500, Tristan Partin wrote:\n> > On Fri Jun 2, 2023 at 8:47 AM CDT, Andres Freund wrote:\n> > > Hi,\n> > >\n> > > On 2023-06-02 08:10:43 -0500, Tristan Partin wrote:\n> > > > > I wonder if we instead could just make perl output the files it loads and\n> > > > > handle dependencies automatically that way? But that's more work, so it's\n> > > > > probably the right thing to go for the manual path for now.\n> > > > \n> > > > I am not familar with Perl enough (at all haha) to know if that is\n> > > > possible. I don't know exactly what these Perl files do, but perhaps it\n> > > > might make sense to have some global lookup table that is setup near the\n> > > > beginning of the script.\n> > >\n> > > It'd be nice to have something more general - there are other perl modules we\n> > > load, e.g.\n> > > ./src/backend/catalog/Catalog.pm\n> > > ./src/backend/utils/mb/Unicode/convutils.pm\n> > > ./src/tools/PerfectHash.pm\n> > >\n> > >\n> > > > perl_files = {\n> > > > 'Catalog.pm': files('path/to/Catalog.pm'),\n> > > > ...\n> > > > }\n> > >\n> > > I think you got it, but just to make sure: I was thinking of generating a\n> > > depfile from within perl. Something like what you propose doesn't quite seems\n> > > like a sufficient improvement.\n> > \n> > Whatever I am proposing is definitely subpar to generating a depfile. So\n> > if that can be done, that is the best option!\n> \n> I looked for a bit, but couldn't find an easy way to do so. I would still like\n> to pursue going towards dep files for the perl scripts, even if that requires\n> explicit support in the perl scripts, but that's a change for later.\n\nTook a second look - sure looks like just using values %INC should suffice?\n\nIlmari, you're the perl expert, is there an issue with that?\n\nTristan, any chance you're interested hacking that up for a bunch of the\nscripts? Might be worth adding a common helper for, I guess?\n\nSomething like\n\nfor (values %INC)\n{\n\tprint STDERR \"$kw_def_file: $_\\n\";\n}\n\nseems to roughly do the right thing for gen_keywordlist.pl. Of course for\nsomething real it'd need an option where to put that data, instead of printing\nto stderr.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 14 Jun 2023 12:32:13 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Missing dep on Catalog.pm in meson rules"
},
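To make the sketch above a little more concrete, here is a minimal, hypothetical example of what writing a make/ninja-style depfile from %INC could look like. The --depfile option, the write_depfile helper and the kwlist_d.h target name are invented for illustration; they are not an existing interface of any PostgreSQL script.

    #!/usr/bin/perl
    # Hypothetical sketch: list every Perl module this script has loaded
    # (values %INC) as a dependency of its output, in a file the build
    # system can consume.  The --depfile option and write_depfile() helper
    # are invented for illustration, not part of any existing script.
    use strict;
    use warnings;
    use Getopt::Long;

    my $output = 'kwlist_d.h';    # placeholder for the file the script generates
    my $depfile;
    GetOptions('output=s' => \$output, 'depfile=s' => \$depfile)
      or die "invalid options\n";

    # ... the script's real work, writing $output, would happen here ...

    write_depfile($depfile, $output, sort values %INC) if defined $depfile;

    sub write_depfile
    {
        my ($depfile, $target, @deps) = @_;
        open my $fh, '>', $depfile or die "could not open $depfile: $!";
        # A single "target: dep dep ..." rule: the same information as the
        # stderr loop above, but written where the build tool can find it.
        print $fh "$target: ", join(' ', @deps), "\n";
        close $fh or die "could not write $depfile: $!";
    }

The exact rule layout is a detail of what the consuming build tool accepts; the point is only that %INC already carries the needed information once the use/require block has run.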
{
"msg_contents": "On Wed Jun 14, 2023 at 2:32 PM CDT, Andres Freund wrote:\n> Hi,\n>\n> On 2023-06-09 11:43:54 -0700, Andres Freund wrote:\n> > On 2023-06-02 10:13:44 -0500, Tristan Partin wrote:\n> > > On Fri Jun 2, 2023 at 8:47 AM CDT, Andres Freund wrote:\n> > > > Hi,\n> > > >\n> > > > On 2023-06-02 08:10:43 -0500, Tristan Partin wrote:\n> > > > > > I wonder if we instead could just make perl output the files it loads and\n> > > > > > handle dependencies automatically that way? But that's more work, so it's\n> > > > > > probably the right thing to go for the manual path for now.\n> > > > > \n> > > > > I am not familar with Perl enough (at all haha) to know if that is\n> > > > > possible. I don't know exactly what these Perl files do, but perhaps it\n> > > > > might make sense to have some global lookup table that is setup near the\n> > > > > beginning of the script.\n> > > >\n> > > > It'd be nice to have something more general - there are other perl modules we\n> > > > load, e.g.\n> > > > ./src/backend/catalog/Catalog.pm\n> > > > ./src/backend/utils/mb/Unicode/convutils.pm\n> > > > ./src/tools/PerfectHash.pm\n> > > >\n> > > >\n> > > > > perl_files = {\n> > > > > 'Catalog.pm': files('path/to/Catalog.pm'),\n> > > > > ...\n> > > > > }\n> > > >\n> > > > I think you got it, but just to make sure: I was thinking of generating a\n> > > > depfile from within perl. Something like what you propose doesn't quite seems\n> > > > like a sufficient improvement.\n> > > \n> > > Whatever I am proposing is definitely subpar to generating a depfile. So\n> > > if that can be done, that is the best option!\n> > \n> > I looked for a bit, but couldn't find an easy way to do so. I would still like\n> > to pursue going towards dep files for the perl scripts, even if that requires\n> > explicit support in the perl scripts, but that's a change for later.\n>\n> Took a second look - sure looks like just using values %INC should suffice?\n>\n> Ilmari, you're the perl expert, is there an issue with that?\n>\n> Tristan, any chance you're interested hacking that up for a bunch of the\n> scripts? Might be worth adding a common helper for, I guess?\n>\n> Something like\n>\n> for (values %INC)\n> {\n> \tprint STDERR \"$kw_def_file: $_\\n\";\n> }\n>\n> seems to roughly do the right thing for gen_keywordlist.pl. Of course for\n> something real it'd need an option where to put that data, instead of printing\n> to stderr.\n\nI would need to familiarize myself with perl, but since you've written\nprobably all or almost all that needs to be written, I can probably\nscrape by :).\n\nI definitely have the bandwidth to make this change though pending what\nIlmari says.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 14 Jun 2023 19:25:51 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Missing dep on Catalog.pm in meson rules"
},
{
"msg_contents": "On 2023-06-14 We 15:32, Andres Freund wrote:\n> Hi,\n>\n> On 2023-06-09 11:43:54 -0700, Andres Freund wrote:\n>> On 2023-06-02 10:13:44 -0500, Tristan Partin wrote:\n>>> On Fri Jun 2, 2023 at 8:47 AM CDT, Andres Freund wrote:\n>>>> Hi,\n>>>>\n>>>> On 2023-06-02 08:10:43 -0500, Tristan Partin wrote:\n>>>>>> I wonder if we instead could just make perl output the files it loads and\n>>>>>> handle dependencies automatically that way? But that's more work, so it's\n>>>>>> probably the right thing to go for the manual path for now.\n>>>>> I am not familar with Perl enough (at all haha) to know if that is\n>>>>> possible. I don't know exactly what these Perl files do, but perhaps it\n>>>>> might make sense to have some global lookup table that is setup near the\n>>>>> beginning of the script.\n>>>> It'd be nice to have something more general - there are other perl modules we\n>>>> load, e.g.\n>>>> ./src/backend/catalog/Catalog.pm\n>>>> ./src/backend/utils/mb/Unicode/convutils.pm\n>>>> ./src/tools/PerfectHash.pm\n>>>>\n>>>>\n>>>>> perl_files = {\n>>>>> 'Catalog.pm': files('path/to/Catalog.pm'),\n>>>>> ...\n>>>>> }\n>>>> I think you got it, but just to make sure: I was thinking of generating a\n>>>> depfile from within perl. Something like what you propose doesn't quite seems\n>>>> like a sufficient improvement.\n>>> Whatever I am proposing is definitely subpar to generating a depfile. So\n>>> if that can be done, that is the best option!\n>> I looked for a bit, but couldn't find an easy way to do so. I would still like\n>> to pursue going towards dep files for the perl scripts, even if that requires\n>> explicit support in the perl scripts, but that's a change for later.\n> Took a second look - sure looks like just using values %INC should suffice?\n>\n> Ilmari, you're the perl expert, is there an issue with that?\n>\n> Tristan, any chance you're interested hacking that up for a bunch of the\n> scripts? Might be worth adding a common helper for, I guess?\n>\n> Something like\n>\n> for (values %INC)\n> {\n> \tprint STDERR \"$kw_def_file: $_\\n\";\n> }\n>\n> seems to roughly do the right thing for gen_keywordlist.pl. Of course for\n> something real it'd need an option where to put that data, instead of printing\n> to stderr.\n>\n\nUnless I'm misunderstanding, this doesn't look terribly feasible to me. \nYou can only get at %INC by loading the module, which in many cases will \nhave side effects. And then you would also need to filter out things \nloaded that are not our artefacts (e.g. Catalog.pm loads File::Compare).\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-14 We 15:32, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-06-09 11:43:54 -0700, Andres Freund wrote:\n\n\nOn 2023-06-02 10:13:44 -0500, Tristan Partin wrote:\n\n\nOn Fri Jun 2, 2023 at 8:47 AM CDT, Andres Freund wrote:\n\n\nHi,\n\nOn 2023-06-02 08:10:43 -0500, Tristan Partin wrote:\n\n\n\nI wonder if we instead could just make perl output the files it loads and\nhandle dependencies automatically that way? But that's more work, so it's\nprobably the right thing to go for the manual path for now.\n\n\n\nI am not familar with Perl enough (at all haha) to know if that is\npossible. 
I don't know exactly what these Perl files do, but perhaps it\nmight make sense to have some global lookup table that is setup near the\nbeginning of the script.\n\n\n\nIt'd be nice to have something more general - there are other perl modules we\nload, e.g.\n./src/backend/catalog/Catalog.pm\n./src/backend/utils/mb/Unicode/convutils.pm\n./src/tools/PerfectHash.pm\n\n\n\n\nperl_files = {\n 'Catalog.pm': files('path/to/Catalog.pm'),\n ...\n}\n\n\n\nI think you got it, but just to make sure: I was thinking of generating a\ndepfile from within perl. Something like what you propose doesn't quite seems\nlike a sufficient improvement.\n\n\n\nWhatever I am proposing is definitely subpar to generating a depfile. So\nif that can be done, that is the best option!\n\n\n\nI looked for a bit, but couldn't find an easy way to do so. I would still like\nto pursue going towards dep files for the perl scripts, even if that requires\nexplicit support in the perl scripts, but that's a change for later.\n\n\n\nTook a second look - sure looks like just using values %INC should suffice?\n\nIlmari, you're the perl expert, is there an issue with that?\n\nTristan, any chance you're interested hacking that up for a bunch of the\nscripts? Might be worth adding a common helper for, I guess?\n\nSomething like\n\nfor (values %INC)\n{\n\tprint STDERR \"$kw_def_file: $_\\n\";\n}\n\nseems to roughly do the right thing for gen_keywordlist.pl. Of course for\nsomething real it'd need an option where to put that data, instead of printing\nto stderr.\n\n\n\n\n\nUnless I'm misunderstanding, this doesn't look terribly feasible\n to me. You can only get at %INC by loading the module, which in\n many cases will have side effects. And then you would also need to\n filter out things loaded that are not our artefacts (e.g.\n Catalog.pm loads File::Compare).\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 16 Jun 2023 16:20:14 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Missing dep on Catalog.pm in meson rules"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-16 16:20:14 -0400, Andrew Dunstan wrote:\n> Unless I'm misunderstanding, this doesn't look terribly feasible to me. You\n> can only get at %INC by loading the module, which in many cases will have\n> side effects.\n\nI was envisioning using %INC after the use/require block - I don't think our\nscripts load additional modules after that point?\n\n\n> And then you would also need to filter out things loaded that\n> are not our artefacts (e.g. Catalog.pm loads File::Compare).\n\nI don't think we would need to filter the output. This would just be for a\nbuild dependency file. I don't see a problem with rerunning genbki.pl et al after\nsomebody updates File::Compare?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 16 Jun 2023 14:16:28 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Missing dep on Catalog.pm in meson rules"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n\n> Hi,\n>\n> On 2023-06-16 16:20:14 -0400, Andrew Dunstan wrote:\n>> Unless I'm misunderstanding, this doesn't look terribly feasible to me. You\n>> can only get at %INC by loading the module, which in many cases will have\n>> side effects.\n>\n> I was envisioning using %INC after the use/require block - I don't think our\n> scripts load additional modules after that point?\n\nI was thinking of a module for writing depfile entries that would append\n`values %INC` to the list of source files for each target specified by\nthe script.\n\n>> And then you would also need to filter out things loaded that\n>> are not our artefacts (e.g. Catalog.pm loads File::Compare).\n>\n> I don't think we would need to filter the output. This would just be for a\n> build dependency file. I don't see a problem with rerunning genbki.pl et al after\n> somebody updates File::Compare?\n\nAs long as mason doesn't object to dep files outside the source tree.\nOtherwise, and option would be to pass in @SOURCE_ROOT@ and only include\n`grep /^\\Q$source_root\\E\\b/, values %INC` in the depfile.\n\n> Greetings,\n>\n> Andres Freund\n\n- ilmari\n\n\n",
"msg_date": "Fri, 16 Jun 2023 23:10:26 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Missing dep on Catalog.pm in meson rules"
},
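As a concrete reading of the filtering idea above, here is a small hypothetical variant that keeps only modules loaded from the source tree. The PG_SOURCE_ROOT environment variable and the postgres.bki target name are placeholders for the example, not existing build wiring.

    #!/usr/bin/perl
    # Hypothetical sketch of the filtering variant: only modules that live
    # under the source tree are reported, so updates to system modules such
    # as File::Compare never appear in the depfile.  PG_SOURCE_ROOT stands in
    # for whatever the build system would pass (e.g. meson's @SOURCE_ROOT@).
    use strict;
    use warnings;

    my $source_root = $ENV{PG_SOURCE_ROOT} // '/path/to/postgresql';
    my $target      = 'postgres.bki';

    # Keep only the loaded modules whose paths start with the source root.
    my @in_tree = grep { m/^\Q$source_root\E\b/ } values %INC;

    print "$target: $_\n" for @in_tree;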
{
"msg_contents": "On Fri Jun 16, 2023 at 5:10 PM CDT, Dagfinn Ilmari Mannsåker wrote:\n> Andres Freund <[email protected]> writes:\n>\n> > Hi,\n> >\n> > On 2023-06-16 16:20:14 -0400, Andrew Dunstan wrote:\n> >> Unless I'm misunderstanding, this doesn't look terribly feasible to me. You\n> >> can only get at %INC by loading the module, which in many cases will have\n> >> side effects.\n> >\n> > I was envisioning using %INC after the use/require block - I don't think our\n> > scripts load additional modules after that point?\n>\n> I was thinking of a module for writing depfile entries that would append\n> `values %INC` to the list of source files for each target specified by\n> the script.\n>\n> >> And then you would also need to filter out things loaded that\n> >> are not our artefacts (e.g. Catalog.pm loads File::Compare).\n> >\n> > I don't think we would need to filter the output. This would just be for a\n> > build dependency file. I don't see a problem with rerunning genbki.pl et al after\n> > somebody updates File::Compare?\n>\n> As long as mason doesn't object to dep files outside the source tree.\n> Otherwise, and option would be to pass in @SOURCE_ROOT@ and only include\n> `grep /^\\Q$source_root\\E\\b/, values %INC` in the depfile.\n\nMeson has no such restrictions.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 16 Jun 2023 17:11:59 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Missing dep on Catalog.pm in meson rules"
}
] |
[
{
"msg_contents": "Hello all,\n\nI am a researcher in databases who would like to suggest a new function. I\nam writing to you because you have an active developer community. Your\nwebsite said that suggestions for new functions should go to this mailing\nlist. If there is another mailing list you prefer, please let me know.\n\nMy research is in updating views -- the problem of translating an update in\na view to an update to a set of underlying base tables. This problem has\nbeen partially solved for many years, including in PostgreSQL, but a\ncomplete solution hasn't been found.\n\nViews are useful for data independence; if users only access data through\nviews, then underlying databases can change without user programs. Data\nindependence requires an automatic solution to the view update problem.\n\nIn my research, I went back to the initial papers about the problem. The\nmost promising approach was the \"constant complement\" approach. It starts\nfrom the idea that a view shows only part of the information in a database,\nand that view updates should never change the part of the database that\nisn't exposed in the view. (The \"complement\" is the unexposed part, and\n\"constant\" means that a view update shouldn't change the complement.) The\n\"constant complement\" constraint is intuitive, that a view update shouldn't\nhave side effects on information not available through the view.\n\nA seminal paper showed that defining a complement is enough, because each\ncomplement of a view creates a unique view update. Unfortunately, there\nare limitations. Views have multiple complements, and no unique minimal\ncomplement exists. Because of this limitation and other practical\ndifficulties, the constant complement approach was abandoned.\n\nI used a theorem in this initial paper that other researchers didn't use,\nthat shows the inverse. An update method defines a unique complement. I\nused the two theorems as a saw's upstroke and downstroke to devise view\nupdate methods for several relational operators. Unlike other approaches,\nthese methods have a solid mathematical foundation.\n\nSome relational operators are easy (selection), others are hard\n(projection); some have several valid update methods that can be used\ninterchangeably (union) and some can have several valid update methods that\nreflect different semantics (joins). For joins, I found clues in the\ndatabase that can determine which update method to use. I address the\nother relational operators, but not in the attached paper\n.\nI also discuss the problem of when views can't have updates, and possible\nreasons why.\n\nI have attached my arXiv paper. I would appreciate anyone's interest in\nthis topic.\n\nYours\nTerry Brennan",
"msg_date": "Thu, 1 Jun 2023 12:18:47 -0500",
"msg_from": "Terry Brennan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Request for new function in view update"
},
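The constant-complement constraint described above can be stated compactly. The notation below is assumed for illustration and is not taken verbatim from the attached paper: let $d$ be a database state, $V$ the view mapping, and $C$ a complement of $V$, meaning the pair $(V(d), C(d))$ determines $d$ uniquely. A translation $t_u$ of a view update $u$ is constant-complement when
\[
    V(t_u(d)) = u(V(d)) \qquad \text{and} \qquad C(t_u(d)) = C(d),
\]
i.e. the update is reflected exactly in the view while the part of the database not exposed through the view is left untouched.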
{
"msg_contents": "On 01/06/2023 13:18, Terry Brennan wrote:\n> Hello all,\n> \n> I am a researcher in databases who would like to suggest a \n> new function. I am writing to you because you have an active developer \n> community. Your website said that suggestions for new functions should \n> go to this mailing list. If there is another mailing list you prefer, \n> please let me know.\n\nYou're in the right place.\n\n> My research is in updating views -- the problem of translating an update \n> in a view to an update to a set of underlying base tables. This problem \n> has been partially solved for many years, including in PostgreSQL, but a \n> complete solution hasn't been found.\n\nYeah, PostgreSQL only supports updating views in some simple cases [1]. \nPatches to handle more cases welcome.\n\n[1] \nhttps://www.postgresql.org/docs/current/sql-createview.html#SQL-CREATEVIEW-UPDATABLE-VIEWS\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 1 Jun 2023 21:43:16 -0400",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Request for new function in view update"
},
{
"msg_contents": "Hi Heikki\n\nPostgreSQL supports only one-table views, which means that the relational\noperators are limited to \"selection\" and \"projection.\" I have provided\nupdate methods for these two, plus for two kinds of joins and unions.\n\nI discuss a hierarchical join, when two tables together define an entity.\nThe classic example is an invoice. One table, the Invoice Master table,\nhas a row for each invoice. The daughter table, the Invoice Detail table,\nhas a row for each item on each invoice. The two tables are linked by\nhaving the same key -- the invoice number. When updating, a detail row\nshould never be left without a corresponding master row, though master rows\nwithout detail rows can exist.\n\nThe second type is a foreign key. For example, an invoice detail line will\nhave an item number column, containing a key for the Item table. The key\nfor the Invoice Detail table is unrelated to the key for the Item table.\nDeleting an Invoice Detail row should never delete an Item row, and adding\nan Invoice Detail row should never add an Item row.\n\nThese two examples have the same relational operator -- join -- that have\ndifferent semantics -- hierarchical and foreign key -- leading to different\nupdate methods. PostgreSQL can determine which type of join is present by\nexamining the primary keys of the two tables, and by examining other clues,\nsuch as referential integrity checking.\n\nAdding join and union would allow many more views to be updateable.\n\nYours,\nTerry Brennan\n\n\nOn Thu, Jun 1, 2023 at 8:43 PM Heikki Linnakangas <[email protected]> wrote:\n\n> On 01/06/2023 13:18, Terry Brennan wrote:\n> > Hello all,\n> >\n> > I am a researcher in databases who would like to suggest a\n> > new function. I am writing to you because you have an active developer\n> > community. Your website said that suggestions for new functions should\n> > go to this mailing list. If there is another mailing list you prefer,\n> > please let me know.\n>\n> You're in the right place.\n>\n> > My research is in updating views -- the problem of translating an update\n> > in a view to an update to a set of underlying base tables. This problem\n> > has been partially solved for many years, including in PostgreSQL, but a\n> > complete solution hasn't been found.\n>\n> Yeah, PostgreSQL only supports updating views in some simple cases [1].\n> Patches to handle more cases welcome.\n>\n> [1]\n>\n> https://www.postgresql.org/docs/current/sql-createview.html#SQL-CREATEVIEW-UPDATABLE-VIEWS\n>\n> --\n> Heikki Linnakangas\n> Neon (https://neon.tech)\n>\n>\n\nHi HeikkiPostgreSQL supports only one-table views, which means that the relational operators are limited to \"selection\" and \"projection.\" I have provided update methods for these two, plus for two kinds of joins and unions.I discuss a hierarchical join, when two tables together define an entity. The classic example is an invoice. One table, the Invoice Master table, has a row for each invoice. The daughter table, the Invoice Detail table, has a row for each item on each invoice. The two tables are linked by having the same key -- the invoice number. When updating, a detail row should never be left without a corresponding master row, though master rows without detail rows can exist.The second type is a foreign key. For example, an invoice detail line will have an item number column, containing a key for the Item table. The key for the Invoice Detail table is unrelated to the key for the Item table. 
Deleting an Invoice Detail row should never delete an Item row, and adding an Invoice Detail row should never add an Item row.These two examples have the same relational operator -- join -- that have different semantics -- hierarchical and foreign key -- leading to different update methods. PostgreSQL can determine which type of join is present by examining the primary keys of the two tables, and by examining other clues, such as referential integrity checking.Adding join and union would allow many more views to be updateable.Yours,Terry BrennanOn Thu, Jun 1, 2023 at 8:43 PM Heikki Linnakangas <[email protected]> wrote:On 01/06/2023 13:18, Terry Brennan wrote:\n> Hello all,\n> \n> I am a researcher in databases who would like to suggest a \n> new function. I am writing to you because you have an active developer \n> community. Your website said that suggestions for new functions should \n> go to this mailing list. If there is another mailing list you prefer, \n> please let me know.\n\nYou're in the right place.\n\n> My research is in updating views -- the problem of translating an update \n> in a view to an update to a set of underlying base tables. This problem \n> has been partially solved for many years, including in PostgreSQL, but a \n> complete solution hasn't been found.\n\nYeah, PostgreSQL only supports updating views in some simple cases [1]. \nPatches to handle more cases welcome.\n\n[1] \nhttps://www.postgresql.org/docs/current/sql-createview.html#SQL-CREATEVIEW-UPDATABLE-VIEWS\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Fri, 2 Jun 2023 07:17:01 -0500",
"msg_from": "Terry Brennan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Request for new function in view update"
},
{
"msg_contents": "On Thu, 1 Jun 2023 12:18:47 -0500\nTerry Brennan <[email protected]> wrote:\n\n> Hello all,\n> \n> I am a researcher in databases who would like to suggest a new function. I\n> am writing to you because you have an active developer community. Your\n> website said that suggestions for new functions should go to this mailing\n> list. If there is another mailing list you prefer, please let me know.\n> \n> My research is in updating views -- the problem of translating an update in\n> a view to an update to a set of underlying base tables. This problem has\n> been partially solved for many years, including in PostgreSQL, but a\n> complete solution hasn't been found.\n> \n> Views are useful for data independence; if users only access data through\n> views, then underlying databases can change without user programs. Data\n> independence requires an automatic solution to the view update problem.\n> \n> In my research, I went back to the initial papers about the problem. The\n> most promising approach was the \"constant complement\" approach. It starts\n> from the idea that a view shows only part of the information in a database,\n> and that view updates should never change the part of the database that\n> isn't exposed in the view. (The \"complement\" is the unexposed part, and\n> \"constant\" means that a view update shouldn't change the complement.) The\n> \"constant complement\" constraint is intuitive, that a view update shouldn't\n> have side effects on information not available through the view.\n> \n> A seminal paper showed that defining a complement is enough, because each\n> complement of a view creates a unique view update. Unfortunately, there\n> are limitations. Views have multiple complements, and no unique minimal\n> complement exists. Because of this limitation and other practical\n> difficulties, the constant complement approach was abandoned.\n> \n> I used a theorem in this initial paper that other researchers didn't use,\n> that shows the inverse. An update method defines a unique complement. I\n> used the two theorems as a saw's upstroke and downstroke to devise view\n> update methods for several relational operators. Unlike other approaches,\n> these methods have a solid mathematical foundation.\n> \n> Some relational operators are easy (selection), others are hard\n> (projection); some have several valid update methods that can be used\n> interchangeably (union) and some can have several valid update methods that\n> reflect different semantics (joins). For joins, I found clues in the\n> database that can determine which update method to use. I address the\n> other relational operators, but not in the attached paper\n> .\n> I also discuss the problem of when views can't have updates, and possible\n> reasons why.\n> \n> I have attached my arXiv paper. I would appreciate anyone's interest in\n> this topic.\n\nI'm interested in the view update problem because we have some works on\nthis topic [1][2].\n\nI read your paper. Although I don't understand the theoretical part enough,\nI found your proposal methods to update views as for several relational operators. \n\nThe method for updating selection views seems same as the way of automatically\nupdatable views in the current PostgreSQL, that is, deleting/updating rows in\na view results in deletes/updates for corresponding rows in the base table.\nInserting rows that is not compliant to the view is checked and prevented if\nthe view is defined with WITH CHECK OPTION. 
\n\nHowever, the proposed method for projection is that a column not contained\nin the view is updated to NULL when a row is deleted. I think it is not\ndesirable to use NULL for such a special purpose, and above all, this is\ndifferent from the current PostgreSQL behavior, so I wonder whether such a\nchange would be accepted. I think the same applies to the method for JOIN\nthat uses NULL in this special way.\n\nI think it would be nice to extend view updatability of PostgreSQL because\nthe SQL standard allows more than the current limited support. In this case,\nI wonder whether we should follow SQL:1999 or later, and maybe this would be\nsomehow compatible with the spec in Oracle.\n\n[1] https://dl.acm.org/doi/10.1145/3164541.3164584\n[2] https://www.pgcon.org/2017/schedule/events/1074.en.html\n\nRegards,\nYugo Nagata\n\n> Yours\n> Terry Brennan\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n",
"msg_date": "Wed, 28 Jun 2023 16:49:24 +0900",
"msg_from": "Yugo NAGATA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Request for new function in view update"
},
{
"msg_contents": "Hi Yugo\n\nThank you for taking a look at the paper.\n\nThe key difference from PostgreSQL is that it only allows updates on single\ntable views. My paper discusses two kinds of joins of two tables. It\ndiscusses how to update them and how to determine when they occur.\n\nThe paper also discusses unioning two tables. I've done work on table\nintersection and table difference too, but didn't include it in the paper.\n\nTerry Brennan\n\n\n\n\n\nOn Wed, Jun 28, 2023 at 2:49 AM Yugo NAGATA <[email protected]> wrote:\n\n> On Thu, 1 Jun 2023 12:18:47 -0500\n> Terry Brennan <[email protected]> wrote:\n>\n> > Hello all,\n> >\n> > I am a researcher in databases who would like to suggest a new\n> function. I\n> > am writing to you because you have an active developer community. Your\n> > website said that suggestions for new functions should go to this mailing\n> > list. If there is another mailing list you prefer, please let me know.\n> >\n> > My research is in updating views -- the problem of translating an update\n> in\n> > a view to an update to a set of underlying base tables. This problem has\n> > been partially solved for many years, including in PostgreSQL, but a\n> > complete solution hasn't been found.\n> >\n> > Views are useful for data independence; if users only access data through\n> > views, then underlying databases can change without user programs. Data\n> > independence requires an automatic solution to the view update problem.\n> >\n> > In my research, I went back to the initial papers about the problem. The\n> > most promising approach was the \"constant complement\" approach. It\n> starts\n> > from the idea that a view shows only part of the information in a\n> database,\n> > and that view updates should never change the part of the database that\n> > isn't exposed in the view. (The \"complement\" is the unexposed part, and\n> > \"constant\" means that a view update shouldn't change the complement.)\n> The\n> > \"constant complement\" constraint is intuitive, that a view update\n> shouldn't\n> > have side effects on information not available through the view.\n> >\n> > A seminal paper showed that defining a complement is enough, because each\n> > complement of a view creates a unique view update. Unfortunately, there\n> > are limitations. Views have multiple complements, and no unique minimal\n> > complement exists. Because of this limitation and other practical\n> > difficulties, the constant complement approach was abandoned.\n> >\n> > I used a theorem in this initial paper that other researchers didn't use,\n> > that shows the inverse. An update method defines a unique complement. I\n> > used the two theorems as a saw's upstroke and downstroke to devise view\n> > update methods for several relational operators. Unlike other\n> approaches,\n> > these methods have a solid mathematical foundation.\n> >\n> > Some relational operators are easy (selection), others are hard\n> > (projection); some have several valid update methods that can be used\n> > interchangeably (union) and some can have several valid update methods\n> that\n> > reflect different semantics (joins). For joins, I found clues in the\n> > database that can determine which update method to use. I address the\n> > other relational operators, but not in the attached paper\n> > .\n> > I also discuss the problem of when views can't have updates, and possible\n> > reasons why.\n> >\n> > I have attached my arXiv paper. 
I would appreciate anyone's interest in\n> > this topic.\n>\n> I'm interested in the view update problem because we have some works on\n> this topic [1][2].\n>\n> I read your paper. Although I don't understand the theoretical part enough,\n> I found your proposal methods to update views as for several relational\n> operators.\n>\n> The method for updating selection views seems same as the way of\n> automatically\n> updatable views in the current PostgreSQL, that is, deleting/updating rows\n> in\n> a view results in deletes/updates for corresponding rows in the base table.\n> Inserting rows that is not compliant to the view is checked and prevented\n> if\n> the view is defined with WITH CHECK OPTION.\n>\n> However, the proposed method for projection is that a column not contained\n> in the view is updated to NULL when a row is deleted. I think it is not\n> desirable to use NULL in a such special purpose, and above all, this is\n> different the current PostgreSQL behavior, so I wonder it would not\n> accepted\n> to change it. I think it would be same for the method for JOIN that uses\n> NULL\n> for the special use.\n>\n> I think it would be nice to extend view updatability of PostgreSQL because\n> the SQL standard allows more than the current limited support. In this\n> case,\n> I wonder we should follow SQL:1999 or later, and maybe this would be\n> somehow\n> compatible to the spec in Oracle.\n>\n> [1] https://dl.acm.org/doi/10.1145/3164541.3164584\n> [2] https://www.pgcon.org/2017/schedule/events/1074.en.html\n>\n> Regards,\n> Yugo Nagata\n>\n> > Yours\n> > Terry Brennan\n>\n>\n> --\n> Yugo NAGATA <[email protected]>\n>",
"msg_date": "Wed, 28 Jun 2023 07:40:21 -0500",
"msg_from": "Terry Brennan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Request for new function in view update"
}
] |
[
{
"msg_contents": "In make_outerjoininfo, I think we can additionally check a property\nthat's needed to apply OJ identity 3: the lower OJ in the RHS cannot be\na member of inner_join_rels because we do not try to commute with any of\nlower inner joins.\n\n--- a/src/backend/optimizer/plan/initsplan.c\n+++ b/src/backend/optimizer/plan/initsplan.c\n@@ -1593,7 +1593,8 @@ make_outerjoininfo(PlannerInfo *root,\n }\n else if (jointype == JOIN_LEFT &&\n otherinfo->jointype == JOIN_LEFT &&\n- otherinfo->lhs_strict)\n+ otherinfo->lhs_strict &&\n+ !bms_is_member(otherinfo->ojrelid, inner_join_rels))\n {\n /* Identity 3 applies, so remove the ordering restriction */\n min_righthand = bms_del_member(min_righthand,\n\nThis check will help to avoid bogus commute_xxx bits in some cases, such\nas in query\n\nexplain (costs off)\nselect * from a left join\n (b left join c on b.i = c.i inner join d on true)\non a.i = b.i;\n\nIt will help us know that the b/c join and the join of a cannot commute\nand thus save us from generating cloned clauses for 'b.i = c.i'. Plus,\nit is very cheap. So I think it's worth doing. Any thoughts?\n\nThanks\nRichard\n\nIn make_outerjoininfo, I think we can additionally check a propertythat's needed to apply OJ identity 3: the lower OJ in the RHS cannot bea member of inner_join_rels because we do not try to commute with any oflower inner joins.--- a/src/backend/optimizer/plan/initsplan.c+++ b/src/backend/optimizer/plan/initsplan.c@@ -1593,7 +1593,8 @@ make_outerjoininfo(PlannerInfo *root, } else if (jointype == JOIN_LEFT && otherinfo->jointype == JOIN_LEFT &&- otherinfo->lhs_strict)+ otherinfo->lhs_strict &&+ !bms_is_member(otherinfo->ojrelid, inner_join_rels)) { /* Identity 3 applies, so remove the ordering restriction */ min_righthand = bms_del_member(min_righthand,This check will help to avoid bogus commute_xxx bits in some cases, suchas in queryexplain (costs off)select * from a left join (b left join c on b.i = c.i inner join d on true)on a.i = b.i;It will help us know that the b/c join and the join of a cannot commuteand thus save us from generating cloned clauses for 'b.i = c.i'. Plus,it is very cheap. So I think it's worth doing. Any thoughts?ThanksRichard",
"msg_date": "Fri, 2 Jun 2023 15:05:31 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tighten a little bit the computation of potential commutator pairs"
}
] |
[
{
"msg_contents": "Hi, I noticed a feature description [1] referring to a command example:\n\nCREATE PUBLICATION ... FOR ALL TABLES IN SCHEMA ....\n\n~~\n\nAFAIK that should say \"FOR TABLES IN SCHEMA\" (without the \"ALL\", see [2])\n\n------\n[1] https://www.postgresql.org/about/featurematrix/detail/391/\n[2] https://www.postgresql.org/docs/16/sql-createpublication.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 2 Jun 2023 09:30:46 -0400",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Wrong syntax in feature description"
},
{
"msg_contents": "On Fri, Jun 2, 2023 at 7:01 PM Peter Smith <[email protected]> wrote:\n>\n> Hi, I noticed a feature description [1] referring to a command example:\n>\n> CREATE PUBLICATION ... FOR ALL TABLES IN SCHEMA ....\n>\n> ~~\n>\n> AFAIK that should say \"FOR TABLES IN SCHEMA\" (without the \"ALL\", see [2])\n>\n\nRight, this should be changed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sun, 4 Jun 2023 22:18:47 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong syntax in feature description"
},
{
"msg_contents": "> On 4 Jun 2023, at 18:48, Amit Kapila <[email protected]> wrote:\n> \n> On Fri, Jun 2, 2023 at 7:01 PM Peter Smith <[email protected]> wrote:\n>> \n>> Hi, I noticed a feature description [1] referring to a command example:\n>> \n>> CREATE PUBLICATION ... FOR ALL TABLES IN SCHEMA ....\n>> \n>> ~~\n>> \n>> AFAIK that should say \"FOR TABLES IN SCHEMA\" (without the \"ALL\", see [2])\n> \n> Right, this should be changed.\n\nAgreed, so I've fixed this in the featurematrix on the site. I will mark this\nCF entry as committed even though there is nothing to commit (the featurematrix\nis stored in the postgresql.org django instance) since there was a change\nperformed.\n\nThanks for the report!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 5 Jul 2023 09:37:27 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong syntax in feature description"
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 5:37 PM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 4 Jun 2023, at 18:48, Amit Kapila <[email protected]> wrote:\n> >\n> > On Fri, Jun 2, 2023 at 7:01 PM Peter Smith <[email protected]> wrote:\n> >>\n> >> Hi, I noticed a feature description [1] referring to a command example:\n> >>\n> >> CREATE PUBLICATION ... FOR ALL TABLES IN SCHEMA ....\n> >>\n> >> ~~\n> >>\n> >> AFAIK that should say \"FOR TABLES IN SCHEMA\" (without the \"ALL\", see [2])\n> >\n> > Right, this should be changed.\n>\n> Agreed, so I've fixed this in the featurematrix on the site. I will mark this\n> CF entry as committed even though there is nothing to commit (the featurematrix\n> is stored in the postgresql.org django instance) since there was a change\n> performed.\n>\n> Thanks for the report!\n\nThanks for (not) pushing ;-)\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 6 Jul 2023 09:05:06 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Wrong syntax in feature description"
}
] |
[
{
"msg_contents": "Attached is a patch to allow a new behavior for the \\watch command in psql.\nWhen enabled, this instructs \\watch to stop running once the query returns\nzero rows. The use case is the scenario in which you are watching the\noutput of something long-running such as pg_stat_progress_create_index, but\nonce it finishes you don't need thousands of runs showing empty rows from\nthe view.\n\nThis adds a new argument \"zero\" to the existing i=SEC and c=N arguments\n\nNotes:\n\n* Not completely convinced of the name \"zero\" (better than\n\"stop_when_no_rows_returned\"). Considered adding a new x=y argument, or\noverloading c (c=-1) but neither seemed very intuitive. On the other hand,\nit's tempting to stick to a single method moving forward, although this is\na boolean option not a x=y one like the other two.\n\n* Did not update help.c on purpose - no need to make \\watch span two lines\nthere.\n\n* Considered leaving early (e.g. don't display the last empty result) but\nseemed better to show the final empty result as an explicit confirmation as\nto why it stopped.\n\n* Quick way to test:\nselect * from pg_stat_activity where backend_start > now() - '20\nseconds'::interval;\n\\watch zero\n\nCheers,\nGreg",
"msg_date": "Fri, 2 Jun 2023 11:47:16 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Prevent psql \\watch from running queries that return no rows"
},
{
"msg_contents": "On Fri, Jun 02, 2023 at 11:47:16AM -0400, Greg Sabino Mullane wrote:\n> * Not completely convinced of the name \"zero\" (better than\n> \"stop_when_no_rows_returned\"). Considered adding a new x=y argument, or\n> overloading c (c=-1) but neither seemed very intuitive. On the other hand,\n> it's tempting to stick to a single method moving forward, although this is\n> a boolean option not a x=y one like the other two.\n\nWouldn't something like a target_rows be more flexible? You could use\nthis parameter with a target number of rows to expect, zero being one\nchoice in that.\n--\nMichael",
"msg_date": "Sat, 3 Jun 2023 17:58:46 -0400",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prevent psql \\watch from running queries that return no rows"
},
{
"msg_contents": "On Sat, Jun 3, 2023 at 5:58 PM Michael Paquier <[email protected]> wrote:\n\n>\n> Wouldn't something like a target_rows be more flexible? You could use\n> this parameter with a target number of rows to expect, zero being one\n> choice in that.\n>\n\nThank you! That does feel better to me. Please see attached a new v2 patch\nthat uses\na min_rows=X syntax (defaults to 0). Also added some help.c changes.\n\nCheers,\nGreg",
"msg_date": "Sun, 4 Jun 2023 14:55:12 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Prevent psql \\watch from running queries that return no rows"
},
{
"msg_contents": "> On 4 Jun 2023, at 20:55, Greg Sabino Mullane <[email protected]> wrote:\n> \n> On Sat, Jun 3, 2023 at 5:58 PM Michael Paquier <[email protected]> wrote:\n> \n> Wouldn't something like a target_rows be more flexible? You could use\n> this parameter with a target number of rows to expect, zero being one\n> choice in that.\n> \n> Thank you! That does feel better to me. Please see attached a new v2 patch that uses \n> a min_rows=X syntax (defaults to 0). Also added some help.c changes.\n\nThis is a feature I've wanted on several occasions so definitely a +1 on this\nsuggestion. I've yet to test it out and do a full review, but a few comments\nfrom skimming the patch:\n\n-\t\t\t bool is_watch,\n+\t\t\t bool is_watch, int min_rows,\n\nThe comment on ExecQueryAndProcessResults() needs to be updated with an\nexplanation of what this parameter is.\n\n\n-\treturn cancel_pressed ? 0 : success ? 1 : -1;\n+\treturn (cancel_pressed || return_early) ? 0 : success ? 1 : -1;\n\nI think this is getting tangled up enough that it should be replaced with\nseparate if() statements for the various cases.\n\n\n-\tHELP0(\" \\\\watch [[i=]SEC] [c=N] execute query every SEC seconds, up to N times\\n\");\n+\tHELP0(\" \\\\watch [[i=]SEC] [c=N] [m=ROW]\\n\");\n+\tHELP0(\" execute query every SEC seconds, up to N times\\n\");\n+\tHELP0(\" stop if less than ROW minimum rows are rerturned\\n\");\n\n\"less than ROW minimum rows\" reads a bit awkward IMO, how about calling it\n[m=MIN] and describe as \"stop if less than MIN rows are returned\"? Also, there\nis a typo: s/rerturned/returned/.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 5 Jul 2023 11:51:15 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prevent psql \\watch from running queries that return no rows"
},
{
"msg_contents": "Thanks for the feedback!\n\nOn Wed, Jul 5, 2023 at 5:51 AM Daniel Gustafsson <[email protected]> wrote:\n\n>\n> The comment on ExecQueryAndProcessResults() needs to be updated with an\n> explanation of what this parameter is.\n>\n\nI added a comment in the place where min_rows is used, but not sure what\nyou mean by adding it to the main comment at the top of the function? None\nof the other args are explained there, even the non-intuitive ones (e.g.\nsvpt_gone_p)\n\n- return cancel_pressed ? 0 : success ? 1 : -1;\n> + return (cancel_pressed || return_early) ? 0 : success ? 1 : -1;\n>\n> I think this is getting tangled up enough that it should be replaced with\n> separate if() statements for the various cases.\n>\n\nWould like to hear others weigh in, I think it's still only three states\nplus a default, so I'm not convinced it warrants multiple statements yet. :)\n\n+ HELP0(\" \\\\watch [[i=]SEC] [c=N] [m=ROW]\\n\");\n> + HELP0(\" execute query every SEC seconds,\n> up to N times\\n\");\n> + HELP0(\" stop if less than ROW minimum\n> rows are rerturned\\n\");\n>\n> \"less than ROW minimum rows\" reads a bit awkward IMO, how about calling it\n> [m=MIN] and describe as \"stop if less than MIN rows are returned\"? Also,\n> there\n> is a typo: s/rerturned/returned/.\n>\n\nGreat idea: changed and will attach a new patch\n\nCheers,\nGreg",
"msg_date": "Wed, 5 Jul 2023 10:14:22 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Prevent psql \\watch from running queries that return no rows"
},
{
"msg_contents": "On Wed, Jul 05, 2023 at 10:14:22AM -0400, Greg Sabino Mullane wrote:\n> Would like to hear others weigh in, I think it's still only three states\n> plus a default, so I'm not convinced it warrants multiple statements yet. :)\n\nI find that hard to parse, so having more lines to get a better idea\nof what the states are would be good.\n\nWhile on it, I can see that the patch has no tests. Could you add\nsomething in psql's 001_basic.pl? The option to define the number of\niterations for \\watch can make the tests predictible even for the\nnon-failure cases. I would do checks with incorrect values, as well,\nsee the part of the test script about the intervals.\n--\nMichael",
"msg_date": "Wed, 26 Jul 2023 09:41:36 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prevent psql \\watch from running queries that return no rows"
},
{
"msg_contents": "Thank you for the feedback, everyone. Attached is version 4 of the patch,\nfeaturing a few tests and minor rewordings.\n\nCheers,\nGreg",
"msg_date": "Tue, 22 Aug 2023 17:23:54 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Prevent psql \\watch from running queries that return no rows"
},
{
"msg_contents": "> On 22 Aug 2023, at 23:23, Greg Sabino Mullane <[email protected]> wrote:\n> \n> Thank you for the feedback, everyone. Attached is version 4 of the patch, featuring a few tests and minor rewordings.\n\nThanks, the changes seem good from a quick skim. I'll take a better look\ntomorrow to hopefully close this one.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 22 Aug 2023 23:49:23 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prevent psql \\watch from running queries that return no rows"
},
{
"msg_contents": "> On 22 Aug 2023, at 23:23, Greg Sabino Mullane <[email protected]> wrote:\n> \n> Thank you for the feedback, everyone. Attached is version 4 of the patch, featuring a few tests and minor rewordings.\n\nI had another look, and did some playing around with this and I think this\nversion is ready to go in, so I will try to get that sorted shortly.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 28 Aug 2023 15:44:49 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prevent psql \\watch from running queries that return no rows"
},
{
"msg_contents": "> On 22 Aug 2023, at 23:23, Greg Sabino Mullane <[email protected]> wrote:\n> \n> Thank you for the feedback, everyone. Attached is version 4 of the patch, featuring a few tests and minor rewordings.\n\nI went over this once more, and pushed it along with pgindenting. I did reduce\nthe number of tests since they were overlapping and a bit too expensive to have\nmultiples of. Thanks for the patch!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 29 Aug 2023 12:02:48 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prevent psql \\watch from running queries that return no rows"
}
] |
[
{
"msg_contents": "Hello,\n\nI was just in a pg_upgrade unconference session at PGCon where the\nlack of $SUBJECT came up. This system call gives the kernel the\noption to use fast block cloning on XFS, ZFS (as of very recently),\netc, and works on Linux and FreeBSD. It's probably much the same as\n--clone mode on COW file systems, except that is Linux-only. On\noverwrite file systems (ie not copy-on-write, like ext4), it may also\nbe able to push copies down to storage hardware/network file systems.\n\nThere was something like this in the nearby large files patch set, but\nin that version it just magically did it when available in --copy\nmode. Now I think the user should have to have to opt in with\n--copy-file-range, and simply to error out if it fails. It may not\nwork in some cases -- for example, the man page says that older Linux\nsystems can fail with EXDEV when you try to copy across file systems,\nwhile newer systems will do something less efficient but still\nsensible internally; also I saw a claim that some older versions had\nweird bugs. Better to just expose the raw functionality and let users\nsay when they want it and read the error if it fail, I think.",
"msg_date": "Fri, 2 Jun 2023 15:30:44 -0400",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_upgrade --copy-file-range"
},
{
"msg_contents": "On 02.06.23 21:30, Thomas Munro wrote:\n> I was just in a pg_upgrade unconference session at PGCon where the\n> lack of $SUBJECT came up. This system call gives the kernel the\n> option to use fast block cloning on XFS, ZFS (as of very recently),\n> etc, and works on Linux and FreeBSD. It's probably much the same as\n> --clone mode on COW file systems, except that is Linux-only. On\n> overwrite file systems (ie not copy-on-write, like ext4), it may also\n> be able to push copies down to storage hardware/network file systems.\n> \n> There was something like this in the nearby large files patch set, but\n> in that version it just magically did it when available in --copy\n> mode. Now I think the user should have to have to opt in with\n> --copy-file-range, and simply to error out if it fails. It may not\n> work in some cases -- for example, the man page says that older Linux\n> systems can fail with EXDEV when you try to copy across file systems,\n> while newer systems will do something less efficient but still\n> sensible internally; also I saw a claim that some older versions had\n> weird bugs. Better to just expose the raw functionality and let users\n> say when they want it and read the error if it fail, I think.\n\nWhen we added --clone, copy_file_range() was available, but the problem \nwas that it was hard for the user to predict whether you'd get the fast \nclone behavior or the slow copy behavior. That's the kind of thing you \nwant to know when planning and testing your upgrade. At the time, there \nwere patches passed around in Linux kernel circles that would have been \nable to enforce cloning via the flags argument of copy_file_range(), but \nthat didn't make it to the mainline.\n\nSo, yes, being able to specify exactly which copy mechanism to use makes \nsense, so that users can choose the tradeoffs.\n\nAbout your patch:\n\nI think you should have a \"check\" function called from \ncheck_new_cluster(). That check function can then also handle the \"not \nsupported\" case, and you don't need to handle that in \nparseCommandLine(). I suggest following the clone example for these, \nsince the issues there are very similar.\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 09:47:05 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "On Mon, Jul 3, 2023 at 7:47 PM Peter Eisentraut <[email protected]> wrote:\n> When we added --clone, copy_file_range() was available, but the problem\n> was that it was hard for the user to predict whether you'd get the fast\n> clone behavior or the slow copy behavior. That's the kind of thing you\n> want to know when planning and testing your upgrade. At the time, there\n> were patches passed around in Linux kernel circles that would have been\n> able to enforce cloning via the flags argument of copy_file_range(), but\n> that didn't make it to the mainline.\n>\n> So, yes, being able to specify exactly which copy mechanism to use makes\n> sense, so that users can choose the tradeoffs.\n\nThanks for looking. Yeah, it is quite inconvenient for planning\npurposes that it is hard for a user to know which internal strategy it\nuses, but that's the interface we have (and clearly \"flags\" is\nreserved for future usage so that might still evolve..).\n\n> About your patch:\n>\n> I think you should have a \"check\" function called from\n> check_new_cluster(). That check function can then also handle the \"not\n> supported\" case, and you don't need to handle that in\n> parseCommandLine(). I suggest following the clone example for these,\n> since the issues there are very similar.\n\nDone.",
"msg_date": "Sun, 8 Oct 2023 18:15:18 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "On 08.10.23 07:15, Thomas Munro wrote:\n>> About your patch:\n>>\n>> I think you should have a \"check\" function called from\n>> check_new_cluster(). That check function can then also handle the \"not\n>> supported\" case, and you don't need to handle that in\n>> parseCommandLine(). I suggest following the clone example for these,\n>> since the issues there are very similar.\n> \n> Done.\n\nThis version looks good to me.\n\nTiny nit: You copy-and-pasted \"%s/PG_VERSION.clonetest\"; maybe choose a \ndifferent suffix.\n\n\n",
"msg_date": "Mon, 13 Nov 2023 08:15:01 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "On 13.11.23 08:15, Peter Eisentraut wrote:\n> On 08.10.23 07:15, Thomas Munro wrote:\n>>> About your patch:\n>>>\n>>> I think you should have a \"check\" function called from\n>>> check_new_cluster(). That check function can then also handle the \"not\n>>> supported\" case, and you don't need to handle that in\n>>> parseCommandLine(). I suggest following the clone example for these,\n>>> since the issues there are very similar.\n>>\n>> Done.\n> \n> This version looks good to me.\n> \n> Tiny nit: You copy-and-pasted \"%s/PG_VERSION.clonetest\"; maybe choose a \n> different suffix.\n\nThomas, are you planning to proceed with this patch?\n\n\n\n",
"msg_date": "Fri, 22 Dec 2023 21:40:48 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "On Sat, Dec 23, 2023 at 9:40 AM Peter Eisentraut <[email protected]> wrote:\n> On 13.11.23 08:15, Peter Eisentraut wrote:\n> > On 08.10.23 07:15, Thomas Munro wrote:\n> >>> About your patch:\n> >>>\n> >>> I think you should have a \"check\" function called from\n> >>> check_new_cluster(). That check function can then also handle the \"not\n> >>> supported\" case, and you don't need to handle that in\n> >>> parseCommandLine(). I suggest following the clone example for these,\n> >>> since the issues there are very similar.\n> >>\n> >> Done.\n> >\n> > This version looks good to me.\n> >\n> > Tiny nit: You copy-and-pasted \"%s/PG_VERSION.clonetest\"; maybe choose a\n> > different suffix.\n>\n> Thomas, are you planning to proceed with this patch?\n\nYes. Sorry for being slow... got stuck working on an imminent new\nversion of streaming read. I will be defrosting my commit bit and\ncommitting this one and a few things shortly.\n\nAs it happens I was just thinking about this particular patch because\nI suddenly had a strong urge to teach pg_combinebackup to use\ncopy_file_range. I wonder if you had the same idea...\n\n\n",
"msg_date": "Sat, 23 Dec 2023 09:52:59 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "On Sat, Dec 23, 2023 at 09:52:59AM +1300, Thomas Munro wrote:\n> As it happens I was just thinking about this particular patch because\n> I suddenly had a strong urge to teach pg_combinebackup to use\n> copy_file_range. I wonder if you had the same idea...\n\nYeah, +1. That would make copy_file_blocks() more efficient where the\ncode is copying 50 blocks in batches because it needs to reassign\nchecksums to the blocks copied.\n--\nMichael",
"msg_date": "Sun, 24 Dec 2023 11:57:19 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "Hi Thomas, Michael, Peter and -hackers,\n\nOn Sun, Dec 24, 2023 at 3:57 AM Michael Paquier <[email protected]> wrote:\n>\n> On Sat, Dec 23, 2023 at 09:52:59AM +1300, Thomas Munro wrote:\n> > As it happens I was just thinking about this particular patch because\n> > I suddenly had a strong urge to teach pg_combinebackup to use\n> > copy_file_range. I wonder if you had the same idea...\n>\n> Yeah, +1. That would make copy_file_blocks() more efficient where the\n> code is copying 50 blocks in batches because it needs to reassign\n> checksums to the blocks copied.\n\nI've tried to achieve what you were discussing. Actually this was my\nfirst thought when using pg_combinebackup with larger (realistic)\nbackup sizes back in December. Attached is a set of very DIRTY (!)\npatches that provide CoW options (--clone/--copy-range-file) to\npg_combinebackup (just like pg_upgrade to keep it in sync), while also\nrefactoring some related bits of code to avoid duplication.\n\nWith XFS (with reflink=1 which is default) on Linux with kernel 5.10\nand ~210GB backups, I'm getting:\n\nroot@jw-test-1:/xfs# du -sm *\n210229 full\n250 incr.1\n\nToday in master, the old classic read()/while() loop without\nCoW/reflink optimization :\nroot@jw-test-1:/xfs# rm -rf outtest; sync; sync ; sync; echo 3 | sudo\ntee /proc/sys/vm/drop_caches ; time /usr/pgsql17/bin/pg_combinebackup\n--manifest-checksums=NONE -o outtest full incr.1\n3\n\nreal 49m43.963s\nuser 0m0.887s\nsys 2m52.697s\n\nVS patch with \"--clone\" :\n\nroot@jw-test-1:/xfs# rm -rf outtest; sync; sync ; sync; echo 3 | sudo\ntee /proc/sys/vm/drop_caches ; time /usr/pgsql17/bin/pg_combinebackup\n--manifest-checksums=NONE --clone -o outtest full incr.1\n3\n\nreal 0m39.812s\nuser 0m0.325s\nsys 0m2.401s\n\nSo it is 49mins down to 40 seconds(!) +/-10s (3 tries) if the FS\nsupports CoW/reflinks (XFS, BTRFS, upcoming bcachefs?). It looks to me\nthat this might mean that if one actually wants to use incremental\nbackups (to get minimal RTO), it would be wise to only use CoW\nfilesystems from the start so that RTO is as low as possible.\n\nRandom patch notes:\n- main meat is in v3-0002*, I hope i did not screw something seriously\n- in worst case: it is opt-in through switch, so the user always can\nstick to the classic copy\n- no docs so far\n- pg_copyfile_offload_supported() should actually be fixed if it is a\ngood path forward\n- pgindent actually indents larger areas of code that I would like to,\nany ideas or is it ok?\n- not tested on Win32/MacOS/FreeBSD\n- i've tested pg_upgrade manually and it seems to work and issue\ncorrect syscalls, however some tests are failing(?). I haven't\ninvestigated why yet due to lack of time.\n\nAny help is appreciated.\n\n-J.",
"msg_date": "Fri, 5 Jan 2024 13:40:45 +0100",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "On 05.01.24 13:40, Jakub Wartak wrote:\n> Random patch notes:\n> - main meat is in v3-0002*, I hope i did not screw something seriously\n> - in worst case: it is opt-in through switch, so the user always can\n> stick to the classic copy\n> - no docs so far\n> - pg_copyfile_offload_supported() should actually be fixed if it is a\n> good path forward\n> - pgindent actually indents larger areas of code that I would like to,\n> any ideas or is it ok?\n> - not tested on Win32/MacOS/FreeBSD\n> - i've tested pg_upgrade manually and it seems to work and issue\n> correct syscalls, however some tests are failing(?). I haven't\n> investigated why yet due to lack of time.\n\nSomething is wrong with the pgindent in your patch set. Maybe you used \na wrong version. You should try to fix that, because it is hard to \nprocess your patch with that amount of unrelated reformatting.\n\nAs far as I can tell, the original pg_upgrade patch has been ready to \ncommit since October. Unless Thomas has any qualms that have not been \nmade explicit in this thread, I suggest we move ahead with that.\n\nAnd then Jakub could rebase his patch set on top of that. It looks like \nif the formatting issues are fixed, the remaining pg_combinebackup \nsupport isn't that big.\n\n\n\n",
"msg_date": "Tue, 5 Mar 2024 14:43:33 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "On Wed, Mar 6, 2024 at 2:43 AM Peter Eisentraut <[email protected]> wrote:\n> As far as I can tell, the original pg_upgrade patch has been ready to\n> commit since October. Unless Thomas has any qualms that have not been\n> made explicit in this thread, I suggest we move ahead with that.\n\npg_upgrade --copy-file-range pushed. The only change I made was to\nremove the EINTR retry condition which was subtly wrong and actually\nnot needed here AFAICS. (Erm, maybe I did have an unexpressed qualm\nabout some bug reports unfolding around that time about corruption\nlinked to copy_file_range that might have spooked me but those seem to\nhave been addressed.)\n\n> And then Jakub could rebase his patch set on top of that. It looks like\n> if the formatting issues are fixed, the remaining pg_combinebackup\n> support isn't that big.\n\n+1\n\nI'll also go and rebase CREATE DATABASE ... STRATEGY=file_clone[1].\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGLM%2Bt%2BSwBU-cHeMUXJCOgBxSHLGZutV5zCwY4qrCcE02w%40mail.gmail.com\n\n\n",
"msg_date": "Wed, 6 Mar 2024 12:13:46 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "Hi,\n\nI took a quick look at the remaining part adding copy_file_range to\npg_combinebackup. The patch no longer applies, so I had to rebase it.\nMost of the issues were trivial, but I had to fix a couple missing\nprototypes - I added them to copy_file.h/c, mostly.\n\n0001 is the minimal rebase + those fixes\n\n0002 has a couple review comments in copy_file, and it also undoes a lot\nof unnecessary formatting changes (already pointed out by Peter a couple\ndays ago).\n\nA couple review comments:\n\n1) AFAIK opt_errinfo() returns pointer to the local \"buf\" variable.\n\n2) I wonder if we even need opt_errinfo(). I'm not sure it actually\nmakes anything simpler.\n\n3) I think it'd be nice to make CopyFileMethod more consistent with\ntransferMode in pg_upgrade.h (I mean, it seems wise to make the naming\nmore consistent, it's probably not worth unifying this somehow).\n\n4) I wonder how we came up with copying the files by 50 blocks, but I\nnow realize it's been like this before this patch. I only noticed\nbecause the patch adds a comment before buffer_size calculation.\n\n5) I dislike the renaming of copy_file_blocks to pg_copyfile. The new\nname is way more generic / less descriptive - it's clear it copies the\nfile block by block (well, in chunks). pg_copyfile is pretty vague.\n\n6) This leaves behind copy_file_copyfile, which is now unused.\n\n7) The patch reworks how combinebackup deals with alternative copy\nimplementations - instead of setting strategy_implementation and calling\nthat, the decisions now happen in pg_copyfile_offload with a lot of\nconditions / ifdef / defined ... I find it pretty hard to understand and\nreason about. I liked the strategy_implementation approach, as it forces\nus to keep each method in a separate function.\n\nPerhaps there's a reason why that doesn't work for copy_file_range? But\nin that case this needs much clearer comments.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 19 Mar 2024 16:22:46 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "Hi Tomas,\n\n> I took a quick look at the remaining part adding copy_file_range to\n> pg_combinebackup. The patch no longer applies, so I had to rebase it.\n> Most of the issues were trivial, but I had to fix a couple missing\n> prototypes - I added them to copy_file.h/c, mostly.\n>\n> 0001 is the minimal rebase + those fixes\n>\n> 0002 has a couple review comments in copy_file, and it also undoes a lot\n> of unnecessary formatting changes (already pointed out by Peter a couple\n> days ago).\n>\n\nThank you very much for this! As discussed privately, I'm not in\nposition right now to pursue this further at this late stage (at least\nfor v17, which would require an aggressive schedule ). My plan was\nmore for v18 after Peter's email, due to other obligations. But if you\nhave cycles and want to continue, please do so without hesitation -\nI'll try to chime in a long way to test and review for sure.\n\n> A couple review comments:\n>\n> 1) AFAIK opt_errinfo() returns pointer to the local \"buf\" variable.\n>\n> 2) I wonder if we even need opt_errinfo(). I'm not sure it actually\n> makes anything simpler.\n\nYes, as it stands it's broken (somewhat I've missed gcc warning),\nshould be pg_malloc(). I hardly remember, but I wanted to avoid code\nduplication. No strong opinion, maybe that's a different style, I'll\nadapt as necessary.\n\n> 3) I think it'd be nice to make CopyFileMethod more consistent with\n> transferMode in pg_upgrade.h (I mean, it seems wise to make the naming\n> more consistent, it's probably not worth unifying this somehow).\n>\n> 4) I wonder how we came up with copying the files by 50 blocks, but I\n> now realize it's been like this before this patch. I only noticed\n> because the patch adds a comment before buffer_size calculation.\n\nIt looks like it was like that before pg_upgrade even was moved into\nthe core. 400kB is indeed bit strange value, so we can leave it as it\nis or make the COPY_BUF_SIZ 128kb - see [1] (i've double checked cp(1)\nuses still 128kB today), or maybe just stick to something like 256 or\n512 kBs.\n\n> 5) I dislike the renaming of copy_file_blocks to pg_copyfile. The new\n> name is way more generic / less descriptive - it's clear it copies the\n> file block by block (well, in chunks). pg_copyfile is pretty vague.\n>\n> 6) This leaves behind copy_file_copyfile, which is now unused.\n>\n> 7) The patch reworks how combinebackup deals with alternative copy\n> implementations - instead of setting strategy_implementation and calling\n> that, the decisions now happen in pg_copyfile_offload with a lot of\n> conditions / ifdef / defined ... I find it pretty hard to understand and\n> reason about. I liked the strategy_implementation approach, as it forces\n> us to keep each method in a separate function.\n\nWell some context (maybe it was my mistake to continue in this\n./thread rather starting a new one): my plan was 3-in-1: in the\noriginal proposal (from Jan) to provide CoW as generic facility for\nother to use - in src/common/file_utils.c as per\nv3-0002-Confine-various-OS-copy-on-write-and-other-copy-a.patch - to\nunify & confine CoW methods and their quirkiness between\npg_combinebackup and pg_upgrade and other potential CoW uses too. That\nwas before Thomas M. pushed CoW just for pg_upgrade as\nd93627bcbe5001750e7611f0e637200e2d81dcff. 
I had this idea back then to\nhave pg_copyfile() [normal blocks copy] and\npg_copyfile_offload_supported(),\npg_copyfile_offload(PG_COPYFILE_IOCTL_FICLONE ,\nPG_COPYFILE_COPY_FILE_RANGE,\nPG_COPYFILE_who_has_idea_what_they_come_up_with_in_future). In Your's\nversion of the patch it's local to pg_combinebackup, so it might make\nno sense after all. If you look at the pg_upgrade and pg_combinebackup\nthey both have code duplication with lots of those ifs/IFs (assuming\nuser wants to have it as drop-in [--clone/--copy/--copyfile] and\nplatform may / may not have it). I've even considered\n--cow=ficlone|copy_file_range to sync both tools from CLI arguments\npoint of view, but that would break backwards compatibility, so I did\nnot do that.\n\nAlso there's a problem with pg_combinebackup's strategy_implementation\nthat it actually cannot on its own decide (I think) which CoW to use\nor not. There were some longer discussions that settled on one thing\n(for pg_upgrade): it's the user who is in control HOW the copy gets\ndone (due to potential issues in OS CoW() implementations where e.g.\nif NFS would be involved on one side). See pg_upgrade\n--clone/--copy/--copy-file-range/--sync-method options. I wanted to\nstick to that, so pg_combinebackup also needs to give the same options\nto the user.\n\nThat's was for the historical context, now you wrote \"it's probably\nnot worth unifying this somehow\" few sentences earlier, so my take is\nthe following: we can just concentrate on getting the\ncopy_file_range() and ioctl_ficlone to pg_combinebackup at the price\nof duplicating some complexity for now (in short to start with clear\nplate , it doesn't necessary needs to be my patch as base if we think\nit's worthwhile for v17 - or stick to your reworked patch of mine).\n\nLater (v18?) some bigger than this refactor could unify and move the\ncopy methods to some more central place (so then we would have sync as\nthere would be no doubling like you mentioned e.g.: pg_upgrade's enum\ntransferMode <-> patch enum CopyFileMethod.\n\nSo for now I'm +1 to renaming all the things as you want -- indeed\npg_copy* might not be a good fit in a localized version.\n\n-J.\n\n[1] - https://eklitzke.org/efficient-file-copying-on-linux\n\n\n",
"msg_date": "Wed, 20 Mar 2024 15:17:28 +0100",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "Here's a patch reworked along the lines from a couple days ago.\n\nThe primary goals were to add clone/copy_file_range while minimizing\nunnecessary disruption, and overall cleanup of the patch. I'm not saying\nit's committable, but I think the patch is much easier to understand.\n\nThe main change is that this abandons the idea of handling all possible\ncases in a single function that looks like a maze of ifdefs, and instead\nseparates each case into it's own function and the decision happens much\nearlier. This is pretty much exactly what pg_upgrade does, BTW.\n\nThere's maybe an argument that these functions could be unified and\nmoved to a library in src/common - I can imagine doing that, but I don't\nthink it's required. The functions are pretty trivial wrappers, and it's\nnot like we expect many more callers. And there's probably stuff we'd\nneed to keep out of that library (e.g. the decision which copy/clone\nmethods are available / should be used or error reporting). So it\ndoesn't seem worth it, at least for now.\n\nThere's one question, though. As it stands, there's a bit of asymmetry\nbetween handling CopyFile() on WIN32 and the clone/copy_file_range on\nother platforms). On WIN32, we simply automatically switch to CopyFile\nautomatically, if we don't need to calculate checksum. But with the\nother methods, error out if the user requests those and we need to\ncalculate the checksum.\n\nThe asymmetry comes from the fact there's no option to request CopyFile\non WIN32, and we feel confident it's always the right thing to do (safe,\nfaster). We don't seem to know that for the other methods, so the user\nhas to explicitly request those. And if the user requests --clone, for\nexample, it'd be wrong to silently fallback to plain copy.\n\nStill, I wonder if this might cause some undesirable issues during\nrestores. But I guess that's why we have --dry-run.\n\nThis asymmetry also shows a bit in the code - the CopyFile is coded and\ncalled a bit differently from the other methods. FWIW I abandoned the\napproach with \"strategy\" and just use a switch on CopyMode enum, just\nlike pg_upgrade does.\n\nThere's a couple more smaller changes:\n\n- Addition of docs for --clone/--copy-file-range (shameless copy from\npg_upgrade docs).\n\n- Removal of opt_errinfo - not only was it buggy, I think the code is\nactually cleaner without it.\n\n- Removal of EINTR retry condition from copy_file_range handling (this\nis what Thomas ended up for pg_upgrade while committing that part).\n\nPut together, this cuts the patch from ~40kB to ~15kB (most of this is\ndue to the cleanup of unnecessary whitespace changes, though).\n\nI think to make this committable, this requires some review and testing,\nideally on a range of platforms.\n\nOne open question is how to allow testing this. For pg_upgrade we now\nhave PG_TEST_PG_UPGRADE_MODE, which can be set to e.g. \"--clone\". I\nwonder if we should add PG_TEST_PG_COMBINEBACKUP_MODE ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 22 Mar 2024 15:40:40 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 10:40 AM Tomas Vondra\n<[email protected]> wrote:\n> There's one question, though. As it stands, there's a bit of asymmetry\n> between handling CopyFile() on WIN32 and the clone/copy_file_range on\n> other platforms). On WIN32, we simply automatically switch to CopyFile\n> automatically, if we don't need to calculate checksum. But with the\n> other methods, error out if the user requests those and we need to\n> calculate the checksum.\n\nThat seems completely broken. copy_file() needs to have the ability to\ncalculate a checksum if one is required; when one isn't required, it\ncan do whatever it likes. So we should always fall back to the\nblock-by-block method if we need a checksum. Whatever option the user\nspecified should only be applied when we don't need a checksum.\n\nConsider, for example:\n\npg_basebackup -D sunday -c fast --manifest-checksums=CRC32C\npg_basebackup -D monday -c fast --manifest-checksums=SHA224\n--incremental sunday/backup_manifest\npg_combinebackup sunday monday -o tuesday --manifest-checksums=CRC32C --clone\n\nAny files that are copied in their entirety from Sunday's backup can\nbe cloned, if we have support for cloning. But any files copied from\nMonday's backup will need to be re-checksummed, since the checksum\nalgorithms don't match. With what you're describing, it sounds like\npg_combinebackup would just fail in this case; I don't think that's\nthe behavior that the user is going to want.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 22 Mar 2024 12:42:54 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "On 3/22/24 17:42, Robert Haas wrote:\n> On Fri, Mar 22, 2024 at 10:40 AM Tomas Vondra\n> <[email protected]> wrote:\n>> There's one question, though. As it stands, there's a bit of asymmetry\n>> between handling CopyFile() on WIN32 and the clone/copy_file_range on\n>> other platforms). On WIN32, we simply automatically switch to CopyFile\n>> automatically, if we don't need to calculate checksum. But with the\n>> other methods, error out if the user requests those and we need to\n>> calculate the checksum.\n> \n> That seems completely broken. copy_file() needs to have the ability to\n> calculate a checksum if one is required; when one isn't required, it\n> can do whatever it likes. So we should always fall back to the\n> block-by-block method if we need a checksum. Whatever option the user\n> specified should only be applied when we don't need a checksum.\n> \n> Consider, for example:\n> \n> pg_basebackup -D sunday -c fast --manifest-checksums=CRC32C\n> pg_basebackup -D monday -c fast --manifest-checksums=SHA224\n> --incremental sunday/backup_manifest\n> pg_combinebackup sunday monday -o tuesday --manifest-checksums=CRC32C --clone\n> \n> Any files that are copied in their entirety from Sunday's backup can\n> be cloned, if we have support for cloning. But any files copied from\n> Monday's backup will need to be re-checksummed, since the checksum\n> algorithms don't match. With what you're describing, it sounds like\n> pg_combinebackup would just fail in this case; I don't think that's\n> the behavior that the user is going to want.\n> \n\nRight, this will happen:\n\n pg_combinebackup: error: unable to use accelerated copy when manifest\n checksums are to be calculated. Use --no-manifest\n\nAre you saying we should just silently override the copy method and do\nthe copy block by block? I'm not strongly opposed to that, but it feels\nwrong to just ignore that the user explicitly requested cloning, and I'm\nnot sure why should this be different from any other case when the user\nrequests incompatible combination of options and/or options that are not\nsupported on the current configuration.\n\nWhy not just to tell the user to use the correct parameters, i.e. either\nremove --clone or add --no-manifest?\n\nFWIW I now realize it actually fails a bit earlier than I thought - when\nparsing the options, not in copy_file. But then some checks (if a given\ncopy method is supported) happen in the copy functions. I wonder if it'd\nbe better/possible to do all of that in one place, not sure.\n\nAlso, the message only suggests to use --no-manifest. It probably should\nsuggest removing --clone too.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 22 Mar 2024 18:22:29 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 1:22 PM Tomas Vondra\n<[email protected]> wrote:\n> Right, this will happen:\n>\n> pg_combinebackup: error: unable to use accelerated copy when manifest\n> checksums are to be calculated. Use --no-manifest\n>\n> Are you saying we should just silently override the copy method and do\n> the copy block by block?\n\nYes.\n\n> I'm not strongly opposed to that, but it feels\n> wrong to just ignore that the user explicitly requested cloning, and I'm\n> not sure why should this be different from any other case when the user\n> requests incompatible combination of options and/or options that are not\n> supported on the current configuration.\n\nI don't feel like copying block-by-block when that's needed to compute\na checksum is ignoring what the user requested. I mean, if we'd had to\nperform reconstruction rather than copying an entire file, we would\nhave done that regardless of whether --clone had been there, and I\ndon't see why the need-to-compute-a-checksum case is any different. I\nthink we should view a flag like --clone as specifying how to copy a\nfile when we don't need to do anything but copy it. I don't think it\nshould dictate that we're not allowed to perform other processing when\nthat other processing is required.\n\n From my point of view, this is not a case of incompatible options\nhaving been specified. If you specify run pg_basebackup with both\n--format=p and --format=t, those are incompatible options; the backup\ncan be done one way or the other, but not both at once. But here\nthere's no such conflict. Block-by-block copying and fast-copying can\nhappen as part of the same operation, as in the example that I showed,\nwhere some files need the block-by-block copying and some can be\nfast-copied. The user is entitled to specify which fast-copying method\nthey would like to have used for the files where fast-copying is\npossible without getting a failure just because it isn't possible for\nevery single file.\n\nOr to say it the other way around, if there's 1 file that needs to be\ncopied block by block and 99 files that can be fast-copied, you want\nto force the user to the block-by-block method for all 100 files. I\nwant to force the use of the block-by-block method for the 1 file\nwhere that's the only valid method, and let them choose what they want\nto do for the other 99.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 22 Mar 2024 14:40:12 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "Hmm, this discussion seems to assume that we only use\ncopy_file_range() to copy/clone whole segment files, right? That's\ngreat and may even get most of the available benefit given typical\ndatabases with many segments of old data that never changes, but... I\nthink copy_write_range() allows us to go further than the other\nwhole-file clone techniques: we can stitch together parts of an old\nbackup segment file and an incremental backup to create a new file.\nIf you're interested in minimising disk use while also removing\ndependencies on the preceding chain of backups, then it might make\nsense to do that even if you *also* have to read the data to compute\nthe checksums, I think? That's why I mentioned it: if\ncopy_file_range() (ie sub-file-level block sharing) is a solution in\nsearch of a problem, has the world ever seen a better problem than\npg_combinebackup?\n\n\n",
"msg_date": "Sat, 23 Mar 2024 13:25:38 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 8:26 PM Thomas Munro <[email protected]> wrote:\n> Hmm, this discussion seems to assume that we only use\n> copy_file_range() to copy/clone whole segment files, right? That's\n> great and may even get most of the available benefit given typical\n> databases with many segments of old data that never changes, but... I\n> think copy_write_range() allows us to go further than the other\n> whole-file clone techniques: we can stitch together parts of an old\n> backup segment file and an incremental backup to create a new file.\n> If you're interested in minimising disk use while also removing\n> dependencies on the preceding chain of backups, then it might make\n> sense to do that even if you *also* have to read the data to compute\n> the checksums, I think? That's why I mentioned it: if\n> copy_file_range() (ie sub-file-level block sharing) is a solution in\n> search of a problem, has the world ever seen a better problem than\n> pg_combinebackup?\n\nThat makes sense; it's just a different part of the code than I\nthought we were talking about.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 23 Mar 2024 08:38:19 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "On 3/22/24 19:40, Robert Haas wrote:\n> On Fri, Mar 22, 2024 at 1:22 PM Tomas Vondra\n> <[email protected]> wrote:\n>> Right, this will happen:\n>>\n>> pg_combinebackup: error: unable to use accelerated copy when manifest\n>> checksums are to be calculated. Use --no-manifest\n>>\n>> Are you saying we should just silently override the copy method and do\n>> the copy block by block?\n> \n> Yes.\n> \n>> I'm not strongly opposed to that, but it feels\n>> wrong to just ignore that the user explicitly requested cloning, and I'm\n>> not sure why should this be different from any other case when the user\n>> requests incompatible combination of options and/or options that are not\n>> supported on the current configuration.\n> \n> I don't feel like copying block-by-block when that's needed to compute\n> a checksum is ignoring what the user requested. I mean, if we'd had to\n> perform reconstruction rather than copying an entire file, we would\n> have done that regardless of whether --clone had been there, and I\n> don't see why the need-to-compute-a-checksum case is any different. I\n> think we should view a flag like --clone as specifying how to copy a\n> file when we don't need to do anything but copy it. I don't think it\n> should dictate that we're not allowed to perform other processing when\n> that other processing is required.\n> \n> From my point of view, this is not a case of incompatible options\n> having been specified. If you specify run pg_basebackup with both\n> --format=p and --format=t, those are incompatible options; the backup\n> can be done one way or the other, but not both at once. But here\n> there's no such conflict. Block-by-block copying and fast-copying can\n> happen as part of the same operation, as in the example that I showed,\n> where some files need the block-by-block copying and some can be\n> fast-copied. The user is entitled to specify which fast-copying method\n> they would like to have used for the files where fast-copying is\n> possible without getting a failure just because it isn't possible for\n> every single file.\n> \n> Or to say it the other way around, if there's 1 file that needs to be\n> copied block by block and 99 files that can be fast-copied, you want\n> to force the user to the block-by-block method for all 100 files. I\n> want to force the use of the block-by-block method for the 1 file\n> where that's the only valid method, and let them choose what they want\n> to do for the other 99.\n> \n\nOK, that makes sense. Here's a patch that should work like this - in\ncopy_file we check if we need to calculate checksums, and either use the\nrequested copy method, or fall back to the block-by-block copy.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 23 Mar 2024 14:37:14 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "On 3/23/24 13:38, Robert Haas wrote:\n> On Fri, Mar 22, 2024 at 8:26 PM Thomas Munro <[email protected]> wrote:\n>> Hmm, this discussion seems to assume that we only use\n>> copy_file_range() to copy/clone whole segment files, right? That's\n>> great and may even get most of the available benefit given typical\n>> databases with many segments of old data that never changes, but... I\n>> think copy_write_range() allows us to go further than the other\n>> whole-file clone techniques: we can stitch together parts of an old\n>> backup segment file and an incremental backup to create a new file.\n>> If you're interested in minimising disk use while also removing\n>> dependencies on the preceding chain of backups, then it might make\n>> sense to do that even if you *also* have to read the data to compute\n>> the checksums, I think? That's why I mentioned it: if\n>> copy_file_range() (ie sub-file-level block sharing) is a solution in\n>> search of a problem, has the world ever seen a better problem than\n>> pg_combinebackup?\n> \n> That makes sense; it's just a different part of the code than I\n> thought we were talking about.\n> \n\nYeah, that's in write_reconstructed_file() and the patch does not touch\nthat at all. I agree it would be nice to use copy_file_range() in this\npart too, and it doesn't seem it'd be that hard to do, I think.\n\nIt seems we'd just need a \"fork\" that either calls pread/pwrite or\ncopy_file_range, depending on checksums and what was requested.\n\nBTW is there a reason why the code calls \"write\" and not \"pg_pwrite\"?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 23 Mar 2024 14:47:12 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "On 3/23/24 14:47, Tomas Vondra wrote:\n> On 3/23/24 13:38, Robert Haas wrote:\n>> On Fri, Mar 22, 2024 at 8:26 PM Thomas Munro <[email protected]> wrote:\n>>> Hmm, this discussion seems to assume that we only use\n>>> copy_file_range() to copy/clone whole segment files, right? That's\n>>> great and may even get most of the available benefit given typical\n>>> databases with many segments of old data that never changes, but... I\n>>> think copy_write_range() allows us to go further than the other\n>>> whole-file clone techniques: we can stitch together parts of an old\n>>> backup segment file and an incremental backup to create a new file.\n>>> If you're interested in minimising disk use while also removing\n>>> dependencies on the preceding chain of backups, then it might make\n>>> sense to do that even if you *also* have to read the data to compute\n>>> the checksums, I think? That's why I mentioned it: if\n>>> copy_file_range() (ie sub-file-level block sharing) is a solution in\n>>> search of a problem, has the world ever seen a better problem than\n>>> pg_combinebackup?\n>>\n>> That makes sense; it's just a different part of the code than I\n>> thought we were talking about.\n>>\n> \n> Yeah, that's in write_reconstructed_file() and the patch does not touch\n> that at all. I agree it would be nice to use copy_file_range() in this\n> part too, and it doesn't seem it'd be that hard to do, I think.\n> \n> It seems we'd just need a \"fork\" that either calls pread/pwrite or\n> copy_file_range, depending on checksums and what was requested.\n> \n\nHere's a patch to use copy_file_range in write_reconstructed_file too,\nwhen requested/possible. One thing that I'm not sure about is whether to\ndo pg_fatal() if --copy-file-range but the platform does not support it.\nThis is more like what pg_upgrade does, but maybe we should just ignore\nwhat the user requested and fallback to the regular copy (a bit like\nwhen having to do a checksum for some files). Or maybe the check should\njust happen earlier ...\n\nI've been thinking about what Thomas wrote - that maybe it'd be good to\ndo copy_file_range() even when calculating the checksum, and I think he\nmay be right. But the current patch does not do that, and while it\ndoesn't seem very difficult to do (at least when reconstructing the file\nfrom incremental backups) I don't have a very good intuition whether\nit'd be a win or not in typical cases.\n\nI have a naive question about the checksumming - if we used a\nmerkle-tree-like scheme, i.e. hashing blocks and not hashes of blocks,\nwouldn't that allow calculating the hashes even without having to read\nthe blocks, making copy_file_range more efficient? Sure, it's more\ncomplex, but a well known scheme. (OK, now I realized it'd mean we can't\nuse tools like sha224sum to hash the files and compare to manifest. I\nguess that's why we don't do this ...)\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 23 Mar 2024 18:57:49 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "On Sat, Mar 23, 2024 at 9:37 AM Tomas Vondra\n<[email protected]> wrote:\n> OK, that makes sense. Here's a patch that should work like this - in\n> copy_file we check if we need to calculate checksums, and either use the\n> requested copy method, or fall back to the block-by-block copy.\n\n+ Use efficient file cloning (also known as <quote>reflinks</quote> on\n+ some systems) instead of copying files to the new cluster. This can\n\nnew cluster -> output directory\n\nI think your version kind of messes up the debug logging. In my\nversion, every call to copy_file() would emit either \"would copy\n\\\"%s\\\" to \\\"%s\\\" using strategy %s\" and \"copying \\\"%s\\\" to \\\"%s\\\"\nusing strategy %s\". In your version, the dry_run mode emits a string\nsimilar to the former, but creates separate translatable strings for\neach copy method instead of using the same one with a different value\nof %s. In non-dry-run mode, I think your version loses the debug\nlogging altogether.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 25 Mar 2024 10:31:00 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "On Sat, Mar 23, 2024 at 9:47 AM Tomas Vondra\n<[email protected]> wrote:\n> BTW is there a reason why the code calls \"write\" and not \"pg_pwrite\"?\n\nI think it's mostly because I learned to code a really long time ago.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 25 Mar 2024 10:32:03 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "On Sat, Mar 23, 2024 at 6:57 PM Tomas Vondra\n<[email protected]> wrote:\n\n> On 3/23/24 14:47, Tomas Vondra wrote:\n> > On 3/23/24 13:38, Robert Haas wrote:\n> >> On Fri, Mar 22, 2024 at 8:26 PM Thomas Munro <[email protected]> wrote:\n[..]\n> > Yeah, that's in write_reconstructed_file() and the patch does not touch\n> > that at all. I agree it would be nice to use copy_file_range() in this\n> > part too, and it doesn't seem it'd be that hard to do, I think.\n> >\n> > It seems we'd just need a \"fork\" that either calls pread/pwrite or\n> > copy_file_range, depending on checksums and what was requested.\n> >\n>\n> Here's a patch to use copy_file_range in write_reconstructed_file too,\n> when requested/possible. One thing that I'm not sure about is whether to\n> do pg_fatal() if --copy-file-range but the platform does not support it.\n[..]\n\nHi Tomas, so I gave a go to the below patches today:\n- v20240323-2-0001-pg_combinebackup-allow-using-clone-copy_.patch\n- v20240323-2-0002-write_reconstructed_file.patch\n\nMy assessment:\n\nv20240323-2-0001-pg_combinebackup-allow-using-clone-copy_.patch -\nlooks like more or less good to go\nv20240323-2-0002-write_reconstructed_file.patch - needs work and\nwithout that clone/copy_file_range() good effects are unlikely\n\nGiven Debian 12, ~100GB DB, (pgbench -i -s 7000 , and some additional\ntables with GiST and GIN indexes , just to see more WAL record types)\nand with backups sizes in MB like that:\n\n106831 full\n2823 incr.1 # captured after some time with pgbench -R 100\n165 incr.2 # captured after some time with pgbench -R 100\n\nTest cmd: rm -rf outtest; sync; sync ; sync; echo 3 | sudo tee\n/proc/sys/vm/drop_caches ; time /usr/pgsql17/bin/pg_combinebackup -o\nouttest full incr.1 incr.2\n\nTest results of various copies on small I/O constrained XFS device:\nnormal copy: 31m47.407s\n--clone copy: error: file cloning not supported on this platform (it's\ndue #ifdef of having COPY_FILE_RANGE available)\n--copy-file-range: aborted, as it was taking too long , I was\nexpecting it to accelerate, but it did not... obviously this is the\ntransparent failover in case of calculating checksums...\n--manifest-checksums=NONE --copy-file-range: BUG, it keep on appending\nto just one file e.g. outtest/base/5/16427.29 with 200GB+ ?? and ended\nup with ENOSPC [more on this later]\n--manifest-checksums=NONE --copy-file-range without v20240323-2-0002: 27m23.887s\n--manifest-checksums=NONE --copy-file-range with v20240323-2-0002 and\nloop-fix: 5m1.986s but it creates corruption as it stands\n\nIssues:\n\n1. https://cirrus-ci.com/task/5937513653600256?logs=mingw_cross_warning#L327\ncompains about win32/mingw:\n\n[15:47:27.184] In file included from copy_file.c:22:\n[15:47:27.184] copy_file.c: In function ‘copy_file’:\n[15:47:27.184] ../../../src/include/common/logging.h:134:6: error:\nthis statement may fall through [-Werror=implicit-fallthrough=]\n[15:47:27.184] 134 | if (unlikely(__pg_log_level <= PG_LOG_DEBUG)) \\\n[15:47:27.184] | ^\n[15:47:27.184] copy_file.c:96:5: note: in expansion of macro ‘pg_log_debug’\n[15:47:27.184] 96 | pg_log_debug(\"would copy \\\"%s\\\" to \\\"%s\\\"\n(copy_file_range)\",\n[15:47:27.184] | ^~~~~~~~~~~~\n[15:47:27.184] copy_file.c:99:4: note: here\n[15:47:27.184] 99 | case COPY_MODE_COPYFILE:\n[15:47:27.184] | ^~~~\n[15:47:27.184] cc1: all warnings being treated as errors\n\n2. 
I do not know what's the consensus between --clone and\n--copy-file-range , but if we have #ifdef FICLONE clone_works() #elif\nHAVE_COPY_FILE_RANGE copy_file_range_only_works() then we should also\napply the same logic to the --help so that --clone is not visible\nthere (for consistency?). Also the \"error: file cloning not supported\non this platform \" is technically incorrect, Linux does support\nioctl(FICLONE) and copy_file_range(), but we are just not choosing one\nover another (so technically it is \"available\"). Nitpicking I know.\n\n3. [v20240323-2-0002-write_reconstructed_file.patch]: The mentioned\nENOSPACE spiral-of-death-bug symptoms are like that:\n\nstrace:\ncopy_file_range(8, [697671680], 9, NULL, 8192, 0) = 8192\ncopy_file_range(8, [697679872], 9, NULL, 8192, 0) = 8192\ncopy_file_range(8, [697688064], 9, NULL, 8192, 0) = 8192\ncopy_file_range(8, [697696256], 9, NULL, 8192, 0) = 8192\ncopy_file_range(8, [697704448], 9, NULL, 8192, 0) = 8192\ncopy_file_range(8, [697712640], 9, NULL, 8192, 0) = 8192\ncopy_file_range(8, [697720832], 9, NULL, 8192, 0) = 8192\ncopy_file_range(8, [697729024], 9, NULL, 8192, 0) = 8192\ncopy_file_range(8, [697737216], 9, NULL, 8192, 0) = 8192\ncopy_file_range(8, [697745408], 9, NULL, 8192, 0) = 8192\ncopy_file_range(8, [697753600], 9, NULL, 8192, 0) = 8192\ncopy_file_range(8, [697761792], 9, NULL, 8192, 0) = 8192\ncopy_file_range(8, [697769984], 9, NULL, 8192, 0) = 8192\n\nNotice that dest_off_t (poutoff) is NULL.\n\n(gdb) where\n#0 0x00007f2cd56f6733 in copy_file_range (infd=8,\npinoff=pinoff@entry=0x7f2cd53f54e8, outfd=outfd@entry=9,\npoutoff=poutoff@entry=0x0,\n length=length@entry=8192, flags=flags@entry=0) at\n../sysdeps/unix/sysv/linux/copy_file_range.c:28\n#1 0x00005555ecd077f4 in write_reconstructed_file\n(copy_mode=COPY_MODE_COPY_FILE_RANGE, dry_run=false, debug=true,\nchecksum_ctx=0x7ffc4cdb7700,\n offsetmap=<optimized out>, sourcemap=0x7f2cd54f6010,\nblock_length=<optimized out>, output_filename=0x7ffc4cdba910\n\"outtest/base/5/16427.29\",\n input_filename=0x7ffc4cdba510\n\"incr.2/base/5/INCREMENTAL.16427.29\") at reconstruct.c:648\n#2 reconstruct_from_incremental_file\n(input_filename=input_filename@entry=0x7ffc4cdba510\n\"incr.2/base/5/INCREMENTAL.16427.29\",\n output_filename=output_filename@entry=0x7ffc4cdba910\n\"outtest/base/5/16427.29\",\nrelative_path=relative_path@entry=0x7ffc4cdbc670 \"base/5\",\n bare_file_name=bare_file_name@entry=0x5555ee2056ef \"16427.29\",\nn_prior_backups=n_prior_backups@entry=2,\n prior_backup_dirs=prior_backup_dirs@entry=0x7ffc4cdbf248,\nmanifests=0x5555ee137a10, manifest_path=0x7ffc4cdbad10\n\"base/5/16427.29\",\n checksum_type=CHECKSUM_TYPE_NONE, checksum_length=0x7ffc4cdb9864,\nchecksum_payload=0x7ffc4cdb9868, debug=true, dry_run=false,\n copy_method=COPY_MODE_COPY_FILE_RANGE) at reconstruct.c:327\n\n.. it's a spiral of death till ENOSPC. Reverting the\nv20240323-2-0002-write_reconstructed_file.patch helps. The problem\nlies in that do-wb=-inifity-loop (?) along with NULL for destination\noff_t. This seem to solves that thingy(?):\n\n- do\n- {\n- wb = copy_file_range(s->fd,\n&offsetmap[i], wfd, NULL, BLCKSZ, 0);\n+ //do\n+ //{\n+ wb = copy_file_range(s->fd,\n&offsetmap[i], wfd, &offsetmap[i], BLCKSZ, 0);\n if (wb < 0)\n pg_fatal(\"error while copying\nfile range from \\\"%s\\\" to \\\"%s\\\": %m\",\n\ninput_filename, output_filename);\n- } while (wb > 0);\n+ //} while (wb > 0);\n #else\n\n...so that way I've got it down to 5mins.\n\n3. .. 
but on startup I've got this after trying a psql login: invalid\npage in block 0 of relation base/5/1259. I've again reverted the\nv20240323-2-0002 to see if that helped for the next round of\npg_combinebackup --manifest-checksums=NONE --copy-file-range and after\n32mins of waiting it did indeed help: I was able to log in, and the\nselect counts worked and matched the data properly. I've reapplied the\nv20240323-2-0002 (with my fix to prevent that endless loop) and the\nissue was again(!) there. Probably it's related to the destination\noffset. I couldn't find more time to look at it today and the setup\nwas big (100GB) on a slow device, so just letting you know as fast as\npossible.\n\n4. More efficiency is on the table (optional patch note; just\nfor completeness; I don't think we have time for that?): even if\nv20240323-2-0002 would work, the problem is that it would be issuing a\nsyscall for every 8kB. We seem to be performing lots of per-8KB\nsyscalls which hinder performance (both in copy_file_range and in\nnormal copy):\n\npread64(8, \"\"..., 8192, 369115136) = 8192 // 369115136 + 8192 =\n369123328 (matches next pread offset)\nwrite(9, \"\"..., 8192) = 8192\npread64(8, \"\"..., 8192, 369123328) = 8192 // 369123328 + 8192 = 369131520\nwrite(9, \"\"..., 8192) = 8192\npread64(8, \"\"..., 8192, 369131520) = 8192 // and so on\nwrite(9, \"\"..., 8192) = 8192\n\nApparently there's no merging of adjacent I/Os, so pg_combinebackup\nwastes lots of time issuing many small syscalls when it could instead\ndo, say, a single larger pread/write (or even copy_file_range()). I think\nit was not evident in my earlier testing (200GB; 39min vs ~40s) as I\nhad much smaller modifications in my incremental (think of 99% of\nstatic data).\n\n5. I think we should change the subject with the new patch revision, so\nthat such functionality for incremental backups is not buried in\nthe pg_upgrade thread ;)\n\n-J.\n\n\n",
"msg_date": "Tue, 26 Mar 2024 15:09:54 +0100",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
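A short C sketch of the correction hinted at in the message above, with hypothetical names rather than the actual reconstruct.c code: pass an explicit destination offset instead of NULL, so the kernel does not keep appending at the output file's position, and loop only until the requested BLCKSZ has actually been copied instead of looping while the call returns more than zero.

/* Sketch only; not the actual write_reconstructed_file() code. */
#define _GNU_SOURCE
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

#define BLCKSZ 8192

static void
copy_one_block(int src_fd, off_t src_off, int out_fd, off_t out_off,
               const char *in_name, const char *out_name)
{
    size_t  remaining = BLCKSZ;

    while (remaining > 0)
    {
        /* Explicit offsets on both sides; neither fd's file position is used. */
        ssize_t wb = copy_file_range(src_fd, &src_off, out_fd, &out_off,
                                     remaining, 0);

        if (wb < 0)
        {
            fprintf(stderr, "error while copying file range from \"%s\" to \"%s\"\n",
                    in_name, out_name);
            exit(1);
        }
        if (wb == 0)
        {
            fprintf(stderr, "unexpected end of file in \"%s\"\n", in_name);
            exit(1);
        }
        remaining -= (size_t) wb;   /* handle short copies without looping forever */
    }
}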
{
"msg_contents": "On 3/26/24 15:09, Jakub Wartak wrote:\n> On Sat, Mar 23, 2024 at 6:57 PM Tomas Vondra\n> <[email protected]> wrote:\n> \n>> On 3/23/24 14:47, Tomas Vondra wrote:\n>>> On 3/23/24 13:38, Robert Haas wrote:\n>>>> On Fri, Mar 22, 2024 at 8:26 PM Thomas Munro <[email protected]> wrote:\n> [..]\n>>> Yeah, that's in write_reconstructed_file() and the patch does not touch\n>>> that at all. I agree it would be nice to use copy_file_range() in this\n>>> part too, and it doesn't seem it'd be that hard to do, I think.\n>>>\n>>> It seems we'd just need a \"fork\" that either calls pread/pwrite or\n>>> copy_file_range, depending on checksums and what was requested.\n>>>\n>>\n>> Here's a patch to use copy_file_range in write_reconstructed_file too,\n>> when requested/possible. One thing that I'm not sure about is whether to\n>> do pg_fatal() if --copy-file-range but the platform does not support it.\n> [..]\n> \n> Hi Tomas, so I gave a go to the below patches today:\n> - v20240323-2-0001-pg_combinebackup-allow-using-clone-copy_.patch\n> - v20240323-2-0002-write_reconstructed_file.patch\n> \n> My assessment:\n> \n> v20240323-2-0001-pg_combinebackup-allow-using-clone-copy_.patch -\n> looks like more or less good to go\n\nThere's some issues with the --dry-run, pointed out by Robert. Should be\nfixed in the attached version.\n\n> v20240323-2-0002-write_reconstructed_file.patch - needs work and\n> without that clone/copy_file_range() good effects are unlikely\n> \n> Given Debian 12, ~100GB DB, (pgbench -i -s 7000 , and some additional\n> tables with GiST and GIN indexes , just to see more WAL record types)\n> and with backups sizes in MB like that:\n> \n> 106831 full\n> 2823 incr.1 # captured after some time with pgbench -R 100\n> 165 incr.2 # captured after some time with pgbench -R 100\n> \n> Test cmd: rm -rf outtest; sync; sync ; sync; echo 3 | sudo tee\n> /proc/sys/vm/drop_caches ; time /usr/pgsql17/bin/pg_combinebackup -o\n> outtest full incr.1 incr.2\n> \n> Test results of various copies on small I/O constrained XFS device:\n> normal copy: 31m47.407s\n> --clone copy: error: file cloning not supported on this platform (it's\n> due #ifdef of having COPY_FILE_RANGE available)\n> --copy-file-range: aborted, as it was taking too long , I was\n> expecting it to accelerate, but it did not... obviously this is the\n> transparent failover in case of calculating checksums...\n> --manifest-checksums=NONE --copy-file-range: BUG, it keep on appending\n> to just one file e.g. outtest/base/5/16427.29 with 200GB+ ?? and ended\n> up with ENOSPC [more on this later]\n\nThat's really strange.\n\n> --manifest-checksums=NONE --copy-file-range without v20240323-2-0002: 27m23.887s\n> --manifest-checksums=NONE --copy-file-range with v20240323-2-0002 and\n> loop-fix: 5m1.986s but it creates corruption as it stands\n> \n\nThanks. I plan to do more similar tests, once my machines get done with\nsome other stuff.\n\n> Issues:\n> \n> 1. 
https://cirrus-ci.com/task/5937513653600256?logs=mingw_cross_warning#L327\n> compains about win32/mingw:\n> \n> [15:47:27.184] In file included from copy_file.c:22:\n> [15:47:27.184] copy_file.c: In function ‘copy_file’:\n> [15:47:27.184] ../../../src/include/common/logging.h:134:6: error:\n> this statement may fall through [-Werror=implicit-fallthrough=]\n> [15:47:27.184] 134 | if (unlikely(__pg_log_level <= PG_LOG_DEBUG)) \\\n> [15:47:27.184] | ^\n> [15:47:27.184] copy_file.c:96:5: note: in expansion of macro ‘pg_log_debug’\n> [15:47:27.184] 96 | pg_log_debug(\"would copy \\\"%s\\\" to \\\"%s\\\"\n> (copy_file_range)\",\n> [15:47:27.184] | ^~~~~~~~~~~~\n> [15:47:27.184] copy_file.c:99:4: note: here\n> [15:47:27.184] 99 | case COPY_MODE_COPYFILE:\n> [15:47:27.184] | ^~~~\n> [15:47:27.184] cc1: all warnings being treated as errors\n> \n\nYup, missing break.\n\n> 2. I do not know what's the consensus between --clone and\n> --copy-file-range , but if we have #ifdef FICLONE clone_works() #elif\n> HAVE_COPY_FILE_RANGE copy_file_range_only_works() then we should also\n> apply the same logic to the --help so that --clone is not visible\n> there (for consistency?). Also the \"error: file cloning not supported\n> on this platform \" is technically incorrect, Linux does support\n> ioctl(FICLONE) and copy_file_range(), but we are just not choosing one\n> over another (so technically it is \"available\"). Nitpicking I know.\n> \n\nThat's a good question, I'm not sure. But whatever we do, we should do\nthe same thing in pg_upgrade. Maybe there's some sort of precedent?\n\n> 3. [v20240323-2-0002-write_reconstructed_file.patch]: The mentioned\n> ENOSPACE spiral-of-death-bug symptoms are like that:\n> \n> strace:\n> copy_file_range(8, [697671680], 9, NULL, 8192, 0) = 8192\n> copy_file_range(8, [697679872], 9, NULL, 8192, 0) = 8192\n> copy_file_range(8, [697688064], 9, NULL, 8192, 0) = 8192\n> copy_file_range(8, [697696256], 9, NULL, 8192, 0) = 8192\n> copy_file_range(8, [697704448], 9, NULL, 8192, 0) = 8192\n> copy_file_range(8, [697712640], 9, NULL, 8192, 0) = 8192\n> copy_file_range(8, [697720832], 9, NULL, 8192, 0) = 8192\n> copy_file_range(8, [697729024], 9, NULL, 8192, 0) = 8192\n> copy_file_range(8, [697737216], 9, NULL, 8192, 0) = 8192\n> copy_file_range(8, [697745408], 9, NULL, 8192, 0) = 8192\n> copy_file_range(8, [697753600], 9, NULL, 8192, 0) = 8192\n> copy_file_range(8, [697761792], 9, NULL, 8192, 0) = 8192\n> copy_file_range(8, [697769984], 9, NULL, 8192, 0) = 8192\n> \n> Notice that dest_off_t (poutoff) is NULL.\n> \n> (gdb) where\n> #0 0x00007f2cd56f6733 in copy_file_range (infd=8,\n> pinoff=pinoff@entry=0x7f2cd53f54e8, outfd=outfd@entry=9,\n> poutoff=poutoff@entry=0x0,\n> length=length@entry=8192, flags=flags@entry=0) at\n> ../sysdeps/unix/sysv/linux/copy_file_range.c:28\n> #1 0x00005555ecd077f4 in write_reconstructed_file\n> (copy_mode=COPY_MODE_COPY_FILE_RANGE, dry_run=false, debug=true,\n> checksum_ctx=0x7ffc4cdb7700,\n> offsetmap=<optimized out>, sourcemap=0x7f2cd54f6010,\n> block_length=<optimized out>, output_filename=0x7ffc4cdba910\n> \"outtest/base/5/16427.29\",\n> input_filename=0x7ffc4cdba510\n> \"incr.2/base/5/INCREMENTAL.16427.29\") at reconstruct.c:648\n> #2 reconstruct_from_incremental_file\n> (input_filename=input_filename@entry=0x7ffc4cdba510\n> \"incr.2/base/5/INCREMENTAL.16427.29\",\n> output_filename=output_filename@entry=0x7ffc4cdba910\n> \"outtest/base/5/16427.29\",\n> relative_path=relative_path@entry=0x7ffc4cdbc670 \"base/5\",\n> 
bare_file_name=bare_file_name@entry=0x5555ee2056ef \"16427.29\",\n> n_prior_backups=n_prior_backups@entry=2,\n> prior_backup_dirs=prior_backup_dirs@entry=0x7ffc4cdbf248,\n> manifests=0x5555ee137a10, manifest_path=0x7ffc4cdbad10\n> \"base/5/16427.29\",\n> checksum_type=CHECKSUM_TYPE_NONE, checksum_length=0x7ffc4cdb9864,\n> checksum_payload=0x7ffc4cdb9868, debug=true, dry_run=false,\n> copy_method=COPY_MODE_COPY_FILE_RANGE) at reconstruct.c:327\n> \n> .. it's a spiral of death till ENOSPC. Reverting the\n> v20240323-2-0002-write_reconstructed_file.patch helps. The problem\n> lies in that do-wb=-inifity-loop (?) along with NULL for destination\n> off_t. This seem to solves that thingy(?):\n> \n> - do\n> - {\n> - wb = copy_file_range(s->fd,\n> &offsetmap[i], wfd, NULL, BLCKSZ, 0);\n> + //do\n> + //{\n> + wb = copy_file_range(s->fd,\n> &offsetmap[i], wfd, &offsetmap[i], BLCKSZ, 0);\n> if (wb < 0)\n> pg_fatal(\"error while copying\n> file range from \\\"%s\\\" to \\\"%s\\\": %m\",\n> \n> input_filename, output_filename);\n> - } while (wb > 0);\n> + //} while (wb > 0);\n> #else\n> \n> ...so that way I've got it down to 5mins.\n> \n\nYeah, that retry logic is wrong. I ended up copying the check from the\n\"regular\" copy branch, which simply bails out if copy_file_range returns\nanything but the expected 8192.\n\nI wonder if this should deal with partial writes, though. I mean, it's\nallowed copy_file_range() copies only some of the bytes - I don't know\nhow often / in what situations that happens, though ... And if we want\nto handle that for copy_file_range(), pwrite() needs the same treatment.\n\n> 3. .. but onn startup I've got this after trying psql login: invalid\n> page in block 0 of relation base/5/1259 . I've again reverted the\n> v20240323-2-0002 to see if that helped for next-round of\n> pg_combinebackup --manifest-checksums=NONE --copy-file-range and after\n> 32mins of waiting it did help indeed: I was able to login and select\n> counts worked and matched properly the data. I've reapplied the\n> v20240323-2-0002 (with my fix to prevent that endless loop) and the\n> issue was again(!) there. Probably it's related to the destination\n> offset. I couldn't find more time to look on it today and the setup\n> was big 100GB on slow device, so just letting You know as fast as\n> possible.\n> \n\nCan you see if you can still reproduce this with the attached version?\n\n> 4. More efficiency is on the table option (optional patch node ; just\n> for completeness; I dont think we have time for that? ): even if\n> v20240323-2-0002 would work, the problem is that it would be sending\n> syscall for every 8kB. We seem to be performing lots of per-8KB\n> syscalls which hinder performance (both in copy_file_range and in\n> normal copy):\n> \n> pread64(8, \"\"..., 8192, 369115136) = 8192 // 369115136 + 8192 =\n> 369123328 (matches next pread offset)\n> write(9, \"\"..., 8192) = 8192\n> pread64(8, \"\"..., 8192, 369123328) = 8192 // 369123328 + 8192 = 369131520\n> write(9, \"\"..., 8192) = 8192\n> pread64(8, \"\"..., 8192, 369131520) = 8192 // and so on\n> write(9, \"\"..., 8192) = 8192\n> \n> Apparently there's no merging of adjacent IO/s, so pg_combinebackup\n> wastes lots of time on issuing instead small syscalls but it could\n> let's say do single pread/write (or even copy_file_range()). 
I think\n> it was not evident in my earlier testing (200GB; 39min vs ~40s) as I\n> had much smaller modifications in my incremental (think of 99% of\n> static data).\n> \n\nYes, I've been thinking about exactly this optimization, but I think\nwe're way past proposing this for PG17. The changes that would require\nin reconstruct_from_incremental_file are way too significant. Has to\nwait for PG18 ;-)\n\nI do think there's more on the table, as mentioned by Thomas a couple\ndays ago - maybe we shouldn't approach clone/copy_file_range merely as\nan optimization to save time, it might be entirely reasonable to do this\nsimply to allow the filesystem to do CoW magic and save space (even if\nwe need to read the data and recalculate the checksum, which now\ndisables these copy methods).\n\n\n> 5. I think we should change the subject with new patch revision, so\n> that such functionality for incremental backups is not buried down in\n> the pg_upgrade thread ;)\n> \n\nOK.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 26 Mar 2024 19:02:59 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_combinebackup --copy-file-range"
},
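As for giving pwrite() "the same treatment" for partial writes, here is a minimal sketch of what that could look like (hypothetical names, not the actual pg_combinebackup code): retry until the whole 8kB block has been written rather than assuming a single call writes it all.

/* Sketch only: write one block with pwrite(), retrying on short writes. */
#include <unistd.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define BLCKSZ 8192

static void
write_block(int fd, const char *buf, off_t offset, const char *filename)
{
    size_t  written = 0;

    while (written < BLCKSZ)
    {
        ssize_t wb = pwrite(fd, buf + written, BLCKSZ - written,
                            offset + (off_t) written);

        if (wb < 0 && errno == EINTR)
            continue;               /* interrupted, just retry */
        if (wb <= 0)
        {
            fprintf(stderr, "could not write to file \"%s\"\n", filename);
            exit(1);
        }
        written += (size_t) wb;     /* a short write just means "keep going" */
    }
}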
{
"msg_contents": "On 3/25/24 15:31, Robert Haas wrote:\n> On Sat, Mar 23, 2024 at 9:37 AM Tomas Vondra\n> <[email protected]> wrote:\n>> OK, that makes sense. Here's a patch that should work like this - in\n>> copy_file we check if we need to calculate checksums, and either use the\n>> requested copy method, or fall back to the block-by-block copy.\n> \n> + Use efficient file cloning (also known as <quote>reflinks</quote> on\n> + some systems) instead of copying files to the new cluster. This can\n> \n> new cluster -> output directory\n> \n\nOoops, forgot to fix this. Will do in next version.\n\n> I think your version kind of messes up the debug logging. In my\n> version, every call to copy_file() would emit either \"would copy\n> \\\"%s\\\" to \\\"%s\\\" using strategy %s\" and \"copying \\\"%s\\\" to \\\"%s\\\"\n> using strategy %s\". In your version, the dry_run mode emits a string\n> similar to the former, but creates separate translatable strings for\n> each copy method instead of using the same one with a different value\n> of %s. In non-dry-run mode, I think your version loses the debug\n> logging altogether.\n> \n\nYeah. Sorry for not being careful enough about that, I was focusing on\nthe actual copy logic and forgot about this.\n\nThe patch I shared a couple minutes ago should fix this, effectively\nrestoring the original debug behavior. I liked the approach with calling\nstrategy_implementation a bit more, I wonder if it'd be better to go\nback to that for the \"accelerated\" copy methods, somehow.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 26 Mar 2024 19:09:45 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "On Tue, Mar 26, 2024 at 7:03 PM Tomas Vondra\n<[email protected]> wrote:\n[..]\n>\n> That's really strange.\n\nHi Tomas, but it looks like it's fixed now :)\n\n> > --manifest-checksums=NONE --copy-file-range without v20240323-2-0002: 27m23.887s\n> > --manifest-checksums=NONE --copy-file-range with v20240323-2-0002 and\n> > loop-fix: 5m1.986s but it creates corruption as it stands\n> >\n>\n> Thanks. I plan to do more similar tests, once my machines get done with\n> some other stuff.\n\nPlease do so as I do not trust my fingers :-)\n\n> > Issues:\n> >\n> > 1. https://cirrus-ci.com/task/5937513653600256?logs=mingw_cross_warning#L327\n> > compains about win32/mingw:\n> >\n[..]\n> >\n>\n> Yup, missing break.\n\nNow it's https://cirrus-ci.com/task/4997185324974080?logs=headers_headerscheck#L10\n, reproducible with \"make -s headerscheck\nEXTRAFLAGS='-fmax-errors=10'\":\n/tmp/postgres/src/bin/pg_combinebackup/reconstruct.h:34:91: error:\nunknown type name ‘CopyMode’ / CopyMode copy_mode);\nto me it looks like reconstruct.h needs to include definition of\nCopyMode which is in \"#include \"copy_file.h\"\n\n> > 2. I do not know what's the consensus between --clone and\n> > --copy-file-range , but if we have #ifdef FICLONE clone_works() #elif\n> > HAVE_COPY_FILE_RANGE copy_file_range_only_works() then we should also\n> > apply the same logic to the --help so that --clone is not visible\n> > there (for consistency?). Also the \"error: file cloning not supported\n> > on this platform \" is technically incorrect, Linux does support\n> > ioctl(FICLONE) and copy_file_range(), but we are just not choosing one\n> > over another (so technically it is \"available\"). Nitpicking I know.\n> >\n>\n> That's a good question, I'm not sure. But whatever we do, we should do\n> the same thing in pg_upgrade. Maybe there's some sort of precedent?\n\nSigh, you are right... It's consistent hell.\n\n> > 3. [v20240323-2-0002-write_reconstructed_file.patch]: The mentioned\n> > ENOSPACE spiral-of-death-bug symptoms are like that:\n[..]\n>\n> Yeah, that retry logic is wrong. I ended up copying the check from the\n> \"regular\" copy branch, which simply bails out if copy_file_range returns\n> anything but the expected 8192.\n>\n> I wonder if this should deal with partial writes, though. I mean, it's\n> allowed copy_file_range() copies only some of the bytes - I don't know\n> how often / in what situations that happens, though ... And if we want\n> to handle that for copy_file_range(), pwrite() needs the same treatment.\n\nMaybe that helps?\nhttps://github.com/coreutils/coreutils/blob/606f54d157c3d9d558bdbe41da8d108993d86aeb/src/copy.c#L1427\n, it's harder than I anticipated (we can ignore the sparse logic\nthough, I think)\n\n> > 3. .. but onn startup I've got this after trying psql login: invalid\n> > page in block 0 of relation base/5/1259 .\n[..]\n>\n> Can you see if you can still reproduce this with the attached version?\n\nLooks like it's fixed now and it works great (~3min, multiple times)!\n\nBTW: I've tried to also try it over NFSv4 over loopback with XFS as\ncopy_file_range() does server-side optimization probably, but somehow\nit was so slow there that's it is close to being unusable (~9GB out of\n104GB reconstructed after 45mins) - maybe it's due to NFS mount opts,\ni don't think we should worry too much. I think it's related to\nmissing the below optimization if that matters. 
I think it's too early\nto warn users about NFS (I've spent on it just 10 mins), but on the\nother hand people might complain it's broken...\n\n> > Apparently there's no merging of adjacent IO/s, so pg_combinebackup\n> > wastes lots of time on issuing instead small syscalls but it could\n> > let's say do single pread/write (or even copy_file_range()). I think\n> > it was not evident in my earlier testing (200GB; 39min vs ~40s) as I\n> > had much smaller modifications in my incremental (think of 99% of\n> > static data).\n> >\n>\n> Yes, I've been thinking about exactly this optimization, but I think\n> we're way past proposing this for PG17. The changes that would require\n> in reconstruct_from_incremental_file are way too significant. Has to\n> wait for PG18 ;-)\n\nSure thing!\n\n> I do think there's more on the table, as mentioned by Thomas a couple\n> days ago - maybe we shouldn't approach clone/copy_file_range merely as\n> an optimization to save time, it might be entirely reasonable to do this\n> simply to allow the filesystem to do CoW magic and save space (even if\n> we need to read the data and recalculate the checksum, which now\n> disables these copy methods).\n\nSure ! I think time will still be a priority though, as\npg_combinebackup duration impacts RTO while disk space is relatively\ncheap.\n\nOne could argue that reconstructing 50TB will be a challenge though.\nNow my tests indicate space saving is already happening with 0002\npatch - 100GB DB / full backup stats look like that (so we are good I\nthink when using CoW - not so without using CoW) -- or i misunderstood\nsomething?:\n\nroot@jw-test-1:/backups# du -sm /backups/\n214612 /backups/\nroot@jw-test-1:/backups# du -sm *\n106831 full\n2823 incr.1\n165 incr.2\n104794 outtest\nroot@jw-test-1:/backups# df -h . # note this double confirms that just\n114GB is used (XFS), great!\nFilesystem Size Used Avail Use% Mounted on\n/dev/sdb1 500G 114G 387G 23% /backups\nroot@jw-test-1:/backups# # https://github.com/pwaller/sharedextents\nroot@jw-test-1:/backups# ./sharedextents-linux-amd64\nfull/base/5/16427.68 outtest/base/5/16427.68\n1056915456 / 1073741824 bytes (98.43%) # extents reuse\n\nNow I was wondering a little bit if the huge XFS extent allocation\nwon't hurt read performance (probably they were created due many\nindependent copy_file_range() calls):\n\nroot@jw-test-1:/backups# filefrag full/base/5/16427.68\nfull/base/5/16427.68: 1 extent found\nroot@jw-test-1:/backups# filefrag outtest/base/5/16427.68\nouttest/base/5/16427.68: 3979 extents found\n\nHowever in first look on seq reads of such CoW file it's still good\n(I'm assuming such backup after reconstruction would be copied back to\nthe proper DB server from this backup server):\n\nroot@jw-test-1:/backups# echo 3 > /proc/sys/vm/drop_caches\nroot@jw-test-1:/backups# time cat outtest/base/5/16427.68 > /dev/null\nreal 0m4.286s\nroot@jw-test-1:/backups# echo 3 > /proc/sys/vm/drop_caches\nroot@jw-test-1:/backups# time cat full/base/5/16427.68 > /dev/null\nreal 0m4.325s\n\nNow Thomas wrote there \"then it might make sense to do that even if\nyou *also* have to read the data to compute the checksums, I think? \"\n... sounds like read(), checksum() and still do copy_file_range()\ninstead of pwrite? PG 18 ? :D\n\n-J.\n\n\n",
"msg_date": "Wed, 27 Mar 2024 12:05:24 +0100",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_combinebackup --copy-file-range"
},
{
"msg_contents": "On Tue, Mar 26, 2024 at 2:09 PM Tomas Vondra\n<[email protected]> wrote:\n> The patch I shared a couple minutes ago should fix this, effectively\n> restoring the original debug behavior. I liked the approach with calling\n> strategy_implementation a bit more, I wonder if it'd be better to go\n> back to that for the \"accelerated\" copy methods, somehow.\n\nSomehow I don't see this patch?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 28 Mar 2024 16:45:17 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "\n\nOn 3/28/24 21:45, Robert Haas wrote:\n> On Tue, Mar 26, 2024 at 2:09 PM Tomas Vondra\n> <[email protected]> wrote:\n>> The patch I shared a couple minutes ago should fix this, effectively\n>> restoring the original debug behavior. I liked the approach with calling\n>> strategy_implementation a bit more, I wonder if it'd be better to go\n>> back to that for the \"accelerated\" copy methods, somehow.\n> \n> Somehow I don't see this patch?\n> \n\nIt's here:\n\nhttps://www.postgresql.org/message-id/90866c27-265a-4adb-89d0-18c8dd22bc19%40enterprisedb.com\n\nI did change the subject to reflect that it's no longer about\npg_upgrade, maybe that breaks the threading for you somehow?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 28 Mar 2024 22:48:28 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade --copy-file-range"
},
{
"msg_contents": "On Tue, Mar 26, 2024 at 2:03 PM Tomas Vondra\n<[email protected]> wrote:\n> [ new patches ]\n\nTomas, thanks for pointing me to this email; as you speculated, gmail\nbreaks threading if the subject line changes.\n\nThe documentation still needs work here:\n\n- It refers to --link mode, which is not a thing.\n\n- It should talk about the fact that in some cases block-by-block\ncopying will still be needed, possibly mentioning that it specifically\nhappens when the old backup manifest is not available or does not\ncontain checksums or does not contain checksums of the right type, or\nmaybe just being a bit vague.\n\nIn copy_file.c:\n\n- You added an unnecessary blank line to the beginning of a comment block.\n\n- You could keep the strategy_implementation variable here. I don't\nthink that's 100% necessary, but now that you've got the code\nstructured this way, there's no compelling reason to remove it.\n\n- I don't know what the +/* XXX maybe this should do the check\ninternally, same as the other functions? */ comment is talking about.\n\n- Maybe these functions should have header comments.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 29 Mar 2024 10:23:16 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_combinebackup --copy-file-range"
},
{
"msg_contents": "On 3/29/24 15:23, Robert Haas wrote:\n> On Tue, Mar 26, 2024 at 2:03 PM Tomas Vondra\n> <[email protected]> wrote:\n>> [ new patches ]\n> \n> Tomas, thanks for pointing me to this email; as you speculated, gmail\n> breaks threading if the subject line changes.\n> \n> The documentation still needs work here:\n> \n> - It refers to --link mode, which is not a thing.\n> \n> - It should talk about the fact that in some cases block-by-block\n> copying will still be needed, possibly mentioning that it specifically\n> happens when the old backup manifest is not available or does not\n> contain checksums or does not contain checksums of the right type, or\n> maybe just being a bit vague.\n> \n> In copy_file.c:\n> \n> - You added an unnecessary blank line to the beginning of a comment block.\n> \n\nThanks, should be all cleaned up now, I think.\n\n> - You could keep the strategy_implementation variable here. I don't\n> think that's 100% necessary, but now that you've got the code\n> structured this way, there's no compelling reason to remove it.\n> \n\nYeah, I think you're right. The strategy_implementation seemed a bit\nweird to me because we now have 4 functions with different signatures.\nMost only take srd/dst, but copy_file_blocks() also takes checksum. And\nit seemed better to handle everything the same way, rather than treating\ncopy_file_blocks as an exception.\n\nBut it's not that bad, so 0001 has strategy_implementation again. But\nI'll get back to this in a minute.\n\n> - I don't know what the +/* XXX maybe this should do the check\n> internally, same as the other functions? */ comment is talking about.\n> \n\nI think this is stale. The XXX is about how the various functions\ndetect/report support. In most we have the ifdefs/pg_fatal() inside the\nfunction, but CopyFile() has nothing like that, because the detection\nhappens earlier. I wasn't sure if maybe we should make all these\nfunctions more alike, but I don't think we should.\n\n> - Maybe these functions should have header comments.\n> \n\nRight, added.\n\n\nI was thinking about the comment [1] from a couple a days ago, where\nThomas suggested that maybe we should try doing the CoW stuff\n(clone/copy_file_range) even in cases when we need to read the block,\nsay to calculate checksum, or even reconstruct from incremental backups.\n\nI wasn't sure how big the benefits of the patches shared so far might\nbe, so I decided to do some tests. I did a fairly simple thing:\n\n1) initialize a cluster with pgbench scale 5000 (~75GB)\n\n2) create a full backup\n\n3) do a run that updates ~1%, 10% and 20% of the blocks\n\n4) create an incremental backup after each run\n\n5) do pg_combinebackup for for each of the increments, with\nblock-by-block copy and copy_file_range, measure how long it takes and\nhow much disk space it consumes\n\nI did this on xfs and btrfs, and it quickly became obvious that there's\nvery little benefit unless --no-manifest is used. Which makes perfect\nsense, because pgbench is uniform updates so all segments need to be\nreconstructed from increments (so copy_file.c is mostly irrelevant), and\nwrite_reconstructed_file only uses copy_file_range() without checksums.\n\nI don't know how common --no-manifest is going to be, but I guess people\nwill want to keep manifests in at least some backup schemes (e.g. to\nrebuild full backup instead of having to take a full backup regularly).\n\nSo I decided to take a stab at Thomas' idea, i.e. 
reading the data to\ncalculate checksums, but then using copy_file_range instead of just\nwriting the data onto disk. This is in 0003, which relaxes the\nconditions in 0002 shared a couple days ago. And this helped a lot.\n\nThe attached PDF shows results for xfs/btrfs. Charts on the left are\ndisk space occupied by the reconstructed backup, measured as difference\nbetween \"df\" before and after running pg_combinebackup. The duration of\nthe pg_combinebackup execution is on the right. First row is without\nmanifest (i.e. --no-manifest), the second row is with manifest.\n\nThe 1%, 10% and 20% groups are for the various increments, updating\ndifferent fractions of the database.\n\nThe database is ~75GB, so that's what we expect a plain copy to have. If\nthere are some CoW benefits of copy_file_range, allowing the fs to reuse\nsome of the space or, the disk space should be reduced. And similarly,\nthere could/should be some improvements in pg_combinebackup duration.\n\nEach bar is a different copy method and patch:\n\n* copy on master/0001/0002/0003 - we don't expect any difference between\nthese, it should all perform the same and use the \"full\" space\n\n* copy_file_range on 0001/0002/0003 - 0001 should perform the same as\ncopy, because it's only about full-segment copies, and we don't any of\nthose here, 0002/0003 should help, depending on --no-manifest\n\nAnd indeed, this is what we see. 0002/0003 use only a fraction of disk\nspace, roughly the same as the updated fraction (which matches the size\nof the increment). This is nice.\n\nFor duration, the benefits seem to depend on the file system. For btrfs\nit actually is faster, as expected. 0002/0003 saves maybe 30-50% of\ntime, compared to block-by-block copy. On XFS it's not that great, the\ncopy_file_range is actually slower by up to about 50%. And this is not\nabout the extra read - this affects the 0002/no-manifest case too, where\nthe read is not necessary.\n\nI think this is fine. It's a tradeoff, where on some filesystems you can\nsave time or space, and on other filesystems you can save both. That's a\ntradeoff for the users to decide, I think.\n\nI'll see how this works on EXT4/ZFS next ...\n\nBut thinking about this a bit more, I realized there's no reason not to\napply the same logic to the copy_file part. I mean, if we need to copy a\nfile but also calculate a checksum, we can simply do the clone using\nclone/copy_file_range, but then also read the file and calculate the\nchecksum ...\n\n0004 does this, by simply passing the checksum_cxt around, which also\nhas the nice consequence that all the functions now have the same\nsignature, which makes the strategy_implementation work for all cases. 
I\nneed to do more testing of this, but I like how this looks.\n\nOf course, maybe there's no agreement that this is the right way to\napproach this, and we should do the regular block-by-block copy?\n\nThere's one more change in 0003 - the checks for whether clone/copy_file_range\nare supported by the platform now happen right at the beginning when\nparsing the arguments, so that when a user specifies one of those\noptions, the error happens right away instead of sometime later when we\nhappen to hit one of those pg_fatal() places.\n\nI think this is the right place to do these checks, as it makes\nwrite_reconstructed_file much easier to understand (without all the\nifdefs etc.).\n\nBut there's an argument about whether this should fail with pg_fatal() or\njust fall back to the default copy method.\n\nBTW I wonder if it makes sense to only allow one of those methods? At\nthe moment the user can specify both --clone and --copy-file-range, and\nwhich one \"wins\" depends on the order in which they are specified. Seems\nconfusing at best. But maybe it'd make sense to allow both, and e.g. use\nclone() to copy whole segments and copy_file_range() for other places?\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/CA%2BhUKG%2B8KDk%2BpM6vZHWT6XtZzh-sdieUDohcjj0fia6aqK3Oxg%40mail.gmail.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 31 Mar 2024 01:37:25 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_combinebackup --copy-file-range"
},
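A minimal C sketch of the approach described above for the 0003/0004 patches, assuming hypothetical names: let copy_file_range() do the (possibly CoW) data movement, and make a separate read pass only to feed the checksum. The checksum_ctx type and checksum_update() below are stand-ins, not the real pg_checksum_context API, and error handling is reduced to bare exits.

/* Sketch only: CoW-copy a file, then read it back just to compute a checksum. */
#define _GNU_SOURCE
#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for a real checksum context (e.g. CRC32C or SHA). */
typedef struct { unsigned long sum; } checksum_ctx;

static void
checksum_update(checksum_ctx *ctx, const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        ctx->sum += buf[i];
}

static void
copy_file_with_checksum(const char *src, const char *dst, checksum_ctx *ctx)
{
    int     sfd = open(src, O_RDONLY);
    int     dfd = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0600);
    ssize_t nb;

    if (sfd < 0 || dfd < 0)
        exit(1);

    /* 1. Move the data with copy_file_range() so the fs can share blocks. */
    do
    {
        nb = copy_file_range(sfd, NULL, dfd, NULL, 1 << 20, 0);
    } while (nb > 0);
    if (nb < 0)
        exit(1);

    /* 2. Separate read pass (the source and the copy are identical), only to
     *    feed the checksum context; skipped when no checksum is wanted. */
    if (ctx != NULL)
    {
        unsigned char buf[8192];

        if (lseek(sfd, 0, SEEK_SET) < 0)
            exit(1);
        while ((nb = read(sfd, buf, sizeof(buf))) > 0)
            checksum_update(ctx, buf, (size_t) nb);
        if (nb < 0)
            exit(1);
    }

    close(sfd);
    close(dfd);
}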
{
"msg_contents": "On Sun, Mar 31, 2024 at 1:37 PM Tomas Vondra\n<[email protected]> wrote:\n> So I decided to take a stab at Thomas' idea, i.e. reading the data to\n> ...\n> I'll see how this works on EXT4/ZFS next ...\n\nWow, very cool! A couple of very quick thoughts/notes:\n\nZFS: the open source version only gained per-file block cloning in\n2.2, so if you're on an older release I expect copy_file_range() to\nwork but not really do the magic thing. On the FreeBSD version you\nalso have to turn cloning on with a sysctl because people were worried\nabout bugs in early versions so by default you still get actual\ncopying, not sure if you need something like that on the Linux\nversion... (Obviously ZFS is always completely COW-based, but before\nthe new block cloning stuff it could only share blocks by its own\nmagic deduplication if enabled, or by cloning/snapshotting a whole\ndataset/mountpoint; there wasn't a way to control it explicitly like\nthis.)\n\nAlignment: block sharing on any fs requires it. I haven't re-checked\nrecently but IIRC the incremental file format might have a\nnon-block-sized header? That means that if you copy_file_range() from\nboth the older backup and also the incremental backup, only the former\nwill share blocks, and the latter will silently be done by copying to\nnewly allocated blocks. If that's still true, I'm not sure how hard\nit would be to tweak the format to inject some padding and to make\nsure that there isn't any extra header before each block.\n\n\n",
"msg_date": "Sun, 31 Mar 2024 14:03:25 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_combinebackup --copy-file-range"
},
{
"msg_contents": "+ wb = copy_file_range(s->fd, &offsetmap[i], wfd, NULL, BLCKSZ, 0);\n\nCan you collect adjacent blocks in one multi-block call? And then I\nthink the contract is that you need to loop if it returns short.\n\n\n",
"msg_date": "Sun, 31 Mar 2024 14:56:06 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_combinebackup --copy-file-range"
},
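A sketch of the batching suggested above, with simplified stand-ins for the reconstruct.c source/offset maps: find runs of output blocks that are adjacent in the same source file, issue one larger copy_file_range() per run, and loop on short returns as the interface allows.

/* Sketch only: merge adjacent blocks into one copy_file_range() call per run. */
#define _GNU_SOURCE
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

#define BLCKSZ 8192

/* Copy 'len' bytes, looping because copy_file_range() may return short. */
static void
copy_range(int src_fd, off_t src_off, int out_fd, off_t out_off, size_t len)
{
    while (len > 0)
    {
        ssize_t nb = copy_file_range(src_fd, &src_off, out_fd, &out_off,
                                     len, 0);

        if (nb <= 0)
        {
            perror("copy_file_range");
            exit(1);
        }
        len -= (size_t) nb;
    }
}

/*
 * source_fd[i] / source_off[i] describe where output block i comes from.
 * Collect runs of blocks that are contiguous in the same source file and
 * copy each run with a single, possibly multi-block, call.
 */
static void
copy_batched(const int *source_fd, const off_t *source_off,
             unsigned nblocks, int out_fd)
{
    unsigned    i = 0;

    while (i < nblocks)
    {
        unsigned    j = i + 1;

        while (j < nblocks &&
               source_fd[j] == source_fd[i] &&
               source_off[j] == source_off[i] + (off_t) (j - i) * BLCKSZ)
            j++;

        copy_range(source_fd[i], source_off[i],
                   out_fd, (off_t) i * BLCKSZ,
                   (size_t) (j - i) * BLCKSZ);
        i = j;
    }
}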
{
"msg_contents": "On 3/31/24 03:03, Thomas Munro wrote:\n> On Sun, Mar 31, 2024 at 1:37 PM Tomas Vondra\n> <[email protected]> wrote:\n>> So I decided to take a stab at Thomas' idea, i.e. reading the data to\n>> ...\n>> I'll see how this works on EXT4/ZFS next ...\n> \n> Wow, very cool! A couple of very quick thoughts/notes:\n> \n> ZFS: the open source version only gained per-file block cloning in\n> 2.2, so if you're on an older release I expect copy_file_range() to\n> work but not really do the magic thing. On the FreeBSD version you\n> also have to turn cloning on with a sysctl because people were worried\n> about bugs in early versions so by default you still get actual\n> copying, not sure if you need something like that on the Linux\n> version... (Obviously ZFS is always completely COW-based, but before\n> the new block cloning stuff it could only share blocks by its own\n> magic deduplication if enabled, or by cloning/snapshotting a whole\n> dataset/mountpoint; there wasn't a way to control it explicitly like\n> this.)\n> \n\nI'm on 2.2.2 (on Linux). But there's something wrong, because the\npg_combinebackup that took ~150s on xfs/btrfs, takes ~900s on ZFS.\n\nI'm not sure it's a ZFS config issue, though, because it's not CPU or\nI/O bound, and I see this on both machines. And some simple dd tests\nshow the zpool can do 10x the throughput. Could this be due to the file\nheader / pool alignment?\n\n> Alignment: block sharing on any fs requires it. I haven't re-checked\n> recently but IIRC the incremental file format might have a\n> non-block-sized header? That means that if you copy_file_range() from\n> both the older backup and also the incremental backup, only the former\n> will share blocks, and the latter will silently be done by copying to\n> newly allocated blocks. If that's still true, I'm not sure how hard\n> it would be to tweak the format to inject some padding and to make\n> sure that there isn't any extra header before each block.\n\nI admit I'm not very familiar with the format, but you're probably right\nthere's a header, and header_length does not seem to consider alignment.\nmake_incremental_rfile simply does this:\n\n /* Remember length of header. */\n rf->header_length = sizeof(magic) + sizeof(rf->num_blocks) +\n sizeof(rf->truncation_block_length) +\n sizeof(BlockNumber) * rf->num_blocks;\n\nand sendFile() does the same thing when creating incremental basebackup.\nI guess it wouldn't be too difficult to make sure to align this to\nBLCKSZ or something like this. I wonder if the file format is documented\nsomewhere ... It'd certainly be nicer to tweak before v18, if necessary.\n\nAnyway, is that really a problem? I mean, in my tests the CoW stuff\nseemed to work quite fine - at least on the XFS/BTRFS. Although, maybe\nthat's why it took longer on XFS ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 31 Mar 2024 06:33:56 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_combinebackup --copy-file-range"
},
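A small sketch of the padding idea discussed above, assuming the header fields are all 32-bit (an assumption, not taken from the actual incremental-file code): round the variable-length header up to a multiple of BLCKSZ so that every block's data starts on a block-aligned offset, which is what filesystem-level block sharing needs.

/* Sketch only: align the incremental-file header so block data is BLCKSZ-aligned. */
#include <stdint.h>
#include <stdio.h>

#define BLCKSZ 8192
typedef uint32_t BlockNumber;

/* Raw header size: magic + num_blocks + truncation_block_length + block list. */
static size_t
raw_header_length(unsigned num_blocks)
{
    return sizeof(uint32_t)             /* magic (width assumed) */
        + sizeof(uint32_t)              /* num_blocks (width assumed) */
        + sizeof(uint32_t)              /* truncation_block_length (width assumed) */
        + sizeof(BlockNumber) * num_blocks;
}

/* Round up to the next multiple of BLCKSZ, so block 0 starts aligned. */
static size_t
padded_header_length(unsigned num_blocks)
{
    size_t  raw = raw_header_length(num_blocks);

    return (raw + BLCKSZ - 1) / BLCKSZ * BLCKSZ;
}

int
main(void)
{
    for (unsigned n = 0; n <= 4096; n += 1024)
        printf("%u blocks: header %zu -> padded %zu\n",
               n, raw_header_length(n), padded_header_length(n));
    return 0;
}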
{
"msg_contents": "On Sun, Mar 31, 2024 at 5:33 PM Tomas Vondra\n<[email protected]> wrote:\n> I'm on 2.2.2 (on Linux). But there's something wrong, because the\n> pg_combinebackup that took ~150s on xfs/btrfs, takes ~900s on ZFS.\n>\n> I'm not sure it's a ZFS config issue, though, because it's not CPU or\n> I/O bound, and I see this on both machines. And some simple dd tests\n> show the zpool can do 10x the throughput. Could this be due to the file\n> header / pool alignment?\n\nCould ZFS recordsize > 8kB be making it worse, repeatedly dealing with\nthe same 128kB record as you copy_file_range 16 x 8kB blocks?\n(Guessing you might be using the default recordsize?)\n\n> I admit I'm not very familiar with the format, but you're probably right\n> there's a header, and header_length does not seem to consider alignment.\n> make_incremental_rfile simply does this:\n>\n> /* Remember length of header. */\n> rf->header_length = sizeof(magic) + sizeof(rf->num_blocks) +\n> sizeof(rf->truncation_block_length) +\n> sizeof(BlockNumber) * rf->num_blocks;\n>\n> and sendFile() does the same thing when creating incremental basebackup.\n> I guess it wouldn't be too difficult to make sure to align this to\n> BLCKSZ or something like this. I wonder if the file format is documented\n> somewhere ... It'd certainly be nicer to tweak before v18, if necessary.\n>\n> Anyway, is that really a problem? I mean, in my tests the CoW stuff\n> seemed to work quite fine - at least on the XFS/BTRFS. Although, maybe\n> that's why it took longer on XFS ...\n\nYeah I'm not sure, I assume it did more allocating and copying because\nof that. It doesn't matter and it would be fine if a first version\nweren't as good as possible, and fine if we tune the format later once\nwe know more, ie leaving improvements on the table. I just wanted to\nshare the observation. I wouldn't be surprised if the block-at-a-time\ncoding makes it slower and maybe makes the on disk data structures\nworse, but I dunno I'm just guessing.\n\nIt's also interesting but not required to figure out how to tune ZFS\nwell for this purpose right now...\n\n\n",
"msg_date": "Sun, 31 Mar 2024 17:46:10 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_combinebackup --copy-file-range"
},
{
"msg_contents": "\n\nOn 3/31/24 06:46, Thomas Munro wrote:\n> On Sun, Mar 31, 2024 at 5:33 PM Tomas Vondra\n> <[email protected]> wrote:\n>> I'm on 2.2.2 (on Linux). But there's something wrong, because the\n>> pg_combinebackup that took ~150s on xfs/btrfs, takes ~900s on ZFS.\n>>\n>> I'm not sure it's a ZFS config issue, though, because it's not CPU or\n>> I/O bound, and I see this on both machines. And some simple dd tests\n>> show the zpool can do 10x the throughput. Could this be due to the file\n>> header / pool alignment?\n> \n> Could ZFS recordsize > 8kB be making it worse, repeatedly dealing with\n> the same 128kB record as you copy_file_range 16 x 8kB blocks?\n> (Guessing you might be using the default recordsize?)\n> \n\nNo, I reduced the record size to 8kB. And the pgbench init takes about\nthe same as on other filesystems on this hardware, I think. ~10 minutes\nfor scale 5000.\n\n>> I admit I'm not very familiar with the format, but you're probably right\n>> there's a header, and header_length does not seem to consider alignment.\n>> make_incremental_rfile simply does this:\n>>\n>> /* Remember length of header. */\n>> rf->header_length = sizeof(magic) + sizeof(rf->num_blocks) +\n>> sizeof(rf->truncation_block_length) +\n>> sizeof(BlockNumber) * rf->num_blocks;\n>>\n>> and sendFile() does the same thing when creating incremental basebackup.\n>> I guess it wouldn't be too difficult to make sure to align this to\n>> BLCKSZ or something like this. I wonder if the file format is documented\n>> somewhere ... It'd certainly be nicer to tweak before v18, if necessary.\n>>\n>> Anyway, is that really a problem? I mean, in my tests the CoW stuff\n>> seemed to work quite fine - at least on the XFS/BTRFS. Although, maybe\n>> that's why it took longer on XFS ...\n> \n> Yeah I'm not sure, I assume it did more allocating and copying because\n> of that. It doesn't matter and it would be fine if a first version\n> weren't as good as possible, and fine if we tune the format later once\n> we know more, ie leaving improvements on the table. I just wanted to\n> share the observation. I wouldn't be surprised if the block-at-a-time\n> coding makes it slower and maybe makes the on disk data structures\n> worse, but I dunno I'm just guessing.\n> \n> It's also interesting but not required to figure out how to tune ZFS\n> well for this purpose right now...\n\nNo idea. Any idea if there's some good ZFS statistics to check?\n\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 31 Mar 2024 07:42:55 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_combinebackup --copy-file-range"
},
{
"msg_contents": "Hi,\n\nI've been running some benchmarks and experimenting with various stuff,\ntrying to improve the poor performance on ZFS, and the regression on XFS\nwhen using copy_file_range. And oh boy, did I find interesting stuff ...\n\nAttached is a PDF with results of my benchmark for ZFS/XFS/BTRFS, on my\ntwo machines. I already briefly described what the benchmark does, but\nto clarify:\n\n1) generate backups: initialize pgbench scale 5000, do full backup,\nupdate roughly 1%, 10% and 20% blocks and do an incremental backup after\neach of those steps\n\n2) combine backups: full + 1%, full + 1% + 10%, full + 1% + 10% + 20%\n\n3) measure how long it takes and how much more disk space is used (to\nsee how well the CoW stuff works)\n\n4) after each pg_combinebackup run to pg_verifybackup, start the cluster\nto finish recovery, run pg_checksums --check (to check the patches don't\nproduce something broken)\n\nThere's a lot of interesting stuff to discuss, some of which was already\nmentioned in this thread earlier - in particular, I want to talk about\nblock alignment, prefetching and processing larger chunks of blocks.\n\nAttached is also all the patches including the ugly WIP parts discussed\nlater, complete results if you want to do your own analysis, and the\nscripts used to generate/restore scripts.\n\nFWIW I'm not claiming the patches are commit-ready (especially the new\nWIP parts), but should be correct and good enough for discussion (that\napplies especially to 0007). I think I could get them ready in a day or\ntwo, but I'd like some feedback to my findings, and also if someone\nwould have objections to get this in so short before the feature freeze,\nI'd prefer to know about that.\n\nThe patches are numbered the same as in the benchmark results, i.e. 0001\nis \"1\", 0002 is \"2\" etc. The \"0-baseline\" option is current master\nwithout any patches.\n\n\nNow to the findings ....\n\n\n1) block alignment\n------------------\n\nThis was mentioned by Thomas a couple days ago, when he pointed out the\nincremental files have a variable-length header (to record which blocks\nare stored in the file), followed by the block data, which means the\nblock data is not aligned to fs block. I haven't realized this, I just\nused whatever the reconstruction function received, but Thomas pointed\nout this may interfere with CoW, which needs the blocks to be aligned.\n\nAnd I think he's right, and my tests confirm this. I did a trivial patch\nto align the blocks to 8K boundary, by forcing the header to be a\nmultiple of 8K (I think 4K alignment would be enough). See the 0001\npatch that does this.\n\nAnd if I measure the disk space used by pg_combinebackup, and compare\nthe results with results without the patch ([1] from a couple days\nback), I see this:\n\n pct not aligned aligned\n -------------------------------------\n 1% 689M 19M\n 10% 3172M 22M\n 20% 13797M 27M\n\nYes, those numbers are correct. I didn't believe this at first, but the\nbackups are valid/verified, checksums are OK, etc. BTRFS has similar\nnumbers (e.g. drop from 20GB to 600MB).\n\nIf you look at the charts in the PDF, charts for on-disk space are on\nthe right side. It might seem like copy_file_range/CoW has no impact,\nbut that's just an illusion - the bars for the last three cases are so\nsmall it's difficult to see them (especially on XFS). While this does\nnot show the impact of alignment (because all of the cases in these runs\nhave blocks aligned), it shows how tiny the backups can be made. 
But it\ndoes have significant impact, per the preceding paragraph.\n\nThis also affect the prefetching, that I'm going to talk about next. But\nhaving the blocks misaligned (spanning multiple 4K pages) forces the\nsystem to prefetch more pages than necessary. I don't know how big the\nimpact is, because the prefetch patch is 0002, so I only have results\nfor prefetching on aligned blocks, but I don't see how it could not have\na cost.\n\nI do think we should just align the blocks properly. The 0001 patch does\nthat simply by adding a bunch of \\0 bytes up to the next 8K boundary.\nYes, this has a cost - if you have tiny files with only one or two\nblocks changed, the increment file will be a bit larger. Files without\nany blocks don't need alignment/padding, and as the number of blocks\nincreases, it gets negligible pretty quickly. Also, files use a multiple\nof fs blocks anyway, so if we align to 4K blocks it wouldn't actually\nneed more space at all. And even if it does, it's all \\0, so pretty damn\ncompressible (and I'm sorry, but if you care about tiny amounts of data\nadded by alignment, but refuse to use compression ...).\n\nI think we absolutely need to align the blocks in the incremental files,\nand I think we should do that now. I think 8K would work, but maybe we\nshould add alignment parameter to basebackup & manifest?\n\nThe reason why I think maybe this should be a basebackup parameter is\nthe recent discussion about large fs blocks - it seems to be in the\nworks, so maybe better to be ready and not assume all fs have 4K.\n\nAnd I think we probably want to do this now, because this affects all\ntools dealing with incremental backups - even if someone writes a custom\nversion of pg_combinebackup, it will have to deal with misaligned data.\nPerhaps there might be something like pg_basebackup that \"transforms\"\nthe data received from the server (and also the backup manifest), but\nthat does not seem like a great direction.\n\nNote: Of course, these space savings only exist thanks to sharing blocks\nwith the input backups, because the blocks in the combined backup point\nto one of the other backups. If those old backups are removed, then the\n\"saved space\" disappears because there's only a single copy.\n\n\n2) prefetch\n-----------\n\nI was very puzzled by the awful performance on ZFS. When every other fs\n(EXT4/XFS/BTRFS) took 150-200 seconds to run pg_combinebackup, it took\n900-1000 seconds on ZFS, no matter what I did. I tried all the tuning\nadvice I could think of, with almost no effect.\n\nUltimately I decided that it probably is the \"no readahead\" behavior\nI've observed on ZFS. I assume it's because it doesn't use the page\ncache where the regular readahead is detected etc. And there's no\nprefetching in pg_combinebackup, so I decided to an experiment and added\na trivial explicit prefetch when reconstructing the file - every time\nwe'd read data from a file, we do posix_fadvise for up to 128 blocks\nahead (similar to what bitmap heap scan code does). See 0002.\n\nAnd tadaaa - the duration dropped from 900-1000 seconds to only about\n250-300 seconds, so an improvement of a factor of 3-4x. I think this is\npretty massive.\n\nThere's a couple more interesting ZFS details - the prefetching seems to\nbe necessary even when using copy_file_range() and don't need to read\nthe data (to calculate checksums). 
This is why the \"manifest=off\" chart\nhas the strange group of high bars at the end - the copy cases are fast\nbecause prefetch happens, but if we switch to copy_file_range() there\nare no prefetches and it gets slow.\n\nThis is a bit bizarre, especially because the manifest=on cases are\nstill fast, exactly because the pread + prefetching still happens. I'm\nsure users would find this puzzling.\n\nUnfortunately, the prefetching is not beneficial for all filesystems.\nFor XFS it does not seem to make any difference, but on BTRFS it seems\nto cause a regression.\n\nI think this means we may need a \"--prefetch\" option, that'd force\nprefetching, probably both before pread and copy_file_range. Otherwise\npeople on ZFS are doomed and will have poor performance.\n\n\n3) bulk operations\n------------------\n\nAnother thing suggested by Thomas last week was that maybe we should try\ndetecting longer runs of blocks coming from the same file, and operate\non them as a single chunk of data. If you see e.g. 32 blocks, instead of\ndoing read/write or copy_file_range for each of them, we could simply do\none call for all those blocks at once.\n\nI think this is pretty likely, especially for small incremental backups\nwhere most of the blocks will come from the full backup. And I was\nsuspecting the XFS regression (where the copy-file-range was up to\n30-50% slower in some cases, see [1]) is related to this, because the\nperf profiles had stuff like this:\n\n 97.28% 2.10% pg_combinebacku [kernel.vmlinux] [k]\n |\n |--95.18%--entry_SYSCALL_64\n | |\n | --94.99%--do_syscall_64\n | |\n | |--74.13%--__do_sys_copy_file_range\n | | |\n | | --73.72%--vfs_copy_file_range\n | | |\n | | --73.14%--xfs_file_remap_range\n | | |\n | | |--70.65%--xfs_reflink_remap_blocks\n | | | |\n | | | --69.86%--xfs_reflink_remap_extent\n\nSo I took a stab at this in 0007, which detects runs of blocks coming\nfrom the same source file (limited to 128 blocks, i.e. 1MB). I only did\nthis for the copy_file_range() calls in 0007, and the results for XFS\nlook like this (complete results are in the PDF):\n\n old (block-by-block) new (batches)\n ------------------------------------------------------\n 1% 150s 4s\n 10% 150-200s 46s\n 20% 150-200s 65s\n\nYes, once again, those results are real, the backups are valid etc. So\nnot only it takes much less space (thanks to block alignment), it also\ntakes much less time (thanks to bulk operations).\n\nThe cases with \"manifest=on\" improve too, but not nearly this much. I\nbelieve this is simply because the read/write still happens block by\nblock. But it shouldn't be difficult to do in a bulk manner too (we\nalready have the range detected, but I was lazy).\n\n\n\n[1]\nhttps://www.postgresql.org/message-id/0e27835d-dab5-49cd-a3ea-52cf6d9ef59e%40enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 1 Apr 2024 21:43:06 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_combinebackup --copy-file-range"
},
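The prefetching and batching ideas above can be sketched roughly as follows. This is only an illustration under assumed data structures (an array mapping each output block to a source file descriptor and offset); it is not the actual patch, and error handling, short reads/copies and the real prefetch-distance logic are omitted:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    #define BLCKSZ        8192
    #define BATCH_BLOCKS  128       /* ~1MB runs, matching the limit above */

    void
    reconstruct_file(int outfd, int nblocks,
                     const int *srcfd, const off_t *srcoff)
    {
        int     i = 0;

        while (i < nblocks)
        {
            int     run = 1;

            /* detect a run of blocks that are contiguous in one source file */
            while (i + run < nblocks && run < BATCH_BLOCKS &&
                   srcfd[i + run] == srcfd[i] &&
                   srcoff[i + run] == srcoff[i] + (off_t) run * BLCKSZ)
                run++;

            /* hint the kernel to read the whole run ahead (this is what helps ZFS) */
            (void) posix_fadvise(srcfd[i], srcoff[i],
                                 (off_t) run * BLCKSZ, POSIX_FADV_WILLNEED);

            /* one call for the whole run instead of per-8kB calls */
            off_t   off_in = srcoff[i];
            off_t   off_out = (off_t) i * BLCKSZ;

            (void) copy_file_range(srcfd[i], &off_in, outfd, &off_out,
                                   (size_t) run * BLCKSZ, 0);

            i += run;
        }
    }

Whether the posix_fadvise() is worth issuing at all is filesystem-dependent, per the measurements above (a clear win on ZFS, neutral on XFS, a regression on BTRFS).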
{
"msg_contents": "On Tue, Apr 2, 2024 at 8:43 AM Tomas Vondra\n<[email protected]> wrote:\n> And I think he's right, and my tests confirm this. I did a trivial patch\n> to align the blocks to 8K boundary, by forcing the header to be a\n> multiple of 8K (I think 4K alignment would be enough). See the 0001\n> patch that does this.\n>\n> And if I measure the disk space used by pg_combinebackup, and compare\n> the results with results without the patch ([1] from a couple days\n> back), I see this:\n>\n> pct not aligned aligned\n> -------------------------------------\n> 1% 689M 19M\n> 10% 3172M 22M\n> 20% 13797M 27M\n>\n> Yes, those numbers are correct. I didn't believe this at first, but the\n> backups are valid/verified, checksums are OK, etc. BTRFS has similar\n> numbers (e.g. drop from 20GB to 600MB).\n\nFantastic.\n\n> I think we absolutely need to align the blocks in the incremental files,\n> and I think we should do that now. I think 8K would work, but maybe we\n> should add alignment parameter to basebackup & manifest?\n>\n> The reason why I think maybe this should be a basebackup parameter is\n> the recent discussion about large fs blocks - it seems to be in the\n> works, so maybe better to be ready and not assume all fs have 4K.\n>\n> And I think we probably want to do this now, because this affects all\n> tools dealing with incremental backups - even if someone writes a custom\n> version of pg_combinebackup, it will have to deal with misaligned data.\n> Perhaps there might be something like pg_basebackup that \"transforms\"\n> the data received from the server (and also the backup manifest), but\n> that does not seem like a great direction.\n\n+1, and I think BLCKSZ is the right choice.\n\n> I was very puzzled by the awful performance on ZFS. When every other fs\n> (EXT4/XFS/BTRFS) took 150-200 seconds to run pg_combinebackup, it took\n> 900-1000 seconds on ZFS, no matter what I did. I tried all the tuning\n> advice I could think of, with almost no effect.\n>\n> Ultimately I decided that it probably is the \"no readahead\" behavior\n> I've observed on ZFS. I assume it's because it doesn't use the page\n> cache where the regular readahead is detected etc. And there's no\n> prefetching in pg_combinebackup, so I decided to an experiment and added\n> a trivial explicit prefetch when reconstructing the file - every time\n> we'd read data from a file, we do posix_fadvise for up to 128 blocks\n> ahead (similar to what bitmap heap scan code does). See 0002.\n>\n> And tadaaa - the duration dropped from 900-1000 seconds to only about\n> 250-300 seconds, so an improvement of a factor of 3-4x. I think this is\n> pretty massive.\n\nInteresting. ZFS certainly has its own prefetching heuristics with\nlots of logic and settings, but it could be that it's using\nstrict-next-block detection of access pattern (ie what I called\neffective_io_readahead_window=0 in the streaming I/O thread) instead\nof a window (ie like the Linux block device level read ahead where,\nAFAIK, if you access anything in that sliding window it is triggered),\nand perhaps your test has a lot of non-contiguous but close-enough\nblocks? 
(Also reminds me of the similar discussion on the BHS thread\nabout distinguishing sequential access from\nmostly-sequential-but-with-lots-of-holes-like-Swiss-cheese, and the\nfine line between them.)\n\nYou could double-check this and related settings (for example I think\nit might disable itself automatically if you're on a VM with small RAM\nsize):\n\nhttps://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html#zfs-prefetch-disable\n\n> There's a couple more interesting ZFS details - the prefetching seems to\n> be necessary even when using copy_file_range() and don't need to read\n> the data (to calculate checksums). This is why the \"manifest=off\" chart\n> has the strange group of high bars at the end - the copy cases are fast\n> because prefetch happens, but if we switch to copy_file_range() there\n> are no prefetches and it gets slow.\n\nHmm, at a guess, it might be due to prefetching the dnode (root object\nfor a file) and block pointers, ie the structure but not the data\nitself.\n\n> This is a bit bizarre, especially because the manifest=on cases are\n> still fast, exactly because the pread + prefetching still happens. I'm\n> sure users would find this puzzling.\n>\n> Unfortunately, the prefetching is not beneficial for all filesystems.\n> For XFS it does not seem to make any difference, but on BTRFS it seems\n> to cause a regression.\n>\n> I think this means we may need a \"--prefetch\" option, that'd force\n> prefetching, probably both before pread and copy_file_range. Otherwise\n> people on ZFS are doomed and will have poor performance.\n\nSeems reasonable if you can't fix it by tuning ZFS. (Might also be an\ninteresting research topic for a potential ZFS patch:\nprefetch_swiss_cheese_window_size. I will not be nerd-sniped into\nreading the relevant source today, but I'll figure it out soonish...)\n\n> So I took a stab at this in 0007, which detects runs of blocks coming\n> from the same source file (limited to 128 blocks, i.e. 1MB). I only did\n> this for the copy_file_range() calls in 0007, and the results for XFS\n> look like this (complete results are in the PDF):\n>\n> old (block-by-block) new (batches)\n> ------------------------------------------------------\n> 1% 150s 4s\n> 10% 150-200s 46s\n> 20% 150-200s 65s\n>\n> Yes, once again, those results are real, the backups are valid etc. So\n> not only it takes much less space (thanks to block alignment), it also\n> takes much less time (thanks to bulk operations).\n\nAgain, fantastic.\n\n\n",
"msg_date": "Tue, 2 Apr 2024 10:45:17 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_combinebackup --copy-file-range"
},
{
"msg_contents": "On 4/1/24 23:45, Thomas Munro wrote:\n> ...\n>>\n>> I was very puzzled by the awful performance on ZFS. When every other fs\n>> (EXT4/XFS/BTRFS) took 150-200 seconds to run pg_combinebackup, it took\n>> 900-1000 seconds on ZFS, no matter what I did. I tried all the tuning\n>> advice I could think of, with almost no effect.\n>>\n>> Ultimately I decided that it probably is the \"no readahead\" behavior\n>> I've observed on ZFS. I assume it's because it doesn't use the page\n>> cache where the regular readahead is detected etc. And there's no\n>> prefetching in pg_combinebackup, so I decided to an experiment and added\n>> a trivial explicit prefetch when reconstructing the file - every time\n>> we'd read data from a file, we do posix_fadvise for up to 128 blocks\n>> ahead (similar to what bitmap heap scan code does). See 0002.\n>>\n>> And tadaaa - the duration dropped from 900-1000 seconds to only about\n>> 250-300 seconds, so an improvement of a factor of 3-4x. I think this is\n>> pretty massive.\n> \n> Interesting. ZFS certainly has its own prefetching heuristics with\n> lots of logic and settings, but it could be that it's using\n> strict-next-block detection of access pattern (ie what I called\n> effective_io_readahead_window=0 in the streaming I/O thread) instead\n> of a window (ie like the Linux block device level read ahead where,\n> AFAIK, if you access anything in that sliding window it is triggered),\n> and perhaps your test has a lot of non-contiguous but close-enough\n> blocks? (Also reminds me of the similar discussion on the BHS thread\n> about distinguishing sequential access from\n> mostly-sequential-but-with-lots-of-holes-like-Swiss-cheese, and the\n> fine line between them.)\n> \n\nI don't think the files have a lot of non-contiguous but close-enough\nblocks (it's rather that we'd skip blocks that need to come from a later\nincremental file). The backups are generated to have a certain fraction\nof modified blocks.\n\nFor example the smallest backup has 1% means 99% of blocks comes from\nthe base backup, and 1% comes from the increment. And indeed, the whole\ndatabase is ~75GB and the backup is ~740MB. Which means that on average\nthere will be runs of 99 blocks in the base backup, then skip 1 block\n(to come from the increment), and then again 99-1-99-1. So it's very\nsequential, almost no holes, and the increment is 100% sequential. And\nit still does not seem to prefetch anything.\n\n\n> You could double-check this and related settings (for example I think\n> it might disable itself automatically if you're on a VM with small RAM\n> size):\n> \n> https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html#zfs-prefetch-disable\n> \n\nI haven't touched that parameter at all, and it's \"enabled\" by default:\n\n# cat /sys/module/zfs/parameters/zfs_prefetch_disable\n0\n\nWhile trying to make the built-in prefetch work I reviewed the other\nparameters with the \"prefetch\" tag, without success. And I haven't seen\nany advice on how to make it work ...\n\n>> There's a couple more interesting ZFS details - the prefetching seems to\n>> be necessary even when using copy_file_range() and don't need to read\n>> the data (to calculate checksums). 
This is why the \"manifest=off\" chart\n>> has the strange group of high bars at the end - the copy cases are fast\n>> because prefetch happens, but if we switch to copy_file_range() there\n>> are no prefetches and it gets slow.\n> \n> Hmm, at a guess, it might be due to prefetching the dnode (root object\n> for a file) and block pointers, ie the structure but not the data\n> itself.\n> \n\nYeah, that's possible. But the effects are the same - it doesn't matter\nwhat exactly is not prefetched. But perhaps we could prefetch just a\ntiny part of the record, enough to prefetch the dnode+pointers, not the\nwhole record. Might save some space in ARC, perhaps?\n\n>> This is a bit bizarre, especially because the manifest=on cases are\n>> still fast, exactly because the pread + prefetching still happens. I'm\n>> sure users would find this puzzling.\n>>\n>> Unfortunately, the prefetching is not beneficial for all filesystems.\n>> For XFS it does not seem to make any difference, but on BTRFS it seems\n>> to cause a regression.\n>>\n>> I think this means we may need a \"--prefetch\" option, that'd force\n>> prefetching, probably both before pread and copy_file_range. Otherwise\n>> people on ZFS are doomed and will have poor performance.\n> \n> Seems reasonable if you can't fix it by tuning ZFS. (Might also be an\n> interesting research topic for a potential ZFS patch:\n> prefetch_swiss_cheese_window_size. I will not be nerd-sniped into\n> reading the relevant source today, but I'll figure it out soonish...)\n> \n\nIt's entirely possible I'm just too stupid and it works just fine for\neveryone else. But maybe not, and I'd say an implementation that is this\ndifficult to configure is almost as if it didn't exist at all. The linux\nread-ahead works by default pretty great.\n\nSo I don't see how to make this work without explicit prefetch ... Of\ncourse, we could also do no prefetch and tell users it's up to ZFS to\nmake this work, but I don't think it does them any service.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 2 Apr 2024 11:25:10 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_combinebackup --copy-file-range"
},
{
"msg_contents": "On Mon, Apr 1, 2024 at 9:46 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> Hi,\n>\n> I've been running some benchmarks and experimenting with various stuff,\n> trying to improve the poor performance on ZFS, and the regression on XFS\n> when using copy_file_range. And oh boy, did I find interesting stuff ...\n\n[..]\n\nCongratulations on great results!\n\n> 4) after each pg_combinebackup run to pg_verifybackup, start the cluster\n> to finish recovery, run pg_checksums --check (to check the patches don't\n> produce something broken)\n\nI've performed some follow-up small testing on all patches mentioned\nhere (1..7), with the earlier developed nano-incremental-backup-tests\nthat helped detect some issues for Robert earlier during original\ndevelopment. They all went fine in both cases:\n- no special options when using pg_combinebackup\n- using pg_combinebackup --copy-file-range --manifest-checksums=NONE\n\nThose were:\ntest_across_wallevelminimal.sh\ntest_full_pri__incr_stby__restore_on_pri.sh\ntest_full_pri__incr_stby__restore_on_stby.sh\ntest_full_stby__incr_stby__restore_on_pri.sh\ntest_full_stby__incr_stby__restore_on_stby.sh\ntest_incr_after_timelineincrease.sh\ntest_incr_on_standby_after_promote.sh\ntest_many_incrementals_dbcreate_duplicateOID.sh\ntest_many_incrementals_dbcreate_filecopy_NOINCR.sh\ntest_many_incrementals_dbcreate_filecopy.sh\ntest_many_incrementals_dbcreate.sh\ntest_many_incrementals.sh\ntest_multixact.sh\ntest_pending_2pc.sh\ntest_reindex_and_vacuum_full.sh\ntest_repro_assert_RP.sh\ntest_repro_assert.sh\ntest_standby_incr_just_backup.sh\ntest_stuck_walsum.sh\ntest_truncaterollback.sh\ntest_unlogged_table.sh\n\n> Now to the findings ....\n>\n>\n> 1) block alignment\n\n[..]\n\n> And I think we probably want to do this now, because this affects all\n> tools dealing with incremental backups - even if someone writes a custom\n> version of pg_combinebackup, it will have to deal with misaligned data.\n> Perhaps there might be something like pg_basebackup that \"transforms\"\n> the data received from the server (and also the backup manifest), but\n> that does not seem like a great direction.\n\nIf anything is on the table, then I think in the far future\npg_refresh_standby_using_incremental_backup_from_primary would be the\nonly other tool using the format ?\n\n> 2) prefetch\n> -----------\n[..]\n> I think this means we may need a \"--prefetch\" option, that'd force\n> prefetching, probably both before pread and copy_file_range. Otherwise\n> people on ZFS are doomed and will have poor performance.\n\nRight, we could optionally cover in the docs later-on various options\nto get the performance (on XFS use $this, but without $that and so\non). It's kind of madness dealing with all those performance\nvariations.\n\nAnother idea: remove that 128 posifx_fadvise() hardcode in 0002 and a\ngetopt variant like: --prefetch[=HOWMANY] with 128 being as default ?\n\n-J.\n\n\n",
"msg_date": "Wed, 3 Apr 2024 15:39:33 +0200",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_combinebackup --copy-file-range"
},
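The --prefetch[=HOWMANY] idea above could be wired up with an optional-argument switch along these lines; the option name, the variable and the 128-block default are just the values floated in this thread, not an existing pg_combinebackup option:

    #include <getopt.h>
    #include <stdlib.h>

    static int prefetch_target = 0;     /* 0 = explicit prefetching disabled */

    void
    parse_prefetch_option(int argc, char **argv)
    {
        static const struct option long_options[] = {
            {"prefetch", optional_argument, NULL, 1},
            {NULL, 0, NULL, 0}
        };
        int     c;

        while ((c = getopt_long(argc, argv, "", long_options, NULL)) != -1)
        {
            if (c == 1)     /* --prefetch or --prefetch=N */
                prefetch_target = (optarg != NULL) ? atoi(optarg) : 128;
        }
    }

As discussed later in the thread, the prefetch distance cannot be picked independently of the batch size, so an arbitrary user-supplied value would likely need clamping.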
{
"msg_contents": "Hi,\n\nHere's a much more polished and cleaned up version of the patches,\nfixing all the issues I've been aware of, and with various parts merged\ninto much more cohesive parts (instead of keeping them separate to make\nthe changes/evolution more obvious).\n\nI decided to reorder the changes like this:\n\n1) block alignment - As I said earlier, I think we should definitely do\nthis, even if only to make future improvements possible. After chatting\nabout this with Robert off-list a bit, he pointed out I actually forgot\nto not align the headers for files without any blocks, so this version\nfixes that.\n\n2) add clone/copy_file_range for the case that copies whole files. This\nis pretty simple, but it also adds the --clone/copy-file-range options,\nand similar infrastructure. The one slightly annoying bit is that we now\nhave the ifdef stuff in two places - when parsing the options, and then\nin the copy_file_XXX methods, and the pg_fatal() calls should not be\nreachable in practice. But that doesn't seem harmful, and might be a\nuseful protection against someone calling function that does nothing.\n\nThis also merges the original two parts, where the first one only did\nthis cloning/CoW stuff when checksum did not need to be calculated, and\nthe second extended it to those cases too (by also reading the data, but\nstill doing the copy the old way).\n\nI think this is the right way - if that's not desisable, it's easy to\neither add --no-manifest or not use the CoW options. Or just not change\nthe checksum type. There's no other way.\n\n3) add copy_file_range to write_reconstructed_file, by using roughly the\nsame logic/reasoning as (2). If --copy-file-range is specified and a\nchecksum should be calculated, the data is read for the checksum, but\nthe copy is done using copy_file_range.\n\nI did rework the flow write_reconstructed_file() flow a bit, because\ntracking what exactly needs to be read/written in each of the cases\n(which copy method, zeroed block, checksum calculated) made the flow\nreally difficult to follow. Instead I introduced a function to\nread/write a block, and call them from different places.\n\nI think this is much better, and it also makes the following part\ndealing with batches of blocks much easier / smaller change.\n\n4) prefetching - This is mostly unrelated to the CoW stuff, but it has\ntremendous benefits, especially for ZFS. I haven't been able to tune ZFS\nto get decent performance, and ISTM it'd be rather unusable for backup\npurposes without this.\n\n5) batching in write_reconstructed_file - This benefits especially the\ncopy_file_range case, where I've seen it to yield massive speedups (see\nthe message from Monday for better data).\n\n6) batching for prefetch - Similar logic to (5), but for fadvise. This\nis the only part where I'm not entirely sure whether it actually helps\nor not. Needs some more analysis, but I'm including it for completeness.\n\n\nI do think the parts (1)-(4) are in pretty good shape, good enough to\nmake them committable in a day or two. I see it mostly a matter of\ntesting and comment improvements rather than code changes.\n\n(5) is in pretty good shape too, but I'd like to maybe simplify and\nrefine the write_reconstructed_file changes a bit more. I don't think\nit's broken, but it feels a bit too cumbersome.\n\nNot sure about (6) yet.\n\nI changed how I think about this a bit - I don't really see the CoW copy\nmethods as necessary faster than the regular copy (even though it can be\nwith (5)). 
I think the main benefits are the space savings, enabled by\npatches (1)-(3). If (4) and (5) get it, that's a bonus, but even without\nthat I don't think the performance is an issue - everything has a cost.\n\n\nOn 4/3/24 15:39, Jakub Wartak wrote:\n> On Mon, Apr 1, 2024 at 9:46 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> I've been running some benchmarks and experimenting with various stuff,\n>> trying to improve the poor performance on ZFS, and the regression on XFS\n>> when using copy_file_range. And oh boy, did I find interesting stuff ...\n> \n> [..]\n> \n> Congratulations on great results!\n> \n>> 4) after each pg_combinebackup run to pg_verifybackup, start the cluster\n>> to finish recovery, run pg_checksums --check (to check the patches don't\n>> produce something broken)\n> \n> I've performed some follow-up small testing on all patches mentioned\n> here (1..7), with the earlier developed nano-incremental-backup-tests\n> that helped detect some issues for Robert earlier during original\n> development. They all went fine in both cases:\n> - no special options when using pg_combinebackup\n> - using pg_combinebackup --copy-file-range --manifest-checksums=NONE\n> \n> Those were:\n> test_across_wallevelminimal.sh\n> test_full_pri__incr_stby__restore_on_pri.sh\n> test_full_pri__incr_stby__restore_on_stby.sh\n> test_full_stby__incr_stby__restore_on_pri.sh\n> test_full_stby__incr_stby__restore_on_stby.sh\n> test_incr_after_timelineincrease.sh\n> test_incr_on_standby_after_promote.sh\n> test_many_incrementals_dbcreate_duplicateOID.sh\n> test_many_incrementals_dbcreate_filecopy_NOINCR.sh\n> test_many_incrementals_dbcreate_filecopy.sh\n> test_many_incrementals_dbcreate.sh\n> test_many_incrementals.sh\n> test_multixact.sh\n> test_pending_2pc.sh\n> test_reindex_and_vacuum_full.sh\n> test_repro_assert_RP.sh\n> test_repro_assert.sh\n> test_standby_incr_just_backup.sh\n> test_stuck_walsum.sh\n> test_truncaterollback.sh\n> test_unlogged_table.sh\n> \n>> Now to the findings ....\n>>\n\nThanks. Would be great if you could run this on the attached version of\nthe patches, ideally for each of them independently, so make sure it\ndoesn't get broken+fixed somewhere on the way.\n\n>>\n>> 1) block alignment\n> \n> [..]\n> \n>> And I think we probably want to do this now, because this affects all\n>> tools dealing with incremental backups - even if someone writes a custom\n>> version of pg_combinebackup, it will have to deal with misaligned data.\n>> Perhaps there might be something like pg_basebackup that \"transforms\"\n>> the data received from the server (and also the backup manifest), but\n>> that does not seem like a great direction.\n> \n> If anything is on the table, then I think in the far future\n> pg_refresh_standby_using_incremental_backup_from_primary would be the\n> only other tool using the format ?\n> \n\nPossibly, but I was thinking more about backup solutions using the same\nformat, but doing the client-side differently. Essentially, something\nthat would still use the server side to generate incremental backups,\nbut replace pg_combinebackup to do this differently (stream the data\nsomewhere else, index it somehow, or whatever).\n\n>> 2) prefetch\n>> -----------\n> [..]\n>> I think this means we may need a \"--prefetch\" option, that'd force\n>> prefetching, probably both before pread and copy_file_range. 
Otherwise\n>> people on ZFS are doomed and will have poor performance.\n> \n> Right, we could optionally cover in the docs later-on various options\n> to get the performance (on XFS use $this, but without $that and so\n> on). It's kind of madness dealing with all those performance\n> variations.\n> \n\nYeah, something like that. I'm not sure we want to talk too much about\nindividual filesystems in our docs, because those things evolve over\ntime too. And also this depends on how large the increment is. If it\nonly modifies 1% of the blocks, then 99% will come from the full backup,\nand the sequential prefetch should do OK (well, maybe not on ZFS). But\nas the incremental backup gets larger / is more random, the prefetch is\nmore and more important.\n\n> Another idea: remove that 128 posifx_fadvise() hardcode in 0002 and a\n> getopt variant like: --prefetch[=HOWMANY] with 128 being as default ?\n> \n\nI did think about that, but there's a dependency on the batching. If\nwe're prefetching ~1MB of data, we may need to prefetch up to ~1MB\nahead. Because otherwise we might want to read 1MB and only a tiny part\nof that would be prefetched. I was thinking maybe we could skip the\nsequential parts, but ZFS does need that.\n\nSo I don't think we can just allow users to set arbitrary values, at\nleast not without also tweaking the batch. Or maybe 1MB batches are too\nlarge, and we should use something smaller? I need to think about this a\nbit more ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 4 Apr 2024 00:56:37 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_combinebackup --copy-file-range"
},
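The reworked write_reconstructed_file() flow described above, with a single helper that reads and/or writes one block, can be pictured roughly like this. The names are made up for illustration, the zero-block case is skipped, and error handling plus the actual checksum update are left out:

    #define _GNU_SOURCE
    #include <stdbool.h>
    #include <unistd.h>

    #define BLCKSZ 8192

    typedef enum { COPY_MODE_PLAIN, COPY_MODE_COPY_FILE_RANGE } copy_mode;

    void
    write_one_block(int srcfd, off_t srcoff, int dstfd, off_t dstoff,
                    copy_mode mode, bool need_checksum, char *buf)
    {
        /* the data only has to be in memory for a checksum or a plain write */
        if (need_checksum || mode == COPY_MODE_PLAIN)
            (void) pread(srcfd, buf, BLCKSZ, srcoff);

        if (mode == COPY_MODE_COPY_FILE_RANGE)
            (void) copy_file_range(srcfd, &srcoff, dstfd, &dstoff, BLCKSZ, 0);
        else
            (void) pwrite(dstfd, buf, BLCKSZ, dstoff);

        /* if (need_checksum) the caller updates the running checksum from buf */
    }

Splitting the flow this way is also what keeps the later batching change small: the caller already knows the run boundaries and can switch from per-block calls to one call per run.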
{
"msg_contents": "On Thu, Apr 4, 2024 at 12:56 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> Hi,\n>\n> Here's a much more polished and cleaned up version of the patches,\n> fixing all the issues I've been aware of, and with various parts merged\n> into much more cohesive parts (instead of keeping them separate to make\n> the changes/evolution more obvious).\n\nOK, so three runs of incrementalbackupstests - as stated earlier -\nalso passed with OK for v20240403 (his time even with\n--enable-casserts)\n\npg_combinebackup flags tested were:\n1) --copy-file-range --manifest-checksums=CRC32C\n2) --copy-file-range --manifest-checksums=NONE\n3) default, no flags (no copy-file-range)\n\n> I changed how I think about this a bit - I don't really see the CoW copy\n> methods as necessary faster than the regular copy (even though it can be\n> with (5)). I think the main benefits are the space savings, enabled by\n> patches (1)-(3). If (4) and (5) get it, that's a bonus, but even without\n> that I don't think the performance is an issue - everything has a cost.\n\nI take i differently: incremental backups without CoW fs would be clearly :\n- inefficient in terms of RTO (SANs are often a key thing due to\nhaving fast ability to \"restore\" the clone rather than copying the\ndata from somewhere else)\n- pg_basebackup without that would be unusuable without space savings\n(e.g. imagine daily backups @ 10+TB DWHs)\n\n> On 4/3/24 15:39, Jakub Wartak wrote:\n> > On Mon, Apr 1, 2024 at 9:46 PM Tomas Vondra\n> > <[email protected]> wrote:\n[..]\n> Thanks. Would be great if you could run this on the attached version of\n> the patches, ideally for each of them independently, so make sure it\n> doesn't get broken+fixed somewhere on the way.\n\nThose are semi-manual test runs (~30 min? per run), the above results\nare for all of them applied at once. So my take is all of them work\neach one does individually too.\n\nFWIW, I'm also testing your other offlist incremental backup\ncorruption issue, but that doesnt seem to be related in any way to\ncopy_file_range() patches here.\n\n> >> 2) prefetch\n> >> -----------\n> > [..]\n> >> I think this means we may need a \"--prefetch\" option, that'd force\n> >> prefetching, probably both before pread and copy_file_range. Otherwise\n> >> people on ZFS are doomed and will have poor performance.\n> >\n> > Right, we could optionally cover in the docs later-on various options\n> > to get the performance (on XFS use $this, but without $that and so\n> > on). It's kind of madness dealing with all those performance\n> > variations.\n> >\n>\n> Yeah, something like that. I'm not sure we want to talk too much about\n> individual filesystems in our docs, because those things evolve over\n> time too.\n\nSounds like Wiki then.\n\nBTW, after a quick review: could we in 05 have something like common\nvalue then (to keep those together via some .h?)\n\n#define BATCH_SIZE PREFETCH_TARGET ?\n\n-J.\n\n\n",
"msg_date": "Thu, 4 Apr 2024 12:25:46 +0200",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_combinebackup --copy-file-range"
},
{
"msg_contents": "On 4/4/24 12:25, Jakub Wartak wrote:\n> On Thu, Apr 4, 2024 at 12:56 AM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> Here's a much more polished and cleaned up version of the patches,\n>> fixing all the issues I've been aware of, and with various parts merged\n>> into much more cohesive parts (instead of keeping them separate to make\n>> the changes/evolution more obvious).\n> \n> OK, so three runs of incrementalbackupstests - as stated earlier -\n> also passed with OK for v20240403 (his time even with\n> --enable-casserts)\n> \n> pg_combinebackup flags tested were:\n> 1) --copy-file-range --manifest-checksums=CRC32C\n> 2) --copy-file-range --manifest-checksums=NONE\n> 3) default, no flags (no copy-file-range)\n> \n\nThanks!\n\n>> I changed how I think about this a bit - I don't really see the CoW copy\n>> methods as necessary faster than the regular copy (even though it can be\n>> with (5)). I think the main benefits are the space savings, enabled by\n>> patches (1)-(3). If (4) and (5) get it, that's a bonus, but even without\n>> that I don't think the performance is an issue - everything has a cost.\n> \n> I take i differently: incremental backups without CoW fs would be clearly :\n> - inefficient in terms of RTO (SANs are often a key thing due to\n> having fast ability to \"restore\" the clone rather than copying the\n> data from somewhere else)\n> - pg_basebackup without that would be unusuable without space savings\n> (e.g. imagine daily backups @ 10+TB DWHs)\n> \n\nRight, although this very much depends on the backup scheme. If you only\ntake incremental backups, and then also a full backup once in a while,\nthe CoW stuff probably does not help much. The alignment (the only thing\naffecting basebackups) may allow deduplication, but that's all I think.\n\nIf the scheme is more complex, and involves \"merging\" the increments\ninto the full backup, then this does help a lot. It'd even be possible\nto cheaply clone instances this way, I think. But I'm not sure how often\nwould people do that on the same volume, to benefit from the CoW.\n\n>> On 4/3/24 15:39, Jakub Wartak wrote:\n>>> On Mon, Apr 1, 2024 at 9:46 PM Tomas Vondra\n>>> <[email protected]> wrote:\n> [..]\n>> Thanks. Would be great if you could run this on the attached version of\n>> the patches, ideally for each of them independently, so make sure it\n>> doesn't get broken+fixed somewhere on the way.\n> \n> Those are semi-manual test runs (~30 min? per run), the above results\n> are for all of them applied at once. So my take is all of them work\n> each one does individually too.\n> \n\nCool, thanks.\n\n> FWIW, I'm also testing your other offlist incremental backup\n> corruption issue, but that doesnt seem to be related in any way to\n> copy_file_range() patches here.\n> \n\nYes, that's entirely independent, happens with master too.\n\n>>>> 2) prefetch\n>>>> -----------\n>>> [..]\n>>>> I think this means we may need a \"--prefetch\" option, that'd force\n>>>> prefetching, probably both before pread and copy_file_range. Otherwise\n>>>> people on ZFS are doomed and will have poor performance.\n>>>\n>>> Right, we could optionally cover in the docs later-on various options\n>>> to get the performance (on XFS use $this, but without $that and so\n>>> on). It's kind of madness dealing with all those performance\n>>> variations.\n>>>\n>>\n>> Yeah, something like that. 
I'm not sure we want to talk too much about\n>> individual filesystems in our docs, because those things evolve over\n>> time too.\n> \n> Sounds like Wiki then.\n> \n> BTW, after a quick review: could we in 05 have something like common\n> value then (to keep those together via some .h?)\n> \n> #define BATCH_SIZE PREFETCH_TARGET ?\n> \n\nYes, that's one of the things I'd like to refine a bit more. Making it\nmore consistent / clearer that these things are interdependent.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 4 Apr 2024 12:51:31 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_combinebackup --copy-file-range"
},
{
"msg_contents": "Hi,\n\nI have pushed the three patches of this series - the one that aligns\nblocks, and the two adding clone/copy_file_range to pg_combinebackup.\nThe committed versions are pretty much the 2024/04/03 version, with\nvarious minor cleanups (e.g. I noticed the docs are still claiming the\ncopy methods work only without checksum calculations, but that's no\nlonger true). I also changed the parameter order to keep the dry_run and\ndebug parameters last, it seems nicer this way.\n\nThe buildfarm reported two compile-time problems, both of them entirely\navoidable (reported by cfbot but I failed to notice that). Should have\nknown better ...\n\nAnyway, with these patches committed, pg_combinebackup can use CoW stuff\nto combine backups cheaply (especially in disk-space terms).\n\nThe first patch (block alignment) however turned out to be important\neven for non-CoW filesystems, in some cases. I did a lot of benchmarks\nwith the standard block-by-block copying of data, and on a machine with\nSSD RAID storage the duration went from ~400 seconds for some runs to\nonly about 150 seconds (with aligned blocks). My explanation is that\nwith the misaligned blocks the RAID often has to access two devices to\nread a block, and the alignment makes that go away.\n\nIn the attached PDF with results (duration.pdf), showing the duration of\npg_combinebackup on an increment of a particular size (1%, 10% or 20%),\nthis is visible as a green square on the right. Those columns are\nresults relative to a baseline - which for \"copy\" is master before the\nblock alignment patch, and for \"copy_file_range\" it's the 3-reconstruct\n(adding copy_file_range to combining blocks from increments).\n\nFWIW the last three columns are a comparison with prefetching enabled.\n\nThere's a couple interesting observations from this, based on which I'm\nnot going to try to get the remaining patches (batching and prefetching)\ninto v17. It clearly needs more analysis to make the right tradeoff.\n\n From the table, I think it's clear that:\n\n0) The impact of block alignment on RAID storage, with regular copy.\n\n1) The batching (origina patch 0005) either does not help the regular\ncopy, or it actually makes it slower. The PDF is a bit misleading\nbecause it seems to suggest the i5 machine is unaffected, while the xeon\ngets ~30% slower. But that's just an illusion - the comparison is to\nmaster, but the alignment patch made i5 about 2x faster. So it's 200%\nslower when compared to \"current master\" with the alignment patch.\n\nThat's not great :-/ And also a bit strange - I would have expected the\nbatching to help the simple copy too. I haven't looked into why this\nhappens, so there's a chance I made some silly mistake, who knows.\n\nFor the copy_file_range case the batching is usually very beneficial,\nsometimes reducing the duration to a fraction of the non-batched case.\n\nMy interpretation is that (unless there's a bug in the patch) we may\nneed two variants of that code - a non-batched one for regular copy, and\na batched variant for copy_file_range.\n\n2) The prefetching is not a huge improvement, at least not for these\nthree filesystems (btrfs, ext4, xfs). From the color scale it might seem\nlike it helps, but those values are relative to the baseline, so when\nthe non-prefetching value is 5% and with prefetching 10%, that means the\nprefetching makes it slower. 
And that's very often true.\n\nThis is visible more clearly in prefetching.pdf, comparing the\nnon-prefetching and prefetching results for each patch, not to baseline.\nThat's makes it quite clear there's a lot of \"red\" where prefetching\nmakes it slower. It certainly does help for larger increments (which\nmakes sense, because the modified blocks are distributed randomly, and\nthus come from random files, making long streaks unlikely).\n\nI've imagined the prefetching could be made a bit smarter to ignore the\nstreaks (=sequential patterns), but once again - this only matters with\nthe batching, which we don't have. And without the batching it looks\nlike a net loss (that's the first column in the prefetching PDF).\n\nI did start thinking about prefetching because of ZFS, where it was\nnecessary to get decent performance. And that's still true. But (a) even\nwith the explicit prefetching it's still 2-3x slower than any of these\nfilesystems, so I assume performance-sensitive use cases won't use it.\nAnd (b) the prefetching seems necessary in all cases, no matter how\nlarge the increment is. Which goes directly against the idea of looking\nat how random the blocks are and prefetching only the sufficiently\nrandom patterns. That doesn't seem like a great thing.\n\n3) There's also the question of disk space usage. The size.pdf shows how\nthe patches affect space needed for the pg_combinebackup result. It does\ndepend a bit on the internal fs cleanup for each run, but it seems the\nbatching makes a difference - clearly copying 1MB blocks instead of 8kB\nallows lower overhead for some filesystems (e.g. btrfs, where we get\nfrom ~1.5GB to a couple MBs). But the space savings are quite negligible\ncompared to just using --copy-file-range option (where we get from 75GB\nto 1.5GB). I think the batching is interesting mostly because of the\nsubstantial duration reduction.\n\nI'm also attaching the benchmarking script I used (warning: ugly!), and\nresults for the three filesystems. For ZFS I only have partial results\nso far, because it's so slow, but in general - without prefetching it's\nslow (~1000s) with prefetching it's better but still slow (~250s).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 5 Apr 2024 21:43:18 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_combinebackup --copy-file-range"
},
{
"msg_contents": "On 4/5/24 21:43, Tomas Vondra wrote:\n> Hi,\n> \n> ...\n> \n> 2) The prefetching is not a huge improvement, at least not for these\n> three filesystems (btrfs, ext4, xfs). From the color scale it might seem\n> like it helps, but those values are relative to the baseline, so when\n> the non-prefetching value is 5% and with prefetching 10%, that means the\n> prefetching makes it slower. And that's very often true.\n> \n> This is visible more clearly in prefetching.pdf, comparing the\n> non-prefetching and prefetching results for each patch, not to baseline.\n> That's makes it quite clear there's a lot of \"red\" where prefetching\n> makes it slower. It certainly does help for larger increments (which\n> makes sense, because the modified blocks are distributed randomly, and\n> thus come from random files, making long streaks unlikely).\n> \n> I've imagined the prefetching could be made a bit smarter to ignore the\n> streaks (=sequential patterns), but once again - this only matters with\n> the batching, which we don't have. And without the batching it looks\n> like a net loss (that's the first column in the prefetching PDF).\n> \n> I did start thinking about prefetching because of ZFS, where it was\n> necessary to get decent performance. And that's still true. But (a) even\n> with the explicit prefetching it's still 2-3x slower than any of these\n> filesystems, so I assume performance-sensitive use cases won't use it.\n> And (b) the prefetching seems necessary in all cases, no matter how\n> large the increment is. Which goes directly against the idea of looking\n> at how random the blocks are and prefetching only the sufficiently\n> random patterns. That doesn't seem like a great thing.\n> \n\nI finally got a more complete ZFS results, and I also decided to get\nsome numbers without the ZFS tuning I did. And boy oh boy ...\n\nAll the tests I did with ZFS were tuned the way I've seen recommended\nwhen using ZFS for PostgreSQL, that is\n\n zfs set recordsize=8K logbias=throughput compression=none\n\nand this performed quite poorly - pg_combinebackup took 4-8x longer than\nwith the traditional filesystems (btrfs, xfs, ext4) and the only thing\nthat improved that measurably was prefetching.\n\nBut once I reverted back to the default recordsize of 128kB the\nperformance is waaaaaay better - entirely comparable to ext4/xfs, while\nbtrfs remains faster with --copy-file-range --no-manigest (by a factor\nof 2-3x).\n\nThis is quite clearly visible in the attached \"current.pdf\" which shows\nresults for the current master (i.e. filtered to the 3-reconstruct patch\nadding CoW stuff to write_reconstructed_file).\n\nThere's also some differences in the disk usage, where ZFS seems to need\nmore space than xfs/btrfs (as if there was no block sharing), but maybe\nthat's due to how I measure this using df ...\n\nI also tried also \"completely default\" ZFS configuration, with all\noptions left at the default (recordsize=128kB, compression=lz4, and\nlogbias=latency). That performs about the same, except that the disk\nusage is lower thanks to the compression.\n\nnote: Because I'm hip cool kid, I also ran the tests on bcachefs. The\nresults are included in the CSV/PDF attachments. In general it's much\nslower than xfs/btrfs/ext, and the disk space is somewhere in between\nbtrfs and xfs (for the CoW cases). We'll see how this improves as it\nmatures in the future.\n\nThe attachments are tables with the total duration / disk space usage,\nand impact of prefetching. 
The tables are similar to what I shared\nbefore, except that the color scale is applied to the values directly.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 7 Apr 2024 19:46:59 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_combinebackup --copy-file-range"
},
{
"msg_contents": "On 4/7/24 19:46, Tomas Vondra wrote:\n> On 4/5/24 21:43, Tomas Vondra wrote:\n>> Hi,\n>>\n>> ...\n>>\n>> 2) The prefetching is not a huge improvement, at least not for these\n>> three filesystems (btrfs, ext4, xfs). From the color scale it might seem\n>> like it helps, but those values are relative to the baseline, so when\n>> the non-prefetching value is 5% and with prefetching 10%, that means the\n>> prefetching makes it slower. And that's very often true.\n>>\n>> This is visible more clearly in prefetching.pdf, comparing the\n>> non-prefetching and prefetching results for each patch, not to baseline.\n>> That's makes it quite clear there's a lot of \"red\" where prefetching\n>> makes it slower. It certainly does help for larger increments (which\n>> makes sense, because the modified blocks are distributed randomly, and\n>> thus come from random files, making long streaks unlikely).\n>>\n>> I've imagined the prefetching could be made a bit smarter to ignore the\n>> streaks (=sequential patterns), but once again - this only matters with\n>> the batching, which we don't have. And without the batching it looks\n>> like a net loss (that's the first column in the prefetching PDF).\n>>\n>> I did start thinking about prefetching because of ZFS, where it was\n>> necessary to get decent performance. And that's still true. But (a) even\n>> with the explicit prefetching it's still 2-3x slower than any of these\n>> filesystems, so I assume performance-sensitive use cases won't use it.\n>> And (b) the prefetching seems necessary in all cases, no matter how\n>> large the increment is. Which goes directly against the idea of looking\n>> at how random the blocks are and prefetching only the sufficiently\n>> random patterns. That doesn't seem like a great thing.\n>>\n> \n> I finally got a more complete ZFS results, and I also decided to get\n> some numbers without the ZFS tuning I did. And boy oh boy ...\n> \n> All the tests I did with ZFS were tuned the way I've seen recommended\n> when using ZFS for PostgreSQL, that is\n> \n> zfs set recordsize=8K logbias=throughput compression=none\n> \n> and this performed quite poorly - pg_combinebackup took 4-8x longer than\n> with the traditional filesystems (btrfs, xfs, ext4) and the only thing\n> that improved that measurably was prefetching.\n> \n> But once I reverted back to the default recordsize of 128kB the\n> performance is waaaaaay better - entirely comparable to ext4/xfs, while\n> btrfs remains faster with --copy-file-range --no-manigest (by a factor\n> of 2-3x).\n> \n\nI forgot to explicitly say that I think confirms the decision to not\npush the patch adding the explicit prefetching to pg_combinebackup. It's\nnot needed/beneficial even for ZFS, when using a suitable configuration.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 7 Apr 2024 19:53:08 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_combinebackup --copy-file-range"
}
] |
[
{
"msg_contents": "Hi all,\n\nI am facing very unusual behaviour in our production postgres system\nrunning on pg15 and deployed on AWS.\npgwal is mounted on a separate EBS drive with size of 100 GBs (on disk size\nas seen from df -hT).\nmax_wal_size = 95 GB, min_wal_size = 1 GB, checkpoint_completion_target =\n0.9\n\nIssue is that pgwal size increased beyond 95 GB limits, and reached max\ndrive capacity of 100 GBs while ingesting huge data via copy insert.\nAlthough the root cause of the problem to me looks like ebs volume size is\ntoo close to max wal size and by the time limit was breached and check\npoint started, it was too late as WAL writing continued to happen and\nreached max limit of drive, would like to know if someone has any ideas of\nthis.\nAlso, should we add a suggestion in doc regarding disk space and max wal\nsize as max wal size looks more like an upper limit which WAL is unlikely\nto surpass(by large margin).\n\nHappy to revert with more details if required.\n\nThanks,\nAnkit\n\nHi all,I am facing very unusual behaviour in our production postgres system running on pg15 and deployed on AWS.pgwal is mounted on a separate EBS drive with size of 100 GBs (on disk size as seen from df -hT).max_wal_size = 95 GB, min_wal_size = 1 GB, checkpoint_completion_target = 0.9Issue is that pgwal size increased beyond 95 GB limits, and reached max drive capacity of 100 GBs while ingesting huge data via copy insert.Although the root cause of the problem to me looks like ebs volume size is too close to max wal size and by the time limit was breached and check point started, it was too late as WAL writing continued to happen and reached max limit of drive, would like to know if someone has any ideas of this.Also, should we add a suggestion in doc regarding disk space and max wal size as max wal size looks more like an upper limit which WAL is unlikely to surpass(by large margin).Happy to revert with more details if required.Thanks,Ankit",
"msg_date": "Sat, 3 Jun 2023 01:12:32 +0530",
"msg_from": "Ankit Pandey <[email protected]>",
"msg_from_op": true,
"msg_subject": "[Question] pgwal increasing over max_wal_size"
}
] |
[
{
"msg_contents": "Hello hackers\n\nAttached is my first patch for PostgreSQL, which is a simple one-liner\nthat I believe can improve the code.\n\nIn the \"join_search_one_level\" function, I noticed that the variable\n\"other_rels_list\" always refers to \"joinrels[1]\" even when the (level\n== 2) condition is met. I propose changing:\n\n\"other_rels_list = joinrels[level - 1]\" to \"other_rels_list = joinrels[1]\"\n\nThis modification aims to enhance clarity and avoid unnecessary instructions.\n\nI would greatly appreciate any review and feedback on this patch as I\nam a newcomer to PostgreSQL contributions. Your input will help me\nimprove and contribute effectively to the project.\n\nI have followed the excellent guide \"How to submit a patch by email,\n2023 edition\" by Peter Eisentraut.\n\nAdditionally, if anyone has any tips on ensuring that Gmail recognizes\nmy attached patches as the \"text/x-patch\" MIME type when sending them\nfrom the Chrome client, I would be grateful for the advice.\n\nOr maybe the best practice is to use Git send-email ?\n\nThank you for your time.\n\nBest regards\nAlex Hsieh",
"msg_date": "Sat, 3 Jun 2023 17:24:43 +0800",
"msg_from": "=?UTF-8?B?6Kyd5p2x6ZyW?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Improve join_search_one_level readibilty (one line change)"
},
{
"msg_contents": "Hi,\n\nOn Sat, Jun 03, 2023 at 05:24:43PM +0800, 謝東霖 wrote:\n>\n> Attached is my first patch for PostgreSQL, which is a simple one-liner\n> that I believe can improve the code.\n\nWelcome!\n\n> In the \"join_search_one_level\" function, I noticed that the variable\n> \"other_rels_list\" always refers to \"joinrels[1]\" even when the (level\n> == 2) condition is met. I propose changing:\n>\n> \"other_rels_list = joinrels[level - 1]\" to \"other_rels_list = joinrels[1]\"\n>\n> This modification aims to enhance clarity and avoid unnecessary instructions.\n\nAgreed. It looks like it was originally introduced as mechanical changes in a\nbigger patch. It would probably be better to move the other_rels_list\ninitialization out of the if instruction and put it with the variable\ndeclaration, to make it even clearer. I'm not that familiar with this area of\nthe code so hopefully someone else will comment.\n\n> I would greatly appreciate any review and feedback on this patch as I\n> am a newcomer to PostgreSQL contributions. Your input will help me\n> improve and contribute effectively to the project.\n\nI think you did everything as needed! You should consider adding you patch to\nthe next opened commitfest (https://commitfest.postgresql.org/43/) if you\nhaven't already, to make sure it won't be forgotten, even if it's a one-liner.\nIt will then also be regularly tested by the cfbot (http://cfbot.cputube.org/).\n\nIf needed, you can also test the same CI jobs (covering multiple OS) using your\npersonal github account, see\nhttps://github.com/postgres/postgres/blob/master/src/tools/ci/README on details\nto set it up.\n\nBest practice is also to review a patch of similar difficulty when sending one.\nYou can look at the same commit fest entry if anything interests you, and\nregister as a reviewer.\n\n> Additionally, if anyone has any tips on ensuring that Gmail recognizes\n> my attached patches as the \"text/x-patch\" MIME type when sending them\n> from the Chrome client, I would be grateful for the advice.\n\nI don't see any problem with the attachment. You can always check looking at\nthe online archive for that, for instance for your email:\nhttps://www.postgresql.org/message-id/CANWNU8x9P9aCXGF=aT-A_8mLTAT0LkcZ_ySYrGbcuHzMQw2-1g@mail.gmail.com\n\nAs far as I know only apple mail is problematic with that regards, as it\ndoesn't send attachments as attachments.\n\n> Or maybe the best practice is to use Git send-email ?\n\nI don't think anyone uses git send-email on this mailing list. We usually\nprefer to manually attach patch(es), possibly compressing them if they're big,\nand then keep all the discussion and new revisions on the same thread.\n\n\n",
"msg_date": "Sun, 4 Jun 2023 14:02:49 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve join_search_one_level readibilty (one line change)"
},
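A minimal sketch of the shape Julien suggests above, based on the block of join_search_one_level() quoted later in this thread (not the exact patch as posted):

```
			/* sketch: initialization hoisted next to the declaration */
			List	   *other_rels_list = joinrels[1];	/* the initial rels */
			ListCell   *other_rels;

			if (level == 2)		/* consider remaining initial rels */
				other_rels = lnext(other_rels_list, r);
			else				/* consider all initial rels */
				other_rels = list_head(other_rels_list);
```

Here r is the ListCell from the enclosing loop over joinrels[level - 1], exactly as in the existing code; only the placement of the other_rels_list assignment changes.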
{
"msg_contents": "謝東霖 <[email protected]> 于2023年6月3日周六 23:21写道:\n\n> Hello hackers\n>\n> Attached is my first patch for PostgreSQL, which is a simple one-liner\n> that I believe can improve the code.\n>\n> In the \"join_search_one_level\" function, I noticed that the variable\n> \"other_rels_list\" always refers to \"joinrels[1]\" even when the (level\n> == 2) condition is met. I propose changing:\n>\n> \"other_rels_list = joinrels[level - 1]\" to \"other_rels_list = joinrels[1]\"\n>\n> This modification aims to enhance clarity and avoid unnecessary\n> instructions.\n>\n\nI guess compiler can make that code more efficiency. But from the point of\ncode readibilty, I agree with you.\nAs Julien Rouhaud said, it had better to to move the other_rels_list\ninitialization out of the if instruction and put it with the variable\ndeclaration.\n\nI would greatly appreciate any review and feedback on this patch as I\n> am a newcomer to PostgreSQL contributions. Your input will help me\n> improve and contribute effectively to the project.\n>\n> I have followed the excellent guide \"How to submit a patch by email,\n> 2023 edition\" by Peter Eisentraut.\n>\n> Additionally, if anyone has any tips on ensuring that Gmail recognizes\n> my attached patches as the \"text/x-patch\" MIME type when sending them\n> from the Chrome client, I would be grateful for the advice.\n>\n> Or maybe the best practice is to use Git send-email ?\n>\n> Thank you for your time.\n>\n> Best regards\n> Alex Hsieh\n>\n\n謝東霖 <[email protected]> 于2023年6月3日周六 23:21写道:Hello hackers\n\nAttached is my first patch for PostgreSQL, which is a simple one-liner\nthat I believe can improve the code.\n\nIn the \"join_search_one_level\" function, I noticed that the variable\n\"other_rels_list\" always refers to \"joinrels[1]\" even when the (level\n== 2) condition is met. I propose changing:\n\n\"other_rels_list = joinrels[level - 1]\" to \"other_rels_list = joinrels[1]\"\n\nThis modification aims to enhance clarity and avoid unnecessary instructions. I guess compiler can make that code more efficiency. But from the point of code readibilty, I agree with you.As Julien Rouhaud said, it had better to to move the other_rels_listinitialization out of the if instruction and put it with the variable declaration.\nI would greatly appreciate any review and feedback on this patch as I\nam a newcomer to PostgreSQL contributions. Your input will help me\nimprove and contribute effectively to the project.\n\nI have followed the excellent guide \"How to submit a patch by email,\n2023 edition\" by Peter Eisentraut.\n\nAdditionally, if anyone has any tips on ensuring that Gmail recognizes\nmy attached patches as the \"text/x-patch\" MIME type when sending them\nfrom the Chrome client, I would be grateful for the advice.\n\nOr maybe the best practice is to use Git send-email ?\n\nThank you for your time.\n\nBest regards\nAlex Hsieh",
"msg_date": "Tue, 6 Jun 2023 10:09:12 +0800",
"msg_from": "tender wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve join_search_one_level readibilty (one line change)"
},
{
"msg_contents": "Thank you to Julien Rouhaud and Tender Wang for the reviews.\n\nJulien's detailed guide has proven to be incredibly helpful, and I am\ntruly grateful for it.\nThank you so much for providing such valuable guidance!\n\nI have initiated a new commitfest:\nhttps://commitfest.postgresql.org/43/4346/\n\nFurthermore, I have attached a patch that improves the code by moving\nthe initialization of \"other_rels_list\" outside the if branching.\n\nPerhaps Tom Lane would be interested in reviewing this minor change as well?",
"msg_date": "Tue, 6 Jun 2023 16:18:30 +0800",
"msg_from": "=?UTF-8?B?6Kyd5p2x6ZyW?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve join_search_one_level readibilty (one line change)"
},
{
"msg_contents": "On Tue, 6 Jun 2023, 16:18 謝東霖, <[email protected]> wrote:\n\n> Thank you to Julien Rouhaud and Tender Wang for the reviews.\n>\n> Julien's detailed guide has proven to be incredibly helpful, and I am\n> truly grateful for it.\n> Thank you so much for providing such valuable guidance!\n>\n> I have initiated a new commitfest:\n> https://commitfest.postgresql.org/43/4346/\n>\n> Furthermore, I have attached a patch that improves the code by moving\n> the initialization of \"other_rels_list\" outside the if branching.\n>\n\nI'm glad I could help! Thanks for creating the cf entry. Note however that\nthe cfbot ignores files with a .txt extension (I don't think it's\ndocumented but it will mostly handle files with diff, patch, gz(ip), tar\nextensions IIRC, processing them as needed depending on the extension), so\nyou should send v2 again with a supported extension, otherwise the cfbot\nwill keep testing your original patch.\n\n>\n\nOn Tue, 6 Jun 2023, 16:18 謝東霖, <[email protected]> wrote:Thank you to Julien Rouhaud and Tender Wang for the reviews.\n\nJulien's detailed guide has proven to be incredibly helpful, and I am\ntruly grateful for it.\nThank you so much for providing such valuable guidance!\n\nI have initiated a new commitfest:\nhttps://commitfest.postgresql.org/43/4346/\n\nFurthermore, I have attached a patch that improves the code by moving\nthe initialization of \"other_rels_list\" outside the if branching.I'm glad I could help! Thanks for creating the cf entry. Note however that the cfbot ignores files with a .txt extension (I don't think it's documented but it will mostly handle files with diff, patch, gz(ip), tar extensions IIRC, processing them as needed depending on the extension), so you should send v2 again with a supported extension, otherwise the cfbot will keep testing your original patch.",
"msg_date": "Wed, 7 Jun 2023 10:48:37 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve join_search_one_level readibilty (one line change)"
},
{
"msg_contents": "Julien Rouhaud <[email protected]> writes:\n> I'm glad I could help! Thanks for creating the cf entry. Note however that\n> the cfbot ignores files with a .txt extension (I don't think it's\n> documented but it will mostly handle files with diff, patch, gz(ip), tar\n> extensions IIRC, processing them as needed depending on the extension),\n\nDocumented in the cfbot FAQ:\n\nhttps://wiki.postgresql.org/wiki/Cfbot#Which_attachments_are_considered_to_be_patches.3F\n\nwhich admittedly is a page a lot of people don't know about.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Jun 2023 22:58:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve join_search_one_level readibilty (one line change)"
},
{
"msg_contents": "Thank you, Julien, for letting me know that cfbot doesn't test txt files.\nMuch appreciated!",
"msg_date": "Wed, 7 Jun 2023 11:02:09 +0800",
"msg_from": "=?UTF-8?B?6Kyd5p2x6ZyW?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve join_search_one_level readibilty (one line change)"
},
{
"msg_contents": "On 04.06.23 08:02, Julien Rouhaud wrote:\n>> Additionally, if anyone has any tips on ensuring that Gmail recognizes\n>> my attached patches as the \"text/x-patch\" MIME type when sending them\n>> from the Chrome client, I would be grateful for the advice.\n> I don't see any problem with the attachment. You can always check looking at\n> the online archive for that, for instance for your email:\n> https://www.postgresql.org/message-id/CANWNU8x9P9aCXGF=aT-A_8mLTAT0LkcZ_ySYrGbcuHzMQw2-1g@mail.gmail.com\n\nThat shows exactly the problem being complained about.\n\n\n",
"msg_date": "Wed, 7 Jun 2023 15:25:21 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve join_search_one_level readibilty (one line change)"
},
{
"msg_contents": "Peter Eisentraut <[email protected]>\n> That shows exactly the problem being complained about.\n\nI apologize for not using the correct MIME type in my previous email\nto the pg-hackers mailing list. Upon sending the first email, I\nrealized that my patch was labeled as \"application/x-patch\" instead of\n\"text/x-patch.\"\n\nTo make it more convenient for others to read the patch in the mail\narchives, I changed the file extension of my v2 patch to \".txt.\" (\nhttps://www.postgresql.org/message-id/CANWNU8xm07jYUHxGh3XNHtcY37z%2B56-6bDD4piPt6%3DKidiHshQ%40mail.gmail.com\n)\n\nHowever, I encountered an issue where cfbot did not apply the \".txt\"\nfile. I am still interested in learning **how to submit patches with\nthe proper \"text/x-patch\" MIME type using Gmail.**\n\n I have noticed that some individuals have successfully used Gmail to\nsubmit patches with the correct MIME type.\n\nIf anyone can provide assistance with this matter, I would be\ngrateful. I am willing to contribute any helpful tips regarding using\nGmail for patch submission to the Postgres wiki.\n\nI believe it is something like Mutt, right?\n\n\n",
"msg_date": "Wed, 7 Jun 2023 23:05:19 +0800",
"msg_from": "=?UTF-8?B?6Kyd5p2x6ZyW?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve join_search_one_level readibilty (one line change)"
},
{
"msg_contents": "On Wed Jun 7, 2023 at 10:05 AM CDT, 謝東霖 wrote:\n> Peter Eisentraut <[email protected]>\n> > That shows exactly the problem being complained about.\n>\n> I apologize for not using the correct MIME type in my previous email\n> to the pg-hackers mailing list. Upon sending the first email, I\n> realized that my patch was labeled as \"application/x-patch\" instead of\n> \"text/x-patch.\"\n>\n> To make it more convenient for others to read the patch in the mail\n> archives, I changed the file extension of my v2 patch to \".txt.\" (\n> https://www.postgresql.org/message-id/CANWNU8xm07jYUHxGh3XNHtcY37z%2B56-6bDD4piPt6%3DKidiHshQ%40mail.gmail.com\n> )\n>\n> However, I encountered an issue where cfbot did not apply the \".txt\"\n> file. I am still interested in learning **how to submit patches with\n> the proper \"text/x-patch\" MIME type using Gmail.**\n>\n> I have noticed that some individuals have successfully used Gmail to\n> submit patches with the correct MIME type.\n>\n> If anyone can provide assistance with this matter, I would be\n> grateful. I am willing to contribute any helpful tips regarding using\n> Gmail for patch submission to the Postgres wiki.\n>\n> I believe it is something like Mutt, right?\n\nMight be a good idea to reach out to the people directly about how they\ndo it.\n\nI am using Gmail with this work account, but I am using aerc to interact\nwith the list. I could rant all day about Gmail :). All of my patches\nseem to get the right MIME types. Could be worth looking into if you\nlike terminal-based workflows at all.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 07 Jun 2023 10:26:59 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve join_search_one_level readibilty (one line change)"
},
{
"msg_contents": "Hi,\n\nOn Wed, Jun 07, 2023 at 11:02:09AM +0800, 謝東霖 wrote:\n> Thank you, Julien, for letting me know that cfbot doesn't test txt files.\n> Much appreciated!\n\nThanks for posting this v2!\n\nSo unsurprisingly the cfbot is happy with this patch, since it doesn't change\nthe behavior at all. I just have some nitpicking:\n\n@@ -109,14 +109,14 @@ join_search_one_level(PlannerInfo *root, int level)\n \t\t\tList\t *other_rels_list;\n \t\t\tListCell *other_rels;\n\n+\t\t\tother_rels_list = joinrels[1];\n+\n \t\t\tif (level == 2)\t\t/* consider remaining initial rels */\n \t\t\t{\n-\t\t\t\tother_rels_list = joinrels[level - 1];\n \t\t\t\tother_rels = lnext(other_rels_list, r);\n \t\t\t}\n \t\t\telse\t\t\t\t/* consider all initial rels */\n \t\t\t{\n-\t\t\t\tother_rels_list = joinrels[1];\n \t\t\t\tother_rels = list_head(other_rels_list);\n \t\t\t}\n\n\nSince each branch only has a single instruction after the change the curly\nbraces aren't needed anymore. The only reason keep them is if it helps\nreadability (like if there's a big comment associated), but that's not the case\nhere so it would be better to get rid of them.\n\nApart from that +1 from me for the patch, I think it helps focusing the\nattention on what actually matters here.\n\n\n",
"msg_date": "Mon, 31 Jul 2023 21:48:37 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve join_search_one_level readibilty (one line change)"
},
{
"msg_contents": "On Tue, 1 Aug 2023 at 01:48, Julien Rouhaud <[email protected]> wrote:\n> Apart from that +1 from me for the patch, I think it helps focusing the\n> attention on what actually matters here.\n\nI think it's worth doing something to improve this code. However, I\nthink we should go a bit further than what the proposed patch does.\n\nIn 1cff1b95a, Tom changed the signature of make_rels_by_clause_joins\nto pass the List that the given ListCell belongs to. This was done\nbecause lnext(), as of that commit, requires the owning List to be\npassed to the function, where previously when Lists were linked lists,\nthat wasn't required.\n\nThe whole lnext() stuff all feels a bit old now that Lists are arrays.\nI think we'd be better adjusting the code to pass the List index where\nwe start from rather than the ListCell to start from. That way we can\nuse for_each_from() to iterate rather than for_each_cell(). What's\nthere today feels a bit crufty and there's some element of danger that\nthe given ListCell does not even belong to the given List.\n\nDoing this seems to shrink down the assembly a bit:\n\n$ wc -l joinrels*\n 3344 joinrels_tidyup.s\n 3363 joinrels_unpatched.s\n\nI also see a cmovne in joinrels_tidyup.s, which means that there are\nfewer jumps which makes things a little better for the branch\npredictor as there's fewer jumps. I doubt this is going to be\nperformance critical, but it's a nice extra bonus to go along with the\ncleanup.\n\nold:\ncmpl $2, 24(%rsp)\nje .L616\n\nnew:\ncmpl $2, 16(%rsp)\ncmovne %edx, %eax\n\nDavid",
"msg_date": "Fri, 4 Aug 2023 14:36:15 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve join_search_one_level readibilty (one line change)"
},
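For illustration, a hypothetical sketch of the direction David describes above -- passing a start index into make_rels_by_clause_joins() and iterating with for_each_from() instead of carrying a ListCell around. The function name and the calls in the loop body are taken from the existing code; the signature is an assumption and the actual committed form may differ:

```
/* Sketch only: iterate from a list index instead of a caller-supplied ListCell */
static void
make_rels_by_clause_joins(PlannerInfo *root, RelOptInfo *old_rel,
						  List *other_rels, int first_rel_idx)
{
	ListCell   *l;

	for_each_from(l, other_rels, first_rel_idx)
	{
		RelOptInfo *other_rel = (RelOptInfo *) lfirst(l);

		if (!bms_overlap(old_rel->relids, other_rel->relids) &&
			(have_relevant_joinclause(root, old_rel, other_rel) ||
			 have_join_order_restriction(root, old_rel, other_rel)))
			(void) make_join_rel(root, old_rel, other_rel);
	}
}
```

With an integer start position there is no ListCell that could accidentally belong to a different List, which addresses the "element of danger" mentioned above.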
{
"msg_contents": "On Fri, Aug 4, 2023 at 10:36 AM David Rowley <[email protected]> wrote:\n\n> The whole lnext() stuff all feels a bit old now that Lists are arrays.\n> I think we'd be better adjusting the code to pass the List index where\n> we start from rather than the ListCell to start from. That way we can\n> use for_each_from() to iterate rather than for_each_cell(). What's\n> there today feels a bit crufty and there's some element of danger that\n> the given ListCell does not even belong to the given List.\n\n\nI think we can go even further to do the same for 'bushy plans' case,\nlike the attached.\n\nThanks\nRichard",
"msg_date": "Fri, 4 Aug 2023 12:05:14 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve join_search_one_level readibilty (one line change)"
},
{
"msg_contents": "On Fri, 4 Aug 2023 at 16:05, Richard Guo <[email protected]> wrote:\n>\n>\n> On Fri, Aug 4, 2023 at 10:36 AM David Rowley <[email protected]> wrote:\n>>\n>> The whole lnext() stuff all feels a bit old now that Lists are arrays.\n>> I think we'd be better adjusting the code to pass the List index where\n>> we start from rather than the ListCell to start from. That way we can\n>> use for_each_from() to iterate rather than for_each_cell(). What's\n>> there today feels a bit crufty and there's some element of danger that\n>> the given ListCell does not even belong to the given List.\n>\n>\n> I think we can go even further to do the same for 'bushy plans' case,\n> like the attached.\n\nSeems like a good idea to me. I've pushed that patch.\n\nAlex, many thanks for highlighting this and posting a patch to fix it.\nCongratulations on your first patch being committed.\n\nDavid\n\n\n",
"msg_date": "Sun, 6 Aug 2023 21:55:18 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve join_search_one_level readibilty (one line change)"
}
] |
[
{
"msg_contents": "Hi,\n\nThis is for Postgres 17 (head).\n\nPer Coverity.\nAt function print_unaligned_text, variable \"need_recordsep\", is\nunnecessarily set to true and false.\n\nAttached a trivial fix patch.\n\nregards,\nRanier Vilela",
"msg_date": "Sat, 3 Jun 2023 07:14:57 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Avoid unused value (src/fe_utils/print.c)"
},
{
"msg_contents": "Hello Ranier,\n\n03.06.2023 13:14, Ranier Vilela wrote:\n> Hi,\n>\n> This is for Postgres 17 (head).\n>\n> Per Coverity.\n> At function print_unaligned_text, variable \"need_recordsep\", is\n> unnecessarily set to true and false.\n\nClang' scan-build detects 58 errors \"Dead assignment\", including that one.\nMaybe it would be more sensible to eliminate all errors of this class?\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sat, 3 Jun 2023 15:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unused value (src/fe_utils/print.c)"
},
{
"msg_contents": "Em sáb., 3 de jun. de 2023 às 09:00, Alexander Lakhin <[email protected]>\nescreveu:\n\n> Hello Ranier,\n>\n> 03.06.2023 13:14, Ranier Vilela wrote:\n> > Hi,\n> >\n> > This is for Postgres 17 (head).\n> >\n> > Per Coverity.\n> > At function print_unaligned_text, variable \"need_recordsep\", is\n> > unnecessarily set to true and false.\n>\n> Clang' scan-build detects 58 errors \"Dead assignment\", including that one.\n> Maybe it would be more sensible to eliminate all errors of this class?\n>\nHi Alexander,\n\nSure.\nI hope that when you or I are a committer,\nwe can fix a whole class of bugs together.\n\nbest regards,\nRanier Vilela\n\nEm sáb., 3 de jun. de 2023 às 09:00, Alexander Lakhin <[email protected]> escreveu:Hello Ranier,\n\n03.06.2023 13:14, Ranier Vilela wrote:\n> Hi,\n>\n> This is for Postgres 17 (head).\n>\n> Per Coverity.\n> At function print_unaligned_text, variable \"need_recordsep\", is\n> unnecessarily set to true and false.\n\nClang' scan-build detects 58 errors \"Dead assignment\", including that one.\nMaybe it would be more sensible to eliminate all errors of this class?Hi Alexander,Sure. I hope that when you or I are a committer, we can fix a whole class of bugs together.best regards,Ranier Vilela",
"msg_date": "Sat, 3 Jun 2023 09:46:04 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unused value (src/fe_utils/print.c)"
},
{
"msg_contents": "On Sat, Jun 03, 2023 at 03:00:01PM +0300, Alexander Lakhin wrote:\n> Clang' scan-build detects 58 errors \"Dead assignment\", including that one.\n> Maybe it would be more sensible to eliminate all errors of this class?\n\nDepends on if this makes any code changed a bit easier to understand I\nguess, so that would be a case-by-case analysis. Saying that, the\nproposed patch seems right while it makes slightly easier to\nunderstand the footer print part.\n--\nMichael",
"msg_date": "Sat, 3 Jun 2023 18:42:57 -0400",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unused value (src/fe_utils/print.c)"
},
{
"msg_contents": "Hi Michael,\n\n04.06.2023 01:42, Michael Paquier wrote:\n> On Sat, Jun 03, 2023 at 03:00:01PM +0300, Alexander Lakhin wrote:\n>> Clang' scan-build detects 58 errors \"Dead assignment\", including that one.\n>> Maybe it would be more sensible to eliminate all errors of this class?\n> Depends on if this makes any code changed a bit easier to understand I\n> guess, so that would be a case-by-case analysis. Saying that, the\n> proposed patch seems right while it makes slightly easier to\n> understand the footer print part.\n\nIt also aligns the code with print_unaligned_vertical(), but I can't see why\nneed_recordsep = true;\nis a no-op here (scan-build dislikes only need_recordsep = false;).\nI suspect that removing that line will change the behaviour in cases when\nneed_recordsep = false after the loop \"print cells\" and the loop\n\"for (footers)\" is executed.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sun, 4 Jun 2023 07:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unused value (src/fe_utils/print.c)"
},
{
"msg_contents": "Em dom., 4 de jun. de 2023 às 01:00, Alexander Lakhin <[email protected]>\nescreveu:\n\n> Hi Michael,\n>\n> 04.06.2023 01:42, Michael Paquier wrote:\n> > On Sat, Jun 03, 2023 at 03:00:01PM +0300, Alexander Lakhin wrote:\n> >> Clang' scan-build detects 58 errors \"Dead assignment\", including that\n> one.\n> >> Maybe it would be more sensible to eliminate all errors of this class?\n> > Depends on if this makes any code changed a bit easier to understand I\n> > guess, so that would be a case-by-case analysis. Saying that, the\n> > proposed patch seems right while it makes slightly easier to\n> > understand the footer print part.\n>\n> It also aligns the code with print_unaligned_vertical(), but I can't see\n> why\n> need_recordsep = true;\n> is a no-op here (scan-build dislikes only need_recordsep = false;).\n> I suspect that removing that line will change the behaviour in cases when\n> need_recordsep = false after the loop \"print cells\" and the loop\n> \"for (footers)\" is executed.\n>\nNot sure about this.\nI tested with patch and the results is:\n\npsql -U postgres --no-align\npsql (16beta1)\nWARNING: Console code page (850) differs from Windows code page (1252)\n 8-bit characters might not work correctly. See psql reference\n page \"Notes for Windows users\" for details.\nType \"help\" for help.\n\npostgres=# select * from pgbench_accounts limit 100;\naid|bid|abalance|filler\n1|1|0|\n2|1|0|\n3|1|0|\n4|1|0|\n5|1|0|\n6|1|0|\n7|1|0|\n8|1|0|\n9|1|0|\netc, etc\n\npsql -U postgres --no-align -P recordsep=\";\"\npsql (16beta1)\nWARNING: Console code page (850) differs from Windows code page (1252)\n 8-bit characters might not work correctly. See psql reference\n page \"Notes for Windows users\" for details.\nType \"help\" for help.\n\npostgres=# select * from pgbench_accounts limit 100;\naid|bid|abalance|filler;1|1|0|\n ;2|1|0|\n ;3|1|0|\n\n ;4|1|0|\n ;5|1|0|\n ;6|1|0|\n ;7|1|0|\n ;8|1|0|\n\n ;9|1|0|\n ;10|1|0|\n ;11|1|0|\n ;12|1|0|\n\n ;13|1|0|\n ;14|1|0|\n ;15|1|0|\n ;16|1|0|\n ;17|1|0|\netc, etc\n\nregards,\nRanier Vilela\n\n\n>\n> Best regards,\n> Alexander\n>\n\nEm dom., 4 de jun. de 2023 às 01:00, Alexander Lakhin <[email protected]> escreveu:Hi Michael,\n\r\n04.06.2023 01:42, Michael Paquier wrote:\r\n> On Sat, Jun 03, 2023 at 03:00:01PM +0300, Alexander Lakhin wrote:\r\n>> Clang' scan-build detects 58 errors \"Dead assignment\", including that one.\r\n>> Maybe it would be more sensible to eliminate all errors of this class?\r\n> Depends on if this makes any code changed a bit easier to understand I\r\n> guess, so that would be a case-by-case analysis. Saying that, the\r\n> proposed patch seems right while it makes slightly easier to\r\n> understand the footer print part.\n\r\nIt also aligns the code with print_unaligned_vertical(), but I can't see why\r\nneed_recordsep = true;\r\nis a no-op here (scan-build dislikes only need_recordsep = false;).\r\nI suspect that removing that line will change the behaviour in cases when\r\nneed_recordsep = false after the loop \"print cells\" and the loop\r\n\"for (footers)\" is executed.Not sure about this.I tested with patch and the results is:psql -U postgres --no-alignpsql (16beta1)WARNING: Console code page (850) differs from Windows code page (1252) 8-bit characters might not work correctly. 
See psql reference page \"Notes for Windows users\" for details.Type \"help\" for help.postgres=# select * from pgbench_accounts limit 100;aid|bid|abalance|filler1|1|0|2|1|0|3|1|0|4|1|0|5|1|0|6|1|0|7|1|0|8|1|0|9|1|0|etc, etcpsql -U postgres --no-align -P recordsep=\";\"psql (16beta1)WARNING: Console code page (850) differs from Windows code page (1252) 8-bit characters might not work correctly. See psql reference page \"Notes for Windows users\" for details.Type \"help\" for help.postgres=# select * from pgbench_accounts limit 100;aid|bid|abalance|filler;1|1|0| ;2|1|0| ;3|1|0| ;4|1|0| ;5|1|0| ;6|1|0| ;7|1|0| ;8|1|0| ;9|1|0| ;10|1|0| ;11|1|0| ;12|1|0| ;13|1|0| ;14|1|0| ;15|1|0| ;16|1|0| ;17|1|0|etc, etcregards,Ranier Vilela \n\r\nBest regards,\r\nAlexander",
"msg_date": "Sun, 4 Jun 2023 08:59:00 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unused value (src/fe_utils/print.c)"
},
{
"msg_contents": "Hi,\n\nAlexander wrote:\n\n> It also aligns the code with print_unaligned_vertical(), but I can't see why\n> need_recordsep = true;\n> is a no-op here (scan-build dislikes only need_recordsep = false;).\n> I suspect that removing that line will change the behaviour in cases when\n> need_recordsep = false after the loop \"print cells\" and the loop\n> \"for (footers)\" is executed.\n\nAs I understand cont->cells is supoused to have all cont->ncolumns * cont->nrows\nentries filled so the loop \"print cells\" always assigns need_recordsep = true,\nexcept when there are no cells at all or cancel_pressed == true.\nIf cancel_pressed == true then footers are not printed. So to have\nneed_recordsep == false before the loop \"for (footers)\" table should be empty,\nand need_recordsep should be false before the loop \"print cells\". It can only\nbe false there when cont->opt->start_table == true and opt_tuples_only == true\nso that headers are not printed. But when opt_tuples_only == true footers are\nnot printed either.\n\nSo technically removing \"need_recordsep = true;\" won't change the outcome. But\nit's not obvious, so I also have doubts about removing this line. If someday\nprint options are changed, for example to support printing footers and not\nprinting headers, or anything else changes in this function, the output might\nbe unexpected with this line removed.\n\nBest regards,\nKarina Litskevich\nPostgres Professional: http://postgrespro.com/\n\n\n",
"msg_date": "Fri, 30 Jun 2023 17:25:49 +0300",
"msg_from": "Karina Litskevich <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unused value (src/fe_utils/print.c)"
},
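For readers following the argument, the footer-printing block of print_unaligned_text() under discussion looks roughly like this (the same code is quoted in the Coverity output further down the thread); the trailing comments are annotations, not part of the original source:

```
	if (cont->opt->stop_table)
	{
		printTableFooter *footers = footers_with_default(cont);

		if (!opt_tuples_only && footers != NULL && !cancel_pressed)
		{
			printTableFooter *f;

			for (f = footers; f; f = f->next)
			{
				if (need_recordsep)
				{
					print_separator(cont->opt->recordSep, fout);
					need_recordsep = false;	/* the store Coverity flags as unused */
				}
				fputs(f->data, fout);
				need_recordsep = true;	/* the assignment argued above to be kept */
			}
		}
	}
```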
{
"msg_contents": "Hello Karina,\n\n30.06.2023 17:25, Karina Litskevich wrote:\n> Hi,\n>\n> Alexander wrote:\n>\n>> It also aligns the code with print_unaligned_vertical(), but I can't see why\n>> need_recordsep = true;\n>> is a no-op here (scan-build dislikes only need_recordsep = false;).\n>> I suspect that removing that line will change the behaviour in cases when\n>> need_recordsep = false after the loop \"print cells\" and the loop\n>> \"for (footers)\" is executed.\n> As I understand cont->cells is supoused to have all cont->ncolumns * cont->nrows\n> entries filled so the loop \"print cells\" always assigns need_recordsep = true,\n> except when there are no cells at all or cancel_pressed == true.\n> If cancel_pressed == true then footers are not printed. So to have\n> need_recordsep == false before the loop \"for (footers)\" table should be empty,\n> and need_recordsep should be false before the loop \"print cells\". It can only\n> be false there when cont->opt->start_table == true and opt_tuples_only == true\n> so that headers are not printed. But when opt_tuples_only == true footers are\n> not printed either.\n>\n> So technically removing \"need_recordsep = true;\" won't change the outcome. But\n> it's not obvious, so I also have doubts about removing this line. If someday\n> print options are changed, for example to support printing footers and not\n> printing headers, or anything else changes in this function, the output might\n> be unexpected with this line removed.\n\nI think that the question that should be answered before moving forward\nhere is: what this discussion is expected to result in?\nIf the goal is to avoid an unused value to make Coverity/clang`s scan-build\na little happier, then maybe just don't touch other line, that is not\nrecognized as dead (at least by scan-build; I wonder what Coverity says\nabout that line).\nOtherwise, if the goal is to do review the code and make it cleaner, then\nwhy not get rid of \"if (need_recordsep)\" there?\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 30 Jun 2023 19:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unused value (src/fe_utils/print.c)"
},
{
"msg_contents": "Em sex., 30 de jun. de 2023 às 11:26, Karina Litskevich <\[email protected]> escreveu:\n\n> Hi,\n>\n> Alexander wrote:\n>\n> > It also aligns the code with print_unaligned_vertical(), but I can't see\n> why\n> > need_recordsep = true;\n> > is a no-op here (scan-build dislikes only need_recordsep = false;).\n> > I suspect that removing that line will change the behaviour in cases when\n> > need_recordsep = false after the loop \"print cells\" and the loop\n> > \"for (footers)\" is executed.\n>\n> Hi Karina,\n\n\n> As I understand cont->cells is supoused to have all cont->ncolumns *\n> cont->nrows\n> entries filled so the loop \"print cells\" always assigns need_recordsep =\n> true,\n> except when there are no cells at all or cancel_pressed == true.\n> If cancel_pressed == true then footers are not printed. So to have\n> need_recordsep == false before the loop \"for (footers)\" table should be\n> empty,\n> and need_recordsep should be false before the loop \"print cells\". It can\n> only\n> be false there when cont->opt->start_table == true and opt_tuples_only ==\n> true\n> so that headers are not printed. But when opt_tuples_only == true footers\n> are\n> not printed either.\n>\n> So technically removing \"need_recordsep = true;\" won't change the outcome.\n\nThanks for the confirmation.\n\n\n> But\n> it's not obvious, so I also have doubts about removing this line. If\n> someday\n> print options are changed, for example to support printing footers and not\n> printing headers, or anything else changes in this function, the output\n> might\n> be unexpected with this line removed.\n\n\nThat part I didn't understand.\nHow are we going to make this function less readable by removing the\ncomplicating part.\n\nbest regards,\nRanier Vilela\n\nEm sex., 30 de jun. de 2023 às 11:26, Karina Litskevich <[email protected]> escreveu:Hi,\n\nAlexander wrote:\n\n> It also aligns the code with print_unaligned_vertical(), but I can't see why\n> need_recordsep = true;\n> is a no-op here (scan-build dislikes only need_recordsep = false;).\n> I suspect that removing that line will change the behaviour in cases when\n> need_recordsep = false after the loop \"print cells\" and the loop\n> \"for (footers)\" is executed.\nHi Karina, \nAs I understand cont->cells is supoused to have all cont->ncolumns * cont->nrows\nentries filled so the loop \"print cells\" always assigns need_recordsep = true,\nexcept when there are no cells at all or cancel_pressed == true.\nIf cancel_pressed == true then footers are not printed. So to have\nneed_recordsep == false before the loop \"for (footers)\" table should be empty,\nand need_recordsep should be false before the loop \"print cells\". It can only\nbe false there when cont->opt->start_table == true and opt_tuples_only == true\nso that headers are not printed. But when opt_tuples_only == true footers are\nnot printed either.\n\nSo technically removing \"need_recordsep = true;\" won't change the outcome.Thanks for the confirmation. But\nit's not obvious, so I also have doubts about removing this line. If someday\nprint options are changed, for example to support printing footers and not\nprinting headers, or anything else changes in this function, the output might\nbe unexpected with this line removed. That part I didn't understand.How are we going to make this function less readable by removing the complicating part.best regards,Ranier Vilela",
"msg_date": "Fri, 30 Jun 2023 13:15:48 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unused value (src/fe_utils/print.c)"
},
{
"msg_contents": "Em sex., 30 de jun. de 2023 às 13:00, Alexander Lakhin <[email protected]>\nescreveu:\n\n> Hello Karina,\n>\n> 30.06.2023 17:25, Karina Litskevich wrote:\n> > Hi,\n> >\n> > Alexander wrote:\n> >\n> >> It also aligns the code with print_unaligned_vertical(), but I can't\n> see why\n> >> need_recordsep = true;\n> >> is a no-op here (scan-build dislikes only need_recordsep = false;).\n> >> I suspect that removing that line will change the behaviour in cases\n> when\n> >> need_recordsep = false after the loop \"print cells\" and the loop\n> >> \"for (footers)\" is executed.\n> > As I understand cont->cells is supoused to have all cont->ncolumns *\n> cont->nrows\n> > entries filled so the loop \"print cells\" always assigns need_recordsep =\n> true,\n> > except when there are no cells at all or cancel_pressed == true.\n> > If cancel_pressed == true then footers are not printed. So to have\n> > need_recordsep == false before the loop \"for (footers)\" table should be\n> empty,\n> > and need_recordsep should be false before the loop \"print cells\". It can\n> only\n> > be false there when cont->opt->start_table == true and opt_tuples_only\n> == true\n> > so that headers are not printed. But when opt_tuples_only == true\n> footers are\n> > not printed either.\n> >\n> > So technically removing \"need_recordsep = true;\" won't change the\n> outcome. But\n> > it's not obvious, so I also have doubts about removing this line. If\n> someday\n> > print options are changed, for example to support printing footers and\n> not\n> > printing headers, or anything else changes in this function, the output\n> might\n> > be unexpected with this line removed.\n>\n> Hi Alexander,\n\n\n> I think that the question that should be answered before moving forward\n> here is: what this discussion is expected to result in?\n>\nI hope to make the function more readable and maintainable.\n\n\n> If the goal is to avoid an unused value to make Coverity/clang`s scan-build\n> a little happier, then maybe just don't touch other line, that is not\n> recognized as dead (at least by scan-build;\n\n\n\n> I wonder what Coverity says\n> about that line).\n>\n if (cont->opt->stop_table)\n477 {\n478 printTableFooter *footers = footers_with_default(cont);\n479\n480 if (!opt_tuples_only && footers != NULL && !\ncancel_pressed)\n481 {\n482 printTableFooter *f;\n483\n484 for (f = footers; f; f = f->next)\n485 {\n486 if (need_recordsep)\n487 {\n488 print_separator(cont->opt->\nrecordSep, fout);\n\nCID 1512766 (#1 of 1): Unused value (UNUSED_VALUE)assigned_value: Assigning\nvalue false to need_recordsep here, but that stored value is overwritten\nbefore it can be used.\n489 need_recordsep = false;\n490 }\n491 fputs(f->data, fout);\n\nvalue_overwrite: Overwriting previous write to need_recordsep with value\ntrue.\n492 need_recordsep = true;\n493 }\n494 }\n495\n\n\n> Otherwise, if the goal is to do review the code and make it cleaner, then\n> why not get rid of \"if (need_recordsep)\" there?\n>\nI don't agree with removing this line, as it is essential to print the\nseparators, in the footers.\n\nbest regards,\nRanier Vilela\n\nEm sex., 30 de jun. 
de 2023 às 13:00, Alexander Lakhin <[email protected]> escreveu:Hello Karina,\n\n30.06.2023 17:25, Karina Litskevich wrote:\n> Hi,\n>\n> Alexander wrote:\n>\n>> It also aligns the code with print_unaligned_vertical(), but I can't see why\n>> need_recordsep = true;\n>> is a no-op here (scan-build dislikes only need_recordsep = false;).\n>> I suspect that removing that line will change the behaviour in cases when\n>> need_recordsep = false after the loop \"print cells\" and the loop\n>> \"for (footers)\" is executed.\n> As I understand cont->cells is supoused to have all cont->ncolumns * cont->nrows\n> entries filled so the loop \"print cells\" always assigns need_recordsep = true,\n> except when there are no cells at all or cancel_pressed == true.\n> If cancel_pressed == true then footers are not printed. So to have\n> need_recordsep == false before the loop \"for (footers)\" table should be empty,\n> and need_recordsep should be false before the loop \"print cells\". It can only\n> be false there when cont->opt->start_table == true and opt_tuples_only == true\n> so that headers are not printed. But when opt_tuples_only == true footers are\n> not printed either.\n>\n> So technically removing \"need_recordsep = true;\" won't change the outcome. But\n> it's not obvious, so I also have doubts about removing this line. If someday\n> print options are changed, for example to support printing footers and not\n> printing headers, or anything else changes in this function, the output might\n> be unexpected with this line removed.\nHi Alexander, \nI think that the question that should be answered before moving forward\nhere is: what this discussion is expected to result in?I hope to make the function more readable and maintainable. \nIf the goal is to avoid an unused value to make Coverity/clang`s scan-build\na little happier, then maybe just don't touch other line, that is not\nrecognized as dead (at least by scan-build; I wonder what Coverity says\nabout that line).\n if (cont->opt->stop_table)\n477 {\n478 printTableFooter *footers = footers_with_default(cont);\n479\n480 if (!opt_tuples_only && footers != NULL && !cancel_pressed)\n481 {\n482 printTableFooter *f;\n483\n484 for (f = footers; f; f = f->next)\n485 {\n486 if (need_recordsep)\n487 {\n488 print_separator(cont->opt->recordSep, fout);\n CID 1512766 (#1 of 1): Unused value (UNUSED_VALUE)assigned_value: Assigning value false to need_recordsep here, but that stored value is overwritten before it can be used.489 need_recordsep = false;\n490 }\n491 fputs(f->data, fout);\n value_overwrite: Overwriting previous write to need_recordsep with value true.492 need_recordsep = true;\n493 }\n494 }\n495\n \nOtherwise, if the goal is to do review the code and make it cleaner, then\nwhy not get rid of \"if (need_recordsep)\" there?I don't agree with removing this line, as it is essential to print the separators, in the footers. best regards,Ranier Vilela",
"msg_date": "Fri, 30 Jun 2023 13:20:10 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unused value (src/fe_utils/print.c)"
},
{
"msg_contents": ">\n>\n>\n>> But\n>> it's not obvious, so I also have doubts about removing this line. If\n>> someday\n>> print options are changed, for example to support printing footers and not\n>> printing headers, or anything else changes in this function, the output\n>> might\n>> be unexpected with this line removed.\n>\n>\n> That part I didn't understand.\n> How are we going to make this function less readable by removing the\n> complicating part.\n>\n\nMy point is, technically right now you won't see any difference in output\nif you remove the line. Because if we get to that line the need_recordsep\nis already true. However, understanding why it is true is complicated.\nThat's\nwhy if you remove the line people who read the code will wonder why we don't\nneed a separator after \"fputs\"ing a footer. So keeping that line will make\nthe code more readable.\nMoreover, removing the line will possibly complicate the future maintenance.\nAs I wrote in the part you just quoted, if the function changes in the way\nthat need_recordsep is not true right before printing footers any more, then\noutput will be unexpected.\n\nBest regards,\nKarina Litskevich\nPostgres Professional: http://postgrespro.com/\n\n But\nit's not obvious, so I also have doubts about removing this line. If someday\nprint options are changed, for example to support printing footers and not\nprinting headers, or anything else changes in this function, the output might\nbe unexpected with this line removed. That part I didn't understand.How are we going to make this function less readable by removing the complicating part.My point is, technically right now you won't see any difference in outputif you remove the line. Because if we get to that line the need_recordsepis already true. However, understanding why it is true is complicated. That'swhy if you remove the line people who read the code will wonder why we don'tneed a separator after \"fputs\"ing a footer. So keeping that line will makethe code more readable.Moreover, removing the line will possibly complicate the future maintenance.As I wrote in the part you just quoted, if the function changes in the waythat need_recordsep is not true right before printing footers any more, thenoutput will be unexpected.Best regards,Karina LitskevichPostgres Professional: http://postgrespro.com/",
"msg_date": "Thu, 6 Jul 2023 18:37:16 +0300",
"msg_from": "Karina Litskevich <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unused value (src/fe_utils/print.c)"
},
{
"msg_contents": "On Thu, Jul 6, 2023 at 5:37 PM Karina Litskevich\n<[email protected]> wrote:\n> My point is, technically right now you won't see any difference in output\n> if you remove the line. Because if we get to that line the need_recordsep\n> is already true. However, understanding why it is true is complicated. That's\n> why if you remove the line people who read the code will wonder why we don't\n> need a separator after \"fputs\"ing a footer. So keeping that line will make\n> the code more readable.\n> Moreover, removing the line will possibly complicate the future maintenance.\n> As I wrote in the part you just quoted, if the function changes in the way\n> that need_recordsep is not true right before printing footers any more, then\n> output will be unexpected.\n\nI agree with Karina here. Either this patch should keep the\n\"need_recordsep = true;\" line, thus removing the no-op assignment to\nfalse and making the code slightly less unreadable; or the entire\nfunction should be refactored for readability.\n\n\n.m\n\n\n",
"msg_date": "Wed, 12 Jul 2023 00:33:51 +0200",
"msg_from": "Marko Tiikkaja <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unused value (src/fe_utils/print.c)"
},
{
"msg_contents": "Em ter., 11 de jul. de 2023 às 19:34, Marko Tiikkaja <[email protected]>\nescreveu:\n\n> On Thu, Jul 6, 2023 at 5:37 PM Karina Litskevich\n> <[email protected]> wrote:\n> > My point is, technically right now you won't see any difference in output\n> > if you remove the line. Because if we get to that line the need_recordsep\n> > is already true. However, understanding why it is true is complicated.\n> That's\n> > why if you remove the line people who read the code will wonder why we\n> don't\n> > need a separator after \"fputs\"ing a footer. So keeping that line will\n> make\n> > the code more readable.\n> > Moreover, removing the line will possibly complicate the future\n> maintenance.\n> > As I wrote in the part you just quoted, if the function changes in the\n> way\n> > that need_recordsep is not true right before printing footers any more,\n> then\n> > output will be unexpected.\n>\n> I agree with Karina here. Either this patch should keep the\n> \"need_recordsep = true;\" line, thus removing the no-op assignment to\n> false and making the code slightly less unreadable; or the entire\n> function should be refactored for readability.\n>\n As there is consensus to keep the no-op assignment,\nI will go ahead and reject the patch.\n\nregards,\nRanier Vilela\n\nEm ter., 11 de jul. de 2023 às 19:34, Marko Tiikkaja <[email protected]> escreveu:On Thu, Jul 6, 2023 at 5:37 PM Karina Litskevich\n<[email protected]> wrote:\n> My point is, technically right now you won't see any difference in output\n> if you remove the line. Because if we get to that line the need_recordsep\n> is already true. However, understanding why it is true is complicated. That's\n> why if you remove the line people who read the code will wonder why we don't\n> need a separator after \"fputs\"ing a footer. So keeping that line will make\n> the code more readable.\n> Moreover, removing the line will possibly complicate the future maintenance.\n> As I wrote in the part you just quoted, if the function changes in the way\n> that need_recordsep is not true right before printing footers any more, then\n> output will be unexpected.\n\nI agree with Karina here. Either this patch should keep the\n\"need_recordsep = true;\" line, thus removing the no-op assignment to\nfalse and making the code slightly less unreadable; or the entire\nfunction should be refactored for readability. As there is consensus to keep the no-op assignment,I will go ahead and reject the patch.regards,Ranier Vilela",
"msg_date": "Tue, 11 Jul 2023 19:46:16 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unused value (src/fe_utils/print.c)"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 1:46 AM Ranier Vilela <[email protected]> wrote:\n\n> As there is consensus to keep the no-op assignment,\n> I will go ahead and reject the patch.\n>\n\nIn your patch you suggest removing two assignments, and we only have\nconsensus to keep one of them. The other one is an obvious no-op.\n\nI attached a patch that removes only one assignment. Could you please try\nit and check whether Coverity is still complaining about need_recordsep\nvariable?\n\nBest regards,\nKarina Litskevich\nPostgres Professional: http://postgrespro.com/",
"msg_date": "Fri, 21 Jul 2023 15:13:02 +0300",
"msg_from": "Karina Litskevich <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unused value (src/fe_utils/print.c)"
},
{
"msg_contents": "Em sex., 21 de jul. de 2023 às 09:13, Karina Litskevich <\[email protected]> escreveu:\n\n>\n>\n> On Wed, Jul 12, 2023 at 1:46 AM Ranier Vilela <[email protected]> wrote:\n>\n>> As there is consensus to keep the no-op assignment,\n>> I will go ahead and reject the patch.\n>>\n>\n> In your patch you suggest removing two assignments, and we only have\n> consensus to keep one of them. The other one is an obvious no-op.\n>\n> I attached a patch that removes only one assignment. Could you please try\n> it and check whether Coverity is still complaining about need_recordsep\n> variable?\n>\nYeah.\nChecked today, Coverity does not complain about need_recordsep.\n\nbest regards,\nRanier Vilela\n\nEm sex., 21 de jul. de 2023 às 09:13, Karina Litskevich <[email protected]> escreveu:On Wed, Jul 12, 2023 at 1:46 AM Ranier Vilela <[email protected]> wrote: As there is consensus to keep the no-op assignment,I will go ahead and reject the patch.In your patch you suggest removing two assignments, and we only haveconsensus to keep one of them. The other one is an obvious no-op.I attached a patch that removes only one assignment. Could you please tryit and check whether Coverity is still complaining about need_recordsepvariable?Yeah.Checked today, Coverity does not complain about need_recordsep.best regards,Ranier Vilela",
"msg_date": "Mon, 24 Jul 2023 19:04:27 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unused value (src/fe_utils/print.c)"
},
{
"msg_contents": "On Tue, Jul 25, 2023 at 1:04 AM Ranier Vilela <[email protected]> wrote:\n\n> Checked today, Coverity does not complain about need_recordsep.\n>\n\nGreat! Thanks.\nSo v2 patch makes Coverity happy, and as for me doesn't make the code\nless readable. Does anyone have any objections?\n\nBest regards,\nKarina Litskevich\nPostgres Professional: http://postgrespro.com/\n\nOn Tue, Jul 25, 2023 at 1:04 AM Ranier Vilela <[email protected]> wrote:Checked today, Coverity does not complain about need_recordsep.Great! Thanks.So v2 patch makes Coverity happy, and as for me doesn't make the codeless readable. Does anyone have any objections?Best regards,Karina LitskevichPostgres Professional: http://postgrespro.com/",
"msg_date": "Fri, 28 Jul 2023 11:53:09 +0300",
"msg_from": "Karina Litskevich <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unused value (src/fe_utils/print.c)"
}
] |
[
{
"msg_contents": "Hi,\n\nI tried to use `REGRESS_OPTS = --temp-config` in order to test a 3rd\nparty extension with a custom .conf file similarly to how PostgreSQL\ndoes it for src/test/modules/test_slru. It didn't work and \"38.18.\nExtension Building Infrastructure\" [1] doesn't seem to be much help.\n\nHere is my Makefile:\n\n```\nEXTENSION = experiment\nMODULES = experiment\nDATA = experiment--1.0.sql experiment.conf\nREGRESS_OPTS = --temp-config\n$(top_srcdir)/../../../share/postgresql/extension/experiment.conf\nREGRESS = experiment\n\nPG_CPPFLAGS = -g -O0\nSHLIB_LINK =\n\nifndef PG_CONFIG\n PG_CONFIG := pg_config\nendif\nPGXS := $(shell $(PG_CONFIG) --pgxs)\ninclude $(PGXS)\n```\n\nAnd the result I get:\n\n```\n$ make clean && make install && make installcheck\n...\n\n# note: experiment.conf is copied according to DATA value:\n\n/bin/sh /Users/eax/pginstall/lib/postgresql/pgxs/src/makefiles/../../config/install-sh\n-c -m 644 .//experiment--1.0.sql .//experiment.conf\n'/Users/eax/pginstall/share/postgresql/extension/'\n\n# note: --temp-conf path is correct\n\necho \"# +++ regress install-check in +++\" &&\n/Users/eax/pginstall/lib/postgresql/pgxs/src/makefiles/../../src/test/regress/pg_regress\n--inputdir=./ --bindir='/Users/eax/pginstall/bin' --temp-config\n/Users/eax/pginstall/lib/postgresql/pgxs/src/makefiles/../../../../../share/postgresql/extension/experiment.conf\n--dbname=contrib_regression experiment\n# +++ regress install-check in +++\n# using postmaster on Unix socket, default port\nnot ok 1 - experiment 382 ms\n\n# note: shared_preload_libraries had no effect and I got elog() from\nthe extension:\n\n$ cat /Users/eax/projects/c/postgresql-extensions/007-gucs/regression.diffs\n...\n+FATAL: Please use shared_preload_libraries\n```\n\nThis comment in Makefile for test_slru seems to explain why this happens:\n\n```\n# Disabled because these tests require \"shared_preload_libraries=test_slru\",\n# which typical installcheck users do not have (e.g. buildfarm clients).\nNO_INSTALLCHECK = 1\n```\n\nThe complete example is available on GitHub [2].\n\nIs it accurate to say that the author of a 3rd party extension that\nuses shared_preload_libraries can't be using SQL tests and has to use\nTAP tests instead? If not then what did I miss?\n\n[1]: https://www.postgresql.org/docs/current/extend-pgxs.html\n[2]: https://github.com/afiskon/postgresql-extensions/tree/temp_config_experiment/007-gucs\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Sat, 3 Jun 2023 14:56:27 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Should \"REGRESS_OPTS = --temp-config\" be working for 3rd party\n extensions?"
},
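The FATAL reported above presumably comes from a guard in the test extension's _PG_init(); below is a minimal sketch of such a guard, assuming that is how the extension is written -- only the error message is taken from the report, the rest is illustrative:

```
#include "postgres.h"
#include "fmgr.h"
#include "miscadmin.h"

PG_MODULE_MAGIC;

void		_PG_init(void);

void
_PG_init(void)
{
	/* Assumed guard: refuse to load unless preloaded at server start */
	if (!process_shared_preload_libraries_in_progress)
		ereport(FATAL,
				(errmsg("Please use shared_preload_libraries")));

	/* GUC definitions, shmem requests, etc. would go here */
}
```

Because installcheck runs against an already-running server, such a check can only pass if the library was added to shared_preload_libraries before the server was started, which is what the reply below explains.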
{
"msg_contents": "On Sat, Jun 03, 2023 at 02:56:27PM +0300, Aleksander Alekseev wrote:\n>\n> I tried to use `REGRESS_OPTS = --temp-config` in order to test a 3rd\n> party extension with a custom .conf file similarly to how PostgreSQL\n> does it for src/test/modules/test_slru. It didn't work and \"38.18.\n> Extension Building Infrastructure\" [1] doesn't seem to be much help.\n>\n> Is it accurate to say that the author of a 3rd party extension that\n> uses shared_preload_libraries can't be using SQL tests and has to use\n> TAP tests instead? If not then what did I miss?\n\ntemp-config can only be used when bootstrapping a temporary environment, so\nwhen using e.g. make check. PGXS / third-party extension can only use\ninstallcheck, so if you need specific config like shared_preload_libraries you\nneed to manually configure your instance beforehand, or indeed rely on TAP\ntests. Most projects properly setup their instance in the CI jobs, and at\nleast the Debian packaging infrastructure has a way to configure it too.\n\n\n",
"msg_date": "Sat, 3 Jun 2023 20:50:16 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should \"REGRESS_OPTS = --temp-config\" be working for 3rd party\n extensions?"
},
{
"msg_contents": "Hi Julien,\n\n> temp-config can only be used when bootstrapping a temporary environment, so\n> when using e.g. make check. PGXS / third-party extension can only use\n> installcheck, so if you need specific config like shared_preload_libraries you\n> need to manually configure your instance beforehand, or indeed rely on TAP\n> tests. Most projects properly setup their instance in the CI jobs, and at\n> least the Debian packaging infrastructure has a way to configure it too.\n\nMany thanks!\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Sat, 3 Jun 2023 18:09:16 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should \"REGRESS_OPTS = --temp-config\" be working for 3rd party\n extensions?"
}
] |
[
{
"msg_contents": "I posted earlier in pgsql-general, that I realised there’s no greek.stop under $(pg_config —sharedir)/tsearch_data\n\nAnd indeed looks like stop words are maintained with to_tsvector(‘greek’, ..). \n\nI wrote an extension https://github.com/Florents-Tselai/pg_fts_greek that adds another ‘greek_ext’ regconfig \n\nHere’s how the results compare\n\n\nt\tto_tsvector('greek', t)\tto_tsvector('greek_ext', t)\n'το τετράγωνο της υποτείνουσας ενός ορθογωνίου τριγώνου'\t'εν':5 'ορθογων':6 'τ':3 'τετραγων':2 'το':1 'τριγων':7 'υποτεινουσ':4\t'εν':5 'ορθογων':6 'τετραγων':2 'τριγων':7 'υποτεινουσ':4\n'ο γιώργος είναι πονηρός'\t'γιωργ':2 'εινα':3 'ο':1 'πονηρ':4\t'γιωργ':2 'πονηρ':4\n'ο ήλιος ο πράσινος o ήλιος που ανατέλλει'\t'o':5 'ανατελλ':8 'ηλι':2,6 'ο':1,3 'π':7 'πρασιν':4\t'ανατελλ':8 'ηλι':2,6 'πρασιν':4\n\nThere’s another previous relevant patch [0] but was never merged. I’ve included these stop words and added some more (info in README.md).\n\nFor my personal projects looks like it yields much better results.\n\nI’d like some feedback on the extension ; particularly on the installation infra (I’m not sure I’ve handled properly the permissions in the .sql files) \n\nI’ll then try to make a .patch for this. \n\n\n\n[0] https://www.postgresql.org/message-id/flat/e1c79330-48a5-abef-c309-8d4499e3180b%402ndquadrant.com#7431fdb9ae24b694155aef3f040b7b60\nI posted earlier in pgsql-general, that I realised there’s no greek.stop under $(pg_config —sharedir)/tsearch_dataAnd indeed looks like stop words are maintained with to_tsvector(‘greek’, ..). I wrote an extension https://github.com/Florents-Tselai/pg_fts_greek that adds another ‘greek_ext’ regconfig Here’s how the results comparetto_tsvector('greek', t)to_tsvector('greek_ext', t)'το τετράγωνο της υποτείνουσας ενός ορθογωνίου τριγώνου''εν':5 'ορθογων':6 'τ':3 'τετραγων':2 'το':1 'τριγων':7 'υποτεινουσ':4'εν':5 'ορθογων':6 'τετραγων':2 'τριγων':7 'υποτεινουσ':4'ο γιώργος είναι πονηρός''γιωργ':2 'εινα':3 'ο':1 'πονηρ':4'γιωργ':2 'πονηρ':4'ο ήλιος ο πράσινος o ήλιος που ανατέλλει''o':5 'ανατελλ':8 'ηλι':2,6 'ο':1,3 'π':7 'πρασιν':4'ανατελλ':8 'ηλι':2,6 'πρασιν':4There’s another previous relevant patch [0] but was never merged. I’ve included these stop words and added some more (info in README.md).For my personal projects looks like it yields much better results.I’d like some feedback on the extension ; particularly on the installation infra (I’m not sure I’ve handled properly the permissions in the .sql files) I’ll then try to make a .patch for this. [0] https://www.postgresql.org/message-id/flat/e1c79330-48a5-abef-c309-8d4499e3180b%402ndquadrant.com#7431fdb9ae24b694155aef3f040b7b60",
"msg_date": "Sat, 3 Jun 2023 20:47:12 +0300",
"msg_from": "Florents Tselai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Improving FTS for Greek"
},
{
"msg_contents": "On 03.06.23 19:47, Florents Tselai wrote:\n> There’s another previous relevant patch [0] but was never merged. I’ve \n> included these stop words and added some more (info in README.md).\n> \n> For my personal projects looks like it yields much better results.\n> \n> I’d like some feedback on the extension ; particularly on the \n> installation infra (I’m not sure I’ve handled properly the permissions \n> in the .sql files)\n> \n> I’ll then try to make a .patch for this.\n\nThe open question at the previous attempt was that it wasn't clear what \nthe upstream source or long-term maintenance of the stop words list \nwould be. If it's just a personally composed list, then it's okay if \nyou use it yourself, but for including it into PostgreSQL it ought to \ncome from a reputable non-individual source like snowball.\n\n\n\n",
"msg_date": "Tue, 6 Jun 2023 23:13:26 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving FTS for Greek"
},
{
"msg_contents": "> On 7 Jun 2023, at 12:13 AM, Peter Eisentraut <[email protected]> wrote:\n> \n> On 03.06.23 19:47, Florents Tselai wrote:\n>> There’s another previous relevant patch [0] but was never merged. I’ve included these stop words and added some more (info in README.md).\n>> For my personal projects looks like it yields much better results.\n>> I’d like some feedback on the extension ; particularly on the installation infra (I’m not sure I’ve handled properly the permissions in the .sql files)\n>> I’ll then try to make a .patch for this.\n> \n> The open question at the previous attempt was that it wasn't clear what the upstream source or long-term maintenance of the stop words list would be. If it's just a personally composed list, then it's okay if you use it yourself, but for including it into PostgreSQL it ought to come from a reputable non-individual source like snowball.\n\nI’ve used the NLTK list [0] as my base of stopwords; Wouldn’t this be considered reputable enough ? \n\n0 https://github.com/nltk/nltk_data/blob/gh-pages/packages/corpora/stopwords.zip (see greek.stop file in the archive)\n\n> \n\n\nOn 7 Jun 2023, at 12:13 AM, Peter Eisentraut <[email protected]> wrote:On 03.06.23 19:47, Florents Tselai wrote:There’s another previous relevant patch [0] but was never merged. I’ve included these stop words and added some more (info in README.md).For my personal projects looks like it yields much better results.I’d like some feedback on the extension ; particularly on the installation infra (I’m not sure I’ve handled properly the permissions in the .sql files)I’ll then try to make a .patch for this.The open question at the previous attempt was that it wasn't clear what the upstream source or long-term maintenance of the stop words list would be. If it's just a personally composed list, then it's okay if you use it yourself, but for including it into PostgreSQL it ought to come from a reputable non-individual source like snowball.I’ve used the NLTK list [0] as my base of stopwords; Wouldn’t this be considered reputable enough ? 0 https://github.com/nltk/nltk_data/blob/gh-pages/packages/corpora/stopwords.zip (see greek.stop file in the archive)",
"msg_date": "Wed, 7 Jun 2023 01:30:55 +0300",
"msg_from": "Florents Tselai <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving FTS for Greek"
},
{
"msg_contents": "On 07.06.23 00:30, Florents Tselai wrote:\n> \n> \n>> On 7 Jun 2023, at 12:13 AM, Peter Eisentraut <[email protected]> wrote:\n>>\n>> On 03.06.23 19:47, Florents Tselai wrote:\n>>> There’s another previous relevant patch [0] but was never merged. \n>>> I’ve included these stop words and added some more (info in README.md).\n>>> For my personal projects looks like it yields much better results.\n>>> I’d like some feedback on the extension ; particularly on the \n>>> installation infra (I’m not sure I’ve handled properly the \n>>> permissions in the .sql files)\n>>> I’ll then try to make a .patch for this.\n>>\n>> The open question at the previous attempt was that it wasn't clear \n>> what the upstream source or long-term maintenance of the stop words \n>> list would be. If it's just a personally composed list, then it's \n>> okay if you use it yourself, but for including it into PostgreSQL it \n>> ought to come from a reputable non-individual source like snowball.\n> \n> I’ve used the NLTK list [0] as my base of stopwords; Wouldn’t this be \n> considered reputable enough ?\n> \n> 0 \n> https://github.com/nltk/nltk_data/blob/gh-pages/packages/corpora/stopwords.zip <https://github.com/nltk/nltk_data/blob/gh-pages/packages/corpora/stopwords.zip> (see greek.stop file in the archive)\n\nWho is NLTK, where did they get their stopwords file from, what is their \nopen source license, how do we know when to pull updates, what is the \nmechanical process for pulling in those updates?\n\n\n\n",
"msg_date": "Tue, 13 Jun 2023 08:11:42 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving FTS for Greek"
}
] |
[
{
"msg_contents": "Hi all,\n\nDuring the PGCon Unconference session about Table Access Method one missing\nitem pointed out is that currently we lack documentation and examples of\nTAM.\n\nSo in order to improve things a bit in this area I'm proposing to add a\ntest module for Table Access Method similar what we already have for Index\nAccess Method.\n\nThis code is based on the \"blackhole_am\" implemented by Michael Paquier:\nhttps://github.com/michaelpq/pg_plugins/tree/main/blackhole_am\n\nRegards,\n\n-- \nFabrízio de Royes Mello",
"msg_date": "Sat, 3 Jun 2023 19:42:36 -0400",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add test module for Table Access Method"
},
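For readers who have not opened blackhole_am: the whole module hangs off a single handler function that returns a filled-in TableAmRoutine, registered on the SQL side with CREATE FUNCTION ... RETURNS table_am_handler and CREATE ACCESS METHOD ... TYPE TABLE. A compile-only skeleton of that shape is below (names invented here; every callback of the routine still has to be supplied before the AM is usable):

```c
/* test_tableam.c -- handler skeleton modelled on blackhole_am; callbacks omitted */
#include "postgres.h"

#include "access/tableam.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(test_tableam_handler);

/*
 * A real module fills in every member of TableAmRoutine (slot callbacks,
 * scans, tuple insert/update/delete, vacuum, size estimation, ...).  They
 * are left out to keep the registration shape visible.
 */
static const TableAmRoutine test_tableam_methods = {
	.type = T_TableAmRoutine,
};

Datum
test_tableam_handler(PG_FUNCTION_ARGS)
{
	PG_RETURN_POINTER(&test_tableam_methods);
}
```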
{
"msg_contents": "On Sat, Jun 3, 2023 at 7:42 PM Fabrízio de Royes Mello <\[email protected]> wrote:\n>\n>\n> Hi all,\n>\n> During the PGCon Unconference session about Table Access Method one\nmissing item pointed out is that currently we lack documentation and\nexamples of TAM.\n>\n> So in order to improve things a bit in this area I'm proposing to add a\ntest module for Table Access Method similar what we already have for Index\nAccess Method.\n>\n> This code is based on the \"blackhole_am\" implemented by Michael Paquier:\nhttps://github.com/michaelpq/pg_plugins/tree/main/blackhole_am\n>\n\nJust added some more tests, ran pgindent and also organized a bit some\ncomments and README.txt.\n\nRegards,\n\n-- \nFabrízio de Royes Mello",
"msg_date": "Mon, 5 Jun 2023 12:24:38 -0400",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add test module for Table Access Method"
},
{
"msg_contents": "On Mon, Jun 5, 2023 at 1:24 PM Fabrízio de Royes Mello <\[email protected]> wrote:\n>\n> On Sat, Jun 3, 2023 at 7:42 PM Fabrízio de Royes Mello <\[email protected]> wrote:\n> >\n> >\n> > Hi all,\n> >\n> > During the PGCon Unconference session about Table Access Method one\nmissing item pointed out is that currently we lack documentation and\nexamples of TAM.\n> >\n> > So in order to improve things a bit in this area I'm proposing to add a\ntest module for Table Access Method similar what we already have for Index\nAccess Method.\n> >\n> > This code is based on the \"blackhole_am\" implemented by Michael\nPaquier: https://github.com/michaelpq/pg_plugins/tree/main/blackhole_am\n> >\n>\n> Just added some more tests, ran pgindent and also organized a bit some\ncomments and README.txt.\n>\n\nRebased version.\n\n-- \nFabrízio de Royes Mello",
"msg_date": "Tue, 26 Sep 2023 10:00:08 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add test module for Table Access Method"
},
{
"msg_contents": "On Sat, Jun 03, 2023 at 07:42:36PM -0400, Fabrízio de Royes Mello wrote:\n> So in order to improve things a bit in this area I'm proposing to add a\n> test module for Table Access Method similar what we already have for Index\n> Access Method.\n> \n> This code is based on the \"blackhole_am\" implemented by Michael Paquier:\n> https://github.com/michaelpq/pg_plugins/tree/main/blackhole_am\n\ndummy_index_am has included from the start additional coverage for the\nvarious internal add_*_reloption routines, that were never covered in\nthe core tree. Except if I am missing something, I am not seeing some\nof the extra usefulness for the patch you've sent here.\n--\nMichael",
"msg_date": "Thu, 28 Sep 2023 10:08:24 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add test module for Table Access Method"
},
{
"msg_contents": "On Thu, 28 Sept 2023 at 10:23, Michael Paquier <[email protected]> wrote:\n>\n> On Sat, Jun 03, 2023 at 07:42:36PM -0400, Fabrízio de Royes Mello wrote:\n> > So in order to improve things a bit in this area I'm proposing to add a\n> > test module for Table Access Method similar what we already have for Index\n> > Access Method.\n> >\n> > This code is based on the \"blackhole_am\" implemented by Michael Paquier:\n> > https://github.com/michaelpq/pg_plugins/tree/main/blackhole_am\n>\n> dummy_index_am has included from the start additional coverage for the\n> various internal add_*_reloption routines, that were never covered in\n> the core tree. Except if I am missing something, I am not seeing some\n> of the extra usefulness for the patch you've sent here.\n\nI have changed the status of commitfest entry to \"Returned with\nFeedback\" as Michael's comments have not yet been resolved. Please\nhandle the comments and update the commitfest entry accordingly.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 14 Jan 2024 16:39:18 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add test module for Table Access Method"
},
{
"msg_contents": "On Thu, 28 Sept 2023 at 03:08, Michael Paquier <[email protected]> wrote:\n> dummy_index_am has included from the start additional coverage for the\n> various internal add_*_reloption routines, that were never covered in\n> the core tree. Except if I am missing something, I am not seeing some\n> of the extra usefulness for the patch you've sent here.\n\nWhen trying to implement a table access method in the past I remember\nvery well that I was having a really hard time finding an example of\none. I remember seeing the dummy_index_am module and being quite\ndisappointed that there wasn't a similar one for table access methods.\nI believe that I eventually found blackhole_am, but it took me quite a\nbit of mailing list spelunking to get there. So I think purely for\ndocumentation purposes this addition would already be useful.\n\n\n",
"msg_date": "Mon, 15 Jan 2024 10:36:49 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add test module for Table Access Method"
},
{
"msg_contents": "Hi,\n\n> When trying to implement a table access method in the past I remember\n> very well that I was having a really hard time finding an example of\n> one.\n\nTo be fair, Postgres uses TAM internally, so there is at least one\ncomplete and up-to-date real-life example. Learning curve for TAMs is\nindeed steep, and I wonder if we could do a better job in this respect\ne.g. by providing a simpler example. This being said, I know several\npeople who learned TAM successfully (so far only for R&D tasks) which\nindicates that its difficulty is adequate.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 15 Jan 2024 16:26:08 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add test module for Table Access Method"
},
{
"msg_contents": "On Mon, 15 Jan 2024 at 14:26, Aleksander Alekseev\n<[email protected]> wrote:\n> To be fair, Postgres uses TAM internally, so there is at least one\n> complete and up-to-date real-life example.\n\nSure, but that one is quite hard to follow if you don't already know\nlots of details of the heap storage. At least for me, having a minimal\nexample was extremely helpful and it made for a great code skeleton to\nstart from.\n\n\n",
"msg_date": "Mon, 15 Jan 2024 15:40:30 +0100",
"msg_from": "Jelte Fennema-Nio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add test module for Table Access Method"
},
{
"msg_contents": "On Mon, Jan 15, 2024 at 03:40:30PM +0100, Jelte Fennema-Nio wrote:\n> On Mon, 15 Jan 2024 at 14:26, Aleksander Alekseev\n> <[email protected]> wrote:\n>> To be fair, Postgres uses TAM internally, so there is at least one\n>> complete and up-to-date real-life example.\n> \n> Sure, but that one is quite hard to follow if you don't already know\n> lots of details of the heap storage. At least for me, having a minimal\n> example was extremely helpful and it made for a great code skeleton to\n> start from.\n\nHmm. I'd rather have it do something useful in terms of test coverage\nrather than being just an empty skull.\n\nHow about adding the same kind of coverage as dummy_index_am with a\ncouple of reloptions then? That can serve as a point of reference\nwhen a table AM needs a few custom options. A second idea would be to\nshow how to use toast relations when implementing your new AM, where a\ntoast table could be created even in cases where we did not want one\nwith heap, when it comes to size limitations with char and/or varchar,\nand that makes for a simpler needs_toast_table callback.\n--\nMichaxel",
"msg_date": "Tue, 16 Jan 2024 13:58:10 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add test module for Table Access Method"
},
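Michael's last point about a simpler needs_toast_table callback could look like the fragment below, slotted into the skeleton sketched earlier in the thread (hypothetical code; the simplest policy that still exercises toast creation is to always ask for a toast table):

```c
/*
 * Hypothetical relation_needs_toast_table callback: unconditionally request
 * a toast table, instead of heap's width-based heuristics, so toast paths
 * get exercised even for narrow tables.
 */
static bool
test_tableam_relation_needs_toast_table(Relation rel)
{
	return true;
}

/* wired into the routine with:
 *   .relation_needs_toast_table = test_tableam_relation_needs_toast_table,
 */
```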
{
"msg_contents": "On Tue, Jan 16, 2024 at 10:28 AM Michael Paquier <[email protected]> wrote:\n>\n> Hmm. I'd rather have it do something useful in terms of test coverage\n> rather than being just an empty skull.\n>\n> How about adding the same kind of coverage as dummy_index_am with a\n> couple of reloptions then? That can serve as a point of reference\n> when a table AM needs a few custom options. A second idea would be to\n> show how to use toast relations when implementing your new AM, where a\n> toast table could be created even in cases where we did not want one\n> with heap, when it comes to size limitations with char and/or varchar,\n> and that makes for a simpler needs_toast_table callback.\n\nI think a test module for a table AM will really help developers. Just\nto add to the above list - how about the table AM implementing a\nsimple in-memory (columnar if possible) database storing tables\nin-memory and subsequently providing readers with the access to the\ntables?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 16 Jan 2024 10:45:25 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add test module for Table Access Method"
},
{
"msg_contents": "\nOn Tue, 16 Jan 2024 at 13:15, Bharath Rupireddy <[email protected]> wrote:\n> On Tue, Jan 16, 2024 at 10:28 AM Michael Paquier <[email protected]> wrote:\n>>\n>> Hmm. I'd rather have it do something useful in terms of test coverage\n>> rather than being just an empty skull.\n>>\n>> How about adding the same kind of coverage as dummy_index_am with a\n>> couple of reloptions then? That can serve as a point of reference\n>> when a table AM needs a few custom options. A second idea would be to\n>> show how to use toast relations when implementing your new AM, where a\n>> toast table could be created even in cases where we did not want one\n>> with heap, when it comes to size limitations with char and/or varchar,\n>> and that makes for a simpler needs_toast_table callback.\n>\n> I think a test module for a table AM will really help developers. Just\n> to add to the above list - how about the table AM implementing a\n> simple in-memory (columnar if possible) database storing tables\n> in-memory and subsequently providing readers with the access to the\n> tables?\n\nThat's a good idea.\n\n\n",
"msg_date": "Tue, 16 Jan 2024 14:15:21 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add test module for Table Access Method"
},
{
"msg_contents": "Hi,\n\n> > I think a test module for a table AM will really help developers. Just\n> > to add to the above list - how about the table AM implementing a\n> > simple in-memory (columnar if possible) database storing tables\n> > in-memory and subsequently providing readers with the access to the\n> > tables?\n>\n> That's a good idea.\n\nPersonally I would be careful with this idea.\n\nPractice shows that when you show the first incomplete, limited and\nbuggy PoC it ends up being in the production environment the next day\n:) In other words sooner or later there will be users demanding a full\nin-memory columnar storage support from Postgres. I believe it would\nbe a problem. Last time I checked TAM was not extremely good for\nimplementing proper columnar storages, and there are lots of open\nquestions when it comes to in-memory tables (e.g. what to do with\nforeign keys, inherited tables, etc).\n\nAll in all I don't think we should provide something that can look /\nbe interpreted as first-class alternative storage but in fact is not.\n\n> How about adding the same kind of coverage as dummy_index_am with a\n> couple of reloptions then? That can serve as a point of reference\n> when a table AM needs a few custom options. A second idea would be to\n> show how to use toast relations when implementing your new AM, where a\n> toast table could be created even in cases where we did not want one\n> with heap, when it comes to size limitations with char and/or varchar,\n> and that makes for a simpler needs_toast_table callback.\n\nGood ideas. Additionally we could provide a proxy TAM for a heap TAM\nwhich does nothing but logging used TAM methods, its arguments and\nreturn values. This would be a good example and also potentially can\nbe used as a debugging tool.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 16 Jan 2024 12:39:42 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add test module for Table Access Method"
},
{
"msg_contents": "Hi all,\n\nOn Tue, Jan 16, 2024 at 10:40 AM Aleksander Alekseev <\[email protected]> wrote:\n\n> Hi,\n>\n> > > I think a test module for a table AM will really help developers. Just\n> > > to add to the above list - how about the table AM implementing a\n> > > simple in-memory (columnar if possible) database storing tables\n> > > in-memory and subsequently providing readers with the access to the\n> > > tables?\n> >\n> > That's a good idea.\n>\n> Personally I would be careful with this idea.\n>\n> Practice shows that when you show the first incomplete, limited and\n> buggy PoC it ends up being in the production environment the next day\n> :) In other words sooner or later there will be users demanding a full\n> in-memory columnar storage support from Postgres. I believe it would\n> be a problem. Last time I checked TAM was not extremely good for\n> implementing proper columnar storages, and there are lots of open\n> questions when it comes to in-memory tables (e.g. what to do with\n> foreign keys, inherited tables, etc).\n>\n> All in all I don't think we should provide something that can look /\n> be interpreted as first-class alternative storage but in fact is not.\n>\n\nI tossed together a table access method for in-memory storage in column\nformat for experimental purposes over the holidays (I actually have a\nrow-based one as well, but that is in no shape to share at this point).\nIt's available under https://github.com/mkindahl/pg_arrow. The intention\nwas mostly to have something simple to play and experiment with. It is\nloosely based on the Apache Arrow Columnar format, but the normal data\nstructures are not suitable for storing in shared memory so I have tweaked\nit a little.\n\n\n> > How about adding the same kind of coverage as dummy_index_am with a\n> > couple of reloptions then? That can serve as a point of reference\n> > when a table AM needs a few custom options. A second idea would be to\n> > show how to use toast relations when implementing your new AM, where a\n> > toast table could be created even in cases where we did not want one\n> > with heap, when it comes to size limitations with char and/or varchar,\n> > and that makes for a simpler needs_toast_table callback.\n>\n> Good ideas. Additionally we could provide a proxy TAM for a heap TAM\n> which does nothing but logging used TAM methods, its arguments and\n> return values. This would be a good example and also potentially can\n> be used as a debugging tool.\n>\n\nWe wrote a table access method for experimenting with and to be able to\ntrace what happens while executing various statements. It is available\nunder https://github.com/timescale/pg_traceam for anybody who is interested.\n\nBest wishes,\nMats Kindahl\n\n\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n>\n>\n\nHi all,On Tue, Jan 16, 2024 at 10:40 AM Aleksander Alekseev <[email protected]> wrote:Hi,\n\n> > I think a test module for a table AM will really help developers. Just\n> > to add to the above list - how about the table AM implementing a\n> > simple in-memory (columnar if possible) database storing tables\n> > in-memory and subsequently providing readers with the access to the\n> > tables?\n>\n> That's a good idea.\n\nPersonally I would be careful with this idea.\n\nPractice shows that when you show the first incomplete, limited and\nbuggy PoC it ends up being in the production environment the next day\n:) In other words sooner or later there will be users demanding a full\nin-memory columnar storage support from Postgres. 
I believe it would\nbe a problem. Last time I checked TAM was not extremely good for\nimplementing proper columnar storages, and there are lots of open\nquestions when it comes to in-memory tables (e.g. what to do with\nforeign keys, inherited tables, etc).\n\nAll in all I don't think we should provide something that can look /\nbe interpreted as first-class alternative storage but in fact is not.I tossed together a table access method for in-memory storage in column format for experimental purposes over the holidays (I actually have a row-based one as well, but that is in no shape to share at this point). It's available under https://github.com/mkindahl/pg_arrow. The intention was mostly to have something simple to play and experiment with. It is loosely based on the Apache Arrow Columnar format, but the normal data structures are not suitable for storing in shared memory so I have tweaked it a little. \n> How about adding the same kind of coverage as dummy_index_am with a\n> couple of reloptions then? That can serve as a point of reference\n> when a table AM needs a few custom options. A second idea would be to\n> show how to use toast relations when implementing your new AM, where a\n> toast table could be created even in cases where we did not want one\n> with heap, when it comes to size limitations with char and/or varchar,\n> and that makes for a simpler needs_toast_table callback.\n\nGood ideas. Additionally we could provide a proxy TAM for a heap TAM\nwhich does nothing but logging used TAM methods, its arguments and\nreturn values. This would be a good example and also potentially can\nbe used as a debugging tool.We wrote a table access method for experimenting with and to be able to trace what happens while executing various statements. It is available under https://github.com/timescale/pg_traceam for anybody who is interested.Best wishes,Mats Kindahl \n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 16 Jan 2024 13:12:52 +0100",
"msg_from": "Mats Kindahl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add test module for Table Access Method"
},
{
"msg_contents": "On Tue, Jan 16, 2024 at 6:15 AM Bharath Rupireddy <\[email protected]> wrote:\n\n> On Tue, Jan 16, 2024 at 10:28 AM Michael Paquier <[email protected]>\n> wrote:\n> >\n> > Hmm. I'd rather have it do something useful in terms of test coverage\n> > rather than being just an empty skull.\n> >\n> > How about adding the same kind of coverage as dummy_index_am with a\n> > couple of reloptions then? That can serve as a point of reference\n> > when a table AM needs a few custom options. A second idea would be to\n> > show how to use toast relations when implementing your new AM, where a\n> > toast table could be created even in cases where we did not want one\n> > with heap, when it comes to size limitations with char and/or varchar,\n> > and that makes for a simpler needs_toast_table callback.\n>\n> I think a test module for a table AM will really help developers. Just\n> to add to the above list - how about the table AM implementing a\n> simple in-memory (columnar if possible) database storing tables\n> in-memory and subsequently providing readers with the access to the\n> tables?\n>\n\nHi,\n\nOne idea I wanted to implement is a table access method that you can use to\ntest the interface, something like a \"mock TAM\" where you can\nprogrammatically decide on the responses to unit-test the API. I was\nthinking that you could implement a framework that allows you to implement\nthe TAM in some scripting language like Perl, Python, or (horrors) Tcl for\neasy prototyping.\n\nBest wishes,\nMats Kindahl\n\n\n> --\n> Bharath Rupireddy\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>\n>\n>\n\nOn Tue, Jan 16, 2024 at 6:15 AM Bharath Rupireddy <[email protected]> wrote:On Tue, Jan 16, 2024 at 10:28 AM Michael Paquier <[email protected]> wrote:\n>\n> Hmm. I'd rather have it do something useful in terms of test coverage\n> rather than being just an empty skull.\n>\n> How about adding the same kind of coverage as dummy_index_am with a\n> couple of reloptions then? That can serve as a point of reference\n> when a table AM needs a few custom options. A second idea would be to\n> show how to use toast relations when implementing your new AM, where a\n> toast table could be created even in cases where we did not want one\n> with heap, when it comes to size limitations with char and/or varchar,\n> and that makes for a simpler needs_toast_table callback.\n\nI think a test module for a table AM will really help developers. Just\nto add to the above list - how about the table AM implementing a\nsimple in-memory (columnar if possible) database storing tables\nin-memory and subsequently providing readers with the access to the\ntables?Hi,One idea I wanted to implement is a table access method that you can use to test the interface, something like a \"mock TAM\" where you can programmatically decide on the responses to unit-test the API. I was thinking that you could implement a framework that allows you to implement the TAM in some scripting language like Perl, Python, or (horrors) Tcl for easy prototyping. Best wishes,Mats Kindahl\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 16 Jan 2024 13:16:27 +0100",
"msg_from": "Mats Kindahl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add test module for Table Access Method"
}
] |
[
{
"msg_contents": "Hi,\n\nPer Coverity.\n\nAt function ExtendBufferedRelShared, has a always true test.\neb.rel was dereferenced one line above, so in\nif (eb.rel) is always true.\n\nI think it's worth removing the test, because Coverity raises dozens of\nalerts thinking eb.rel might be NULL.\nBesides, one less test is one less branch.\n\nregards,\nRanier Vilela",
"msg_date": "Sun, 4 Jun 2023 09:42:22 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "On Sun, Jun 4, 2023 at 8:42 PM Ranier Vilela <[email protected]> wrote:\n\n> Hi,\n>\n> Per Coverity.\n>\n> At function ExtendBufferedRelShared, has a always true test.\n> eb.rel was dereferenced one line above, so in\n> if (eb.rel) is always true.\n>\n> I think it's worth removing the test, because Coverity raises dozens of\n> alerts thinking eb.rel might be NULL.\n> Besides, one less test is one less branch.\n>\n\nThis also happens in ExtendBufferedRelTo, and the comment there explains\nthat the eb.rel 'could have been closed while waiting for lock'. So for\nthe same consideration, the test in ExtendBufferedRelShared might be\nstill needed? But I'm not familiar with the arounding codes, so need\nsomeone else to confirm that.\n\nThanks\nRichard\n\nOn Sun, Jun 4, 2023 at 8:42 PM Ranier Vilela <[email protected]> wrote:Hi,Per Coverity.At function ExtendBufferedRelShared, has a always true test.eb.rel was dereferenced one line above, so in if (eb.rel) is always true.I think it's worth removing the test, because Coverity raises dozens of alerts thinking eb.rel might be NULL.Besides, one less test is one less branch.This also happens in ExtendBufferedRelTo, and the comment there explainsthat the eb.rel 'could have been closed while waiting for lock'. So forthe same consideration, the test in ExtendBufferedRelShared might bestill needed? But I'm not familiar with the arounding codes, so needsomeone else to confirm that.ThanksRichard",
"msg_date": "Mon, 5 Jun 2023 10:37:10 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "Em dom., 4 de jun. de 2023 às 23:37, Richard Guo <[email protected]>\nescreveu:\n\n>\n> On Sun, Jun 4, 2023 at 8:42 PM Ranier Vilela <[email protected]> wrote:\n>\n>> Hi,\n>>\n>> Per Coverity.\n>>\n>> At function ExtendBufferedRelShared, has a always true test.\n>> eb.rel was dereferenced one line above, so in\n>> if (eb.rel) is always true.\n>>\n>> I think it's worth removing the test, because Coverity raises dozens of\n>> alerts thinking eb.rel might be NULL.\n>> Besides, one less test is one less branch.\n>>\n>\n> This also happens in ExtendBufferedRelTo, and the comment there explains\n> that the eb.rel 'could have been closed while waiting for lock'.\n>\nWell, RelationGetSmgr also dereferences eb.rel.\nIf eb.rel could be closed while waiting for lock,\nanyone who references eb.rel below takes a risk?\n\nstatic inline SMgrRelation\nRelationGetSmgr(Relation rel)\n{\nif (unlikely(rel->rd_smgr == NULL))\nsmgrsetowner(&(rel->rd_smgr), smgropen(rel->rd_locator, rel->rd_backend));\nreturn rel->rd_smgr;\n}\n\nregards,\nRanier Vilela\n\nEm dom., 4 de jun. de 2023 às 23:37, Richard Guo <[email protected]> escreveu:On Sun, Jun 4, 2023 at 8:42 PM Ranier Vilela <[email protected]> wrote:Hi,Per Coverity.At function ExtendBufferedRelShared, has a always true test.eb.rel was dereferenced one line above, so in if (eb.rel) is always true.I think it's worth removing the test, because Coverity raises dozens of alerts thinking eb.rel might be NULL.Besides, one less test is one less branch.This also happens in ExtendBufferedRelTo, and the comment there explainsthat the eb.rel 'could have been closed while waiting for lock'.Well, RelationGetSmgr also dereferences eb.rel.If eb.rel could be closed while waiting for lock,anyone who references eb.rel below takes a risk? static inline SMgrRelationRelationGetSmgr(Relation rel){\tif (unlikely(rel->rd_smgr == NULL))\t\tsmgrsetowner(&(rel->rd_smgr), smgropen(rel->rd_locator, rel->rd_backend));\treturn rel->rd_smgr;}regards,Ranier Vilela",
"msg_date": "Mon, 5 Jun 2023 08:06:26 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "Em seg., 5 de jun. de 2023 às 08:06, Ranier Vilela <[email protected]>\nescreveu:\n\n> Em dom., 4 de jun. de 2023 às 23:37, Richard Guo <[email protected]>\n> escreveu:\n>\n>>\n>> On Sun, Jun 4, 2023 at 8:42 PM Ranier Vilela <[email protected]> wrote:\n>>\n>>> Hi,\n>>>\n>>> Per Coverity.\n>>>\n>>> At function ExtendBufferedRelShared, has a always true test.\n>>> eb.rel was dereferenced one line above, so in\n>>> if (eb.rel) is always true.\n>>>\n>>> I think it's worth removing the test, because Coverity raises dozens of\n>>> alerts thinking eb.rel might be NULL.\n>>> Besides, one less test is one less branch.\n>>>\n>>\n>> This also happens in ExtendBufferedRelTo, and the comment there explains\n>> that the eb.rel 'could have been closed while waiting for lock'.\n>>\n> Well, RelationGetSmgr also dereferences eb.rel.\n> If eb.rel could be closed while waiting for lock,\n> anyone who references eb.rel below takes a risk?\n>\n> static inline SMgrRelation\n> RelationGetSmgr(Relation rel)\n> {\n> if (unlikely(rel->rd_smgr == NULL))\n> smgrsetowner(&(rel->rd_smgr), smgropen(rel->rd_locator, rel->rd_backend));\n> return rel->rd_smgr;\n> }\n>\nSorry Richard, nevermind.\n\nMy fault, I withdraw this patch.\n\nregards,\nRanier Vilela\n\nEm seg., 5 de jun. de 2023 às 08:06, Ranier Vilela <[email protected]> escreveu:Em dom., 4 de jun. de 2023 às 23:37, Richard Guo <[email protected]> escreveu:On Sun, Jun 4, 2023 at 8:42 PM Ranier Vilela <[email protected]> wrote:Hi,Per Coverity.At function ExtendBufferedRelShared, has a always true test.eb.rel was dereferenced one line above, so in if (eb.rel) is always true.I think it's worth removing the test, because Coverity raises dozens of alerts thinking eb.rel might be NULL.Besides, one less test is one less branch.This also happens in ExtendBufferedRelTo, and the comment there explainsthat the eb.rel 'could have been closed while waiting for lock'.Well, RelationGetSmgr also dereferences eb.rel.If eb.rel could be closed while waiting for lock,anyone who references eb.rel below takes a risk? static inline SMgrRelationRelationGetSmgr(Relation rel){\tif (unlikely(rel->rd_smgr == NULL))\t\tsmgrsetowner(&(rel->rd_smgr), smgropen(rel->rd_locator, rel->rd_backend));\treturn rel->rd_smgr;}Sorry Richard, nevermind.My fault, I withdraw this patch.regards,Ranier Vilela",
"msg_date": "Mon, 5 Jun 2023 08:24:13 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "On Mon, Jun 5, 2023 at 4:24 AM Ranier Vilela <[email protected]> wrote:\n> Em seg., 5 de jun. de 2023 às 08:06, Ranier Vilela <[email protected]> escreveu:\n>> Em dom., 4 de jun. de 2023 às 23:37, Richard Guo <[email protected]> escreveu:\n>>> On Sun, Jun 4, 2023 at 8:42 PM Ranier Vilela <[email protected]> wrote:\n>>>> Hi,\n>>>>\n>>>> Per Coverity.\n>>>>\n>>>> At function ExtendBufferedRelShared, has a always true test.\n>>>> eb.rel was dereferenced one line above, so in\n>>>> if (eb.rel) is always true.\n>>>>\n>>>> I think it's worth removing the test, because Coverity raises dozens of alerts thinking eb.rel might be NULL.\n>>>> Besides, one less test is one less branch.\n>>>\n>>>\n>>> This also happens in ExtendBufferedRelTo, and the comment there explains\n>>> that the eb.rel 'could have been closed while waiting for lock'.\n>>\n>> Well, RelationGetSmgr also dereferences eb.rel.\n>> If eb.rel could be closed while waiting for lock,\n>> anyone who references eb.rel below takes a risk?\n>>\n>> static inline SMgrRelation\n>> RelationGetSmgr(Relation rel)\n>> {\n>> if (unlikely(rel->rd_smgr == NULL))\n>> smgrsetowner(&(rel->rd_smgr), smgropen(rel->rd_locator, rel->rd_backend));\n>> return rel->rd_smgr;\n>> }\n>\n> Sorry Richard, nevermind.\n>\n> My fault, I withdraw this patch.\n\nI'm not quite convinced of the reasoning provided; either the reason\nis not good enough, or my C is rusty. In either case, I'd like a\nresolution.\n\nThe code in question:\n\n> LockRelationForExtension(eb.rel, ExclusiveLock);\n>\n> /* could have been closed while waiting for lock */\n> if (eb.rel)\n> eb.smgr = RelationGetSmgr(eb.rel);\n\neb.rel is being passed by-value at line 1, so even if the relation is\nclosed, the value of the eb.rel cannot change between line 1 and line\n3. So a code verification tool complaining that the 'if' condition\nwill always be true is quite right, IMO.\n\nTo verify my assumptions, I removed those checks and ran `make\n{check,check-world,installcheck}`, and all those tests passed.\n\nThe only way, that I can think of, the value of eb.rel can change\nbetween lines 1 and 3 is if 'eb' is a shared-memory data structure,\nand some other process changed the 'rel' member in shared-memory. And\nI don't think 'eb' is in shared memory in this case.\n\nTo me, it looks like these checks are a result of code being\ncopy-pasted from somewhere else, where this check might have been\nnecessary. The checks are sure not necessary at these spots.\n\nPlease see attached v2 of the patch; it includes both occurrences of\nthe spurious checks identified in this thread.\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Mon, 12 Jun 2023 22:51:24 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "On Mon, Jun 12, 2023 at 10:51:24PM -0700, Gurjeet Singh wrote:\n> To me, it looks like these checks are a result of code being\n> copy-pasted from somewhere else, where this check might have been\n> necessary. The checks are sure not necessary at these spots.\n\nI am not completely sure based on my read of the code, but isn't this\ncheck needed to avoid some kind of race condition with a concurrent\nbackend may have worked on the relation when attempting to get the\nlock?\n--\nMichael",
"msg_date": "Tue, 13 Jun 2023 15:11:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "At Tue, 13 Jun 2023 15:11:26 +0900, Michael Paquier <[email protected]> wrote in \n> On Mon, Jun 12, 2023 at 10:51:24PM -0700, Gurjeet Singh wrote:\n> > To me, it looks like these checks are a result of code being\n> > copy-pasted from somewhere else, where this check might have been\n> > necessary. The checks are sure not necessary at these spots.\n> \n> I am not completely sure based on my read of the code, but isn't this\n> check needed to avoid some kind of race condition with a concurrent\n> backend may have worked on the relation when attempting to get the\n> lock?\n\nGurjeet has mentioned that eb.rel cannot be modified by another\nprocess since the value or memory is in the local stack, and I believe\nhe's correct.\n\nIf the pointed Relation had been blown out, eb.rel would be left\ndangling, not nullified. However, I don't believe this situation\nhappens (or it shouldn't happen) as the entire relation should already\nbe locked.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 13 Jun 2023 16:39:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "Em ter., 13 de jun. de 2023 às 02:51, Gurjeet Singh <[email protected]>\nescreveu:\n\n> Please see attached v2 of the patch; it includes both occurrences of\n> the spurious checks identified in this thread.\n>\n+1\nLGTM.\n\nregards,\nRanier Vilela\n\nEm ter., 13 de jun. de 2023 às 02:51, Gurjeet Singh <[email protected]> escreveu:\nPlease see attached v2 of the patch; it includes both occurrences of\nthe spurious checks identified in this thread.+1LGTM.regards,Ranier Vilela",
"msg_date": "Tue, 13 Jun 2023 08:21:20 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "At Tue, 13 Jun 2023 08:21:20 -0300, Ranier Vilela <[email protected]> wrote in \n> Em ter., 13 de jun. de 2023 às 02:51, Gurjeet Singh <[email protected]>\n> escreveu:\n> \n> > Please see attached v2 of the patch; it includes both occurrences of\n> > the spurious checks identified in this thread.\n> >\n> +1\n> LGTM.\n\n\n> \t\tLockRelationForExtension(eb.rel, ExclusiveLock);\n> \n> -\t\t/* could have been closed while waiting for lock */\n> -\t\tif (eb.rel)\n> -\t\t\teb.smgr = RelationGetSmgr(eb.rel);\n> +\t\teb.smgr = RelationGetSmgr(eb.rel);\n \n(It seems to me) The removed comment does refer to smgr, so we could\nconsider keeping it. There are places instances where the function\ncalls are accompanied by similar comments and others where they\naren't. However, personally, I inclined towards its removal. That's\nbecause our policy is to call RelationGetSmgr() each time before using\nsmgr, and this is well documented in the function's comment.\n\nIf we decide to remove it, the preceding blank line seems to be a\nseparator from the previous function call. So, we might want to\nconsider removing that blank line, too.\n\nOtherwise it LGTM.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 14 Jun 2023 10:01:59 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "At Wed, 14 Jun 2023 10:01:59 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> If we decide to remove it, the preceding blank line seems to be a\n> separator from the previous function call. So, we might want to\n\nMmm. that is a bit short. Anyway I meant that the blank will become\nuseless after removing the comment.\n\n> consider removing that blank line, too.\n> \n> Otherwise it LGTM.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 14 Jun 2023 10:05:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 3:39 PM Kyotaro Horiguchi <[email protected]>\nwrote:\n\n> Gurjeet has mentioned that eb.rel cannot be modified by another\n> process since the value or memory is in the local stack, and I believe\n> he's correct.\n>\n> If the pointed Relation had been blown out, eb.rel would be left\n> dangling, not nullified. However, I don't believe this situation\n> happens (or it shouldn't happen) as the entire relation should already\n> be locked.\n\n\nYeah, Gurjeet is right. I had a thinko here. eb.rel should not be NULL\npointer in any case. And as we've acquired the lock for it, it should\nnot have been closed. So I think we can remove the check for eb.rel in\nthe two places.\n\nThanks\nRichard\n\nOn Tue, Jun 13, 2023 at 3:39 PM Kyotaro Horiguchi <[email protected]> wrote:\nGurjeet has mentioned that eb.rel cannot be modified by another\nprocess since the value or memory is in the local stack, and I believe\nhe's correct.\n\nIf the pointed Relation had been blown out, eb.rel would be left\ndangling, not nullified. However, I don't believe this situation\nhappens (or it shouldn't happen) as the entire relation should already\nbe locked.Yeah, Gurjeet is right. I had a thinko here. eb.rel should not be NULLpointer in any case. And as we've acquired the lock for it, it shouldnot have been closed. So I think we can remove the check for eb.rel inthe two places.ThanksRichard",
"msg_date": "Wed, 14 Jun 2023 17:50:41 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "Em qua., 14 de jun. de 2023 às 06:51, Richard Guo <[email protected]>\nescreveu:\n\n>\n> On Tue, Jun 13, 2023 at 3:39 PM Kyotaro Horiguchi <[email protected]>\n> wrote:\n>\n>> Gurjeet has mentioned that eb.rel cannot be modified by another\n>> process since the value or memory is in the local stack, and I believe\n>> he's correct.\n>>\n>> If the pointed Relation had been blown out, eb.rel would be left\n>> dangling, not nullified. However, I don't believe this situation\n>> happens (or it shouldn't happen) as the entire relation should already\n>> be locked.\n>\n>\n> Yeah, Gurjeet is right. I had a thinko here. eb.rel should not be NULL\n> pointer in any case. And as we've acquired the lock for it, it should\n> not have been closed. So I think we can remove the check for eb.rel in\n> the two places.\n>\nOk,\nAs there is a consensus on removing the tests and the comment is still\nrelevant,\nhere is a new version for analysis.\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 14 Jun 2023 09:11:51 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 5:12 AM Ranier Vilela <[email protected]> wrote:\n>\n> Em qua., 14 de jun. de 2023 às 06:51, Richard Guo <[email protected]> escreveu:\n>>\n>>\n>> On Tue, Jun 13, 2023 at 3:39 PM Kyotaro Horiguchi <[email protected]> wrote:\n>>>\n>>> Gurjeet has mentioned that eb.rel cannot be modified by another\n>>> process since the value or memory is in the local stack, and I believe\n>>> he's correct.\n>>>\n>>> If the pointed Relation had been blown out, eb.rel would be left\n>>> dangling, not nullified. However, I don't believe this situation\n>>> happens (or it shouldn't happen) as the entire relation should already\n>>> be locked.\n>>\n>>\n>> Yeah, Gurjeet is right. I had a thinko here. eb.rel should not be NULL\n>> pointer in any case. And as we've acquired the lock for it, it should\n>> not have been closed. So I think we can remove the check for eb.rel in\n>> the two places.\n>\n> Ok,\n> As there is a consensus on removing the tests and the comment is still relevant,\n> here is a new version for analysis.\n\nLGTM.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Wed, 14 Jun 2023 09:32:12 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "Em qua., 14 de jun. de 2023 às 13:32, Gurjeet Singh <[email protected]>\nescreveu:\n\n> On Wed, Jun 14, 2023 at 5:12 AM Ranier Vilela <[email protected]> wrote:\n> >\n> > Em qua., 14 de jun. de 2023 às 06:51, Richard Guo <\n> [email protected]> escreveu:\n> >>\n> >>\n> >> On Tue, Jun 13, 2023 at 3:39 PM Kyotaro Horiguchi <\n> [email protected]> wrote:\n> >>>\n> >>> Gurjeet has mentioned that eb.rel cannot be modified by another\n> >>> process since the value or memory is in the local stack, and I believe\n> >>> he's correct.\n> >>>\n> >>> If the pointed Relation had been blown out, eb.rel would be left\n> >>> dangling, not nullified. However, I don't believe this situation\n> >>> happens (or it shouldn't happen) as the entire relation should already\n> >>> be locked.\n> >>\n> >>\n> >> Yeah, Gurjeet is right. I had a thinko here. eb.rel should not be NULL\n> >> pointer in any case. And as we've acquired the lock for it, it should\n> >> not have been closed. So I think we can remove the check for eb.rel in\n> >> the two places.\n> >\n> > Ok,\n> > As there is a consensus on removing the tests and the comment is still\n> relevant,\n> > here is a new version for analysis.\n>\n> LGTM.\n>\nCreated an entry in commitfest to track this.\nhttps://commitfest.postgresql.org/43/4371/\n\nregards,\nRanier Vilela\n\nEm qua., 14 de jun. de 2023 às 13:32, Gurjeet Singh <[email protected]> escreveu:On Wed, Jun 14, 2023 at 5:12 AM Ranier Vilela <[email protected]> wrote:\n>\n> Em qua., 14 de jun. de 2023 às 06:51, Richard Guo <[email protected]> escreveu:\n>>\n>>\n>> On Tue, Jun 13, 2023 at 3:39 PM Kyotaro Horiguchi <[email protected]> wrote:\n>>>\n>>> Gurjeet has mentioned that eb.rel cannot be modified by another\n>>> process since the value or memory is in the local stack, and I believe\n>>> he's correct.\n>>>\n>>> If the pointed Relation had been blown out, eb.rel would be left\n>>> dangling, not nullified. However, I don't believe this situation\n>>> happens (or it shouldn't happen) as the entire relation should already\n>>> be locked.\n>>\n>>\n>> Yeah, Gurjeet is right. I had a thinko here. eb.rel should not be NULL\n>> pointer in any case. And as we've acquired the lock for it, it should\n>> not have been closed. So I think we can remove the check for eb.rel in\n>> the two places.\n>\n> Ok,\n> As there is a consensus on removing the tests and the comment is still relevant,\n> here is a new version for analysis.\n\nLGTM.Created an entry in commitfest to track this.https://commitfest.postgresql.org/43/4371/regards,Ranier Vilela",
"msg_date": "Thu, 15 Jun 2023 10:39:17 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "Hi,\n\nGood catch. Indeed, eb.rel shouldn't be NULL there and the tests should be\nunnecessary. However, it doesn't follow from the code of these functions.\n From what I can see eb.rel can be NULL in both of these functions. There is\nthe following Assert in the beginning of the ExtendBufferedRelTo() function:\n\nAssert((eb.rel != NULL) != (eb.smgr != NULL));\n\nAnd ExtendBufferedRelShared() is only called from ExtendBufferedRelCommon()\nwhich can be called from ExtendBufferedRelTo() or ExtendBufferedRelBy() that\nalso has the same Assert(). And none of these functions assigns eb.rel, so\nit can be NULL from the very beginning and it stays the same.\n\n\nAnd there is the following call in xlogutils.c, which is exactly the case\nwhen\neb.rel is NULL:\n\nbuffer = ExtendBufferedRelTo(EB_SMGR(smgr, RELPERSISTENCE_PERMANENT),\n forknum,\n NULL,\n EB_PERFORMING_RECOVERY |\n EB_SKIP_EXTENSION_LOCK,\n blkno + 1,\n mode);\n\n\nSo as for me calling LockRelationForExtension() and\nUnlockRelationForExtension()\nwithout testing eb.rel first looks more like a bug here. However, they are\nnever\nactually called with eb.rel=NULL because of the EB_* flags, so there is no\nbug\nhere. I believe we should add Assert checking that when eb.rel is NULL,\nflags\nare such that we won't use eb.rel. And yes, we can remove unnecessary checks\nwhere the flags value guaranty us that eb.rel is not NULL.\n\n\nAnd another minor observation. It seems to me that we don't need a \"could\nhave\nbeen closed while waiting for lock\" in ExtendBufferedRelShared(), because I\nbelieve the comment above already explains why updating eb.smgr:\n\n * Note that another backend might have extended the relation by the time\n * we get the lock.\n\n\nI attached the new version of the patch as I see it.\n\n\nBest regards,\nKarina Litskevich\nPostgres Professional: http://postgrespro.com/",
"msg_date": "Fri, 30 Jun 2023 21:48:28 +0300",
"msg_from": "Karina Litskevich <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "Em sex., 30 de jun. de 2023 às 15:48, Karina Litskevich <\[email protected]> escreveu:\n\n> Hi,\n>\n> Good catch. Indeed, eb.rel shouldn't be NULL there and the tests should be\n> unnecessary.\n>\nThanks for the confirmation.\n\n\n> However, it doesn't follow from the code of these functions.\n> From what I can see eb.rel can be NULL in both of these functions. There is\n> the following Assert in the beginning of the ExtendBufferedRelTo()\n> function:\n>\n> Assert((eb.rel != NULL) != (eb.smgr != NULL));\n>\n> And ExtendBufferedRelShared() is only called from ExtendBufferedRelCommon()\n> which can be called from ExtendBufferedRelTo() or ExtendBufferedRelBy()\n> that\n> also has the same Assert(). And none of these functions assigns eb.rel, so\n> it can be NULL from the very beginning and it stays the same.\n>\n>\n> And there is the following call in xlogutils.c, which is exactly the case\n> when\n> eb.rel is NULL:\n>\n> buffer = ExtendBufferedRelTo(EB_SMGR(smgr, RELPERSISTENCE_PERMANENT),\n> forknum,\n> NULL,\n> EB_PERFORMING_RECOVERY |\n> EB_SKIP_EXTENSION_LOCK,\n> blkno + 1,\n> mode);\n>\nEB_SMGR and EB_REL are macros for making new structs.\nIMO these are buggy, once make new structs without initializing all fields.\nAttached a patch to fix this and make more clear when rel or smgr is NULL.\n\n\n>\n>\n>\n> So as for me calling LockRelationForExtension() and\n> UnlockRelationForExtension()\n> without testing eb.rel first looks more like a bug here. However, they are\n> never\n> actually called with eb.rel=NULL because of the EB_* flags, so there is no\n> bug\n> here. I believe we should add Assert checking that when eb.rel is NULL,\n> flags\n> are such that we won't use eb.rel. And yes, we can remove unnecessary\n> checks\n> where the flags value guaranty us that eb.rel is not NULL.\n>\nNot against these Asserts, but It is very confusing and difficult to\nunderstand them without some comment.\n\n\n>\n> And another minor observation. It seems to me that we don't need a \"could\n> have\n> been closed while waiting for lock\" in ExtendBufferedRelShared(), because I\n> believe the comment above already explains why updating eb.smgr:\n>\n> * Note that another backend might have extended the relation by the time\n> * we get the lock.\n>\nOk, but the first comment still ambiguous, I think that could be:\n\"/* eb.smgr could have been closed while waiting for lock */\"\n\nbest regards,\nRanier Vilela",
"msg_date": "Mon, 3 Jul 2023 08:26:26 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "> EB_SMGR and EB_REL are macros for making new structs.\n> IMO these are buggy, once make new structs without initializing all fields.\n> Attached a patch to fix this and make more clear when rel or smgr is NULL.\n>\n\nAs long as a structure is initialized, its fields that are not present in\ninitialization are initialized to zeros and NULLs depending on their types.\nSee C99 Standard 6.7.8.21 and 6.7.8.10. This behaviour is quite well known,\nso I don't think this place is buggy. Anyway, if someone else says the code\nis more readable with these fields initialized explicitly, then go on.\n\n\n\n> Not against these Asserts, but It is very confusing and difficult to\n> understand them without some comment.\n>\n\nI'm not familiar enough with the code to write any comment that makes any\nadditional meaning. Assert by itself means \"when the function is called with\neb.rel == NULL, then flags are supposed to contain EB_SKIP_EXTENSION_LOCK\nand\nto not contain EB_CREATE_FORK_IF_NEEDED\". I can guess that it's because with\nany of these flags we have to lock the relation and we can't do it if we\ndon't\nknow what is the relation. But it's my speculation, I won't write a comment\nbased on it. We better wait for someone who knows this code.\n\n\nAnd another minor observation. It seems to me that we don't need a \"could\n>> have\n>> been closed while waiting for lock\" in ExtendBufferedRelShared(), because\n>> I\n>> believe the comment above already explains why updating eb.smgr:\n>>\n>> * Note that another backend might have extended the relation by the time\n>> * we get the lock.\n>>\n> Ok, but the first comment still ambiguous, I think that could be:\n> \"/* eb.smgr could have been closed while waiting for lock */\"\n>\n\nIt doesn't make a big difference for me, so you can add \"eb.smgr\" if you\nwant to.\n\n\nBest regards,\nKarina Litskevich\nPostgres Professional: http://postgrespro.com/\n\nEB_SMGR and EB_REL are macros for making new structs.IMO these are buggy, once make new structs without initializing all fields.Attached a patch to fix this and make more clear when rel or smgr is NULL.As long as a structure is initialized, its fields that are not present ininitialization are initialized to zeros and NULLs depending on their types.See C99 Standard 6.7.8.21 and 6.7.8.10. This behaviour is quite well known,so I don't think this place is buggy. Anyway, if someone else says the codeis more readable with these fields initialized explicitly, then go on. \nNot against these Asserts, but It is very confusing and difficult to understand them without some comment.I'm not familiar enough with the code to write any comment that makes anyadditional meaning. Assert by itself means \"when the function is called witheb.rel == NULL, then flags are supposed to contain EB_SKIP_EXTENSION_LOCK andto not contain EB_CREATE_FORK_IF_NEEDED\". I can guess that it's because withany of these flags we have to lock the relation and we can't do it if we don'tknow what is the relation. But it's my speculation, I won't write a commentbased on it. We better wait for someone who knows this code.And another minor observation. 
It seems to me that we don't need a \"could havebeen closed while waiting for lock\" in ExtendBufferedRelShared(), because Ibelieve the comment above already explains why updating eb.smgr: * Note that another backend might have extended the relation by the time * we get the lock.Ok, but the first comment still ambiguous, I think that could be:\"/* eb.smgr could have been closed while waiting for lock */\"It doesn't make a big difference for me, so you can add \"eb.smgr\" if you want to. Best regards,Karina LitskevichPostgres Professional: http://postgrespro.com/",
"msg_date": "Thu, 6 Jul 2023 18:01:06 +0300",
"msg_from": "Karina Litskevich <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
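For reference, a small self-contained program (independent of the PostgreSQL sources) that demonstrates the C99 rule cited above: members omitted from a partial brace-enclosed initializer are implicitly zero/NULL even for an object with automatic storage duration. The struct is an illustrative stand-in, not the real buffer-manager type.

#include <stdio.h>

struct eb_like
{
    void   *rel;
    void   *smgr;
    int     relpersistence;
};

int
main(void)
{
    int     dummy = 42;                         /* automatic storage */
    struct eb_like eb = { .smgr = &dummy };     /* partial designated initializer */

    /* rel and relpersistence were not mentioned, yet they come out NULL/0 */
    printf("rel=%p smgr=%p relpersistence=%d\n",
           eb.rel, eb.smgr, eb.relpersistence);
    return 0;
}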
{
"msg_contents": "On Thu, Jul 6, 2023 at 8:01 AM Karina Litskevich\n<[email protected]> wrote:\n>\n>\n>> EB_SMGR and EB_REL are macros for making new structs.\n>> IMO these are buggy, once make new structs without initializing all fields.\n>> Attached a patch to fix this and make more clear when rel or smgr is NULL.\n>\n>\n> As long as a structure is initialized, its fields that are not present in\n> initialization are initialized to zeros and NULLs depending on their types.\n> See C99 Standard 6.7.8.21 and 6.7.8.10. This behaviour is quite well known,\n> so I don't think this place is buggy. Anyway, if someone else says the code\n> is more readable with these fields initialized explicitly, then go on.\n\nEven though I am not a fan of the Designated Initializers feature, I\nagree with Karina. Per the standard, the unmentioned fields get\ninitialized to zeroes/NULLs, so the explicit initialization to\nzero/null that this additional patch does is unnecessary. Moreover, I\nfeel that it makes the code less pleasant to read.\n\nC99, 6.7.8.21:\n> If there are fewer initializers in a brace-enclosed list than there are\n> elements or members of an aggregate, or fewer characters in a string literal\n> used to initialize an array of known size than there are elements in the array,\n> the remainder of the aggregate shall be initialized implicitly the same as\n> objects that have static storage duration.\n\nC99, 6.7.8.10:\n> If an object that has automatic storage duration is not initialized explicitly,\n> its value is indeterminate. If an object that has static storage duration is\n> not initialized explicitly, then:\n> - if it has pointer type, it is initialized to a null pointer;\n> - if it has arithmetic type, it is initialized to (positive or unsigned) zero;\n> - if it is an aggregate, every member is initialized (recursively) according to these rules;\n> - if it is a union, the first named member is initialized (recursively) according to these rules.\n\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Thu, 6 Jul 2023 12:06:20 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "Em qui., 6 de jul. de 2023 às 16:06, Gurjeet Singh <[email protected]>\nescreveu:\n\n> On Thu, Jul 6, 2023 at 8:01 AM Karina Litskevich\n> <[email protected]> wrote:\n> >\n> >\n> >> EB_SMGR and EB_REL are macros for making new structs.\n> >> IMO these are buggy, once make new structs without initializing all\n> fields.\n> >> Attached a patch to fix this and make more clear when rel or smgr is\n> NULL.\n> >\n> >\n> > As long as a structure is initialized, its fields that are not present in\n> > initialization are initialized to zeros and NULLs depending on their\n> types.\n> > See C99 Standard 6.7.8.21 and 6.7.8.10. This behaviour is quite well\n> known,\n> > so I don't think this place is buggy. Anyway, if someone else says the\n> code\n> > is more readable with these fields initialized explicitly, then go on.\n>\n> Even though I am not a fan of the Designated Initializers feature, I\n> agree with Karina. Per the standard, the unmentioned fields get\n> initialized to zeroes/NULLs, so the explicit initialization to\n> zero/null that this additional patch does is unnecessary. Moreover, I\n> feel that it makes the code less pleasant to read.\n>\n> C99, 6.7.8.21:\n> > If there are fewer initializers in a brace-enclosed list than there are\n> > elements or members of an aggregate, or fewer characters in a string\n> literal\n> > used to initialize an array of known size than there are elements in the\n> array,\n> > the remainder of the aggregate shall be initialized implicitly the same\n> as\n> > objects that have static storage duration.\n>\n> C99, 6.7.8.10:\n> > If an object that has automatic storage duration is not initialized\n> explicitly,\n> > its value is indeterminate.\n\nThe key points are here.\nThe object is not static storage duration.\nThe object is struct with \"automatic storage duration\".\n\nAnd not all compilers follow the standards,\nthey tend to vary quite a bit.\n\nregards,\nRanier Vilela\n\nEm qui., 6 de jul. de 2023 às 16:06, Gurjeet Singh <[email protected]> escreveu:On Thu, Jul 6, 2023 at 8:01 AM Karina Litskevich\n<[email protected]> wrote:\n>\n>\n>> EB_SMGR and EB_REL are macros for making new structs.\n>> IMO these are buggy, once make new structs without initializing all fields.\n>> Attached a patch to fix this and make more clear when rel or smgr is NULL.\n>\n>\n> As long as a structure is initialized, its fields that are not present in\n> initialization are initialized to zeros and NULLs depending on their types.\n> See C99 Standard 6.7.8.21 and 6.7.8.10. This behaviour is quite well known,\n> so I don't think this place is buggy. Anyway, if someone else says the code\n> is more readable with these fields initialized explicitly, then go on.\n\nEven though I am not a fan of the Designated Initializers feature, I\nagree with Karina. Per the standard, the unmentioned fields get\ninitialized to zeroes/NULLs, so the explicit initialization to\nzero/null that this additional patch does is unnecessary. 
Moreover, I\nfeel that it makes the code less pleasant to read.\n\nC99, 6.7.8.21:\n> If there are fewer initializers in a brace-enclosed list than there are\n> elements or members of an aggregate, or fewer characters in a string literal\n> used to initialize an array of known size than there are elements in the array,\n> the remainder of the aggregate shall be initialized implicitly the same as\n> objects that have static storage duration.\n\nC99, 6.7.8.10:\n> If an object that has automatic storage duration is not initialized explicitly,\n> its value is indeterminate.The key points are here.The object is not static storage duration.The object is struct with \"automatic storage duration\".And not all compilers follow the standards,they tend to vary quite a bit.regards,Ranier Vilela",
"msg_date": "Thu, 6 Jul 2023 16:22:23 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "On 30.06.23 20:48, Karina Litskevich wrote:\n> So as for me calling LockRelationForExtension() and \n> UnlockRelationForExtension()\n> without testing eb.rel first looks more like a bug here. However, they \n> are never\n> actually called with eb.rel=NULL because of the EB_* flags, so there is \n> no bug\n> here. I believe we should add Assert checking that when eb.rel is NULL, \n> flags\n> are such that we won't use eb.rel. And yes, we can remove unnecessary checks\n> where the flags value guaranty us that eb.rel is not NULL.\n> \n> And another minor observation. It seems to me that we don't need a \n> \"could have\n> been closed while waiting for lock\" in ExtendBufferedRelShared(), because I\n> believe the comment above already explains why updating eb.smgr:\n> \n> * Note that another backend might have extended the relation by the time\n> * we get the lock.\n> \n> I attached the new version of the patch as I see it.\n\nThis patch version looks the most sensible to me. But as commented \nfurther downthread, some explanation around the added assertions would \nbe good.\n\n\n",
"msg_date": "Mon, 4 Sep 2023 14:32:56 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "On Mon, 4 Sept 2023 at 21:40, Peter Eisentraut <[email protected]> wrote:\n>\n> On 30.06.23 20:48, Karina Litskevich wrote:\n> > So as for me calling LockRelationForExtension() and\n> > UnlockRelationForExtension()\n> > without testing eb.rel first looks more like a bug here. However, they\n> > are never\n> > actually called with eb.rel=NULL because of the EB_* flags, so there is\n> > no bug\n> > here. I believe we should add Assert checking that when eb.rel is NULL,\n> > flags\n> > are such that we won't use eb.rel. And yes, we can remove unnecessary checks\n> > where the flags value guaranty us that eb.rel is not NULL.\n> >\n> > And another minor observation. It seems to me that we don't need a\n> > \"could have\n> > been closed while waiting for lock\" in ExtendBufferedRelShared(), because I\n> > believe the comment above already explains why updating eb.smgr:\n> >\n> > * Note that another backend might have extended the relation by the time\n> > * we get the lock.\n> >\n> > I attached the new version of the patch as I see it.\n>\n> This patch version looks the most sensible to me. But as commented\n> further downthread, some explanation around the added assertions would\n> be good.\n\nThe patch which you submitted has been awaiting your attention for\nquite some time now. As such, we have moved it to \"Returned with\nFeedback\" and removed it from the reviewing queue. Depending on\ntiming, this may be reversible. Kindly address the feedback you have\nreceived, and resubmit the patch to the next CommitFest.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 2 Feb 2024 00:04:39 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "On Fri, Feb 02, 2024 at 12:04:39AM +0530, vignesh C wrote:\n> The patch which you submitted has been awaiting your attention for\n> quite some time now. As such, we have moved it to \"Returned with\n> Feedback\" and removed it from the reviewing queue. Depending on\n> timing, this may be reversible. Kindly address the feedback you have\n> received, and resubmit the patch to the next CommitFest.\n\nEven with that, it seems to me that this is not required now that\n21d9c3ee4ef7 outlines better how long SMgrRelation pointers should\nlive, no?\n--\nMichael",
"msg_date": "Fri, 2 Feb 2024 15:48:02 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
},
{
"msg_contents": "Em sex., 2 de fev. de 2024 às 03:48, Michael Paquier <[email protected]>\nescreveu:\n\n> On Fri, Feb 02, 2024 at 12:04:39AM +0530, vignesh C wrote:\n> > The patch which you submitted has been awaiting your attention for\n> > quite some time now. As such, we have moved it to \"Returned with\n> > Feedback\" and removed it from the reviewing queue. Depending on\n> > timing, this may be reversible. Kindly address the feedback you have\n> > received, and resubmit the patch to the next CommitFest.\n>\n> Even with that, it seems to me that this is not required now that\n> 21d9c3ee4ef7 outlines better how long SMgrRelation pointers should\n> live, no?\n>\nCorrect Micheal, the best thing would be to remove the patch now.\nSince it has completely lost its meaning.\n\nBest regards,\nRanier Vilela\n\nEm sex., 2 de fev. de 2024 às 03:48, Michael Paquier <[email protected]> escreveu:On Fri, Feb 02, 2024 at 12:04:39AM +0530, vignesh C wrote:\n> The patch which you submitted has been awaiting your attention for\n> quite some time now. As such, we have moved it to \"Returned with\n> Feedback\" and removed it from the reviewing queue. Depending on\n> timing, this may be reversible. Kindly address the feedback you have\n> received, and resubmit the patch to the next CommitFest.\n\nEven with that, it seems to me that this is not required now that\n21d9c3ee4ef7 outlines better how long SMgrRelation pointers should\nlive, no?Correct Micheal, the best thing would be to remove the patch now.Since it has completely lost its meaning.Best regards,Ranier Vilela",
"msg_date": "Fri, 2 Feb 2024 08:20:30 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unncessary always true test\n (src/backend/storage/buffer/bufmgr.c)"
}
] |
[
{
"msg_contents": "Looking at the \"collation settings\" table in the v16 docs, I think some\nreaders may have a little difficulty understanding what each row means.\n\nhttps://www.postgresql.org/docs/devel/collation.html#ICU-COLLATION-SETTINGS\n\nThe \"Key\" column isn't meaningful and it's a bit arduous to read the\nwhole description column for every row in the table, just to understand\nwhich keys I might be interested in.\n\nI like how Peter's recent blog used the alias to organize the keys.\n\nhttp://peter.eisentraut.org/blog/2023/05/16/overview-of-icu-collation-settings\n\nI'd suggest that we add a column to this table with the alias, and also\nhave a consistent first sentence of each description, generally aligned\nwith the description from the upstream XML that Peter also referenced in\nhis blog.\n\nI have an example below (and patch attached); I think this would make\nthe table a bit more understandable.\n\n-Jeremy\n\n\n===\n\nKey: ka\nAlias: colAlternate **<-added**\nValues: noignore, shifted\nDefault: noignore\nDescription: **Collation parameter key for alternate handling.** If set\nto shifted, causes some characters (e.g. punctuation or space) to be\nignored in comparison. Key ks must be set to level3 or lower to take\neffect. Set key kv to control which character classes are ignored.\n\n===\n\nKey: kb\nAlias: colBackwards **<-added**\nValues: true, false\nDefault: false\nDescription: **Collation parameter key for backwards comparison of** the\nlevel 2 differences. For example, locale und-u-kb sorts 'àe' before 'aé'.\n\n\n\n-- \nhttp://about.me/jeremy_schneider",
"msg_date": "Sun, 4 Jun 2023 19:31:33 -0700",
"msg_from": "Jeremy Schneider <[email protected]>",
"msg_from_op": true,
"msg_subject": "collation settings table in v16 docs"
}
] |
[
{
"msg_contents": "hi,\n\n\nin our test enviroment, if one database's have major update operations, autovacuum does not work and cause major performance degradation.\nif found this issue may be resolved by revert this Skip redundant anti-wraparound vacuums · postgres/postgres@2aa6e33 (github.com) commit.\n\n\nafter fetch some disccusion about this revert, i have some question as follow:\n1. i understand that anti-wraparound and no-aggressive autovacuum will be skipped for shared catalog tables, but why this can trigger autovacuum does not work for others tables ?\n2. \"this could cause autovacuum to lock down\", this lock down implict that autovacuum can make a dead lock problem ?\n3. how to reproduce this lock down or autovacuum invalid issue, must be cluster enviroment ?\n\n\nso is there any body know these issuse or commits can give me some suggestion about my confusion.\n\n\n\n\n\n\n\n\n| |\njiye\n|\n|\[email protected]\n|\n\n\n\n\n\n\n\nhi,\n in our test enviroment, if one database's have major update operations, autovacuum does not work and cause major performance degradation.if found this issue may be resolved by revert this Skip redundant anti-wraparound vacuums · postgres/postgres@2aa6e33 (github.com) commit.after fetch some disccusion about this revert, i have some question as follow:1. i understand that anti-wraparound and no-aggressive autovacuum will be skipped for shared catalog tables, but why this can trigger autovacuum does not work for others tables ?2. \"this could cause autovacuum to lock down\", this lock down implict that autovacuum can make a dead lock problem ?3. how to reproduce this lock down or autovacuum invalid issue, must be cluster enviroment ?so is there any body know these issuse or commits can give me some suggestion about my confusion.\n\n\n\n\n\n\[email protected]",
"msg_date": "Mon, 5 Jun 2023 11:30:48 +0800 (GMT+08:00)",
"msg_from": "jiye <[email protected]>",
"msg_from_op": true,
"msg_subject": "confusion about this commit \"Revert \"Skip redundant anti-wraparound\n vacuums\"\""
},
{
"msg_contents": "jiye <[email protected]> writes:\n> in our test enviroment, if one database's have major update operations, autovacuum does not work and cause major performance degradation.\n> if found this issue may be resolved by revert this Skip redundant anti-wraparound vacuums · postgres/postgres@2aa6e33 (github.com) commit.\n\nPlease provide a self-contained test case illustrating this report.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Jun 2023 23:37:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: confusion about this commit \"Revert \"Skip redundant\n anti-wraparound vacuums\"\""
},
{
"msg_contents": "we can not get determinate test case as this issue reproduce only once, and currently autovaccum can works as we using vacuum freeze for each tables of each database.\n\n\nour client's application is real online bank business, and have serveral customer database, do a majority of update opertaion as result trigger some table dead_tup_ratio nealy 100%, but can not find any autovacuum process work for a very long time before we do vacuum freeze manally.\n\n\nand out autovacuum params as follow:\n\n\n\n\n| |\njiye\n|\n|\[email protected]\n|\n---- Replied Message ----\n| From | Tom Lane<[email protected]> |\n| Date | 6/5/2023 11:37 |\n| To | jiye<[email protected]> |\n| Cc | [email protected]<[email protected]> |\n| Subject | Re: confusion about this commit \"Revert \"Skip redundant anti-wraparound vacuums\"\" |\njiye <[email protected]> writes:\nin our test enviroment, if one database's have major update operations, autovacuum does not work and cause major performance degradation.\nif found this issue may be resolved by revert this Skip redundant anti-wraparound vacuums · postgres/postgres@2aa6e33 (github.com) commit.\n\nPlease provide a self-contained test case illustrating this report.\n\nregards, tom lane",
"msg_date": "Mon, 5 Jun 2023 13:50:20 +0800 (GMT+08:00)",
"msg_from": "jiye <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: confusion about this commit \"Revert \"Skip redundant\n anti-wraparound vacuums\"\""
},
{
"msg_contents": "i attach our another case, many tables have over 200 million xid age, and these relation can not be autovacuum for long time until we freeze them manually.\n\n\n\n\n\n\n\n\n| |\njiye\n|\n|\[email protected]\n|\n---- Replied Message ----\n| From | jiye<[email protected]> |\n| Date | 6/5/2023 13:50 |\n| To | [email protected]<[email protected]> |\n| Cc | [email protected]<[email protected]> |\n| Subject | Re: confusion about this commit \"Revert \"Skip redundant anti-wraparound vacuums\"\" |\nwe can not get determinate test case as this issue reproduce only once, and currently autovaccum can works as we using vacuum freeze for each tables of each database.\n\n\nour client's application is real online bank business, and have serveral customer database, do a majority of update opertaion as result trigger some table dead_tup_ratio nealy 100%, but can not find any autovacuum process work for a very long time before we do vacuum freeze manally.\n\n\nand out autovacuum params as follow:\n\n\n\n\n| |\njiye\n|\n|\[email protected]\n|\n---- Replied Message ----\n| From | Tom Lane<[email protected]> |\n| Date | 6/5/2023 11:37 |\n| To | jiye<[email protected]> |\n| Cc | [email protected]<[email protected]> |\n| Subject | Re: confusion about this commit \"Revert \"Skip redundant anti-wraparound vacuums\"\" |\njiye <[email protected]> writes:\nin our test enviroment, if one database's have major update operations, autovacuum does not work and cause major performance degradation.\nif found this issue may be resolved by revert this Skip redundant anti-wraparound vacuums · postgres/postgres@2aa6e33 (github.com) commit.\n\nPlease provide a self-contained test case illustrating this report.\n\nregards, tom lane",
"msg_date": "Tue, 6 Jun 2023 10:34:35 +0800 (GMT+08:00)",
"msg_from": "jiye <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: confusion about this commit \"Revert \"Skip redundant\n anti-wraparound vacuums\"\""
},
{
"msg_contents": "On Mon, Jun 5, 2023 at 1:50 AM jiye <[email protected]> wrote:\n\n> we can not get determinate test case as this issue reproduce only once,\n> and currently autovaccum can works as we using vacuum freeze for each\n> tables of each database.\n>\n> our client's application is real online bank business, and have serveral\n> customer database, do a majority of update opertaion as result trigger\n> some table dead_tup_ratio nealy 100%, but can not find any autovacuum\n> process work for a very long time before we do vacuum freeze manally.\n>\n\nI tend to doubt that this is caused by the commit you're blaming, because\nthat commit purports to skip autovacuum operations only if some other\nvacuum has already done the work. Here you are saying that you see no\nautovacuum tasks at all.\n\nThe screenshot that you posted of XID ages exceeding 200 million is not\nevidence of a problem. It's pretty normal for some table XID ages\nto temporarily exceed autovacuum_freeze_max_age, especially if you have a\nlot of tables with about the same XID age, as seems to be the case here.\nWhen a table's XID age reaches autovacuum_freeze_max_age, the system will\nstart trying harder to reduce the XID age, but that process isn't\ninstantaneous.\n\nOn the other hand, your statement that you have very high numbers of dead\ntuples *is* evidence of a problem. It's very likely caused by vacuum not\nrunning aggressively enough. Remember that autovacuum is limited by the\nnumber of workers (autovacuum_max_workers) but even more importantly by the\ncost delay system. It's *extremely* common to need to raise\nvacuum_cost_limit on large or busy database systems, often by large\nmultiples (e.g. 10x or more).\n\nI'd strongly suggest that you carefully monitor how many autovacuum\nprocesses are running and what they are doing. If I were a betting man, I'd\nbet that you'd find that in the situation where you had this problem, the\nnumber of running processes was always 3 -- which is the configured maximum\n-- and if you looked at the wait event in pg_stat_activity I bet you would\nsee VacuumDelay showing up a lot. If so, raise vacuum_cost_limit\nconsiderably and over time the problem should get better. It won't be\ninstantaneous.\n\nOr maybe I'm wrong and you'd see something else, but whatever you did see\nwould probably give a hint as to what the problem here is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\nOn Mon, Jun 5, 2023 at 1:50 AM jiye <[email protected]> wrote:\n\n\nwe can not get determinate test case as this issue reproduce only once, and currently autovaccum can works as we using vacuum freeze for each tables of each database.our client's application is real online bank business, and have serveral customer database, do a majority of update opertaion as result trigger some table dead_tup_ratio nealy 100%, but can not find any autovacuum process work for a very long time before we do vacuum freeze manally.I tend to doubt that this is caused by the commit you're blaming, because that commit purports to skip autovacuum operations only if some other vacuum has already done the work. Here you are saying that you see no autovacuum tasks at all.The screenshot that you posted of XID ages exceeding 200 million is not evidence of a problem. It's pretty normal for some table XID ages to temporarily exceed autovacuum_freeze_max_age, especially if you have a lot of tables with about the same XID age, as seems to be the case here. 
When a table's XID age reaches autovacuum_freeze_max_age, the system will start trying harder to reduce the XID age, but that process isn't instantaneous.On the other hand, your statement that you have very high numbers of dead tuples *is* evidence of a problem. It's very likely caused by vacuum not running aggressively enough. Remember that autovacuum is limited by the number of workers (autovacuum_max_workers) but even more importantly by the cost delay system. It's *extremely* common to need to raise vacuum_cost_limit on large or busy database systems, often by large multiples (e.g. 10x or more).I'd strongly suggest that you carefully monitor how many autovacuum processes are running and what they are doing. If I were a betting man, I'd bet that you'd find that in the situation where you had this problem, the number of running processes was always 3 -- which is the configured maximum -- and if you looked at the wait event in pg_stat_activity I bet you would see VacuumDelay showing up a lot. If so, raise vacuum_cost_limit considerably and over time the problem should get better. It won't be instantaneous.Or maybe I'm wrong and you'd see something else, but whatever you did see would probably give a hint as to what the problem here is.-- Robert HaasEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 6 Jun 2023 15:30:02 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: confusion about this commit \"Revert \"Skip redundant\n anti-wraparound vacuums\"\""
},
{
"msg_contents": "On Tue, Jun 06, 2023 at 03:30:02PM -0400, Robert Haas wrote:\n> On Mon, Jun 5, 2023 at 1:50 AM jiye <[email protected]> wrote:\n>\n> > we can not get determinate test case as this issue reproduce only once,\n> > and currently autovaccum can works as we using vacuum freeze for each\n> > tables of each database.\n> >\n> > our client's application is real online bank business, and have serveral\n> > customer database, do a majority of update opertaion as result trigger\n> > some table dead_tup_ratio nealy 100%, but can not find any autovacuum\n> > process work for a very long time before we do vacuum freeze manally.\n> >\n>\n> I tend to doubt that this is caused by the commit you're blaming, because\n> that commit purports to skip autovacuum operations only if some other\n> vacuum has already done the work. Here you are saying that you see no\n> autovacuum tasks at all.\n\nI'm a bit confused about what commit is actually being discussed here.\n\nIs it commit 2aa6e331ead7f3ad080561495ad4bd3bc7cd8913? FTR this commit was\nindeed problematic and eventually reverted in 12.3\n(3ec8576a02b2b06aa214c8f3c2c3303c8a67639f), as it was leading to exactly the\nproblem described here (autovacuum kept triggering the same jobs that were\nsilently ignored, leading to absolutely no visible activity from a user point\nof view).\n\n\n",
"msg_date": "Wed, 7 Jun 2023 14:00:00 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: confusion about this commit \"Revert \"Skip redundant\n anti-wraparound vacuums\"\""
},
{
"msg_contents": "actually out test instance include 2aa6e331ead7f3ad080561495ad4bd3bc7cd8913 this commit, not yet reverted this commit. \n\n\n| |\njiye\n|\n|\[email protected]\n|\n---- Replied Message ----\n| From | Julien Rouhaud<[email protected]> |\n| Date | 6/7/2023 14:00 |\n| To | Robert Haas<[email protected]> |\n| Cc | jiye<[email protected]> ,\[email protected]<[email protected]> ,\[email protected]<[email protected]> |\n| Subject | Re: confusion about this commit \"Revert \"Skip redundant anti-wraparound vacuums\"\" |\nOn Tue, Jun 06, 2023 at 03:30:02PM -0400, Robert Haas wrote:\nOn Mon, Jun 5, 2023 at 1:50 AM jiye <[email protected]> wrote:\n\nwe can not get determinate test case as this issue reproduce only once,\nand currently autovaccum can works as we using vacuum freeze for each\ntables of each database.\n\nour client's application is real online bank business, and have serveral\ncustomer database, do a majority of update opertaion as result trigger\nsome table dead_tup_ratio nealy 100%, but can not find any autovacuum\nprocess work for a very long time before we do vacuum freeze manally.\n\n\nI tend to doubt that this is caused by the commit you're blaming, because\nthat commit purports to skip autovacuum operations only if some other\nvacuum has already done the work. Here you are saying that you see no\nautovacuum tasks at all.\n\nI'm a bit confused about what commit is actually being discussed here.\n\nIs it commit 2aa6e331ead7f3ad080561495ad4bd3bc7cd8913? FTR this commit was\nindeed problematic and eventually reverted in 12.3\n(3ec8576a02b2b06aa214c8f3c2c3303c8a67639f), as it was leading to exactly the\nproblem described here (autovacuum kept triggering the same jobs that were\nsilently ignored, leading to absolutely no visible activity from a user point\nof view).\n\n\n\n\n\n\n\n\nactually out test instance include 2aa6e331ead7f3ad080561495ad4bd3bc7cd8913 this commit, not yet reverted this commit. \n\n\n\n\n\n\[email protected]\n\n\n\n\n\n ---- Replied Message ----\n \n\n\n\n\n From \n \n\nJulien Rouhaud<[email protected]>\n \n\n\n\n\n Date \n \n\n 6/7/2023 14:00\n \n\n\n\n To \n \n\n\n Robert Haas<[email protected]>\n \n \n\n\n\n\n Cc \n \n\n\n jiye<[email protected]>\n ,\n\n\n [email protected]<[email protected]>\n ,\n\n\n [email protected]<[email protected]>\n \n \n\n\n\n\n Subject \n \n\n Re: confusion about this commit \"Revert \"Skip redundant anti-wraparound vacuums\"\"\n \n\n\n\nOn Tue, Jun 06, 2023 at 03:30:02PM -0400, Robert Haas wrote: On Mon, Jun 5, 2023 at 1:50 AM jiye <[email protected]> wrote: we can not get determinate test case as this issue reproduce only once, and currently autovaccum can works as we using vacuum freeze for each tables of each database. our client's application is real online bank business, and have serveral customer database, do a majority of update opertaion as result trigger some table dead_tup_ratio nealy 100%, but can not find any autovacuum process work for a very long time before we do vacuum freeze manally. I tend to doubt that this is caused by the commit you're blaming, because that commit purports to skip autovacuum operations only if some other vacuum has already done the work. Here you are saying that you see no autovacuum tasks at all.I'm a bit confused about what commit is actually being discussed here.Is it commit 2aa6e331ead7f3ad080561495ad4bd3bc7cd8913? 
FTR this commit wasindeed problematic and eventually reverted in 12.3(3ec8576a02b2b06aa214c8f3c2c3303c8a67639f), as it was leading to exactly theproblem described here (autovacuum kept triggering the same jobs that weresilently ignored, leading to absolutely no visible activity from a user pointof view).",
"msg_date": "Wed, 7 Jun 2023 15:12:44 +0800 (GMT+08:00)",
"msg_from": "jiye <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: confusion about this commit \"Revert \"Skip redundant\n anti-wraparound vacuums\"\""
},
{
"msg_contents": "On Wed, Jun 07, 2023 at 03:12:44PM +0800, jiye wrote:\n> actually out test instance include 2aa6e331ead7f3ad080561495ad4bd3bc7cd8913\n> this commit, not yet reverted this commit.\n\nAre you saying that you're doing tests relying on a version that's missing\nabout 3 years of security and bug fixes? You should definitely update to the\nlatest minor version (currently 12.15) and keep applying all minor versions as\nthey get released.\n\n\n",
"msg_date": "Wed, 7 Jun 2023 15:36:11 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: confusion about this commit \"Revert \"Skip redundant\n anti-wraparound vacuums\"\""
},
{
"msg_contents": "we will update all commits with latest version certaintly, but we must confirm that this issue is same with it currently we\ncan not confirm this issue can be fixed by revert 2aa6e331ead7f3ad080561495ad4bd3bc7cd8913 this commit,\nso i just query about how this commit can trigger autovacuum lock down or does not work.\n\n\n| |\njiye\n|\n|\[email protected]\n|\n---- Replied Message ----\n| From | Julien Rouhaud<[email protected]> |\n| Date | 6/7/2023 15:36 |\n| To | jiye<[email protected]> |\n| Cc | [email protected]<[email protected]> ,\[email protected]<[email protected]> ,\[email protected]<[email protected]> |\n| Subject | Re: confusion about this commit \"Revert \"Skip redundant anti-wraparound vacuums\"\" |\nOn Wed, Jun 07, 2023 at 03:12:44PM +0800, jiye wrote:\nactually out test instance include 2aa6e331ead7f3ad080561495ad4bd3bc7cd8913\nthis commit, not yet reverted this commit.\n\nAre you saying that you're doing tests relying on a version that's missing\nabout 3 years of security and bug fixes? You should definitely update to the\nlatest minor version (currently 12.15) and keep applying all minor versions as\nthey get released.\n\n\n\n\n\n\n\nwe will update all commits with latest version certaintly, but we must confirm that this issue is same with it currently we\n can not confirm this issue can be fixed by revert 2aa6e331ead7f3ad080561495ad4bd3bc7cd8913 this commit,so i just query about how this commit can trigger autovacuum lock down or does not work.\n\n\n\n\n\n\[email protected]\n\n\n\n\n\n ---- Replied Message ----\n \n\n\n\n\n From \n \n\nJulien Rouhaud<[email protected]>\n \n\n\n\n\n Date \n \n\n 6/7/2023 15:36\n \n\n\n\n To \n \n\n\n jiye<[email protected]>\n \n \n\n\n\n\n Cc \n \n\n\n [email protected]<[email protected]>\n ,\n\n\n [email protected]<[email protected]>\n ,\n\n\n [email protected]<[email protected]>\n \n \n\n\n\n\n Subject \n \n\n Re: confusion about this commit \"Revert \"Skip redundant anti-wraparound vacuums\"\"\n \n\n\n\nOn Wed, Jun 07, 2023 at 03:12:44PM +0800, jiye wrote: actually out test instance include 2aa6e331ead7f3ad080561495ad4bd3bc7cd8913 this commit, not yet reverted this commit.Are you saying that you're doing tests relying on a version that's missingabout 3 years of security and bug fixes? You should definitely update to thelatest minor version (currently 12.15) and keep applying all minor versions asthey get released.",
"msg_date": "Wed, 7 Jun 2023 15:42:25 +0800 (GMT+08:00)",
"msg_from": "jiye <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: confusion about this commit \"Revert \"Skip redundant\n anti-wraparound vacuums\"\""
},
{
"msg_contents": "On Wed, Jun 07, 2023 at 03:42:25PM +0800, jiye wrote:\n> we will update all commits with latest version certaintly, but we must\n> confirm that this issue is same with it currently we can not confirm this\n> issue can be fixed by revert 2aa6e331ead7f3ad080561495ad4bd3bc7cd8913 this\n> commit, so i just query about how this commit can trigger autovacuum lock\n> down or does not work.\n\nThe revert commit contains a description of the problem and a link to the\ndiscussion and analysis that led to that revert.\n\n\n",
"msg_date": "Wed, 7 Jun 2023 15:48:52 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: confusion about this commit \"Revert \"Skip redundant\n anti-wraparound vacuums\"\""
}
] |
[
{
"msg_contents": "Hi, all\nI got a coredump when testing with the REL_14_STABLE branch, which is as below:\nI noticed that in commit 10520f4346876aad4941797c2255a21bdac74739, int delayChkpt has been changed back to bool delayChkpt + bool delayChkptEnd. However, the initialization to delayChkptEnd is missed in function InitProcess. When autovacuum_proc1 is in RelationTruncate, the delayChkptEnd will be set as true. If autovacuum_proc1 receives a cancel signal and handles it at this time, autovacuum_proc1 will exit without reseting delayChkptEnd in its error handling process. After that, if autovacuum_proc2 reuses this PGPROC structure, the above error will occur.\nI add a patch to fix this bug in the attachment, hope you can check it.\nThanks & Best Regard",
"msg_date": "Mon, 05 Jun 2023 19:44:05 +0800",
"msg_from": "\"=?UTF-8?B?6JSh5qKm5aifKOeOiuS6jik=?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?Rml4IG1pc3NpbmcgaW5pdGlhbGl6YXRpb24gb2YgZGVsYXlDaGtwdEVuZA==?="
},
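A rough sketch (not the attached patch itself) of the kind of reset being described: clearing both checkpoint-delay flags when a PGPROC slot is handed out, so a value left behind by a backend that errored out inside RelationTruncate() cannot leak to the next process reusing the slot. Field names follow the PG 14 code discussed above; the helper and its call sites are illustrative.

/*
 * Illustrative helper; the intended call sites would be InitProcess() and
 * InitAuxiliaryProcess(), right after a PGPROC is taken from the free list.
 */
static void
ResetCheckpointDelayFlags(PGPROC *proc)
{
    proc->delayChkpt = false;       /* reset that already happens today */
    proc->delayChkptEnd = false;    /* the reset reported as missing */
}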
{
"msg_contents": "Hi, all. I updated the patch for this bugfix, the previous one missed the modification of function InitAuxiliaryProcess, please check the new patch.\nThanks & Best Regard\n------------------------------------------------------------------\n发件人:蔡梦娟(玊于) <[email protected]>\n发送时间:2023年6月5日(星期一) 19:44\n收件人:pgsql-hackers <[email protected]>\n抄 送:robertmhaas <[email protected]>\n主 题:Fix missing initialization of delayChkptEnd\nHi, all\nI got a coredump when testing with the REL_14_STABLE branch, which is as below:\nI noticed that in commit 10520f4346876aad4941797c2255a21bdac74739, int delayChkpt has been changed back to bool delayChkpt + bool delayChkptEnd. However, the initialization to delayChkptEnd is missed in function InitProcess. When autovacuum_proc1 is in RelationTruncate, the delayChkptEnd will be set as true. If autovacuum_proc1 receives a cancel signal and handles it at this time, autovacuum_proc1 will exit without reseting delayChkptEnd in its error handling process. After that, if autovacuum_proc2 reuses this PGPROC structure, the above error will occur.\nI add a patch to fix this bug in the attachment, hope you can check it.\nThanks & Best Regard",
"msg_date": "Tue, 06 Jun 2023 00:39:47 +0800",
"msg_from": "\"=?UTF-8?B?6JSh5qKm5aifKOeOiuS6jik=?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?5Zue5aSN77yaRml4IG1pc3NpbmcgaW5pdGlhbGl6YXRpb24gb2YgZGVsYXlDaGtwdEVuZA==?="
},
{
"msg_contents": "Good catch!\r\n\r\nAt Tue, 06 Jun 2023 00:39:47 +0800, \"蔡梦娟(玊于)\" <[email protected]> wrote in \r\n> Hi, all. I updated the patch for this bugfix, the previous one\r\n> missed the modification of function InitAuxiliaryProcess, please\r\n> check the new patch.\r\n\r\nAfter a quick check through the 14 tree, then compaing with the\r\ncorresponding parts of 15, it hit me that ProcArrayClearTransaction()\r\nneeds an assertion on the variable. Other than that, the patch looks\r\ngood to me.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Tue, 06 Jun 2023 15:13:14 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaRml4?= missing initialization of\n delayChkptEnd"
},
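A sketch of the additional assertion suggested here, mirroring the existing check on delayChkpt in ProcArrayClearTransaction(); shown only to illustrate the idea, not as the committed hunk.

/* in ProcArrayClearTransaction(), next to the existing delayChkpt check */
Assert(!proc->delayChkpt);
Assert(!proc->delayChkptEnd);   /* the flag must not survive to this point either */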
{
"msg_contents": "On Tue, Jun 06, 2023 at 03:13:14PM +0900, Kyotaro Horiguchi wrote:\n> After a quick check through the 14 tree, then compaing with the\n> corresponding parts of 15, it hit me that ProcArrayClearTransaction()\n> needs an assertion on the variable. Other than that, the patch looks\n> good to me.\n\nYeah, it feels wrong to check only after delayChkpt in this code\npath. I'll look at that tomorrow.\n--\nMichael",
"msg_date": "Tue, 6 Jun 2023 20:19:12 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaRmk=?= =?utf-8?Q?x?= missing\n initialization of delayChkptEnd"
},
{
"msg_contents": "In my new patch for pg14, I add the assertion on delayChkptEnd in ProcArrayClearTransaction, and also add patches for release branch from 10 to 13, please check.\nThanks & Best Regard\n------------------------------------------------------------------\n发件人:Kyotaro Horiguchi <[email protected]>\n发送时间:2023年6月6日(星期二) 14:13\n收件人:蔡梦娟(玊于) <[email protected]>\n抄 送:pgsql-hackers <[email protected]>; robertmhaas <[email protected]>\n主 题:Re: 回复:Fix missing initialization of delayChkptEnd\nGood catch!\nAt Tue, 06 Jun 2023 00:39:47 +0800, \"蔡梦娟(玊于)\" <[email protected]> wrote in \n> Hi, all. I updated the patch for this bugfix, the previous one\n> missed the modification of function InitAuxiliaryProcess, please\n> check the new patch.\nAfter a quick check through the 14 tree, then compaing with the\ncorresponding parts of 15, it hit me that ProcArrayClearTransaction()\nneeds an assertion on the variable. Other than that, the patch looks\ngood to me.\nregards.\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 07 Jun 2023 10:25:25 +0800",
"msg_from": "\"=?UTF-8?B?6JSh5qKm5aifKOeOiuS6jik=?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?5Zue5aSN77ya5Zue5aSN77yaRml4IG1pc3NpbmcgaW5pdGlhbGl6YXRpb24gb2YgZGVsYXlD?=\n =?UTF-8?B?aGtwdEVuZA==?="
},
{
"msg_contents": "On Wed, Jun 07, 2023 at 10:25:25AM +0800, 蔡梦娟(玊于) wrote:\n> In my new patch for pg14, I add the assertion on delayChkptEnd in\n> ProcArrayClearTransaction, and also add patches for release branch\n> from 10 to 13, please check.\n\nThanks for the patches. I finally got back to that, double-checked \nall the spots where these flags are used on all the branches, and that\nseems right to me. So applied across 11~14.\n--\nMichael",
"msg_date": "Sun, 11 Jun 2023 10:36:09 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: =?utf-8?B?5Zue5aSN77ya5Zue5aSN77yaRmk=?= =?utf-8?Q?x?= missing\n initialization of delayChkptEnd"
}
] |
[
{
"msg_contents": "This makes \"make check\" work on Mac OS X. Without this patch, on Mac OS X a\ndefault \"./configure; make; make check\" fails with errors like:\n\ndyld[65265]: Library not loaded: /usr/local/pgsql/lib/libpq.5.dylib\n Referenced from: <59A2EAF9-6298-3112-BEDB-EA9A62A9DB53>\n/Users/evan.jones/postgresql-clean/tmp_install/usr/local/pgsql/bin/initdb\n Reason: tried: '/usr/local/pgsql/lib/libpq.5.dylib' (no such file),\n'/System/Volumes/Preboot/Cryptexes/OS/usr/local/pgsql/lib/libpq.5.dylib'\n(no such file), '/usr/local/pgsql/lib/libpq.5.dylib' (no such file),\n'/usr/local/lib/libpq.5.dylib' (no such file), '/usr/lib/libpq.5.dylib' (no\nsuch file, not in dyld cache)\n\nThe reason is that at some point, Mac OS X started removing the\nDYLD_LIBRARY_PATH environment variable for \"untrusted\" executables [1]:\n\"Any dynamic linker (dyld) environment variables, such as\nDYLD_LIBRARY_PATH, are purged when launching protected processes.\"\n\n[1]\nhttps://developer.apple.com/library/archive/documentation/Security/Conceptual/System_Integrity_Protection_Guide/RuntimeProtections/RuntimeProtections.html\n\nOne solution is to explicitly pass the DYLD_LIBRARY_PATH environment\nvariable to to the sub-process shell scripts that are run by pg_regress. To\ndo this, I created an extra_envvars global variable which is set to the\nempty string \"\", but on Mac OS X, is filled in with \"DYLD_LIBRARY_PATH=%s\",\nwhere the %s is the current environment variable. The \"make check\" Makefile\nsets this environment variable to the temporary install directory, so this\nfixes the above errors.\n\nI tested this on Mac OS X and on Linux (Ubuntu 23.04).\n\nThanks!\n\nEvan Jones",
"msg_date": "Mon, 5 Jun 2023 09:47:30 -0400",
"msg_from": "Evan Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] pg_regress.c: Fix \"make check\" on Mac OS X: Pass\n DYLD_LIBRARY_PATH"
},
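A condensed sketch of the approach described in this mail; the extra_envvars name, the 4096-byte buffer, and the __darwin__ guard come from the patch as quoted in the thread, while the helper function and the way it is wired into pg_regress are illustrative.

#include <stdio.h>
#include <stdlib.h>

#ifdef __darwin__
/* captured once at startup; later spliced into child command lines */
static char extra_envvars[4096];

static void
initialize_extra_envvars(void)
{
    const char *dyld = getenv("DYLD_LIBRARY_PATH");

    /*
     * macOS SIP strips DYLD_* variables when /bin/sh is exec'd, so the value
     * has to be passed explicitly as part of the command string instead of
     * relying on environment inheritance.
     */
    if (dyld != NULL)
        snprintf(extra_envvars, sizeof(extra_envvars),
                 "DYLD_LIBRARY_PATH=%s", dyld);
    else
        extra_envvars[0] = '\0';
}
#else
static const char extra_envvars[] = "";
#endif

/*
 * Child invocations are then built roughly as
 *     "%s \"%s/initdb\" -D \"%s\" ...", extra_envvars, bindir, datadir
 * so the library path survives into the spawned shell.
 */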
{
"msg_contents": "Hi,\n\nOn Mon, Jun 05, 2023 at 09:47:30AM -0400, Evan Jones wrote:\n> This makes \"make check\" work on Mac OS X. Without this patch, on Mac OS X a\n> default \"./configure; make; make check\" fails with errors like:\n> \n> dyld[65265]: Library not loaded: /usr/local/pgsql/lib/libpq.5.dylib\n> Referenced from: <59A2EAF9-6298-3112-BEDB-EA9A62A9DB53>\n> /Users/evan.jones/postgresql-clean/tmp_install/usr/local/pgsql/bin/initdb\n> Reason: tried: '/usr/local/pgsql/lib/libpq.5.dylib' (no such file),\n> '/System/Volumes/Preboot/Cryptexes/OS/usr/local/pgsql/lib/libpq.5.dylib'\n> (no such file), '/usr/local/pgsql/lib/libpq.5.dylib' (no such file),\n> '/usr/local/lib/libpq.5.dylib' (no such file), '/usr/lib/libpq.5.dylib' (no\n> such file, not in dyld cache)\n> \n> The reason is that at some point, Mac OS X started removing the\n> DYLD_LIBRARY_PATH environment variable for \"untrusted\" executables [1]:\n> \"Any dynamic linker (dyld) environment variables, such as\n> DYLD_LIBRARY_PATH, are purged when launching protected processes.\"\n> \n> [1]\n> https://developer.apple.com/library/archive/documentation/Security/Conceptual/System_Integrity_Protection_Guide/RuntimeProtections/RuntimeProtections.html\n> \n> One solution is to explicitly pass the DYLD_LIBRARY_PATH environment\n> variable to to the sub-process shell scripts that are run by pg_regress. To\n> do this, I created an extra_envvars global variable which is set to the\n> empty string \"\", but on Mac OS X, is filled in with \"DYLD_LIBRARY_PATH=%s\",\n> where the %s is the current environment variable. The \"make check\" Makefile\n> sets this environment variable to the temporary install directory, so this\n> fixes the above errors.\n\nNote that this is a known issue and a workaround is documented in the macos\nspecific notes at\nhttps://www.postgresql.org/docs/current/installation-platform-notes.html#INSTALLATION-NOTES-MACOS:\n\n> macOS's “System Integrity Protection” (SIP) feature breaks make check,\n> because it prevents passing the needed setting of DYLD_LIBRARY_PATH down to\n> the executables being tested. You can work around that by doing make install\n> before make check. Most PostgreSQL developers just turn off SIP, though.\n\n\n",
"msg_date": "Tue, 6 Jun 2023 10:25:08 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_regress.c: Fix \"make check\" on Mac OS X: Pass\n DYLD_LIBRARY_PATH"
},
{
"msg_contents": "Julien Rouhaud <[email protected]> writes:\n> On Mon, Jun 05, 2023 at 09:47:30AM -0400, Evan Jones wrote:\n>> This makes \"make check\" work on Mac OS X. Without this patch, on Mac OS X a\n>> default \"./configure; make; make check\" fails with errors like:\n>> ...\n>> The reason is that at some point, Mac OS X started removing the\n>> DYLD_LIBRARY_PATH environment variable for \"untrusted\" executables [1]:\n\n> Note that this is a known issue\n\nYeah. We have attempted to work around this before, but failed to find\na solution without more downsides than upsides. I will be interested\nto look at this patch, but lack time for it right now. Anybody else?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Jun 2023 22:33:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_regress.c: Fix \"make check\" on Mac OS X: Pass\n DYLD_LIBRARY_PATH"
},
{
"msg_contents": "On Mon, Jun 5, 2023 at 10:33 PM Tom Lane <[email protected]> wrote:\n\n> > Note that this is a known issue\n> Yeah. We have attempted to work around this before, but failed to find\n> a solution without more downsides than upsides. I will be interested\n> to look at this patch, but lack time for it right now. Anybody else?\n>\n\nAh, I didn't find that mention in the documentation when I was trying to\nget this working. Sorry about that!\n\nMy argument in favour of considering this patch is that making\n\"./configure; make; make check\" work on current major operating systems\nmakes it easier for others to contribute in the future. I think the\ndisadvantage of this patch is it makes pg_regress harder to understand,\nbecause it requires an #ifdef for this OS specific behaviour, and obscures\nthe command lines of the child processes it spawns.\n\nThanks for considering it!\n\nEvan\n\nOn Mon, Jun 5, 2023 at 10:33 PM Tom Lane <[email protected]> wrote:\n> Note that this is a known issue\nYeah. We have attempted to work around this before, but failed to find\na solution without more downsides than upsides. I will be interested\nto look at this patch, but lack time for it right now. Anybody else?Ah, I didn't find that mention in the documentation when I was trying to get this working. Sorry about that!My argument in favour of considering this patch is that making \"./configure; make; make check\" work on current major operating systems makes it easier for others to contribute in the future. I think the disadvantage of this patch is it makes pg_regress harder to understand, because it requires an #ifdef for this OS specific behaviour, and obscures the command lines of the child processes it spawns.Thanks for considering it!Evan",
"msg_date": "Tue, 6 Jun 2023 10:24:45 -0400",
"msg_from": "Evan Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_regress.c: Fix \"make check\" on Mac OS X: Pass\n DYLD_LIBRARY_PATH"
},
{
"msg_contents": "On 06.06.23 16:24, Evan Jones wrote:\n> On Mon, Jun 5, 2023 at 10:33 PM Tom Lane <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> > Note that this is a known issue\n> Yeah. We have attempted to work around this before, but failed to find\n> a solution without more downsides than upsides. I will be interested\n> to look at this patch, but lack time for it right now. Anybody else?\n> \n> \n> Ah, I didn't find that mention in the documentation when I was trying to \n> get this working. Sorry about that!\n> \n> My argument in favour of considering this patch is that making \n> \"./configure; make; make check\" work on current major operating systems \n> makes it easier for others to contribute in the future. I think the \n> disadvantage of this patch is it makes pg_regress harder to understand, \n> because it requires an #ifdef for this OS specific behaviour, and \n> obscures the command lines of the child processes it spawns.\n\nThis addresses only pg_regress. What about all the other test suites? \nPer the previous discussions, you'd need to patch up other places in a \nsimilar way, potentially everywhere system() is called.\n\n\n\n",
"msg_date": "Tue, 6 Jun 2023 23:23:50 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_regress.c: Fix \"make check\" on Mac OS X: Pass\n DYLD_LIBRARY_PATH"
},
{
"msg_contents": "On Tue, Jun 6, 2023 at 5:23 PM Peter Eisentraut <[email protected]>\nwrote:\n\n> This addresses only pg_regress. What about all the other test suites?\n> Per the previous discussions, you'd need to patch up other places in a\n> similar way, potentially everywhere system() is called.\n>\n\nAre there instructions for how I can run these other test suites? The\ninstallation notes that Julien linked, and the Postgres wiki Developer FAQ\n[1] only seem to mention \"make check\". I would be happy to try to fix other\ntests on Mac OS X.\n\nThanks!\n\n[1]\nhttps://wiki.postgresql.org/wiki/Developer_FAQ#How_do_I_test_my_changes.3F\n\nOn Tue, Jun 6, 2023 at 5:23 PM Peter Eisentraut <[email protected]> wrote:\nThis addresses only pg_regress. What about all the other test suites? \nPer the previous discussions, you'd need to patch up other places in a \nsimilar way, potentially everywhere system() is called.Are there instructions for how I can run these other test suites? The installation notes that Julien linked, and the Postgres wiki Developer FAQ [1] only seem to mention \"make check\". I would be happy to try to fix other tests on Mac OS X.Thanks![1] https://wiki.postgresql.org/wiki/Developer_FAQ#How_do_I_test_my_changes.3F",
"msg_date": "Tue, 6 Jun 2023 17:43:57 -0400",
"msg_from": "Evan Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_regress.c: Fix \"make check\" on Mac OS X: Pass\n DYLD_LIBRARY_PATH"
},
{
"msg_contents": "Evan Jones <[email protected]> writes:\n> On Tue, Jun 6, 2023 at 5:23 PM Peter Eisentraut <[email protected]>\n> wrote:\n>> This addresses only pg_regress. What about all the other test suites?\n\n> Are there instructions for how I can run these other test suites?\n\nconfigure with --enable-tap-tests, then do \"make check-world\".\n\nAlso, adding certain additional feature arguments such as\n--with-python enables more test cases. We aren't going to be\nsuper excited about a patch that doesn't handle all of them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Jun 2023 21:48:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_regress.c: Fix \"make check\" on Mac OS X: Pass\n DYLD_LIBRARY_PATH"
},
{
"msg_contents": "On Wed, Jun 7, 2023 at 5:44 AM Evan Jones <[email protected]> wrote:\n>\n> On Tue, Jun 6, 2023 at 5:23 PM Peter Eisentraut <[email protected]> wrote:\n>>\n>> This addresses only pg_regress. What about all the other test suites?\n>> Per the previous discussions, you'd need to patch up other places in a\n>> similar way, potentially everywhere system() is called.\n>\n>\n> Are there instructions for how I can run these other test suites? The installation notes that Julien linked, and the Postgres wiki Developer FAQ [1] only seem to mention \"make check\". I would be happy to try to fix other tests on Mac OS X.\n\nAFAIK there's no make rule that can really run everything. You can\nget most of it using make check-world (see\nhttps://www.postgresql.org/docs/current/regress-run.html#id-1.6.20.5.5)\nand making sure you added support for TAP tests (and probably also a\nlot of optional dependencies) when running configure. This won't run\neverything but hopefully will hit most of the relevant infrastructure.\n\n\n",
"msg_date": "Wed, 7 Jun 2023 09:49:46 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_regress.c: Fix \"make check\" on Mac OS X: Pass\n DYLD_LIBRARY_PATH"
},
{
"msg_contents": "On Tue, Jun 06, 2023 at 09:48:24PM -0400, Tom Lane wrote:\n> configure with --enable-tap-tests, then do \"make check-world\".\n> \n> Also, adding certain additional feature arguments such as\n> --with-python enables more test cases. We aren't going to be\n> super excited about a patch that doesn't handle all of them.\n\nThere is a bit more to this story. Mainly, see PG_TEST_EXTRA here:\nhttps://www.postgresql.org/docs/devel/regress-run.html\n--\nMichael",
"msg_date": "Mon, 12 Jun 2023 09:04:01 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_regress.c: Fix \"make check\" on Mac OS X: Pass\n DYLD_LIBRARY_PATH"
},
{
"msg_contents": "I have applied the patch to the latest master branch and successfully executed './configure && make && make check' on macOS Ventura. However, during the process, a warning was encountered: \"mixing declarations and code is incompatible with standards before C99 [-Wdeclaration-after-statement]\". Moving the declaration of 'result' to the beginning like below can resolve the warning, and it would be better to use a unique variable instead of 'result'. \r\n\r\n#ifdef __darwin__\r\nstatic char extra_envvars[4096];\r\n+int result = -1;\r\n... ...\r\n-int result = snprintf(extra_envvars, sizeof(extra_envvars), \"DYLD_LIBRARY_PATH=%s\",\r\n+result = snprintf(extra_envvars, sizeof(extra_envvars), \"DYLD_LIBRARY_PATH=%s\",",
"msg_date": "Fri, 16 Jun 2023 21:25:12 +0000",
"msg_from": "David Zhang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_regress.c: Fix \"make check\" on Mac OS X: Pass\n DYLD_LIBRARY_PATH"
},
{
"msg_contents": "After conducting a further investigation into this issue, I have made \nsome discoveries. The previous patch successfully resolves the problem \nwhen running the commands `./configure && make && make check` (without \nany previous sudo make install or make install). However, it stops at \nthe 'isolation check' when using the commands `./configure \n--enable-tap-tests && make && make check-world`.\n\nTo address this, I attempted to apply a similar approach as the previous \npatch, resulting in an experimental patch (attached). This new patch \nhelps progress the 'make-world' process and passes the 'isolation \ncheck', but there are still several remaining issues that need to be \naddressed.\n\n\nCurrently, there is a description suggesting a workaround by running a \n'make install' command first, but I find it to be somewhat inaccurate. \nIt would be better to update the existing description to provide more \nprecise instructions on how to overcome this issue. Here are the changes \nI would suggest.\n\nfrom:\n\"You can work around that by doing make install before make check. Most \nPostgreSQL developers just turn off SIP, though.\"\n\nto:\n\"You can execute sudo make install if you do not specify a prefix during \nthe configure step, or make install without sudo if you do specify a \nprefix (assuming proper permissions) before make check. Most PostgreSQL \ndevelopers just turn off SIP, though.\"\n\nOtherwise, following the current description, if you run `./configure \n&& make install` you will get error: \"mkdir: /usr/local/pgsql: \nPermission denied\"\n\n\nBelow are the steps I took that led to the discovery of additional issues.\n\ngit apply pg_regress_mac_os_x_dyld.patch\n./configure\nmake\nmake check\n\n... ...\n# All 215 tests passed.\n\n\n./configure --enable-tap-tests\nmake\nmake check-world\n\n... ...\necho \"# +++ isolation check in src/test/isolation +++\"\n... ...\ndyld[32335]: Library not loaded: /usr/local/pgsql/lib/libpq.5.dylib\n Referenced from: <EB3758C5-A87B-36C5-AA29-C1E31AD89E70> \n/Users/david/hg/sandbox/postgres/src/test/isolation/isolationtester\n Reason: tried: '/usr/local/pgsql/lib/libpq.5.dylib' (no such file), \n'/System/Volumes/Preboot/Cryptexes/OS/usr/local/pgsql/lib/libpq.5.dylib' \n(no such file), '/usr/local/pgsql/lib/libpq.5.dylib' (no such file), \n'/usr/local/lib/libpq.5.dylib' (no such file), '/usr/lib/libpq.5.dylib' \n(no such file, not in dyld cache)\nno data was returned by command \n\"\"/Users/david/hg/sandbox/postgres/src/test/isolation/isolationtester\" -V\"\n\n\n\ngit apply pg_regress_mac_os_x_dyld_isolation_check_only.patch\n./configure --enable-tap-tests\nmake\nmake check-world\n\n... ...\n# All 215 tests passed.\n... ...\n# +++ isolation check in src/test/isolation +++\n... ...\n# All 112 tests passed.\n\necho \"# +++ tap check in src/test/modules/brin +++\"\n... ...\n# +++ tap check in src/test/modules/brin +++\nt/01_workitems.pl ........ Bailout called. Further testing stopped: \ncommand \"initdb -D \n/Users/david/hg/sandbox/postgres/src/test/modules/brin/tmp_check/t_01_workitems_tango_data/pgdata \n-A trust -N\" died with signal 6\nt/01_workitems.pl ........ Dubious, test returned 255 (wstat 65280, 0xff00)\nNo subtests run\n\n\nAny thoughts ?\n\nThank you\n\nDavid\n\nOn 2023-06-16 2:25 p.m., David Zhang wrote:\n> I have applied the patch to the latest master branch and successfully executed './configure && make && make check' on macOS Ventura. 
However, during the process, a warning was encountered: \"mixing declarations and code is incompatible with standards before C99 [-Wdeclaration-after-statement]\". Moving the declaration of 'result' to the beginning like below can resolve the warning, and it would be better to use a unique variable instead of 'result'.\n>\n> #ifdef __darwin__\n> static char extra_envvars[4096];\n> +int result = -1;\n> ... ...\n> -int result = snprintf(extra_envvars, sizeof(extra_envvars), \"DYLD_LIBRARY_PATH=%s\",\n> +result = snprintf(extra_envvars, sizeof(extra_envvars), \"DYLD_LIBRARY_PATH=%s\",",
"msg_date": "Thu, 22 Jun 2023 12:08:05 -0700",
"msg_from": "David Zhang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_regress.c: Fix \"make check\" on Mac OS X: Pass\n DYLD_LIBRARY_PATH"
},
{
"msg_contents": "On 22.06.23 21:08, David Zhang wrote:\n> Currently, there is a description suggesting a workaround by running a \n> 'make install' command first, but I find it to be somewhat inaccurate. \n> It would be better to update the existing description to provide more \n> precise instructions on how to overcome this issue. Here are the changes \n> I would suggest.\n> \n> from:\n> \"You can work around that by doing make install before make check. Most \n> PostgreSQL developers just turn off SIP, though.\"\n> \n> to:\n> \"You can execute sudo make install if you do not specify a prefix during \n> the configure step, or make install without sudo if you do specify a \n> prefix (assuming proper permissions) before make check. Most PostgreSQL \n> developers just turn off SIP, though.\"\n> \n> Otherwise, following the current description, if you run `./configure && \n> make install` you will get error: \"mkdir: /usr/local/pgsql: Permission \n> denied\"\n\nI think you should interpret \"doing make install\" as \"running make \ninstall or a similar command as described earlier in this chapter\". \nNote also that the installation instructions don't use \"sudo\" anywhere \nright now, so throwing it in at this point would be weird.\n\n> echo \"# +++ tap check in src/test/modules/brin +++\"\n> ... ...\n> # +++ tap check in src/test/modules/brin +++\n> t/01_workitems.pl ........ Bailout called. Further testing stopped: \n> command \"initdb -D \n> /Users/david/hg/sandbox/postgres/src/test/modules/brin/tmp_check/t_01_workitems_tango_data/pgdata -A trust -N\" died with signal 6\n> t/01_workitems.pl ........ Dubious, test returned 255 (wstat 65280, 0xff00)\n> No subtests run\n\nAs I mentioned earlier, you would need to find all uses of system() in \nthe PostgreSQL source code and make your adjustments there. IIRC, the \nTAP tests require pg_ctl, so maybe look there.\n\n\n",
"msg_date": "Fri, 23 Jun 2023 23:05:52 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_regress.c: Fix \"make check\" on Mac OS X: Pass\n DYLD_LIBRARY_PATH"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-05 22:33:16 -0400, Tom Lane wrote:\n> Julien Rouhaud <[email protected]> writes:\n> > On Mon, Jun 05, 2023 at 09:47:30AM -0400, Evan Jones wrote:\n> >> This makes \"make check\" work on Mac OS X. Without this patch, on Mac OS X a\n> >> default \"./configure; make; make check\" fails with errors like:\n> >> ...\n> >> The reason is that at some point, Mac OS X started removing the\n> >> DYLD_LIBRARY_PATH environment variable for \"untrusted\" executables [1]:\n> \n> > Note that this is a known issue\n> \n> Yeah. We have attempted to work around this before, but failed to find\n> a solution without more downsides than upsides. I will be interested\n> to look at this patch, but lack time for it right now. Anybody else?\n\nFWIW, I have a patch, which I posted originally as part of the meson thread,\nthat makes the meson build work correctly even with SIP enabled. The trick is\nbasically to change the absolute references to libraries to relative ones.\n\nExcept for a small amount of complexity during install, I don't think this has\na whole lot of downsides. Making the install relocatable imo is pretty nice.\n\nI guess I should repost that for 17...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 23 Jun 2023 17:19:49 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_regress.c: Fix \"make check\" on Mac OS X: Pass\n DYLD_LIBRARY_PATH"
}
] |
[
{
"msg_contents": "I have reworked the case of BUG #17842 to include the data and the questions for further investigation.\n\n\nqualstest_data contais the data export with --insert (to test it on other DB systems)\n\nqualstest_query contains the failing query and a short introduction to the data.\n\n\nThe problem is NOT to correct the query to a working case, but to show a fundamental problem with qual pushdown.\n\n\nOn pg16b1 (same on 15.3) the explain of the second query produces:\n\n\nqualstest=#\nqualstest=#\nqualstest=# explain -- select * from ( -- select count(*) from ( -- select length(sel) from (\nqualstest-# select * from (\nqualstest(# select\nqualstest(# onum\nqualstest(# ,vname\nqualstest(# ,vlen\nqualstest(# ,nlen\nqualstest(# ,olen\nqualstest(# ,NULLIF(vlen-olen,0) as delta_len\nqualstest(# from (\nqualstest(# select *\nqualstest(# ,('0'||split_part(split_part(nline,'(',2),')',1))::smallint as nlen\nqualstest(# ,('0'||split_part(split_part(oline,'(',2),')',1))::smallint as olen\nqualstest(# from newcol\nqualstest(# join oldcol on onum=nnum\nqualstest(# join (\nqualstest(# select\nqualstest(# vnum\nqualstest(# ,split_part(vline,' ',1) as vname\nqualstest(# ,('0'||split_part(split_part(vline,'(',2),')',1))::smallint as vlen\nqualstest(# from varcol\nqualstest(# ) qv on nline like '%'||vname||'%'\nqualstest(# where nline not like '%KEY%'\nqualstest(# ) qj\nqualstest(# --limit 30\nqualstest(# where vlen!=olen\nqualstest(# ) qcomp\nqualstest-# where\nqualstest-# nlen > 0\nqualstest-# ;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=90.37..10257.60 rows=2188 width=44)\n Hash Cond: (newcol.nnum = oldcol.onum)\n Join Filter: ((('0'::text || split_part(split_part(varcol.vline, '('::text, 2), ')'::text, 1)))::smallint <> (('0'::text || split_part(split_part(oldcol.oline, '('::text, 2), ')'::text, 1)))::smallint)\n -> Nested Loop (cost=0.00..10008.26 rows=2199 width=73)\n Join Filter: (newcol.nline ~~ (('%'::text || split_part(varcol.vline, ' '::text, 1)) || '%'::text))\n -> Seq Scan on newcol (cost=0.00..98.23 rows=738 width=36)\n Filter: ((nline !~~ '%KEY%'::text) AND ((('0'::text || split_part(split_part(nline, '('::text, 2), ')'::text, 1)))::smallint > 0))\n -> Materialize (cost=0.00..14.94 rows=596 width=37)\n -> Seq Scan on varcol (cost=0.00..11.96 rows=596 width=37)\n -> Hash (cost=60.72..60.72 rows=2372 width=44)\n -> Seq Scan on oldcol (cost=0.00..60.72 rows=2372 width=44)\n(11 Zeilen)\n\n\nqualstest=#\nqualstest=# select version ();\n version\n---------------------------------------------------------------\n PostgreSQL 16beta1, compiled by Visual C++ build 1934, 64-bit\n(1 Zeile)\n\non execution:\nFEHLER: ungültige Eingabesyntax für Typ smallint: »08,2«\n\n\nANALYSIS:\n\nThe join conditions matches all rows from oldcol and newcol, which then are filtered by inner join with only the varchar columns from varcol. 
Therefore the lines\n\n,('0'||split_part(split_part(nline,'(',2),')',1))::smallint as nlen\n,('0'||split_part(split_part(oline,'(',2),')',1))::smallint as olen\n\nshould be applied only to filtered results known to have smallint values between the parentheses in varchar definitions.\n\nThis is done correctly in the first (full) query without the final where clause.\n\nWhen the where nlen > 0 comes into play, the plan is changed and the filter qual is applied to all lines.\nThere are other lines where the cast is not possible and the query fails with the above error.\n\nThe fundamental problem is that quals should not be pushed down for tuples not in the result set, when the operator classes of these quals could error out.\n\nSome operator classes have no runtime errors (like cast from smallint to int), but when such an error is possible, they should not be applied to tuples not part of the joined result set!\n\nI stumbled over the error by adding this harmless where clause (where nlen > 0) to a previously working query and got the error.\n\nOther where-clauses (where nnum < 100) cause the same error.\n\nOperator classes which could error out should not be applied for filtering columns from relations, which are not the outermost relation in joins and could be eliminated by another join.\n\nThese queries are syntactically and semantically correct but the Postgres implementation causes them to error out.\nThis is very surprising for the SQL user!\n\nThe problem also seems to exist in certain back branches.\n\nHans Buschmann",
"msg_date": "Mon, 5 Jun 2023 14:40:35 +0000",
"msg_from": "Hans Buschmann <[email protected]>",
"msg_from_op": true,
"msg_subject": "QUAL Pushdown causes ERROR on syntactically and semantically correct\n SQL Query"
},
{
"msg_contents": "On Mon, Jun 5, 2023, 07:40 Hans Buschmann <[email protected]> wrote:\n\n> I have reworked the case of BUG #17842 to include the data and the\n> questions for further investigation.\n>\n>\n> The problem is NOT to correct the query to a working case, but to show a\n> fundamental problem with qual pushdown.\n>\n\nThe optimization system operates with imperfect information, meaning it\nassumes expressions do not produce errors depending on the data. If you\nknow certain data can produce errors you need to add the relevant code to\navoid evaluating those expressions on those data.\n\nYes, or would nice if PostgreSQL could do better here. The cost of doing\nso is quite high though, and there is no interest in incurring that cost.\n\nDavid J.\n\nOn Mon, Jun 5, 2023, 07:40 Hans Buschmann <[email protected]> wrote:\n\n\nI have reworked the case of BUG #17842 to include the data and the questions for further investigation.\n\nThe problem is NOT to correct the query to a working case, but to show a fundamental problem with qual pushdown.The optimization system operates with imperfect information, meaning it assumes expressions do not produce errors depending on the data. If you know certain data can produce errors you need to add the relevant code to avoid evaluating those expressions on those data.Yes, or would nice if PostgreSQL could do better here. The cost of doing so is quite high though, and there is no interest in incurring that cost.David J.",
"msg_date": "Mon, 5 Jun 2023 07:55:57 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: QUAL Pushdown causes ERROR on syntactically and semantically\n correct SQL Query"
},
{
"msg_contents": "Hans Buschmann <[email protected]> writes:\n> I have reworked the case of BUG #17842 to include the data and the questions for further investigation.\n\nThis wasn't a bug before, and it still isn't. Postgres doesn't guarantee\nanything about the order of execution of a query's WHERE and JOIN clauses,\nand we do not intend to offer any such guarantee in future either. Doing\nso would make far more people unhappy than happy, since it'd be\ncatastrophic for performance in many cases.\n\nIf you really need to use error-prone qual clauses, you need an\noptimization fence. There are a couple of ways to do that but\nthe most recommendable is to use a materialized CTE:\n\nwith m as materialized\n (select ..., ('0'||split_part(split_part(nline,'(',2),')',1))::smallint\n as nlen, ...\n from ... where ...)\nselect * from m where nlen > 0;\n\nThe \"nlen > 0\" condition won't get pushed into the CTE.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Jun 2023 11:15:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: QUAL Pushdown causes ERROR on syntactically and semantically\n correct SQL Query"
}
] |
[
{
"msg_contents": "I spoke with some folks at PGCon about making PostgreSQL multi-threaded, \nso that the whole server runs in a single process, with multiple \nthreads. It has been discussed many times in the past, last thread on \npgsql-hackers was back in 2017 when Konstantin made some experiments [0].\n\nI feel that there is now pretty strong consensus that it would be a good \nthing, more so than before. Lots of work to get there, and lots of \ndetails to be hashed out, but no objections to the idea at a high level.\n\nThe purpose of this email is to make that silent consensus explicit. If \nyou have objections to switching from the current multi-process \narchitecture to a single-process, multi-threaded architecture, please \nspeak up.\n\nIf there are no major objections, I'm going to update the developer FAQ, \nremoving the excuses there for why we don't use threads [1]. And we can \nstart to talk about the path to get there. Below is a list of some \nhurdles and proposed high-level solutions. This isn't an exhaustive \nlist, just some of the most obvious problems:\n\n# Transition period\n\nThe transition surely cannot be done fully in one release. Even if we \ncould pull it off in core, extensions will need more time to adapt. \nThere will be a transition period of at least one release, probably \nmore, where you can choose multi-process or multi-thread model using a \nGUC. Depending on how it goes, we can document it as experimental at first.\n\n# Thread per connection\n\nTo get started, it's most straightforward to have one thread per \nconnection, just replacing backend process with a backend thread. In the \nfuture, we might want to have a thread pool with some kind of a \nscheduler to assign active queries to worker threads. Or multiple \nthreads per connection, or spawn additional helper threads for specific \ntasks. But that's future work.\n\n# Global variables\n\nWe have a lot of global and static variables:\n\n$ objdump -t bin/postgres | grep -e \"\\.data\" -e \"\\.bss\" | grep -v \n\"data.rel.ro\" | wc -l\n1666\n\nSome of them are pointers to shared memory structures and can stay as \nthey are. But many of them are per-connection state. The most \nstraightforward conversion for those is to turn them into thread-local \nvariables, like Konstantin did in [0].\n\nIt might be good to have some kind of a Session context struct that we \npass everywhere, or maybe have a single thread-local variable to hold \nit. Many of the global variables would become fields in the Session. But \nthat's future work.\n\n# Extensions\n\nA lot of extensions also contain global variables or other things that \nbreak in a multi-threaded environment. We need a way to label extensions \nthat support multi-threading. And in the future, also extensions that \n*require* a multi-threaded server.\n\nLet's add flags to the control file to mark if the extension is \nthread-safe and/or process-safe. If you try to load an extension that's \nnot compatible with the server's mode, throw an error.\n\nWe might need new functions in addition _PG_init, called at connection \nstartup and shutdown. And background worker API probably needs some changes.\n\n# Exposed PIDs\n\nWe expose backend process PIDs to users in a few places. \npg_stat_activity.pid and pg_terminate_backend(), for example. They need \nto be replaced, or we can assign a fake PID to each connection when \nrunning in multi-threaded mode.\n\n# Signals\n\nWe use signals for communication between backends. 
SIGURG in latches, \nand SIGUSR1 in procsignal, for example. Those primitives need to be \nrewritten with some other signalling mechanism in multi-threaded mode. \nIn principle, it's possible to set per-thread signal handlers, and send \na signal to a particular thread (pthread_kill), but I think it's better \nto just rewrite them.\n\nWe also document that you can send SIGINT, SIGTERM or SIGHUP to an \nindividual backend process. I think we need to deprecate that, and maybe \ncome up with some convenient replacement. E.g. send a message with \nbackend ID to a unix domain socket, and a new pg_kill executable to send \nthose messages.\n\n# Restart on crash\n\nIf a backend process crashes, postmaster terminates all other backends \nand restarts the system. That's hard (impossible?) to do safely if \neverything runs in one process. We can continue have a separate \npostmaster process that just monitors the main process and restarts it \non crash.\n\n# Thread-safe libraries\n\nNeed to switch to thread-safe versions of library functions, e.g. \nuselocale() instead of setlocale().\n\nThe Python interpreter has a Global Interpreter Lock. It's not possible \nto create two completely independent Python interpreters in the same \nprocess, there will be some lock contention on the GIL. Fortunately, the \npython community just accepted https://peps.python.org/pep-0684/. That's \nexactly what we need: it makes it possible for separate interpreters to \nhave their own GILs. It's not clear to me if that's in Python 3.12 \nalready, or under development for some future version, but by the time \nwe make the switch in Postgres, there probably will be a solution in \ncpython.\n\nAt a quick glance, I think perl and TCL are fine, you can have multiple \ninterpreters in one process. Need to check any other libraries we use.\n\n\n[0] \nhttps://www.postgresql.org/message-id/flat/9defcb14-a918-13fe-4b80-a0b02ff85527%40postgrespro.ru\n\n[1] \nhttps://wiki.postgresql.org/wiki/Developer_FAQ#Why_don.27t_you_use_raw_devices.2C_async-I.2FO.2C_.3Cinsert_your_favorite_wizz-bang_feature_here.3E.3F\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 5 Jun 2023 17:51:57 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> I spoke with some folks at PGCon about making PostgreSQL multi-threaded, \n> so that the whole server runs in a single process, with multiple \n> threads. It has been discussed many times in the past, last thread on \n> pgsql-hackers was back in 2017 when Konstantin made some experiments [0].\n\n> I feel that there is now pretty strong consensus that it would be a good \n> thing, more so than before. Lots of work to get there, and lots of \n> details to be hashed out, but no objections to the idea at a high level.\n\n> The purpose of this email is to make that silent consensus explicit. If \n> you have objections to switching from the current multi-process \n> architecture to a single-process, multi-threaded architecture, please \n> speak up.\n\nFor the record, I think this will be a disaster. There is far too much\ncode that will get broken, largely silently, and much of it is not\nunder our control.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Jun 2023 11:18:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Mon Jun 5, 2023 at 9:51 AM CDT, Heikki Linnakangas wrote:\n> # Global variables\n>\n> We have a lot of global and static variables:\n>\n> $ objdump -t bin/postgres | grep -e \"\\.data\" -e \"\\.bss\" | grep -v \n> \"data.rel.ro\" | wc -l\n> 1666\n>\n> Some of them are pointers to shared memory structures and can stay as \n> they are. But many of them are per-connection state. The most \n> straightforward conversion for those is to turn them into thread-local \n> variables, like Konstantin did in [0].\n>\n> It might be good to have some kind of a Session context struct that we \n> pass everywhere, or maybe have a single thread-local variable to hold \n> it. Many of the global variables would become fields in the Session. But \n> that's future work.\n\n+1 to the session context idea after the more simple thread_local\nstorage idea.\n\n> # Extensions\n>\n> A lot of extensions also contain global variables or other things that \n> break in a multi-threaded environment. We need a way to label extensions \n> that support multi-threading. And in the future, also extensions that \n> *require* a multi-threaded server.\n>\n> Let's add flags to the control file to mark if the extension is \n> thread-safe and/or process-safe. If you try to load an extension that's \n> not compatible with the server's mode, throw an error.\n>\n> We might need new functions in addition _PG_init, called at connection \n> startup and shutdown. And background worker API probably needs some changes.\n\nIt would be a good idea to start exposing a variable through pkg-config\nto tell whether the backend is multi-threaded or multi-process.\n\n> # Exposed PIDs\n>\n> We expose backend process PIDs to users in a few places. \n> pg_stat_activity.pid and pg_terminate_backend(), for example. They need \n> to be replaced, or we can assign a fake PID to each connection when \n> running in multi-threaded mode.\n\nWould it be possible to just transparently slot in the thread ID\ninstead?\n\n> # Thread-safe libraries\n>\n> Need to switch to thread-safe versions of library functions, e.g. \n> uselocale() instead of setlocale().\n\nSeems like a good starting point.\n\n> The Python interpreter has a Global Interpreter Lock. It's not possible \n> to create two completely independent Python interpreters in the same \n> process, there will be some lock contention on the GIL. Fortunately, the \n> python community just accepted https://peps.python.org/pep-0684/. That's \n> exactly what we need: it makes it possible for separate interpreters to \n> have their own GILs. It's not clear to me if that's in Python 3.12 \n> already, or under development for some future version, but by the time \n> we make the switch in Postgres, there probably will be a solution in \n> cpython.\n\n3.12 is the currently in-development version of Python. 3.12 is planned\nfor release in October of this year.\n\nA workaround that some projects seem to do is to use multiple Python\ninterpreters[0], though it seems uncommon. It might be important to note\ndepending on the minimum version of Python Postgres aims to support (not\nsure on this policy).\n\nThe C-API of Python also provides mechanisms for releasing the GIL. I am\nnot familiar with how Postgres uses Python, but I have seen huge\nimprovements to performance with well-placed GIL releases in\nmulti-threaded contexts. 
Surely this API would just become a no-op after\nthe PEP is implemented.\n\n[0]: https://peps.python.org/pep-0684/#existing-use-of-multiple-interpreters\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 05 Jun 2023 10:28:50 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 05/06/2023 11:18, Tom Lane wrote:\n> Heikki Linnakangas <[email protected]> writes:\n>> I spoke with some folks at PGCon about making PostgreSQL multi-threaded,\n>> so that the whole server runs in a single process, with multiple\n>> threads. It has been discussed many times in the past, last thread on\n>> pgsql-hackers was back in 2017 when Konstantin made some experiments [0].\n> \n>> I feel that there is now pretty strong consensus that it would be a good\n>> thing, more so than before. Lots of work to get there, and lots of\n>> details to be hashed out, but no objections to the idea at a high level.\n> \n>> The purpose of this email is to make that silent consensus explicit. If\n>> you have objections to switching from the current multi-process\n>> architecture to a single-process, multi-threaded architecture, please\n>> speak up.\n> \n> For the record, I think this will be a disaster. There is far too much\n> code that will get broken, largely silently, and much of it is not\n> under our control.\n\nNoted. Other large projects have gone through this transition. It's not \neasy, but it's a lot easier now than it was 10 years ago. The platform \nand compiler support is there now, all libraries have thread-safe \ninterfaces, etc.\n\nI don't expect you or others to buy into any particular code change at \nthis point, or to contribute time into it. Just to accept that it's a \nworthwhile goal. If the implementation turns out to be a disaster, then \nit won't be accepted, of course. But I'm optimistic.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 5 Jun 2023 18:33:57 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 05/06/2023 11:28, Tristan Partin wrote:\n> On Mon Jun 5, 2023 at 9:51 AM CDT, Heikki Linnakangas wrote:\n>> # Extensions\n>>\n>> A lot of extensions also contain global variables or other things that\n>> break in a multi-threaded environment. We need a way to label extensions\n>> that support multi-threading. And in the future, also extensions that\n>> *require* a multi-threaded server.\n>>\n>> Let's add flags to the control file to mark if the extension is\n>> thread-safe and/or process-safe. If you try to load an extension that's\n>> not compatible with the server's mode, throw an error.\n>>\n>> We might need new functions in addition _PG_init, called at connection\n>> startup and shutdown. And background worker API probably needs some changes.\n> \n> It would be a good idea to start exposing a variable through pkg-config\n> to tell whether the backend is multi-threaded or multi-process.\n\nI think we need to support both modes without having to recompile the \nserver or the extensions. So it needs to be a runtime check.\n\n>> # Exposed PIDs\n>>\n>> We expose backend process PIDs to users in a few places.\n>> pg_stat_activity.pid and pg_terminate_backend(), for example. They need\n>> to be replaced, or we can assign a fake PID to each connection when\n>> running in multi-threaded mode.\n> \n> Would it be possible to just transparently slot in the thread ID\n> instead?\n\nPerhaps. It might break applications that use the PID directly with e.g. \n'kill <PID>', though.\n\n>> The Python interpreter has a Global Interpreter Lock. It's not possible\n>> to create two completely independent Python interpreters in the same\n>> process, there will be some lock contention on the GIL. Fortunately, the\n>> python community just accepted https://peps.python.org/pep-0684/. That's\n>> exactly what we need: it makes it possible for separate interpreters to\n>> have their own GILs. It's not clear to me if that's in Python 3.12\n>> already, or under development for some future version, but by the time\n>> we make the switch in Postgres, there probably will be a solution in\n>> cpython.\n> \n> 3.12 is the currently in-development version of Python. 3.12 is planned\n> for release in October of this year.\n> \n> A workaround that some projects seem to do is to use multiple Python\n> interpreters[0], though it seems uncommon. It might be important to note\n> depending on the minimum version of Python Postgres aims to support (not\n> sure on this policy).\n> \n> The C-API of Python also provides mechanisms for releasing the GIL. I am\n> not familiar with how Postgres uses Python, but I have seen huge\n> improvements to performance with well-placed GIL releases in\n> multi-threaded contexts. Surely this API would just become a no-op after\n> the PEP is implemented.\n> \n> [0]: https://peps.python.org/pep-0684/#existing-use-of-multiple-interpreters\n\nOh, cool. I'm inclined to jump straight to PEP-684 and require python \n3.12 in multi-threaded mode, though, or just accept that it's slow. But \nlet's see what the state of the world is when we get there.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 5 Jun 2023 18:43:54 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "nOn Mon, Jun 5, 2023 at 05:51:57PM +0300, Heikki Linnakangas wrote:\n> # Restart on crash\n> \n> If a backend process crashes, postmaster terminates all other backends and\n> restarts the system. That's hard (impossible?) to do safely if everything\n> runs in one process. We can continue have a separate postmaster process that\n> just monitors the main process and restarts it on crash.\n\nIt would be good to know what new class of errors would cause server\nrestarts, e.g., memory allocation failures?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 5 Jun 2023 13:10:52 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 05/06/2023 13:10, Bruce Momjian wrote:\n> nOn Mon, Jun 5, 2023 at 05:51:57PM +0300, Heikki Linnakangas wrote:\n>> # Restart on crash\n>>\n>> If a backend process crashes, postmaster terminates all other backends and\n>> restarts the system. That's hard (impossible?) to do safely if everything\n>> runs in one process. We can continue have a separate postmaster process that\n>> just monitors the main process and restarts it on crash.\n> \n> It would be good to know what new class of errors would cause server\n> restarts, e.g., memory allocation failures?\n\nYou mean \"out of memory\"? No, that would be horrible.\n\nI don't think there would be any new class of errors that would cause \nserver restarts. In theory, having a separate address space for each \nbackend gives you some protection. In practice, there are a lot of \nshared memory structures anyway that you can stomp over, and a segfault \nor unexpected exit of any backend process causes postmaster to restart \nthe whole system anyway.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 5 Jun 2023 20:29:16 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 6/5/23 11:33 AM, Heikki Linnakangas wrote:\r\n> On 05/06/2023 11:18, Tom Lane wrote:\r\n>> Heikki Linnakangas <[email protected]> writes:\r\n>>> I spoke with some folks at PGCon about making PostgreSQL multi-threaded,\r\n>>> so that the whole server runs in a single process, with multiple\r\n>>> threads. It has been discussed many times in the past, last thread on\r\n>>> pgsql-hackers was back in 2017 when Konstantin made some experiments \r\n>>> [0].\r\n>>\r\n>>> I feel that there is now pretty strong consensus that it would be a good\r\n>>> thing, more so than before. Lots of work to get there, and lots of\r\n>>> details to be hashed out, but no objections to the idea at a high level.\r\n>>\r\n>>> The purpose of this email is to make that silent consensus explicit. If\r\n>>> you have objections to switching from the current multi-process\r\n>>> architecture to a single-process, multi-threaded architecture, please\r\n>>> speak up.\r\n>>\r\n>> For the record, I think this will be a disaster. There is far too much\r\n>> code that will get broken, largely silently, and much of it is not\r\n>> under our control.\r\n> \r\n> Noted. Other large projects have gone through this transition. It's not \r\n> easy, but it's a lot easier now than it was 10 years ago. The platform \r\n> and compiler support is there now, all libraries have thread-safe \r\n> interfaces, etc.\r\n> \r\n> I don't expect you or others to buy into any particular code change at \r\n> this point, or to contribute time into it. Just to accept that it's a \r\n> worthwhile goal. If the implementation turns out to be a disaster, then \r\n> it won't be accepted, of course. But I'm optimistic.\r\n\r\nI don't have enough expertise in this area to comment on if it'd be a \r\n\"disaster\" or not. My zoomed out observations are two-fold:\r\n\r\n1. It seems like there's a lack of consensus on which of processes vs. \r\nthreads yield the best performance benefit, and from talking to folks \r\nwith greater expertise than me, this can vary between workloads. I \r\nbelieve one DB even gives uses a choice if they want to run in processes \r\nvs. threads.\r\n\r\n2. While I wouldn't want to necessarily discourage a moonshot effort, I \r\nwould ask if developer time could be better spent on tackling some of \r\nthe other problems around vertical scalability? Per some PGCon \r\ndiscussions, there's still room for improvement in how PostgreSQL can \r\nbest utilize resources available very large \"commodity\" machines (a \r\n448-core / 24TB RAM instance comes to mind).\r\n\r\nI'm purposely giving a nonanswer on whether it's a worthwhile goal, but \r\nrather I'd be curious where it could stack up against some other efforts \r\nto continue to help PostgreSQL improve performance and handle very large \r\nworkloads.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Mon, 5 Jun 2023 13:40:13 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Mon, Jun 5, 2023 at 08:29:16PM +0300, Heikki Linnakangas wrote:\n> On 05/06/2023 13:10, Bruce Momjian wrote:\n> > nOn Mon, Jun 5, 2023 at 05:51:57PM +0300, Heikki Linnakangas wrote:\n> > > # Restart on crash\n> > > \n> > > If a backend process crashes, postmaster terminates all other backends and\n> > > restarts the system. That's hard (impossible?) to do safely if everything\n> > > runs in one process. We can continue have a separate postmaster process that\n> > > just monitors the main process and restarts it on crash.\n> > \n> > It would be good to know what new class of errors would cause server\n> > restarts, e.g., memory allocation failures?\n> \n> You mean \"out of memory\"? No, that would be horrible.\n> \n> I don't think there would be any new class of errors that would cause server\n> restarts. In theory, having a separate address space for each backend gives\n> you some protection. In practice, there are a lot of shared memory\n> structures anyway that you can stomp over, and a segfault or unexpected exit\n> of any backend process causes postmaster to restart the whole system anyway.\n\nUh, yes, but don't we detect failures while modifying shared memory and\nforce a restart? Wouldn't the scope of failures be much larger?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 5 Jun 2023 14:04:01 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 05/06/2023 14:04, Bruce Momjian wrote:\n> On Mon, Jun 5, 2023 at 08:29:16PM +0300, Heikki Linnakangas wrote:\n>> I don't think there would be any new class of errors that would cause server\n>> restarts. In theory, having a separate address space for each backend gives\n>> you some protection. In practice, there are a lot of shared memory\n>> structures anyway that you can stomp over, and a segfault or unexpected exit\n>> of any backend process causes postmaster to restart the whole system anyway.\n> \n> Uh, yes, but don't we detect failures while modifying shared memory and\n> force a restart? Wouldn't the scope of failures be much larger?\n\nIf one process writes over shared memory that it shouldn't, it can cause \na crash in that process or some other process that reads it. Same with \nmultiple threads, no difference there.\n\nWith a single process, one thread can modify another thread's \"backend \nprivate\" memory, and cause the other thread to crash. Perhaps that's \nwhat you meant?\n\nIn practice, I don't think it's so bad. Even in a multi-threaded \nenvironment, common bugs like buffer overflows and use-after-free are \nstill much more likely to access memory owned by the same thread, thanks \nto how memory allocators work. And a completely random memory access is \nstill more likely to cause a segfault than corrupting another thread's \nmemory. And tools like CLOBBER_FREED_MEMORY/MEMORY_CONTEXT_CHECKING and \nvalgrind are pretty good at catching memory access bugs at development \ntime, whether it's multiple processes or threads.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 5 Jun 2023 21:30:28 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 2023-06-05 Mo 11:18, Tom Lane wrote:\n> Heikki Linnakangas<[email protected]> writes:\n>> I spoke with some folks at PGCon about making PostgreSQL multi-threaded,\n>> so that the whole server runs in a single process, with multiple\n>> threads. It has been discussed many times in the past, last thread on\n>> pgsql-hackers was back in 2017 when Konstantin made some experiments [0].\n>> I feel that there is now pretty strong consensus that it would be a good\n>> thing, more so than before. Lots of work to get there, and lots of\n>> details to be hashed out, but no objections to the idea at a high level.\n>> The purpose of this email is to make that silent consensus explicit. If\n>> you have objections to switching from the current multi-process\n>> architecture to a single-process, multi-threaded architecture, please\n>> speak up.\n> For the record, I think this will be a disaster. There is far too much\n> code that will get broken, largely silently, and much of it is not\n> under our control.\n>\n> \t\t\t\n\n\nIf we were starting out today we would probably choose a threaded \nimplementation. But moving to threaded now seems to me like a \nmulti-year-multi-person project with the prospect of years to come \nchasing bugs and the prospect of fairly modest advantages. The risk to \nreward doesn't look great.\n\nThat's my initial reaction. I could be convinced otherwise.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-05 Mo 11:18, Tom Lane wrote:\n\n\nHeikki Linnakangas <[email protected]> writes:\n\n\nI spoke with some folks at PGCon about making PostgreSQL multi-threaded, \nso that the whole server runs in a single process, with multiple \nthreads. It has been discussed many times in the past, last thread on \npgsql-hackers was back in 2017 when Konstantin made some experiments [0].\n\n\n\n\n\nI feel that there is now pretty strong consensus that it would be a good \nthing, more so than before. Lots of work to get there, and lots of \ndetails to be hashed out, but no objections to the idea at a high level.\n\n\n\n\n\nThe purpose of this email is to make that silent consensus explicit. If \nyou have objections to switching from the current multi-process \narchitecture to a single-process, multi-threaded architecture, please \nspeak up.\n\n\n\nFor the record, I think this will be a disaster. There is far too much\ncode that will get broken, largely silently, and much of it is not\nunder our control.\n\n\t\t\t\n\n\n\nIf we were starting out today we would probably choose a threaded\n implementation. But moving to threaded now seems to me like a\n multi-year-multi-person project with the prospect of years to come\n chasing bugs and the prospect of fairly modest advantages. The\n risk to reward doesn't look great.\nThat's my initial reaction. I could be convinced otherwise.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 5 Jun 2023 14:51:50 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 6/5/23 14:51, Andrew Dunstan wrote:\n> \n> On 2023-06-05 Mo 11:18, Tom Lane wrote:\n>> Heikki Linnakangas<[email protected]> writes:\n>>> I spoke with some folks at PGCon about making PostgreSQL multi-threaded,\n>>> so that the whole server runs in a single process, with multiple\n>>> threads. It has been discussed many times in the past, last thread on\n>>> pgsql-hackers was back in 2017 when Konstantin made some experiments [0].\n>>> I feel that there is now pretty strong consensus that it would be a good\n>>> thing, more so than before. Lots of work to get there, and lots of\n>>> details to be hashed out, but no objections to the idea at a high level.\n>>> The purpose of this email is to make that silent consensus explicit. If\n>>> you have objections to switching from the current multi-process\n>>> architecture to a single-process, multi-threaded architecture, please\n>>> speak up.\n>> For the record, I think this will be a disaster. There is far too much\n>> code that will get broken, largely silently, and much of it is not\n>> under our control.\n> \n> If we were starting out today we would probably choose a threaded \n> implementation. But moving to threaded now seems to me like a \n> multi-year-multi-person project with the prospect of years to come \n> chasing bugs and the prospect of fairly modest advantages. The risk to \n> reward doesn't look great.\n> \n> That's my initial reaction. I could be convinced otherwise.\n\n\nI read through the thread thus far, and Andrew's response is the one \nthat best aligns with my reaction.\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Mon, 5 Jun 2023 16:08:24 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Mon, Jun 5, 2023 at 8:18 AM Tom Lane <[email protected]> wrote:\n\n> For the record, I think this will be a disaster. There is far too much\n> code that will get broken, largely silently, and much of it is not\n> under our control.\n>\n\nWhile I've long been in favor of a multi-threaded implementation, now in my\nold age, I tend to agree with Tom. I'd be interested in Konstantin's\nthoughts (and PostgresPro's experience) of multi-threaded vs. internal\npooling with the current process-based model. I recall looking at and\nplaying with Konstantin's implementations of both, which were impressive.\nYes, the latter doesn't solve the same issues, but many real-world ones\nwhere multi-threaded is argued. Personally, I think there would be not only\na significant amount of time spent dealing with in-the-field stability\nregressions before a multi-threaded implementation matures, but it would\nalso increase the learning curve for anyone trying to start with internals\ndevelopment.\n\n-- \nJonah H. Harris\n\nOn Mon, Jun 5, 2023 at 8:18 AM Tom Lane <[email protected]> wrote:\nFor the record, I think this will be a disaster. There is far too much\ncode that will get broken, largely silently, and much of it is not\nunder our control.While I've long been in favor of a multi-threaded implementation, now in my old age, I tend to agree with Tom. I'd be interested in Konstantin's thoughts (and PostgresPro's experience) of multi-threaded vs. internal pooling with the current process-based model. I recall looking at and playing with Konstantin's implementations of both, which were impressive. Yes, the latter doesn't solve the same issues, but many real-world ones where multi-threaded is argued. Personally, I think there would be not only a significant amount of time spent dealing with in-the-field stability regressions before a multi-threaded implementation matures, but it would also increase the learning curve for anyone trying to start with internals development.-- Jonah H. Harris",
"msg_date": "Mon, 5 Jun 2023 14:07:52 -0700",
"msg_from": "\"Jonah H. Harris\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Mon, Jun 5, 2023 at 09:30:28PM +0300, Heikki Linnakangas wrote:\n> If one process writes over shared memory that it shouldn't, it can cause a\n> crash in that process or some other process that reads it. Same with\n> multiple threads, no difference there.\n> \n> With a single process, one thread can modify another thread's \"backend\n> private\" memory, and cause the other thread to crash. Perhaps that's what\n> you meant?\n> \n> In practice, I don't think it's so bad. Even in a multi-threaded\n> environment, common bugs like buffer overflows and use-after-free are still\n> much more likely to access memory owned by the same thread, thanks to how\n> memory allocators work. And a completely random memory access is still more\n> likely to cause a segfault than corrupting another thread's memory. And\n> tools like CLOBBER_FREED_MEMORY/MEMORY_CONTEXT_CHECKING and valgrind are\n> pretty good at catching memory access bugs at development time, whether it's\n> multiple processes or threads.\n\nI remember we used to have macros we called before we modified critical\nparts of shared memory, and if a process exited while in those blocks,\nthe server would restart. Unfortunately, I can't find that in the code\nnow.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 5 Jun 2023 19:26:15 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Mon, Jun 5, 2023 at 4:26 PM Bruce Momjian <[email protected]> wrote:\n> I remember we used to have macros we called before we modified critical\n> parts of shared memory, and if a process exited while in those blocks,\n> the server would restart. Unfortunately, I can't find that in the code\n> now.\n\nIsn't that what we call a critical section? They effectively \"promote\"\nany ERROR (e.g., from an OOM) into a PANIC.\n\nI thought that we only used critical sections for things that are\nWAL-logged, but I double checked just now. Turns out that I was wrong:\nPGSTAT_BEGIN_WRITE_ACTIVITY() contains its own START_CRIT_SECTION(),\ndespite not being involved in WAL logging. And so critical sections\ncould indeed be described as something that we use whenever shared\nmemory cannot be left in an inconsistent state (which often coincides\nwith WAL logging, but need not).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 5 Jun 2023 16:50:11 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Mon, Jun 5, 2023 at 04:50:11PM -0700, Peter Geoghegan wrote:\n> On Mon, Jun 5, 2023 at 4:26 PM Bruce Momjian <[email protected]> wrote:\n> > I remember we used to have macros we called before we modified critical\n> > parts of shared memory, and if a process exited while in those blocks,\n> > the server would restart. Unfortunately, I can't find that in the code\n> > now.\n> \n> Isn't that what we call a critical section? They effectively \"promote\"\n> any ERROR (e.g., from an OOM) into a PANIC.\n> \n> I thought that we only used critical sections for things that are\n> WAL-logged, but I double checked just now. Turns out that I was wrong:\n> PGSTAT_BEGIN_WRITE_ACTIVITY() contains its own START_CRIT_SECTION(),\n> despite not being involved in WAL logging. And so critical sections\n> could indeed be described as something that we use whenever shared\n> memory cannot be left in an inconsistent state (which often coincides\n> with WAL logging, but need not).\n\nYes, sorry, critical sections is what I was remembering. My question is\nwhether all unexpected backend exits should be treated as critical\nsections?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 5 Jun 2023 20:15:56 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 6/5/23 2:07 PM, Jonah H. Harris wrote:\n> On Mon, Jun 5, 2023 at 8:18 AM Tom Lane <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> For the record, I think this will be a disaster. There is far too much\n> code that will get broken, largely silently, and much of it is not\n> under our control.\n> \n> \n> While I've long been in favor of a multi-threaded implementation, now in\n> my old age, I tend to agree with Tom. I'd be interested in Konstantin's\n> thoughts (and PostgresPro's experience) of multi-threaded vs. internal\n> pooling with the current process-based model. I recall looking at and\n> playing with Konstantin's implementations of both, which were\n> impressive. Yes, the latter doesn't solve the same issues, but many\n> real-world ones where multi-threaded is argued. Personally, I think\n> there would be not only a significant amount of time spent dealing with\n> in-the-field stability regressions before a multi-threaded\n> implementation matures, but it would also increase the learning curve\n> for anyone trying to start with internals development.\n\nTo me, processes feel just a little easier to observe and inspect, a\nlittle easier to debug, and a little easier to reason about. Tooling\ndoes exist for threads - but operating systems track more things at a\nprocess level and I like having the full arsenal of unix process-based\ntooling at my disposal.\n\nEven simple things, like being able to see at a glance from \"ps\" or\n\"top\" output which process is the bgwriter or the checkpointer, and\nbeing able to attach gdb only on that process without pausing the whole\nsystem. Or to a single backend.\n\nA thread model certainly has advantages but I do feel that some useful\nthings might be lost here.\n\nAnd for the record, just within the past few weeks I saw a small mistake\nin some C code which smashed the stack of another thread in the same\nprocess space. It manifested as unpredictable periodic random SIGSEGV\nand SIGBUS with core dumps that were useless gibberish, and it was\nrather difficult to root cause.\n\nBut one interesting outcome of that incident was learning from my\ncolleague Josh that apparently SUSv2 and C99 contradict each other: when\nsnprintf() is called with size=0 then SUSv2 stipulates an unspecified\nreturn value less than 1, while C99 allows str to be NULL in this case,\nand gives the return value (as always) as the number of characters that\nwould have been written in case the output string has been large enough.\n\nSo long story short... I think the robustness angle on the process model\nshouldn't be underestimated either.\n\n-Jeremy\n\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n\n",
"msg_date": "Mon, 5 Jun 2023 17:27:11 -0700",
"msg_from": "Jeremy Schneider <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Mon, Jun 5, 2023 at 5:15 PM Bruce Momjian <[email protected]> wrote:\n> > Isn't that what we call a critical section? They effectively \"promote\"\n> > any ERROR (e.g., from an OOM) into a PANIC.\n\n> Yes, sorry, critical sections is what I was remembering. My question is\n> whether all unexpected backend exits should be treated as critical\n> sections?\n\nI think that it boils down to this: critical sections help us to avoid\nvarious inconsistencies that might otherwise be introduced to critical\nstate, usually in shared memory. And so critical sections are mostly\nabout protecting truly crucial state, even in the presence of\nirrecoverable problems (e.g., those caused by corruption that was\nmissed before the critical section was reached, fsync() reporting\nfailure on recent Postgres versions). This is mostly about the state\nitself -- it's not about cleaning up from routine errors at all. The\nserver isn't supposed to PANIC, and won't unless some fundamental\nassumption that the system makes isn't met.\n\nI said that an OOM could cause a PANIC. But that really shouldn't be\npossible in practice, since it can only happen when code in a critical\nsection actually attempts to allocate memory in the first place. There\nis an assertion in palloc() that will catch code that violates that\nrule. It has been known to happen from time to time, but theoretically\nit should never happen.\n\nDiscussion about the robustness of threads versus processes seems to\nonly be concerned with what can happen after something \"impossible\"\ntakes place. Not before. Backend code is not supposed to corrupt\nmemory, whether shared or local, with or without threads. Code in\ncritical sections isn't supposed to even attempt memory allocation.\nJeremy and others have suggested that processes have significant\nrobustness advantages. Maybe they do, but it's hard to say either way\nbecause these benefits only apply \"when the impossible happens\". In\nany given case it's reasonable to wonder if the user was protected by\nour multi-process architecture, or protected by dumb luck. Could even\nbe both.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 5 Jun 2023 17:50:04 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
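For readers less familiar with the convention Peter describes, here is a rough sketch of the critical-section pattern, paraphrased from memory of the PostgreSQL coding rules rather than copied from any particular call site; the rmgr and record identifiers are invented. Errors raised between START_CRIT_SECTION() and END_CRIT_SECTION() are escalated to PANIC, and palloc() asserts that it is not reached while a critical section is active unless the memory context was explicitly allowed, which is why allocations are supposed to happen before this point.

    /* do all palloc()s and other failure-prone work up here */
    /* XLogBeginInsert()/XLogRegisterBuffer() calls omitted for brevity */

    START_CRIT_SECTION();

    /* change the buffer and WAL-log it; an ERROR in here becomes a PANIC */
    MarkBufferDirty(buf);
    recptr = XLogInsert(RM_FOO_ID, XLOG_FOO_OP);   /* hypothetical record type */
    PageSetLSN(page, recptr);

    END_CRIT_SECTION();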
{
"msg_contents": ">> For the record, I think this will be a disaster. There is far too\n>> much\n>> code that will get broken, largely silently, and much of it is not\n>> under our control.\n>>\n>> \t\t\t\n> \n> \n> If we were starting out today we would probably choose a threaded\n> implementation. But moving to threaded now seems to me like a\n> multi-year-multi-person project with the prospect of years to come\n> chasing bugs and the prospect of fairly modest advantages. The risk to\n> reward doesn't look great.\n\n+1.\n\nLong time ago (PostgreSQL 7 days) I modified PostgreSQL to threaded\nimplementation so that it runs on Windows because there's was no\nWindows port of PostgreSQL at that time. I don't remember the details\nbut it was desperately hard for me.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Tue, 06 Jun 2023 10:30:05 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 06.06.2023 12:07 AM, Jonah H. Harris wrote:\n> On Mon, Jun 5, 2023 at 8:18 AM Tom Lane <[email protected]> wrote:\n>\n> For the record, I think this will be a disaster. There is far too\n> much\n> code that will get broken, largely silently, and much of it is not\n> under our control.\n>\n>\n> While I've long been in favor of a multi-threaded implementation, now \n> in my old age, I tend to agree with Tom. I'd be interested in \n> Konstantin's thoughts (and PostgresPro's experience) of multi-threaded \n> vs. internal pooling with the current process-based model. I recall \n> looking at and playing with Konstantin's implementations of both, \n> which were impressive. Yes, the latter doesn't solve the same issues, \n> but many real-world ones where multi-threaded is argued. Personally, I \n> think there would be not only a significant amount of time spent \n> dealing with in-the-field stability regressions before a \n> multi-threaded implementation matures, but it would also increase the \n> learning curve for anyone trying to start with internals development.\n>\n> -- \n> Jonah H. Harris\n>\n\n\nLet me share my experience with porting Postgres to threads (by the way \n- repository is still alive - \nhttps://github.com/postgrespro/postgresql.pthreads \n<https://github.com/postgrespro/postgresql.pthreads>\nbut I have not keep it in sync with recent versions of Postgres).\n\n1. Solving the problem with static variables was not so difficult as I \nexpected - thanks to TLS and its support in modern compilers.\nSo the only thing we should do is to add some special modified to \nvariable declaration:\n\n-static int MyLockNo = 0;\n-static bool holdingAllLocks = false;\n+static session_local int MyLockNo = 0;\n+static session_local bool holdingAllLocks = false;\n\nBut there are about 2k such variables which storage class has to be changed.\nThis is one of the reasons why I do not agree with the proposal to \ndefine some session context, place all session specific variables in \nsuch context and pass it everywhere. It will be very inconvenient to \nmaintain structure with 2k fields and adding new field to this struct \neach time you need some non-local variable. Even i it can be hide in \nsome macros like DEF_SESSION_VAR(type, name).\nAlso it requires changing of all Postgres code working with this \nvariables, not just declarations.\nSo patch will be 100x times more and almost any line of Postgres code \nhas to be changed.\nAnd I do not see any reasons for it except portability and avoid \ndependecy on compiler.\nImplementation of TLS is quite efficient (at least at x86) - there is \nspecial register pointing to TLS area, so access TLS variable is not \nmore expensive than static variable.\n\n2. Performance improvement from switching to threads was not so large \n(~10%). But please notice that I have not changed ny Postgres sync \nprimitives.\n(But still not sure that using for example pthead_rwlock instead of our \nown LWLock will cause some gains in performance)\n\n3. Multithreading significantly simplify concurrent query execution and \ninteraction between workers.\nRight now with dynamic shared memory stuff we can support work with \nvarying size data in shared memory but\nin mutithreaded program it can be done much easier.\n\n4. Multuthreaded model opens a way for fixing many existed Postgres \nproblems: lack of shared catalog and prepared statements cache, changing \npage pool size (shared buffers) in runtime, ...\n\n5. 
During this porting I had most of troubles with the following \ncomponents: GUCs, signals, handling errors and file descriptor cache. \nFile descriptor cache really becomes bottleneck because now all backends \nand competing for file descriptors which number is usually limited by \n1024 (without editing system configuration). Protecting it with mutex \ncause significant degrade of performance. So I have to maintain \nthread-local cache.\n\n6. It is not clear how to support external extensions.\n\n7. It will be hard to support non-multithreaded PL languages (like \npython), but for example support of Java will be more natural and efficient.\n\nI do not think that development of multithreaded application is more \ncomplex or requires large \"learning curve\".\nWhen you deal with parallel execution you should be careful in any case.\nThe advantage of process model is that there is much clear distinction \nbetween shared and private variables.\nConcerning debugging and profiling - it is more convenient with \nmultithreading in some cases and less convenient in other.\nBut programmers are living with threads for more than 30 years so now \nmost tools are supporting threads at least not worse than processes.\nAnd for many developers now working with threads is more natural and \nconvenient.\n\n\nOOM and local backend memory consumption seems to be one of the main \nchallenges for multithreadig model:\nright now some queries can cause high consumption of memory. work_mem is \njust a hint and real memory consumption can be much higher.\nEven if it doesn't cause OOM, still not all of the allocated memory is \nreturned to OS after query completion and increase memory fragmentation.\nRight now restart of single backend suffering from memory fragmentation \neliminates this problem. But if will be impossible for multhreaded Postgres.\n\n\nSo? as I see from this thread, most of authoritative members of Postgres \ncommunity are still very pessimistic (or conservative:)\nabout making Postgres multi-threaded. And it is really huge work which \nwill cause significant code discrepancy. It significantly complicates\nbackpatching and support of external extension. It can not be done \nwithout support and approval by most of committers. This is why this \nwork was stalled in PgPro.\n\nMy personal opinion is that Postgres almost reaches its \"limit of \nevolution\" or is close to it.\nMaking some major changes such as multithreading, undo log, columnar \nstore with vector executor\nrequires so much changes and cause so many conflicts with existed code \nthat it will be easier to develop new system from scratch\nrather than trying to plugin new approach in old architecture. May be I \nwrong. It can be my personal fault that I was not able to bring \nmultithread Postgres, builtin connection pooler, vectorized executor, \nlibpq compression and other my PRs to commit.\nI have a filling that it is not possible to merge in mainstream \nsomething non-trivial, affecting Postgres core without interest and help \nof several\ncommitters. Fro the other hand presence of such Postgres forks as \nTimescaleDB, OrioleDB, GreenPlum demonstrates that Postgres still has \nhigh potential for extension.\n\n\n\n\n\n\n\n\nOn 06.06.2023 12:07 AM, Jonah H. Harris\n wrote:\n\n\n\n\n\nOn Mon, Jun 5, 2023 at\n 8:18 AM Tom Lane <[email protected]>\n wrote:\n\n For the record, I think this will be a disaster. 
"msg_date": "Tue, 6 Jun 2023 15:06:19 +0300",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
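A guess at what the session_local qualifier in Konstantin's diff could expand to; the macro name and the two variables come from his message, but the expansion below (and the USE_THREADS build flag) is an assumption rather than a quote from the postgresql.pthreads branch.

    /* Assumed definition; USE_THREADS is a hypothetical build flag. */
    #ifdef USE_THREADS
    #define session_local _Thread_local   /* C11; __thread with GCC/Clang */
    #else
    #define session_local                 /* process model: globals are already per-session */
    #endif

    static session_local int  MyLockNo = 0;
    static session_local bool holdingAllLocks = false;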
{
"msg_contents": "On Mon, Jun 5, 2023 at 10:52 AM Heikki Linnakangas <[email protected]> wrote:\n> I spoke with some folks at PGCon about making PostgreSQL multi-threaded,\n> so that the whole server runs in a single process, with multiple\n> threads. It has been discussed many times in the past, last thread on\n> pgsql-hackers was back in 2017 when Konstantin made some experiments [0].\n>\n> I feel that there is now pretty strong consensus that it would be a good\n> thing, more so than before. Lots of work to get there, and lots of\n> details to be hashed out, but no objections to the idea at a high level.\n\nI'm not sure that there's a strong consensus, but I do think it's a good idea.\n\n> # Transition period\n>\n> The transition surely cannot be done fully in one release. Even if we\n> could pull it off in core, extensions will need more time to adapt.\n> There will be a transition period of at least one release, probably\n> more, where you can choose multi-process or multi-thread model using a\n> GUC. Depending on how it goes, we can document it as experimental at first.\n\nI think the transition period should probably be effectively infinite.\nThere might be some distant future day when we'd remove the process\nsupport, if things go incredibly well with threads, but I don't think\nit would be any time soon. If nothing else, considering that we don't\nwant to force a hard compatibility break for extensions.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Jun 2023 09:40:28 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Tue, Jun 6, 2023 at 9:40 AM Robert Haas <[email protected]> wrote:\n> I'm not sure that there's a strong consensus, but I do think it's a good idea.\n\nLet me elaborate on this a bit.\n\nI think one of PostgreSQL's bigger problems right now is that it\ndoesn't scale as far as users would like. Beyond a couple of hundred\nconnections, everything goes to heck. Back in the day, the big\nscalability problems were around locking, but we've done a pretty good\njob cleaning that stuff up over the issues. Now, the problem when you\nrun a ton of PostgreSQL connections isn't so much that PostgreSQL\nstops working as it is that the OS stops working. PostgreSQL backends\nuse a lot of memory, even if they're idle. Some of that is for stuff\nthat we could optimize but haven't, like catcache and relcache\nentries, and some of it is for stuff that we can't do anything about,\nlike per-process page tables. But the problem isn't just RAM, either.\nI've seen machines running >1000 PostgreSQL backends where kill -9\ntook many *minutes* to work because the OS was overwhelmed. I don't\nknow exactly what goes wrong inside the kernel, but clearly something\ndoes.\n\nNot all databases have this problem, and PostgreSQL isn't going to be\nable to stop having it without some kind of major architectural\nchange. Changing from a process model to a threaded model might be\ninsufficient, because while I think that threads consume fewer OS\nresources than processes, what is really needed, in all likelihood, is\nthe ability to have idle connections have neither a process nor a\nthread associated with them until they cease being idle. That's a huge\nproject and I'm not volunteering to do it, but if we want to have the\nsame kind of scalability as some competing products, that is probably\na place to which we ultimately need to go. Getting out of the current\nmodel where every backend has an arbitrarily large amount of state\nhanging off of random global variables, not all of which are even\nknown to any central system, is a critical step in that journey.\n\nAlso, programming with DSA and shm_mq sucks. It's doable (proof by\nexample) but it's awkward and it takes a long time and the performance\nisn't great. Here again, threads instead of processes is no panacea.\nFor as long as we support a process model - and my guess is that we're\ntalking about a very long time - new features are going to have to\nwork with those systems or else be optional. But the amount of sheer\nmental energy that is required to deal with DSA means we're unlikely\nto ever have a rich library of parallel primitives. Maybe we wouldn't\nanyway, volunteer efforts are hard to predict, but this is certainly\nnot helping. I do think that there's some danger that if sharing\nmemory becomes as easy as calling palloc(), we'll end up with memory\nleaks that could eventually take the whole system down. We need to\ngive some thought to how to avoid or manage that danger.\n\nEven think about something like the main lock table. That's a fixed\nsize hash table, so lock exhaustion is a real possibility. If we\nweren't limited to a fixed-size shared memory segment, we could let\nthat thing grow without a server restart. We might not want to let it\ngrow infinitely, but we could raise the maximum size by 100x and\nallocate as required and I think we'd just be better off. Doing that\nas things stand would require nailing down that amount of memory\nforever whether it's ever needed or not, which doesn't seem like a\ngood idea. 
But doing something where the memory can be allocated only\nif it's needed would avoid user-facing errors with relatively little\ncost.\n\nI think doing something like this is going to be a huge effort, and\nfrankly, there's probably no point in anybody other than a handful of\npeople (Heikki, Andres, a handful of others) even trying. There's too\nmany ways to go wrong, and this has to be done really well to be worth\ndoing at all. But if somebody with the requisite expertise wants to\nhave a go at it, I don't think we should tell them \"no, we don't want\nthat\" on principle. Let's talk about whether a specific proposal is\ngood or bad, and why it's good or bad, rather than falling back on an\nessentially religious argument. It's not an article of faith that\nPostgreSQL should not use threads: it's a technology decision. The\ndifficulty of reversing the decision made long ago should weigh\nheavily in evaluating any proposal to do so, but the potential\nbenefits of such a change should be considered, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Jun 2023 10:13:47 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
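For context on the fixed-size main lock table Robert mentions: the shared hash table is created at startup with a hard maximum derived from max_locks_per_transaction, because the traditional shared memory segment it lives in cannot grow afterwards. The snippet below is an approximation from memory of src/backend/storage/lmgr/lock.c, not a verbatim copy.

    /* roughly how the shared lock table is sized today */
    HASHCTL info;
    long    max_table_size = NLOCKENTS();       /* hard ceiling, fixed at startup */
    long    init_table_size = max_table_size / 2;

    info.keysize = sizeof(LOCKTAG);
    info.entrysize = sizeof(LOCK);
    info.num_partitions = NUM_LOCK_PARTITIONS;

    LockMethodLockHash = ShmemInitHash("LOCK hash",
                                       init_table_size,
                                       max_table_size,
                                       &info,
                                       HASH_ELEM | HASH_BLOBS | HASH_PARTITION);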
{
"msg_contents": "On 06/06/2023 09:40, Robert Haas wrote:\n> On Mon, Jun 5, 2023 at 10:52 AM Heikki Linnakangas <[email protected]> wrote:\n>> I spoke with some folks at PGCon about making PostgreSQL multi-threaded,\n>> so that the whole server runs in a single process, with multiple\n>> threads. It has been discussed many times in the past, last thread on\n>> pgsql-hackers was back in 2017 when Konstantin made some experiments [0].\n>>\n>> I feel that there is now pretty strong consensus that it would be a good\n>> thing, more so than before. Lots of work to get there, and lots of\n>> details to be hashed out, but no objections to the idea at a high level.\n> \n> I'm not sure that there's a strong consensus, but I do think it's a good idea.\n\nThe consensus is not as strong as I hoped for... To summarize:\n\nTom, Andrew, Joe are worried that it will break a lot of stuff. That's a \nvalid point. The transition needs to be done well and not break things, \nI agree with that. But if we can make the transition smooth, that's not \nan objection to the idea itself.\n\nMany comments have been along the lines of \"it's hard, not worth the \neffort\". That's fair, but also not an objection to the idea itself, if \nsomeone decides to spend the time on it.\n\nBruce was worried about the loss of isolation that the separate address \nspaces gives, and Jeremy shared an anecdote on that. That is an \nobjection to the idea itself, i.e. even if transition was smooth, \nbug-free and effortless, that point remains. I personally think the \nisolation we get from separate address spaces is overrated. Yes, it \ngives you some protection, but given how much shared memory there is, \nthe blast radius is large even with separate backend processes.\n\nSo I think there's hope. I didn't hear any _strong_ objections to the \nidea itself, assuming the transition can be done smoothly.\n\n>> # Transition period\n>>\n>> The transition surely cannot be done fully in one release. Even if we\n>> could pull it off in core, extensions will need more time to adapt.\n>> There will be a transition period of at least one release, probably\n>> more, where you can choose multi-process or multi-thread model using a\n>> GUC. Depending on how it goes, we can document it as experimental at first.\n> \n> I think the transition period should probably be effectively infinite.\n> There might be some distant future day when we'd remove the process\n> support, if things go incredibly well with threads, but I don't think\n> it would be any time soon.\n\nI don't think this is worth it, unless we plan to eventually remove the \nmulti-process mode. We could e.g. make lock table expandable in threaded \nmode, and fixed-size in process mode, but the big gains would come from \nbeing able to share things between threads and have variable-length \nshared data structures more easily. As long as you need to also support \nprocesses, you need to code to the lowest common denominator and don't \nreally get the benefits.\n\nI don't know how long a transition period we need. Maybe 1 release, maybe 5.\n\n> If nothing else, considering that we don't want to force a hard\n> compatibility break for extensions.\nExtensions regularly need small tweaks to adapt to new major Postgres \nversions, I don't think this would be too different.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 6 Jun 2023 18:46:48 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 2023-06-06 08:06, Konstantin Knizhnik wrote:\n> 7. It will be hard to support non-multithreaded PL languages (like \n> python), but for example support of Java will be more natural and \n> efficient.\n\nTo this I say ...\n\nHmm.\n\nSurely, the current situation with a JVM in each backend process\n(that calls for one) has been often seen as heavier than desirable.\n\nAt the same time, I am not sure how manageable one giant process\nwith one giant JVM instance would prove to be, either.\n\nIt is somewhat nice to be able to tweak JVM settings in a session\nand see what happens, without disrupting other sessions. There may\nalso exist cases for different JVM settings in per-user or per-\ndatabase GUCs.\n\nLike Python with the GIL, it is documented for JNI_CreateJavaVM\nthat \"Creation of multiple VMs in a single process is not\nsupported.\"[1]\n\nAnd the devs of Java, in their immeasurable wisdom, have announced\na \"JDK Enhancement Proposal\" (that's just what these things are\ncalled, don't blame Orwell), JEP 411[2][3], in which all of the\nSecurity Manager features that PL/Java relies on for bounds on\n'trusted' behavior are deprecated for eventual removal with no\nfunctional replacement. I'd be even more leery of using one big\nshared JVM for everybody's work after that happens.\n\nMight the work toward allowing a run-time choice between a\nprocess or threaded model also make possible some\nintermediate models as well? A backend process for\nconnections to a particular database, or with particular\nauthentication credentials? Go through the authentication\nhandshake and then sendfd the connected socket to the\nappropriate process. (Has every supported platform got\nsomething like sendfd?)\n\nThat way, there could be some flexibility to arrange how many\ndistinct backends (and, for Java purposes, how many JVMs) get\nfired up, and have each sharing sessions that have something in\ncommon.\n\nOr, would that just require all the complexity of both\napproaches to synchronization, with no sufficient benefit?\n\nRegards,\n-Chap\n\n[1] \nhttps://docs.oracle.com/en/java/javase/17/docs/specs/jni/invocation.html#jni_createjavavm\n[2] \nhttps://blogs.apache.org/netbeans/entry/jep-411-deprecate-the-security1\n[3] https://github.com/tada/pljava/wiki/JEP-411\n\n\n",
"msg_date": "Tue, 06 Jun 2023 11:48:23 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 06/06/2023 11:48, [email protected] wrote:\n> And the devs of Java, in their immeasurable wisdom, have announced\n> a \"JDK Enhancement Proposal\" (that's just what these things are\n> called, don't blame Orwell), JEP 411[2][3], in which all of the\n> Security Manager features that PL/Java relies on for bounds on\n> 'trusted' behavior are deprecated for eventual removal with no\n> functional replacement. I'd be even more leery of using one big\n> shared JVM for everybody's work after that happens.\n\nOuch.\n\n> Might the work toward allowing a run-time choice between a\n> process or threaded model also make possible some\n> intermediate models as well? A backend process for\n> connections to a particular database, or with particular\n> authentication credentials? Go through the authentication\n> handshake and then sendfd the connected socket to the\n> appropriate process. (Has every supported platform got\n> something like sendfd?)\n\nI'm afraid having multiple processes and JVMs doesn't help that. If you \ncan escape the one JVM in one backend process, it's game over. Backend \nprocesses are not a security barrier, and you have the same problems \nwith the current multi-process architecture, too.\n\nhttps://github.com/greenplum-db/plcontainer is one approach. It launches \na separate process for the PL, separate from the backend process, and \nsandboxes that.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 6 Jun 2023 19:24:11 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 2023-06-06 12:24, Heikki Linnakangas wrote:\n> I'm afraid having multiple processes and JVMs doesn't help that.\n> If you can escape the one JVM in one backend process, it's game over.\n\nSo there's escape and there's escape, right? Java still prioritizes\n(and has, in fact, strengthened) barriers against breaking module\nencapsulation, or getting access to arbitrary native memory or code.\n\nThe features that have been deprecated, to eventually go away, are\nthe ones that offer fine-grained control over operations that there\nare Java APIs for. Eventually it won't be as easy as it is now to say\n\"ok, your function gets to open these files or these sockets but\nnot those ones.\"\n\nEven for those things, there may yet be solutions. There are Java\nAPIs for virtualizing the view of the file system, for example. It's\nyet to be seen how things will shake out. Configuration may get\ntrickier, and there may be some incentive to to include, say,\nsepgsql in the picture.\n\nSure, even access to a file API can be game over, depending on\nwhat file you open, but that's already the risk for every PL\nwith an 'untrusted' flavor.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Tue, 06 Jun 2023 13:00:11 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "\n\nOn 06.06.2023 5:13 PM, Robert Haas wrote:\n> On Tue, Jun 6, 2023 at 9:40 AM Robert Haas <[email protected]> wrote:\n>> I'm not sure that there's a strong consensus, but I do think it's a good idea.\n> Let me elaborate on this a bit.\n>\n>\n>\n> Not all databases have this problem, and PostgreSQL isn't going to be\n> able to stop having it without some kind of major architectural\n> change. Changing from a process model to a threaded model might be\n> insufficient, because while I think that threads consume fewer OS\n> resources than processes, what is really needed, in all likelihood, is\n> the ability to have idle connections have neither a process nor a\n> thread associated with them until they cease being idle. That's a huge\n> project and I'm not volunteering to do it, but if we want to have the\n> same kind of scalability as some competing products, that is probably\n> a place to which we ultimately need to go. Getting out of the current\n> model where every backend has an arbitrarily large amount of state\n> hanging off of random global variables, not all of which are even\n> known to any central system, is a critical step in that journey.\n\nIt looks like built-in connection pooler, doesn't it?\nActually built-in connection pooler has a lot o common things with \nmultithreaded Postgres.\nIt also needs to keep session context.\nTe main difference is that there is no need to place here all Postgres \nglobal/static variables, because lefitime of most of them is shorter \nthan transaction. So it is really enough to place all such variables in \nsingle struct.\nThis is how built-in connection pooler was implemented in PgPro.\n\nReading all concerns against multithreading Postgres makes me think \nthat it may erasonable to combine two approaches:\nstill have processes (backends) but be able to spawn multiple threads \ninside process (for example for parallel query execution).\nIt can be considered that such approach can only increase complexity of \nimplementation and combine drawbacks of both approaches.\nBut actually such approach allows:\n1. Support old (external, non-reentrant) extensions - them will be \nexecuted by dedicated backends.\n2. Simplify parallel query execution and make it more efficient.\n3. Allows to most efficiently use multitreaded PL-s (like JVM based). As \nfar as there will be no single VM for all connections, but only for some \ngroup of them(for example belonging to one user), then most complaints \nconcerning sharing VM between different connections can be avoided\n4. Avoid or minimize problems with OOM and memory fragmentation.\n5. Can be combine with connection pooler (save inactive connection state \nwithout having process or thread for it)\n\n\n\n\n\n\n\n",
"msg_date": "Tue, 6 Jun 2023 20:04:08 +0300",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Tue, Jun 6, 2023 at 11:46 AM Heikki Linnakangas <[email protected]> wrote:\n> Bruce was worried about the loss of isolation that the separate address\n> spaces gives, and Jeremy shared an anecdote on that. That is an\n> objection to the idea itself, i.e. even if transition was smooth,\n> bug-free and effortless, that point remains. I personally think the\n> isolation we get from separate address spaces is overrated. Yes, it\n> gives you some protection, but given how much shared memory there is,\n> the blast radius is large even with separate backend processes.\n\nAn interesting idea might be to look at the places where we ereport or\nelog FATAL due to some kind of backend data structure corruption and\nask whether there would be an argument for elevating the level to\nPANIC if we changed this. There are definitely some places where we\nargue that the only corrupted state is backend-local and thus we don't\nneed to PANIC if it's corrupted. I wonder to what extent this change\nwould undermine that argument.\n\nEven if it does, I think it's worth it. Corrupted backend-local data\nstructures aren't that common, thankfully.\n\n> I don't think this is worth it, unless we plan to eventually remove the\n> multi-process mode. We could e.g. make lock table expandable in threaded\n> mode, and fixed-size in process mode, but the big gains would come from\n> being able to share things between threads and have variable-length\n> shared data structures more easily. As long as you need to also support\n> processes, you need to code to the lowest common denominator and don't\n> really get the benefits.\n>\n> I don't know how long a transition period we need. Maybe 1 release, maybe 5.\n\nI think 1 release is wildly optimistic. Even if someone wrote a patch\nfor this and got it committed this release cycle, it's likely that\nthere would be follow-up commits needed over a period of several years\nbefore it really worked as well as we'd like. Only after that could we\nconsider deprecating the per-process way. But I don't think that's\nnecessarily a huge problem. I originally intended DSM as an optional\nfeature: if you didn't have it, then you couldn't use features that\ndepended on it, but the rest of the system still worked. Eventually,\nother people liked it enough that we decided to introduce hard\ndependencies on it. I think that's a good model for a change like\nthis. When the inventor of a new system thinks that we should have a\nhard dependency on it, MEH. When there's a groundswell of other,\nunaffiliated hackers making that argument, COOL.\n\nI'm also not quite convinced that there's no long-term use case for\nmulti-process mode. Maybe you're right and there isn't, but that\namounts to arguing that every extension in the world will be happy to\nrun in a multi-threaded world rather than not. I don't know if I quite\nbelieve that. It also amounts to arguing that performance is going to\nbe better for everyone in this new multi-threaded mode, and that it\nwon't cause unforeseen problems for any significant numbers of users,\nand maybe those things are true, but I think we need to get this new\nsystem in place and get some real-world experience before we can judge\nthese kinds of things. I agree that, in theory, it would be nice to\nget to a place where the multi-process mode is a dinosaur and that we\ncan just rip it out ... but I don't share your confidence that we can\nget there in any short time period.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Jun 2023 13:59:59 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Tue, Jun 6, 2023 at 2:00 PM Robert Haas <[email protected]> wrote:\n\n>\n> I'm also not quite convinced that there's no long-term use case for\n> multi-process mode. Maybe you're right and there isn't, but that\n> amounts to arguing that every extension in the world will be happy to\n> run in a multi-threaded world rather than not. I don't know if I quite\n> believe that. It also amounts to arguing that performance is going to\n> be better for everyone in this new multi-threaded mode, and that it\n> won't cause unforeseen problems for any significant numbers of users,\n> and maybe those things are true, but I think we need to get this new\n> system in place and get some real-world experience before we can judge\n> these kinds of things. I agree that, in theory, it would be nice to\n> get to a place where the multi-process mode is a dinosaur and that we\n> can just rip it out ... but I don't share your confidence that we can\n> get there in any short time period.\n>\n\nFirst, I am enjoying the activity of this thread. But my first question is\n\"to what end\"?\nDo I consider threads better? (yes... and no)\n\nI do wonder if we could add better threading within any given\nsession/process to get a hybrid?\n[maybe this gets us closer to solving some of the problems incrementally?]\n\nIf I could have anything (today)... I would prefer a Master-Master\nImplementation leveraging some\nof the ultra-fast server-server communication protocols to help sync\nthings. Then I wouldn't care.\nI could avoid the O/S Overwhelm caused by excessive processes, via\nspinning up machines.\n[Unfortunately I know that PG leverages the filesystem cache, etc to such a\ndegree that communicating\nfrom one master to another would require a really special architecture\nthere. And the N! communication lines].\n\nKirk...\n\nOn Tue, Jun 6, 2023 at 2:00 PM Robert Haas <[email protected]> wrote:\nI'm also not quite convinced that there's no long-term use case for\nmulti-process mode. Maybe you're right and there isn't, but that\namounts to arguing that every extension in the world will be happy to\nrun in a multi-threaded world rather than not. I don't know if I quite\nbelieve that. It also amounts to arguing that performance is going to\nbe better for everyone in this new multi-threaded mode, and that it\nwon't cause unforeseen problems for any significant numbers of users,\nand maybe those things are true, but I think we need to get this new\nsystem in place and get some real-world experience before we can judge\nthese kinds of things. I agree that, in theory, it would be nice to\nget to a place where the multi-process mode is a dinosaur and that we\ncan just rip it out ... but I don't share your confidence that we can\nget there in any short time period.First, I am enjoying the activity of this thread. But my first question is \"to what end\"?Do I consider threads better? (yes... and no)I do wonder if we could add better threading within any given session/process to get a hybrid?[maybe this gets us closer to solving some of the problems incrementally?]If I could have anything (today)... I would prefer a Master-Master Implementation leveraging someof the ultra-fast server-server communication protocols to help sync things. 
Then I wouldn't care.I could avoid the O/S Overwhelm caused by excessive processes, via spinning up machines.[Unfortunately I know that PG leverages the filesystem cache, etc to such a degree that communicatingfrom one master to another would require a really special architecture there. And the N! communication lines].Kirk...",
"msg_date": "Tue, 6 Jun 2023 14:50:38 -0400",
"msg_from": "Kirk Wolak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Tue, Jun 6, 2023 at 2:51 PM Kirk Wolak <[email protected]> wrote:\n> I do wonder if we could add better threading within any given session/process to get a hybrid?\n> [maybe this gets us closer to solving some of the problems incrementally?]\n\nI don't think it helps much -- if anything, I think that would be more\ncomplicated.\n\n> If I could have anything (today)... I would prefer a Master-Master Implementation leveraging some\n> of the ultra-fast server-server communication protocols to help sync things. Then I wouldn't care.\n> I could avoid the O/S Overwhelm caused by excessive processes, via spinning up machines.\n> [Unfortunately I know that PG leverages the filesystem cache, etc to such a degree that communicating\n> from one master to another would require a really special architecture there. And the N! communication lines].\n\nI think there's plenty of interesting things to improve in this area,\nbut they're different things than what this thread is about.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Jun 2023 14:55:55 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Mon, 5 Jun 2023 at 10:52, Heikki Linnakangas <[email protected]> wrote:\n>\n> I spoke with some folks at PGCon about making PostgreSQL multi-threaded,\n> so that the whole server runs in a single process, with multiple\n> threads. It has been discussed many times in the past, last thread on\n> pgsql-hackers was back in 2017 when Konstantin made some experiments [0].\n>\n> I feel that there is now pretty strong consensus that it would be a good\n> thing, more so than before. Lots of work to get there, and lots of\n> details to be hashed out, but no objections to the idea at a high level.\n>\n> The purpose of this email is to make that silent consensus explicit. If\n> you have objections to switching from the current multi-process\n> architecture to a single-process, multi-threaded architecture, please\n> speak up.\n\nI suppose I should reiterate my comments that I gave at the time. I'm\nnot sure they qualify as \"objections\" but they're some kind of general\nconcern.\n\nI think of processes and threads as fundamentally the same things,\njust a slightly different API -- namely that in one memory is by\ndefault unshared and needs to be explicitly shared and in the other\nit's default shared and needs to be explicitly unshared. There are\nobvious practical API differences too like how signals are handled but\nthose are just implementation details.\n\nSo the question is whether defaulting to shared memory or defaulting\nto unshared memory is better -- and whether the implementation details\nare significant enough to override that.\n\nAnd my general concern was that in my experience default shared memory\nleads to hugely complex and chaotic shared data structures with often\nvery loose rules for ownership of shared data and who is responsible\nfor making updates, handling errors, or releasing resources.\n\nSo all else equal I feel like having a good infrastructure for\nexplicitly allocating shared memory segments and managing them is\nsuperior.\n\nHowever all else is not equal. The discussion in the hallway turned to\nwhether we could just use pthread primitives like mutexes and\ncondition variables instead of our own locks -- and the point was\nraised that those libraries assume these objects will be in threads of\none process not shared across completely different processes.\n\nAnd that's probably not the only library we're stuck reimplementing\nbecause of this. So the question is are these things worth taking the\nrisk of having data structures shared implicitly and having unclear\nownership rules?\n\nI was going to say supporting both modes relieves that fear since it\nwould force that extra discipline and allow testing under the more\nrestrictive rule. However I don't think that will actually work. As\nlong as we support both modes we lose all the advantages of threads.\nWe still wouldn't be able to use pthreads and would still need to\nprovide and maintain our homegrown replacement infrastructure.\n\n\n\n\n-- \ngreg\n\n\n",
"msg_date": "Tue, 6 Jun 2023 16:14:41 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
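On Greg's point about pthread primitives assuming threads of a single process: POSIX does define a process-shared mode for mutexes and condition variables, but it must be requested explicitly, the objects must live in shared memory, and support and performance vary by platform, which is part of why PostgreSQL carries its own primitives. A minimal sketch follows, assuming the struct is allocated in a shared memory segment; the type and function names are invented for illustration.

    #include <pthread.h>

    typedef struct SharedQueue
    {
        pthread_mutex_t mutex;
        pthread_cond_t  cond;
        /* ... protected queue state ... */
    } SharedQueue;

    static void
    shared_queue_init(SharedQueue *q)       /* q must point into shared memory */
    {
        pthread_mutexattr_t mattr;
        pthread_condattr_t  cattr;

        pthread_mutexattr_init(&mattr);
        pthread_mutexattr_setpshared(&mattr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(&q->mutex, &mattr);
        pthread_mutexattr_destroy(&mattr);

        pthread_condattr_init(&cattr);
        pthread_condattr_setpshared(&cattr, PTHREAD_PROCESS_SHARED);
        pthread_cond_init(&q->cond, &cattr);
        pthread_condattr_destroy(&cattr);
    }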
{
"msg_contents": "On Tue, Jun 6, 2023 at 6:52 AM Andrew Dunstan <[email protected]> wrote:\n> If we were starting out today we would probably choose a threaded implementation. But moving to threaded now seems to me like a multi-year-multi-person project with the prospect of years to come chasing bugs and the prospect of fairly modest advantages. The risk to reward doesn't look great.\n>\n> That's my initial reaction. I could be convinced otherwise.\n\nHere is one thing I often think about when contemplating threads.\nTake a look at dsa.c. It calls itself a shared memory allocator, but\nreally it has two jobs, the second being to provide software emulation\nof virtual memory. That’s behind dshash.c and now the stats system,\nand various parts of the parallel executor code. It’s slow and\ncomplicated, and far from the state of the art. I wrote that code\n(building on allocator code from Robert) with the expectation that it\nwas a transitional solution to unblock a bunch of projects. I always\nexpected that we'd eventually be deleting it. When I explain that\nsubsystem to people who are not steeped in the lore of PostgreSQL, it\nsounds completely absurd. I mean, ... it is, right? My point is\nthat we’re doing pretty unreasonable and inefficient contortions to\ndevelop new features -- we're not just happily chugging along without\nthreads at no cost.\n\n\n",
"msg_date": "Wed, 7 Jun 2023 10:26:07 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> ... My point is\n> that we’re doing pretty unreasonable and inefficient contortions to\n> develop new features -- we're not just happily chugging along without\n> threads at no cost.\n\nSure, but it's not like chugging along *with* threads would be no-cost.\nOthers have already pointed out the permanent downsides of that, such\nas loss of isolation between sessions leading to debugging headaches\n(and, I predict, more than one security-grade bug).\n\nI agree that if we were building this system from scratch today,\nwe'd probably choose thread-per-session not process-per-session.\nBut the costs of getting to that from where we are will be enormous.\nI seriously doubt that the net benefits could justify that work,\nno matter how long you want to look forward. It's not really\nsignificantly different from \"let's rewrite the server in\nC++/Rust/$latest_hotness\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Jun 2023 22:02:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Tue, Jun 6, 2023 at 11:30 PM Robert Haas <[email protected]> wrote:\n>\n> On Tue, Jun 6, 2023 at 11:46 AM Heikki Linnakangas <[email protected]> wrote:\n> > Bruce was worried about the loss of isolation that the separate address\n> > spaces gives, and Jeremy shared an anecdote on that. That is an\n> > objection to the idea itself, i.e. even if transition was smooth,\n> > bug-free and effortless, that point remains. I personally think the\n> > isolation we get from separate address spaces is overrated. Yes, it\n> > gives you some protection, but given how much shared memory there is,\n> > the blast radius is large even with separate backend processes.\n>\n> An interesting idea might be to look at the places where we ereport or\n> elog FATAL due to some kind of backend data structure corruption and\n> ask whether there would be an argument for elevating the level to\n> PANIC if we changed this. There are definitely some places where we\n> argue that the only corrupted state is backend-local and thus we don't\n> need to PANIC if it's corrupted. I wonder to what extent this change\n> would undermine that argument.\n\nWith the threaded model, that shouldn't change, right? Even though all\nmemory space is now shared across threads, we can maintain the same\nrules for modifying critical shared data structures, i.e. modifying\nsuch memory should still fall under the CRITICAL SECTION, so I guess\nthe rules for promoting error level to PANIC will remain the same.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Jun 2023 09:35:39 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Wed, Jun 7, 2023 at 7:32 AM Tom Lane <[email protected]> wrote:\n>\n> Thomas Munro <[email protected]> writes:\n> > ... My point is\n> > that we’re doing pretty unreasonable and inefficient contortions to\n> > develop new features -- we're not just happily chugging along without\n> > threads at no cost.\n>\n> Sure, but it's not like chugging along *with* threads would be no-cost.\n> Others have already pointed out the permanent downsides of that, such\n> as loss of isolation between sessions leading to debugging headaches\n> (and, I predict, more than one security-grade bug).\n\nI agree in some cases debugging would be hard, but I feel there are\ncases where the thread model will make the debugging experience better\ne.g breaking at the entry point of the new parallel worker or other\nworker is hard with the process model but that would be very smooth\nwith the thread model as per my experience.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Jun 2023 09:42:48 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 6/6/23 22:02, Tom Lane wrote:\n> (and, I predict, more than one security-grade bug).\n\n*That* is what worries me the most\n\n> I agree that if we were building this system from scratch today,\n> we'd probably choose thread-per-session not process-per-session.\n> But the costs of getting to that from where we are will be enormous.\n> I seriously doubt that the net benefits could justify that work,\n> no matter how long you want to look forward. It's not really\n> significantly different from \"let's rewrite the server in\n> C++/Rust/$latest_hotness\".\n\nAgreed.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Wed, 7 Jun 2023 08:46:51 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Tue, Jun 6, 2023 at 10:02 PM Tom Lane <[email protected]> wrote:\n> I agree that if we were building this system from scratch today,\n> we'd probably choose thread-per-session not process-per-session.\n> But the costs of getting to that from where we are will be enormous.\n> I seriously doubt that the net benefits could justify that work,\n> no matter how long you want to look forward. It's not really\n> significantly different from \"let's rewrite the server in\n> C++/Rust/$latest_hotness\".\n\nWell, I don't know, I think that's a bunch of things that are not all\nthe same. Rewriting the server in a whole different programming\nlanguage would be a massive effort. I can't really see anyone\nvolunteering to rewrite a million lines of C (or whatever we've got)\nin Rust, and I'm not sure who would use the result if they did, or\nwhy. We could, perhaps, allow new source files to be written in Rust\nwhile keeping old ones written in C, but then every hacker has to know\ntwo languages, and having code written in both languages manipulating\nthe same data structures would probably be a recipe for confusion and\nbugs. It's hard to believe that the upsides would be worth the pain.\nMaybe transition to C++ would be easier, or maybe it wouldn't, I'm not\nsure. But from my point of the view, the issue here is simply that\nstop-the-world-and-change-everything is not a viable way forward for a\nproject the size of PostgreSQL, but incremental changes are\npotentially acceptable if the benefits outweigh the drawbacks.\n\nSo what are the costs, exactly, of transition to a threaded model? It\nseems to me that there's basically one problem: global variables.\nSure, there's a bunch of stuff around process management that would\nlikely have to be revised in some way, but that's not that much code\nand wouldn't have that much impact on unrelated development. However,\nthe project's widespread and often gratuitous use of global variables\nwould have to be addressed in some way, and I think that will pretty\nmuch inevitably involve touching all of those global variable\ndeclarations in some way. Now, if we can get away with simply marking\nall of those thread-local, then it's of the same general flavor as\nPGDLLIMPORT. I am aware that you think that PGDLLIMPORT markings are\nugly as sin, and these would be more widespread since they'd have to\nbe applied to literally every global variable, including file-local\nones. However, it's hard to imagine that adding such markings would\ncause PostgreSQL development to grind to a halt. It would cause minor\nrebasing pain and that's about it. I hope that we'd have some tool\nthat would make the build fail if any markings are missing and\neverybody would be annoyed until they finished rebasing all of their\nWIP patches and then that would just be how things are. It's not\n*lovely* but it doesn't sound that bad either.\n\nIn my mind, the bigger question is how much further than that do you\nhave to go? I think I remember a previous conversation with Andres\nwhere he opined that thread-local variables are \"really expensive\"\n(and I apologize in advance if I'm mis-remembering this). Now, Andres\nis not a man who accepts a tax on performance of any size without a\nfight, so his \"really expensive\" might turn out to resemble my \"pretty\ncheap.\" However, if widespread use of TLS is too expensive and we have\nto start rewriting code to not depend on global variables, that's\ngoing to be more of a problem. 
If we can get by with doing such\nrewrites only in performance-critical places, it might still not be\ntoo bad. Personally, I think the degree of dependence that PostgreSQL\nhas on global variables is pretty excessive and I don't think that a\ncertain amount of refactoring to reduce it would be a bad thing. If it\nturns into an infinite series of hastily-written patches to rejigger\nevery source file we have, though, then I'm not really on board with\nthat.\n\nHeikki mentions the idea of having a central Session object and just\npassing that around. I have a hard time believing that's going to work\nout nicely. First, it's not extensible. Right now, if you need a bit\nof additional session-local state, you just declare a variable and\nyou're all set. That's not a perfect system and does cause some\nproblems, but we can't go from there to a system where it's impossible\nto add session-local state without hacking core. Second, we will be\nsad if session.h ends up #including every other header file that\ndefines a data structure anywhere in the backend. Or at least I'll be\nsad. I'm not actually against the idea of having some kind of session\nobject that we pass around, but I think it either needs to be limited\nto a relatively small set of well-defined things, or else it needs to\nbe designed in some kind of extensible way that doesn't require it to\nknow the full details of every sort of object that's being used as\nsession-local state anywhere in the system. I haven't really seen any\nconvincing design ideas around this yet.\n\nBut I think jumping to the conclusion that the migration path here is\nakin to rewriting the whole code base in Rust is jumping too far. I do\nsee some problems here that I don't know how to solve, but that's\nnowhere near in the same category as find . -name '*.c' -exec rm {} \\;\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Jun 2023 08:53:24 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
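For readers following the thread-local-storage part of the argument, here is a minimal sketch of what "marking every global variable" could look like. The PGTHREADLOCAL macro, the USE_THREADS symbol and the example variables are illustrative assumptions for this sketch only, not anything that exists in PostgreSQL today; the point is merely that such a marking would be analogous in spirit to PGDLLIMPORT and could be enforced by a build-time check.

    /* hypothetical marking macro, sketched for illustration only */
    #ifdef USE_THREADS
    #define PGTHREADLOCAL _Thread_local   /* C11 thread storage duration */
    #else
    #define PGTHREADLOCAL                 /* no-op under the process model */
    #endif

    /* file-local state: one copy per backend thread (or per process) */
    static PGTHREADLOCAL int pending_notifies = 0;

    /* exported state that extensions might reference */
    PGTHREADLOCAL int example_counter = 0;
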
{
"msg_contents": "On Mon, Jun 5, 2023 at 8:22 PM Heikki Linnakangas <[email protected]> wrote:\n>\n> I spoke with some folks at PGCon about making PostgreSQL multi-threaded,\n> so that the whole server runs in a single process, with multiple\n> threads. It has been discussed many times in the past, last thread on\n> pgsql-hackers was back in 2017 when Konstantin made some experiments [0].\n>\n> I feel that there is now pretty strong consensus that it would be a good\n> thing, more so than before. Lots of work to get there, and lots of\n> details to be hashed out, but no objections to the idea at a high level.\n>\n> The purpose of this email is to make that silent consensus explicit. If\n> you have objections to switching from the current multi-process\n> architecture to a single-process, multi-threaded architecture, please\n> speak up.\n>\n> If there are no major objections, I'm going to update the developer FAQ,\n> removing the excuses there for why we don't use threads [1]. And we can\n> start to talk about the path to get there. Below is a list of some\n> hurdles and proposed high-level solutions. This isn't an exhaustive\n> list, just some of the most obvious problems:\n>\n> # Transition period\n>\n> The transition surely cannot be done fully in one release. Even if we\n> could pull it off in core, extensions will need more time to adapt.\n> There will be a transition period of at least one release, probably\n> more, where you can choose multi-process or multi-thread model using a\n> GUC. Depending on how it goes, we can document it as experimental at first.\n>\n> # Thread per connection\n>\n> To get started, it's most straightforward to have one thread per\n> connection, just replacing backend process with a backend thread. In the\n> future, we might want to have a thread pool with some kind of a\n> scheduler to assign active queries to worker threads. Or multiple\n> threads per connection, or spawn additional helper threads for specific\n> tasks. But that's future work.\n\nWith multiple processes, we can use all the available cores (at least\ntheoretically if all those processes are independent). But is that\nguaranteed with single process multi-thread model? Google didn't throw\nany definitive answer to that. Usually it depends upon the OS and\narchitecture.\n\nMaybe a good start is to start using threads instead of parallel\nworkers e.g. for parallel vacuum, parallel query and so on while\nleaving the processes for connections and leaders. that itself might\ntake significant time. Based on that experience move to a completely\nthreaded model. Based on my experience with other similar products, I\nthink we will settle on a multi-process multi-thread model.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 7 Jun 2023 18:38:38 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "07.06.2023 15:53, Robert Haas wrote:\n> Right now, if you need a bit\n> of additional session-local state, you just declare a variable and\n> you're all set. That's not a perfect system and does cause some\n> problems, but we can't go from there to a system where it's impossible\n> to add session-local state without hacking core.\n\n> or else it needs to\n> be design in some kind of extensible way that doesn't require it to\n> know the full details of every sort of object that's being used as\n> session-local state anywhere in the system.\nAnd it is quite possible. Although with indirection involved.\n\nFor example, we want to add session variable \"my_hello_var\".\nWe first need to declare \"offset variable\".\nThen register it in a session.\nAnd then use function and/or macros to get actual address:\n\n /* session.h */\n extern size_t RegisterSessionVar(size_t size);\n extern void* CurSessionVar(size_t offset);\n\n\n /* session.c */\n typedef struct Session {\n char *vars;\n } Session;\n\n static _Thread_local Session* curSession;\n static size_t sessionVarsSize = 0;\n size_t\n RegisterSessionVar(size_t size)\n {\n size_t off = sessionVarsSize;\n sessionVarsSize += size;\n return off;\n }\n \n void*\n CurSession(size_t offset)\n {\n return curSession->vars + offset;\n }\n \n /* module_internal.h */\n typedef int my_hello_var_t;\n extern size_t my_hello_var_offset;\n\n /* access macros */\n #define my_hello_var (*(my_hello_var_t*)(CurSessionVar(my_hello_var_offset)))\n\n /* module.c */\n size_t my_hello_var_offset = 0;\n \n void\n PG_init() {\n RegisterSessionVar(sizeof(my_hello_var_t), &my_hello_var_offset);\n }\n\nFor security reasons, offset could be mangled.\n\n------\n\nregards,\nYura Sokolov\n\n\n\n",
"msg_date": "Wed, 7 Jun 2023 19:05:54 +0300",
"msg_from": "Yura Sokolov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
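A small reading note on the sketch above: as written, PG_init() passes two arguments to RegisterSessionVar() although it is declared to take only the size and to return the offset (and the accessor is declared as CurSessionVar() but defined as CurSession()). A registration step consistent with the declared interface, reusing the names from the sketch (the actual extension entry point is spelled _PG_init), might look like this:

    /* module.c -- consistent with RegisterSessionVar(size_t) above */
    size_t my_hello_var_offset = 0;

    void
    _PG_init(void)
    {
        my_hello_var_offset = RegisterSessionVar(sizeof(my_hello_var_t));
    }

    /* afterwards, code simply reads and writes the session's copy */
    static void
    bump_hello(void)
    {
        my_hello_var += 1;   /* macro expands to the current session slot */
    }
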
{
"msg_contents": "\n\nOn 6/5/23 17:33, Heikki Linnakangas wrote:\n> On 05/06/2023 11:18, Tom Lane wrote:\n>> Heikki Linnakangas <[email protected]> writes:\n>>> I spoke with some folks at PGCon about making PostgreSQL multi-threaded,\n>>> so that the whole server runs in a single process, with multiple\n>>> threads. It has been discussed many times in the past, last thread on\n>>> pgsql-hackers was back in 2017 when Konstantin made some experiments\n>>> [0].\n>>\n>>> I feel that there is now pretty strong consensus that it would be a good\n>>> thing, more so than before. Lots of work to get there, and lots of\n>>> details to be hashed out, but no objections to the idea at a high level.\n>>\n>>> The purpose of this email is to make that silent consensus explicit. If\n>>> you have objections to switching from the current multi-process\n>>> architecture to a single-process, multi-threaded architecture, please\n>>> speak up.\n>>\n>> For the record, I think this will be a disaster. There is far too much\n>> code that will get broken, largely silently, and much of it is not\n>> under our control.\n> \n> Noted. Other large projects have gone through this transition. It's not\n> easy, but it's a lot easier now than it was 10 years ago. The platform\n> and compiler support is there now, all libraries have thread-safe\n> interfaces, etc.\n> \n\nIs the platform support really there for all platforms we want/intend to\nsupport? I have no problem believing that for modern Linux/BSD systems,\nbut what about the older stuff we currently support.\n\nAlso, which other projects did this transition? Is there something we\ncould learn from them? Were they restricted to much smaller list of\nplatforms?\n\n> I don't expect you or others to buy into any particular code change at\n> this point, or to contribute time into it. Just to accept that it's a\n> worthwhile goal. If the implementation turns out to be a disaster, then\n> it won't be accepted, of course. But I'm optimistic.\n> \n\nI personally am not opposed to the effort in principle, but how do you\neven evaluate cost and benefits for a transition like this? I have no\nidea how to quantify the costs/benefits for this as a single change.\n\nI've seen some benchmarks in the past, but it's hard to say which of\nthese improvements are possible only with threads, and what would be\ndoable with less invasive changes with the process model.\n\nIMHO the only way to move this forward is to divide this into smaller\nchanges, each of which gives us some benefit we'd want anyway. For\nexample, this thread already mentioned improving handling of many\nconnections. AFAICS that requires isolating \"session state\", which seems\nuseful even without a full switch to threads as it makes connection\npooling simpler. It should be easier to get a buy-in for these changes,\nwhile introducing abstractions simplifying the switch to threads.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 7 Jun 2023 21:20:15 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 7:20 AM Tomas Vondra\n<[email protected]> wrote:\n> Is the platform support really there for all platforms we want/intend to\n> support? I have no problem believing that for modern Linux/BSD systems,\n> but what about the older stuff we currently support.\n\nThere is a conversation to be had about whether/when/how to adopt\nC11/C17 threads (= same API on Windows and Unix, but sadly two\nstraggler systems don't have required OS support yet (macOS,\nOpenBSD)), but POSIX + NT threads were all worked out in the 90s. We\nhave last-mover advantage here.\n\n> Also, which other projects did this transition? Is there something we\n> could learn from them? Were they restricted to much smaller list of\n> platforms?\n\nApache may be interesting. Wide ecosystem of extensions.\n\n\n",
"msg_date": "Thu, 8 Jun 2023 07:59:16 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-05 17:51:57 +0300, Heikki Linnakangas wrote:\n> If there are no major objections, I'm going to update the developer FAQ,\n> removing the excuses there for why we don't use threads [1].\n\nI think we should do this even if there's no concensus to slowly change to\nthreads. There's clearly no concensus on the opposite either.\n\n\n\n> # Transition period\n> \n> The transition surely cannot be done fully in one release. Even if we could\n> pull it off in core, extensions will need more time to adapt. There will be\n> a transition period of at least one release, probably more, where you can\n> choose multi-process or multi-thread model using a GUC. Depending on how it\n> goes, we can document it as experimental at first.\n\nOne interesting bit around the transition is what tooling we ought to provide\nto detect problems. It could e.g. be reasonably feasible to write something\nchecking how many read-write global variables an extension has on linux\nsystems.\n\n\n\n> # Extensions\n> \n> A lot of extensions also contain global variables or other things that break\n> in a multi-threaded environment. We need a way to label extensions that\n> support multi-threading. And in the future, also extensions that *require* a\n> multi-threaded server.\n> \n> Let's add flags to the control file to mark if the extension is thread-safe\n> and/or process-safe. If you try to load an extension that's not compatible\n> with the server's mode, throw an error.\n\nI don't think the control file is the right place - that seems more like\nsomething that should be signalled via PG_MODULE_MAGIC. We need to check this\nnot just during CREATE EXTENSION, but also during loading of libraries - think\nof shared_preload_libraries.\n\n\n\n> # Restart on crash\n> \n> If a backend process crashes, postmaster terminates all other backends and\n> restarts the system. That's hard (impossible?) to do safely if everything\n> runs in one process. We can continue have a separate postmaster process that\n> just monitors the main process and restarts it on crash.\n\nYea, we definitely need the supervisor function in a separate\nprocess. Presumably that means we need to split off some of the postmaster\nresponsibilities - e.g. I don't think it'd make sense to handle connection\nestablishment in the supervisor process. I wonder if this is something that\ncould end up being beneficial even in the process world.\n\nA related issue is that we won't get SIGCHLD in the supervisor process\nanymore. So we'd need to come up with some design for that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Jun 2023 14:30:17 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
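To give the PG_MODULE_MAGIC suggestion a concrete shape, the fragment below is a purely hypothetical sketch of a capability a loadable module could embed, so the loader can reject an incompatible library at load time (including for shared_preload_libraries) rather than only at CREATE EXTENSION. None of these names exist today; they are assumptions made for the sketch.

    /* hypothetical threading capability declared by a module */
    typedef enum ModuleThreadModel
    {
        MODULE_PROCESS_ONLY,    /* relies on per-process global state */
        MODULE_THREAD_SAFE,     /* works under either server mode */
        MODULE_THREADS_ONLY     /* requires the multi-threaded server */
    } ModuleThreadModel;

    typedef struct ModuleThreadingInfo
    {
        ModuleThreadModel model;
    } ModuleThreadingInfo;

    /* emitted once per module (by analogy with PG_MODULE_MAGIC) and looked
     * up by the loader via a well-known symbol name */
    const ModuleThreadingInfo Pg_module_threading_hypothetical = {
        MODULE_THREAD_SAFE
    };
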
{
"msg_contents": "Hi,\n\nOn 2023-06-05 13:40:13 -0400, Jonathan S. Katz wrote:\n> 2. While I wouldn't want to necessarily discourage a moonshot effort, I\n> would ask if developer time could be better spent on tackling some of the\n> other problems around vertical scalability? Per some PGCon discussions,\n> there's still room for improvement in how PostgreSQL can best utilize\n> resources available very large \"commodity\" machines (a 448-core / 24TB RAM\n> instance comes to mind).\n\nI think we're starting to hit quite a few limits related to the process model,\nparticularly on bigger machines. The overhead of cross-process context\nswitches is inherently higher than switching between threads in the same\nprocess - and my suspicion is that that overhead will continue to\nincrease. Once you have a significant number of connections we end up spending\na *lot* of time in TLB misses, and that's inherent to the process model,\nbecause you can't share the TLB across processes.\n\n\nThe amount of duplicated code we have to deal with due to to the process model\nis quite substantial. We have local memory, statically allocated shared memory\nand dynamically allocated shared memory variants for some things. And that's\njust going to continue.\n\n\n> I'm purposely giving a nonanswer on whether it's a worthwhile goal, but\n> rather I'd be curious where it could stack up against some other efforts to\n> continue to help PostgreSQL improve performance and handle very large\n> workloads.\n\nThere's plenty of things we can do before, but in the end I think tackling the\nissues you mention and moving to threads are quite tightly linked.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Jun 2023 14:37:21 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 07.06.23 23:30, Andres Freund wrote:\n> Yea, we definitely need the supervisor function in a separate\n> process. Presumably that means we need to split off some of the postmaster\n> responsibilities - e.g. I don't think it'd make sense to handle connection\n> establishment in the supervisor process. I wonder if this is something that\n> could end up being beneficial even in the process world.\n\nSomething to think about perhaps ... how would that be different from \nusing an existing external supervisor process like systemd or supervisord.\n\n\n",
"msg_date": "Wed, 7 Jun 2023 23:39:01 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Tomas Vondra schrieb am 07.06.2023 um 21:20:\n> Also, which other projects did this transition? Is there something we\n> could learn from them? Were they restricted to much smaller list of\n> platforms?\n\nFirebird did this a while ago if I'm not mistaken.\n\nNot open source, but Oracle was historically multi-threaded on Windows and multi-process on all other platforms.\nI _think_ starting with 19c you can optionally run it multi-threaded on Linux as well.\n\nBut I doubt, they are willing to share any insights ;)\n\n\n\n",
"msg_date": "Wed, 7 Jun 2023 23:39:54 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-05 20:15:56 -0400, Bruce Momjian wrote:\n> Yes, sorry, critical sections is what I was remembering. My question is\n> whether all unexpected backend exits should be treated as critical\n> sections?\n\nYes.\n\nPeople have argued that the process model is more robust. But it turns out\nthat we have to crash-restart for just about any \"bad failure\" anyway. It used\nto be (a long time ago) that we didn't, but that was just broken.\n\nThere are some advantages in debuggability, because it's a *tad* harder for a\nbug in one process to cause another to crash, if less state is shared. But\nthat's by far outweighed by most debugging / validation tools not\nunderstanding the multi-processes-with-shared-shmem model.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Jun 2023 14:45:02 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-07 23:39:01 +0200, Peter Eisentraut wrote:\n> On 07.06.23 23:30, Andres Freund wrote:\n> > Yea, we definitely need the supervisor function in a separate\n> > process. Presumably that means we need to split off some of the postmaster\n> > responsibilities - e.g. I don't think it'd make sense to handle connection\n> > establishment in the supervisor process. I wonder if this is something that\n> > could end up being beneficial even in the process world.\n> \n> Something to think about perhaps ... how would that be different from using\n> an existing external supervisor process like systemd or supervisord.\n\nI think that's not really comparable. A postgres internal solution can\nmaintain resources like shared memory allocations, listening sockets, etc\nacross crash restarts. With something like systemd that's much harder to make\nwork well. And then there's the fact that you now need to deal with much more\ndrastic cross-platform behavioural differences.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Jun 2023 14:48:22 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-07 08:53:24 -0400, Robert Haas wrote:\n> In my mind, the bigger question is how much further than that do you\n> have to go? I think I remember a previous conversation with Andres\n> where he opined that thread-local variables are \"really expensive\"\n> (and I apologize in advance if I'm mis-remembering this).\n\nIt really is architecture and OS dependent. I think time has reduced the cost\nsomewhat, due to older architectures / OSs aging out. But yea, it's not free.\n\nI suspect that we'd gain *far* more from the higher TLB hit rate, than we'd\nloose due to using many thread local variables. Even with a stupid\nsearch-and-replace approach.\n\nBut we'd gain more if we reduced the number of thread local variables...\n\n\n> Now, Andres is not a man who accepts a tax on performance of any size\n> without a fight, so his \"really expensive\" might turn out to resemble my\n> \"pretty cheap.\" However, if widespread use of TLS is too expensive and we\n> have to start rewriting code to not depend on global variables, that's going\n> to be more of a problem. If we can get by with doing such rewrites only in\n> performance-critical places, it might not still be too bad. Personally, I\n> think the degree of dependence that PostgreSQL has on global variables is\n> pretty excessive and I don't think that a certain amount of refactoring to\n> reduce it would be a bad thing. If it turns into an infinite series of\n> hastily-written patches to rejigger every source file we have, though, then\n> I'm not really on board with that.\n\nI think a lot of such rewrites would be a good idea, even if we right now all\nagree to swear we'll never go to threads. Not having any sort of grouping of\nglobal variables makes it IMO considerably harder to debug. I can easily ask\nsomebody to print out a variable pointing to a struct describing the state of\na subsystem. I can't really do that for 50 variables.\n\nAnd once you do that, I think you reduce the TLS cost substantially. The\nvariable pointing to the struct is already likely in a register. Whereas each\nindividual variable being in TLS makes the job harder for the compiler.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Jun 2023 14:58:09 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
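As an illustration of the refactoring pattern described above (grouping a subsystem's globals behind a single pointer), here is a minimal sketch; the subsystem and field names are invented for the example and are not actual PostgreSQL symbols.

    #include <stdbool.h>

    /* before: flush_requested, pages_written and sleep_ms would each be a
     * separate global, i.e. three TLS slots and three things to print in a
     * debugger; after: one struct reached through one pointer */
    typedef struct WalWriterState
    {
        bool    flush_requested;
        long    pages_written;
        int     sleep_ms;
    } WalWriterState;

    /* a single thread-local pointer: "p *walwriter_state" shows the whole
     * subsystem, and the compiler can keep the pointer in a register
     * instead of doing a TLS lookup per variable */
    static _Thread_local WalWriterState *walwriter_state;

    static void
    WalWriterStateAttach(WalWriterState *state)
    {
        walwriter_state = state;
        walwriter_state->flush_requested = false;
        walwriter_state->pages_written = 0;
        walwriter_state->sleep_ms = 200;
    }
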
{
"msg_contents": "Hi,\n\nOn 2023-06-06 16:14:41 -0400, Greg Stark wrote:\n> I think of processes and threads as fundamentally the same things,\n> just a slightly different API -- namely that in one memory is by\n> default unshared and needs to be explicitly shared and in the other\n> it's default shared and needs to be explicitly unshared.\n\nIn theory that's true, in practice it's entirely wrong.\n\nFor one, the amount of complexity you need to deal with to share state across\nprocesses, post fork, is *substantial*. You can share file descriptors across\nprocesses, but it's extremely platform dependant, requires cooperation between\nboth processes etc. You can share memory allocations made after the processes\nforked, but you're typically not going to be able to guarantee they're at the\nsame pointer values. Etc.\n\nBut more importantly, there's crucial performance differences between threads\nand processes. Having the same memory mapping between threads makes allows the\nhardware to share the TLB (on x86 via process context identifiers), which\nisn't realistically possible with different processes.\n\n\n> However all else is not equal. The discussion in the hallway turned to\n> whether we could just use pthread primitives like mutexes and\n> condition variables instead of our own locks -- and the point was\n> raised that those libraries assume these objects will be in threads of\n> one process not shared across completely different processes.\n\nIndependent of threads vs processes, I am -many on using pthread mutexes and\ncondition variables. From experiments, that *looses* performance, and we loose\na lot of control and increase cross-platform behavioural differences. I also\ndon't see any benefit in going in that direction.\n\n\n> And that's probably not the only library we're stuck reimplementing\n> because of this. So the question is are these things worth taking the\n> risk of having data structures shared implicitly and having unclear\n> ownership rules?\n> \n> I was going to say supporting both modes relieves that fear since it\n> would force that extra discipline and allow testing under the more\n> restrictive rule. However I don't think that will actually work. As\n> long as we support both modes we lose all the advantages of threads.\n\nI don't think that has to be true. We could e.g. eventually decide that we\ndon't support parallel query without threading support - which would allow us\nto get rid of a very significant amount of code and runtime overhead.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Jun 2023 15:09:19 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 6/7/23 2:39 PM, Thomas Kellerer wrote:\n> Tomas Vondra schrieb am 07.06.2023 um 21:20:\n>> Also, which other projects did this transition? Is there something we\n>> could learn from them? Were they restricted to much smaller list of\n>> platforms?\n> \n> Not open source, but Oracle was historically multi-threaded on Windows\n> and multi-process on all other platforms.\n> I _think_ starting with 19c you can optionally run it multi-threaded on\n> Linux as well.\nLooks like it actually became publicly available in 12c. AFAICT Oracle\nsupports both modes today, with a config parameter to switch between them.\n\nThis is a very interesting case study.\n\nConcepts Manual:\n\nhttps://docs.oracle.com/en/database/oracle/oracle-database/23/cncpt/process-architecture.html#GUID-4B460E97-18A0-4F5A-A62F-9608FFD43664\n\nReference:\n\nhttps://docs.oracle.com/en/database/oracle/oracle-database/23/refrn/THREADED_EXECUTION.html#GUID-7A668A49-9FC5-4245-AD27-10D90E5AE8A8\n\nList of Oracle process types, which ones can run as threads and which\nones always run as processes:\n\nhttps://docs.oracle.com/en/database/oracle/oracle-database/23/refrn/background-processes.html#GUID-86184690-5531-405F-AA05-BB935F57B76D\n\nLooks like they have four processes that will never run in threads:\n* dbwriter (writes dirty blocks in background)\n* process monitor (cleanup after process crash to avoid full server\nrestarts) <jealous>\n* process spawner (like postmaster)\n* time keeper process\n\nPer Tim Hall's oracle-base, it seems that plenty of people are sticking\nwith the process model, and that one use case for threads was:\n\"consolidating lots of instances onto a single server without using the\nmultitennant option. Without the multithreaded model, the number of OS\nprocesses could get very high.\"\n\nhttps://oracle-base.com/articles/12c/multithreaded-model-using-threaded_execution_12cr1\n\nI did google search for \"oracle threaded_execution\" and browsed a bit;\ndidn't see anything that seems earth shattering so far.\n\nLudovico Caldara and Martin Bach published blogs when it was first\nreleased, which just introduced but didn't test or hammer on it. The\nfeature has existed for 10 years now and I don't see any blog posts\nsaying that \"everyone should use this because it doubles your\nperformance\" or anything like that. I think if there were really\nsignificant performance gains then there would be many interesting blog\nposts on the internet by now from the independent Oracle professional\ncommunity - I know many of these people.\n\nIn fact, there's an interesting blog by Kamil Stawiarski from 2015 where\nhe actually observed one case of /slower/ performance with threads. That\nblog post ends with: \"So I raise the question: why and when use threaded\nexecution? If ever?\"\n\nhttps://blog.ora-600.pl/2015/12/17/oracle-12c-internals-of-threaded-execution/\n\nI'm not sure if he ever got an answer\n\n-Jeremy\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n\n",
"msg_date": "Wed, 7 Jun 2023 15:37:31 -0700",
"msg_from": "Jeremy Schneider <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 10:37 AM Jeremy Schneider\n<[email protected]> wrote:\n> On 6/7/23 2:39 PM, Thomas Kellerer wrote:\n> > Tomas Vondra schrieb am 07.06.2023 um 21:20:\n> >> Also, which other projects did this transition? Is there something we\n> >> could learn from them? Were they restricted to much smaller list of\n> >> platforms?\n> >\n> > Not open source, but Oracle was historically multi-threaded on Windows\n> > and multi-process on all other platforms.\n> > I _think_ starting with 19c you can optionally run it multi-threaded on\n> > Linux as well.\n> Looks like it actually became publicly available in 12c. AFAICT Oracle\n> supports both modes today, with a config parameter to switch between them.\n\nIt's old, but this describes the 4 main models and which well known\nRDBMSes use them in section 2.3:\n\nhttps://dsf.berkeley.edu/papers/fntdb07-architecture.pdf\n\nTL;DR DB2 is the winner, it can do process-per-connection,\nthread-per-connection, process-pool or thread-pool.\n\nI understand this thread to be about thread-per-connection (= backend,\nsession, socket) for now.\n\n\n",
"msg_date": "Thu, 8 Jun 2023 11:37:00 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 3:00 AM Andres Freund <[email protected]> wrote:\n>\n\n> Yea, we definitely need the supervisor function in a separate\n> process. Presumably that means we need to split off some of the postmaster\n> responsibilities - e.g. I don't think it'd make sense to handle connection\n> establishment in the supervisor process. I wonder if this is something that\n> could end up being beneficial even in the process world.\n>\n> A related issue is that we won't get SIGCHLD in the supervisor process\n> anymore. So we'd need to come up with some design for that.\n\nIf we fork the main Postgres process from the supervisor process then\nany exit to the main process will send SIGCHLD in the supervisor\nprocess, right? I agree we can handle all connection establishment\nand other thread-related stuff in the main Postgres process. But I\nassume this main process should be forked out of the supervisor\nprocess.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Jun 2023 09:08:34 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
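A toy sketch of the supervision loop being discussed, in plain POSIX C rather than PostgreSQL code: the supervisor forks the main (multi-threaded) server, so the child's death is reported to it via SIGCHLD/waitpid, and it restarts the server after a crash. In a real supervisor, retained resources such as listen sockets and shared memory handles could be reused across restarts; everything here is illustrative.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void
    run_main_server(void)
    {
        /* placeholder for the single multi-threaded server process */
        pause();
    }

    int
    main(void)
    {
        for (;;)
        {
            pid_t child = fork();

            if (child == 0)
            {
                run_main_server();
                _exit(0);
            }
            else if (child < 0)
            {
                perror("fork");
                exit(1);
            }

            /* the supervisor receives SIGCHLD for this child; block in
             * waitpid() until the server exits, then decide what to do */
            int status = 0;
            if (waitpid(child, &status, 0) < 0)
            {
                perror("waitpid");
                exit(1);
            }
            if (WIFSIGNALED(status) ||
                (WIFEXITED(status) && WEXITSTATUS(status) != 0))
            {
                fprintf(stderr, "server died unexpectedly, restarting\n");
                continue;       /* loop around and fork a fresh server */
            }
            break;              /* clean shutdown: stop supervising */
        }
        return 0;
    }
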
{
"msg_contents": "Hi,\n\nOn 6/8/23 12:37 AM, Jeremy Schneider wrote:\n> On 6/7/23 2:39 PM, Thomas Kellerer wrote:\n>> Tomas Vondra schrieb am 07.06.2023 um 21:20:\n> \n> I did google search for \"oracle threaded_execution\" and browsed a bit;\n> didn't see anything that seems earth shattering so far.\n\nFWIW, I recall Karl Arao's wiki page: https://karlarao.github.io/karlaraowiki/#%2212c%20threaded_execution%22\nwhere some performance and memory consumption studies have been done.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 8 Jun 2023 07:55:34 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Wed, Jun 7, 2023 at 11:37 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-06-05 13:40:13 -0400, Jonathan S. Katz wrote:\n> > 2. While I wouldn't want to necessarily discourage a moonshot effort, I\n> > would ask if developer time could be better spent on tackling some of the\n> > other problems around vertical scalability? Per some PGCon discussions,\n> > there's still room for improvement in how PostgreSQL can best utilize\n> > resources available very large \"commodity\" machines (a 448-core / 24TB RAM\n> > instance comes to mind).\n>\n> I think we're starting to hit quite a few limits related to the process model,\n> particularly on bigger machines. The overhead of cross-process context\n> switches is inherently higher than switching between threads in the same\n> process - and my suspicion is that that overhead will continue to\n> increase. Once you have a significant number of connections we end up spending\n> a *lot* of time in TLB misses, and that's inherent to the process model,\n> because you can't share the TLB across processes.\n\n\nThis part was touched in the \"AMA with a Linux Kernale Hacker\"\nUnconference session where he mentioned that the had proposed a\n'mshare' syscall for this.\n\nSo maybe a more fruitful way to fixing the perceived issues with\nprocess model is to push for small changes in Linux to overcome these\navoiding a wholesale rewrite ?\n\n>\n>\n> The amount of duplicated code we have to deal with due to to the process model\n> is quite substantial. We have local memory, statically allocated shared memory\n> and dynamically allocated shared memory variants for some things. And that's\n> just going to continue.\n\nMaybe we can already remove the distinction between static and dynamic\nshared memory ?\n\nThough I already heard some complaints at the conference discussions\nthat having the dynamic version available has made some developers\nsloppy in using it resulting in wastefulness.\n\n>\n>\n> > I'm purposely giving a nonanswer on whether it's a worthwhile goal, but\n> > rather I'd be curious where it could stack up against some other efforts to\n> > continue to help PostgreSQL improve performance and handle very large\n> > workloads.\n>\n> There's plenty of things we can do before, but in the end I think tackling the\n> issues you mention and moving to threads are quite tightly linked.\n\nStill we should be focusing our attention at solving the issues and\nnot at \"moving to threads\" and hoping this will fix the issues by\nitself.\n\nCheers\nHannu\n\n\n",
"msg_date": "Thu, 8 Jun 2023 11:54:17 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "I think I remember that in the early days of development somebody did\nsend a patch-set for making PostgreSQL threaded on Solaris.\n\nI don't remember why this did not catch on.\n\nOn Wed, Jun 7, 2023 at 11:40 PM Thomas Kellerer <[email protected]> wrote:\n>\n> Tomas Vondra schrieb am 07.06.2023 um 21:20:\n> > Also, which other projects did this transition? Is there something we\n> > could learn from them? Were they restricted to much smaller list of\n> > platforms?\n>\n> Firebird did this a while ago if I'm not mistaken.\n>\n> Not open source, but Oracle was historically multi-threaded on Windows and multi-process on all other platforms.\n> I _think_ starting with 19c you can optionally run it multi-threaded on Linux as well.\n>\n> But I doubt, they are willing to share any insights ;)\n>\n>\n>\n\n\n",
"msg_date": "Thu, 8 Jun 2023 11:56:37 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 12:09 AM Andres Freund <[email protected]> wrote:\n...\n\n> We could e.g. eventually decide that we\n> don't support parallel query without threading support - which would allow us\n> to get rid of a very significant amount of code and runtime overhead.\n\nHere I was hoping to go in the opposite direction and support parallel\nquery across replicas.\n\nThis looks much more doable based on the process model than the single\nprocess / multiple threads model.\n\n---\nCheers\nHannu\n\n\n",
"msg_date": "Thu, 8 Jun 2023 12:04:05 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 11:54 AM Hannu Krosing <[email protected]> wrote:\n>\n> On Wed, Jun 7, 2023 at 11:37 PM Andres Freund <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On 2023-06-05 13:40:13 -0400, Jonathan S. Katz wrote:\n> > > 2. While I wouldn't want to necessarily discourage a moonshot effort, I\n> > > would ask if developer time could be better spent on tackling some of the\n> > > other problems around vertical scalability? Per some PGCon discussions,\n> > > there's still room for improvement in how PostgreSQL can best utilize\n> > > resources available very large \"commodity\" machines (a 448-core / 24TB RAM\n> > > instance comes to mind).\n> >\n> > I think we're starting to hit quite a few limits related to the process model,\n> > particularly on bigger machines. The overhead of cross-process context\n> > switches is inherently higher than switching between threads in the same\n> > process - and my suspicion is that that overhead will continue to\n> > increase. Once you have a significant number of connections we end up spending\n> > a *lot* of time in TLB misses, and that's inherent to the process model,\n> > because you can't share the TLB across processes.\n>\n>\n> This part was touched in the \"AMA with a Linux Kernale Hacker\"\n> Unconference session where he mentioned that the had proposed a\n> 'mshare' syscall for this.\n\nAlso, the *static* huge pages already let you solve this problem now\nby sharing the page tables\n\n\nCheers\nHannu\n\n\n",
"msg_date": "Thu, 8 Jun 2023 12:15:58 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "\n\nOn 6/8/23 01:37, Thomas Munro wrote:\n> On Thu, Jun 8, 2023 at 10:37 AM Jeremy Schneider\n> <[email protected]> wrote:\n>> On 6/7/23 2:39 PM, Thomas Kellerer wrote:\n>>> Tomas Vondra schrieb am 07.06.2023 um 21:20:\n>>>> Also, which other projects did this transition? Is there something we\n>>>> could learn from them? Were they restricted to much smaller list of\n>>>> platforms?\n>>>\n>>> Not open source, but Oracle was historically multi-threaded on Windows\n>>> and multi-process on all other platforms.\n>>> I _think_ starting with 19c you can optionally run it multi-threaded on\n>>> Linux as well.\n>> Looks like it actually became publicly available in 12c. AFAICT Oracle\n>> supports both modes today, with a config parameter to switch between them.\n> \n> It's old, but this describes the 4 main models and which well known\n> RDBMSes use them in section 2.3:\n> \n> https://dsf.berkeley.edu/papers/fntdb07-architecture.pdf\n> \n> TL;DR DB2 is the winner, it can do process-per-connection,\n> thread-per-connection, process-pool or thread-pool.\n> \n\nI think the basic architectures are known, especially from the user\nperspective. I'm more interested in challenges the projects faced while\nmoving from one architecture to the other, or how / why they support\nmore than just one, etc.\n\nIn [1] Heikki argued that:\n\n I don't think this is worth it, unless we plan to eventually remove\n the multi-process mode. ... As long as you need to also support\n processes, you need to code to the lowest common denominator and\n don't really get the benefits.\n\nBut these projects clearly support multiple architectures, and have no\nintention to ditch some of them. So how did they do that? Surely they\nthink there are benefits.\n\nOne option would be to just have separate code paths for processes and\nthreads, but the effort required to maintain and improve that would be\ndeadly. So the only feasible option seems to be they managed to abstract\nthe subsystems enough for the \"regular\" code to not care about model.\n\n\n[1]\nhttps://www.postgresql.org/message-id/[email protected]\n\n> I understand this thread to be about thread-per-connection (= backend,\n> session, socket) for now.\n\nMaybe, although people also proposed to switch the parallel query to\nthreads (so that'd be multiple threads per session). But I don't think\nit really matters, the concerns are mostly about moving from one\narchitecture to another and/or supporting both.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 8 Jun 2023 12:37:37 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 2023-06-07 We 17:58, Andres Freund wrote:\n> Hi,\n>\n> On 2023-06-07 08:53:24 -0400, Robert Haas wrote:\n>> Now, Andres is not a man who accepts a tax on performance of any size\n>> without a fight, so his \"really expensive\" might turn out to resemble my\n>> \"pretty cheap.\" However, if widespread use of TLS is too expensive and we\n>> have to start rewriting code to not depend on global variables, that's going\n>> to be more of a problem. If we can get by with doing such rewrites only in\n>> performance-critical places, it might not still be too bad. Personally, I\n>> think the degree of dependence that PostgreSQL has on global variables is\n>> pretty excessive and I don't think that a certain amount of refactoring to\n>> reduce it would be a bad thing. If it turns into an infinite series of\n>> hastily-written patches to rejigger every source file we have, though, then\n>> I'm not really on board with that.\n> I think a lot of such rewrites would be a good idea, even if we right now all\n> agree to swear we'll never go to threads. Not having any sort of grouping of\n> global variables makes it IMO considerably harder to debug. I can easily ask\n> somebody to print out a variable pointing to a struct describing the state of\n> a subsystem. I can't really do that for 50 variables.\n>\n> And once you do that, I think you reduce the TLS cost substantially. The\n> variable pointing to the struct is already likely in a register. Whereas each\n> individual variable being in TLS makes the job harder for the compiler.\n>\n\nI could certainly get on board with a project to tame the use of global \nvariables.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-07 We 17:58, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-06-07 08:53:24 -0400, Robert Haas wrote:\n\n\nNow, Andres is not a man who accepts a tax on performance of any size\nwithout a fight, so his \"really expensive\" might turn out to resemble my\n\"pretty cheap.\" However, if widespread use of TLS is too expensive and we\nhave to start rewriting code to not depend on global variables, that's going\nto be more of a problem. If we can get by with doing such rewrites only in\nperformance-critical places, it might not still be too bad. Personally, I\nthink the degree of dependence that PostgreSQL has on global variables is\npretty excessive and I don't think that a certain amount of refactoring to\nreduce it would be a bad thing. If it turns into an infinite series of\nhastily-written patches to rejigger every source file we have, though, then\nI'm not really on board with that.\n\n\n\nI think a lot of such rewrites would be a good idea, even if we right now all\nagree to swear we'll never go to threads. Not having any sort of grouping of\nglobal variables makes it IMO considerably harder to debug. I can easily ask\nsomebody to print out a variable pointing to a struct describing the state of\na subsystem. I can't really do that for 50 variables.\n\nAnd once you do that, I think you reduce the TLS cost substantially. The\nvariable pointing to the struct is already likely in a register. Whereas each\nindividual variable being in TLS makes the job harder for the compiler.\n\n\n\n\n\nI could certainly get on board with a project to tame the use of\n global variables.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 8 Jun 2023 08:00:49 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 7/6/23 23:37, Andres Freund wrote:\n> [snip]\n> I think we're starting to hit quite a few limits related to the process model,\n> particularly on bigger machines. The overhead of cross-process context\n> switches is inherently higher than switching between threads in the same\n> process - and my suspicion is that that overhead will continue to\n> increase. Once you have a significant number of connections we end up spending\n> a *lot* of time in TLB misses, and that's inherent to the process model,\n> because you can't share the TLB across processes.\n\nIMHO, as one sysadmin who has previously played with Postgres on \"quite \nlarge\" machines, I'd propose what most would call a \"hybrid model\"....\n\n* Threads are a very valuable addition for the \"frontend\" of the server. \nMost would call this a built-in session-aware connection pooler :)\n\n Heikki's (and others') efforts towards separating connection state \ninto discrete structs is clearly a prerequisite for this; \nImplementation-wise, just toss the connState into a TLS[thread-local \nstorage] variable and many problems just vanish.\n\n Postgres wouldn't be the first to adopt this approach, either...\n\n* For \"heavyweight\" queries, the scalability of \"almost independent\" \nprocesses w.r.t. NUMA is just _impossible to achieve_ (locality of \nreference!) with a pure threaded system. When CPU+mem-bound \n(bandwidth-wise), threads add nothing IMO.\n\nIndeed a separate postmaster is very much needed in order to control the \nprocesses / guard overall integrity.\n\n\nHence, my humble suggestion is to consider a hybrid architecture which \nbenefits from each model's strengths. I am quite convinced that \ntransition would be much safer and simpler (I do share most of Tom and \nother's concerns...)\n\nOther projects to draw inspiration from:\n\n * Postfix -- multi-process, postfix's master guards processes and \nperforms privileged operations; unprivileged \"subsystems\". Interesting \nIPC solutions\n * Apache -- MPMs provide flexibility and support for e.g. non-threaded \nworkloads (PHP is the most popular; cfr. \"prefork\" multi-process MPM)\n * NginX is actually multi-process (one per CPU) + event-based \n(multiplexing) ...\n * PowerDNS is internally threaded, but has a \"guardian\" process. Seems \nto be evolving to a more hybrid model.\n\n\nI would suggest something along the lines of :\n\n* postmaster -- process supervision and (potentially privileged) \noperations; process coordination (i.e descriptor passing); mostly as-is\n* *frontend* -- connection/session handling; possibly even event-driven\n* backends -- process heavyweight queries as independently as possible. \nCan span worker threads AND processes when needed\n* *dispatcher* -- takes care of cached/lightweight queries (cached \ncatalog / full snapshot visibility+processing)\n* utility processes can be left \"as is\" mostly, except to be made \nmulti-threaded for heavy-sync ones (e.g. vacuum workers, stat workers)\n\nFor fixed-size buffers, i.e. pages / chunks, I'd say mmaped (anonymous) \nshared memory isn't that bad... but haven't read the actual code in years.\n\nFor message queues / invalidation messages, i guess that shmem-based \nsync is really a nuisance. My understanding is that Linux-specific (i.e. \neventfd) mechanisms aren't quite considered .. or are they?\n\n> The amount of duplicated code we have to deal with due to to the process model\n> is quite substantial. 
We have local memory, statically allocated shared memory\n> and dynamically allocated shared memory variants for some things. And that's\n> just going to continue.\n\nCode duplication is indeed a problem... but I wouldn't call \"different \napproaches/solution for very similar problems depending on \ncontext/requirement\" a duplicate. I might well be wrong / lack detail, \nthough... (again: haven't read PG's code for some years already).\n\n\nJust my two cents.\n\n\nThanks,\n\n J.L.\n\n-- \nParkinson's Law: Work expands to fill the time alloted to it.",
"msg_date": "Thu, 8 Jun 2023 14:01:16 +0200",
"msg_from": "Jose Luis Tallon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Thu, 8 Jun 2023 at 11:54, Hannu Krosing <[email protected]> wrote:\n>\n> On Wed, Jun 7, 2023 at 11:37 PM Andres Freund <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On 2023-06-05 13:40:13 -0400, Jonathan S. Katz wrote:\n> > > 2. While I wouldn't want to necessarily discourage a moonshot effort, I\n> > > would ask if developer time could be better spent on tackling some of the\n> > > other problems around vertical scalability? Per some PGCon discussions,\n> > > there's still room for improvement in how PostgreSQL can best utilize\n> > > resources available very large \"commodity\" machines (a 448-core / 24TB RAM\n> > > instance comes to mind).\n> >\n> > I think we're starting to hit quite a few limits related to the process model,\n> > particularly on bigger machines. The overhead of cross-process context\n> > switches is inherently higher than switching between threads in the same\n> > process - and my suspicion is that that overhead will continue to\n> > increase. Once you have a significant number of connections we end up spending\n> > a *lot* of time in TLB misses, and that's inherent to the process model,\n> > because you can't share the TLB across processes.\n>\n>\n> This part was touched in the \"AMA with a Linux Kernale Hacker\"\n> Unconference session where he mentioned that the had proposed a\n> 'mshare' syscall for this.\n>\n> So maybe a more fruitful way to fixing the perceived issues with\n> process model is to push for small changes in Linux to overcome these\n> avoiding a wholesale rewrite ?\n\nWe support not just Linux, but also Windows and several (?) BSDs. I'm\nnot against pushing Linux to make things easier for us, but Linux is\nan open source project, too, where someone need to put in time to get\nthe shiny things that you want. And I'd rather see our time spent in\nPostgreSQL, as Linux is only used by a part of our user base.\n\n> > The amount of duplicated code we have to deal with due to to the process model\n> > is quite substantial. We have local memory, statically allocated shared memory\n> > and dynamically allocated shared memory variants for some things. And that's\n> > just going to continue.\n>\n> Maybe we can already remove the distinction between static and dynamic\n> shared memory ?\n\nThat sounds like a bad idea, dynamic shared memory is more expensive\nto maintain than our static shared memory systems, not in the least\nbecause DSM is not guaranteed to share the same addresses in each\nprocess' address space.\n\n> Though I already heard some complaints at the conference discussions\n> that having the dynamic version available has made some developers\n> sloppy in using it resulting in wastefulness.\n\nDo you know any examples of this wastefulness?\n\n> > > I'm purposely giving a nonanswer on whether it's a worthwhile goal, but\n> > > rather I'd be curious where it could stack up against some other efforts to\n> > > continue to help PostgreSQL improve performance and handle very large\n> > > workloads.\n> >\n> > There's plenty of things we can do before, but in the end I think tackling the\n> > issues you mention and moving to threads are quite tightly linked.\n>\n> Still we should be focusing our attention at solving the issues and\n> not at \"moving to threads\" and hoping this will fix the issues by\n> itself.\n\nI suspect that it is much easier to solve some of the issues when\nworking in a shared address space.\nE.g. 
resizing shared_buffers is difficult right now due to the use of\na static allocation of shared memory, but if we had access to a single\nshared address space, it'd be easier to do any cleanup necessary for\ndynamically increasing/decreasing its size.\nSame with parallel workers - if we have a shared address space, the\nworkers can pass any sized objects around without being required to\nmove the tuples through DSM and waiting for the leader process to\nempty that buffer when it gets full.\n\nSure, most of that is probably possible with DSM as well, it's just\nthat I see a lot more issues that you need to take care of when you\ndon't have a shared address space (such as the pointer translation we\ndo in dsa_get_address).\n\nKind regards,\n\nMatthias van de Meent\nNeon, Inc.\n\n\n",
"msg_date": "Thu, 8 Jun 2023 14:15:33 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
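For context on the dsa_get_address() remark: the fragment below sketches the pointer-translation pattern using the existing DSA API (dsa_allocate()/dsa_get_address()). It is only a fragment meant to run inside a backend that already has a dsa_area created or attached; SharedThing and make_shared_thing() are invented for the example.

    #include "postgres.h"
    #include "utils/dsa.h"

    typedef struct SharedThing
    {
        int     nitems;
    } SharedThing;

    static dsa_pointer
    make_shared_thing(dsa_area *area)
    {
        /* dsa_allocate() returns a relative dsa_pointer, not an address */
        dsa_pointer dp = dsa_allocate(area, sizeof(SharedThing));

        /* each process must translate before dereferencing, because the
         * underlying DSM segments may be mapped at different addresses in
         * different processes -- the overhead a shared address space would
         * make unnecessary */
        SharedThing *thing = (SharedThing *) dsa_get_address(area, dp);

        thing->nitems = 0;
        return dp;              /* hand dp to other workers, not "thing" */
    }
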
{
"msg_contents": "On Thu, Jun 8, 2023 at 2:15 PM Matthias van de Meent\n<[email protected]> wrote:\n>\n> On Thu, 8 Jun 2023 at 11:54, Hannu Krosing <[email protected]> wrote:\n> >\n> > On Wed, Jun 7, 2023 at 11:37 PM Andres Freund <[email protected]> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On 2023-06-05 13:40:13 -0400, Jonathan S. Katz wrote:\n> > > > 2. While I wouldn't want to necessarily discourage a moonshot effort, I\n> > > > would ask if developer time could be better spent on tackling some of the\n> > > > other problems around vertical scalability? Per some PGCon discussions,\n> > > > there's still room for improvement in how PostgreSQL can best utilize\n> > > > resources available very large \"commodity\" machines (a 448-core / 24TB RAM\n> > > > instance comes to mind).\n> > >\n> > > I think we're starting to hit quite a few limits related to the process model,\n> > > particularly on bigger machines. The overhead of cross-process context\n> > > switches is inherently higher than switching between threads in the same\n> > > process - and my suspicion is that that overhead will continue to\n> > > increase. Once you have a significant number of connections we end up spending\n> > > a *lot* of time in TLB misses, and that's inherent to the process model,\n> > > because you can't share the TLB across processes.\n> >\n> >\n> > This part was touched in the \"AMA with a Linux Kernale Hacker\"\n> > Unconference session where he mentioned that the had proposed a\n> > 'mshare' syscall for this.\n> >\n> > So maybe a more fruitful way to fixing the perceived issues with\n> > process model is to push for small changes in Linux to overcome these\n> > avoiding a wholesale rewrite ?\n>\n> We support not just Linux, but also Windows and several (?) BSDs. I'm\n> not against pushing Linux to make things easier for us, but Linux is\n> an open source project, too, where someone need to put in time to get\n> the shiny things that you want. And I'd rather see our time spent in\n> PostgreSQL, as Linux is only used by a part of our user base.\n\nDo we have any statistics for the distribution of our user base ?\n\nMy gut feeling says that for performance-critical use the non-Linux is\nin low single digits at best.\n\nMy fascination for OpenSource started with realisation that instead of\nworkarounds you can actually fix the problem at source. So if the\nspecific problem is that TLB is not shared then the proper fix is\nmaking it shared instead of rewriting everything else to get around\nit. None of us is limited to writing code in PostgreSQL only. If the\neasiest and more generix fix can be done in Linux then so be it.\n\nIt is also possible that Windows and *BSD already have a similar feature.\n\n>\n> > > The amount of duplicated code we have to deal with due to to the process model\n> > > is quite substantial. We have local memory, statically allocated shared memory\n> > > and dynamically allocated shared memory variants for some things. 
And that's\n> > > just going to continue.\n> >\n> > Maybe we can already remove the distinction between static and dynamic\n> > shared memory ?\n>\n> That sounds like a bad idea, dynamic shared memory is more expensive\n> to maintain than our static shared memory systems, not in the least\n> because DSM is not guaranteed to share the same addresses in each\n> process' address space.\n\nThen this too needs to be fixed\n\n>\n> > Though I already heard some complaints at the conference discussions\n> > that having the dynamic version available has made some developers\n> > sloppy in using it resulting in wastefulness.\n>\n> Do you know any examples of this wastefulness?\n\nNo. Just somebody mentioned it in a hallway conversation and the rest\nof the developers present mumbled approvingly :)\n\n> > > > I'm purposely giving a nonanswer on whether it's a worthwhile goal, but\n> > > > rather I'd be curious where it could stack up against some other efforts to\n> > > > continue to help PostgreSQL improve performance and handle very large\n> > > > workloads.\n> > >\n> > > There's plenty of things we can do before, but in the end I think tackling the\n> > > issues you mention and moving to threads are quite tightly linked.\n> >\n> > Still we should be focusing our attention at solving the issues and\n> > not at \"moving to threads\" and hoping this will fix the issues by\n> > itself.\n>\n> I suspect that it is much easier to solve some of the issues when\n> working in a shared address space.\n\nProbably. But it would come at the cost of needing to change a lot of\nother parts of PostgreSQL.\n\nI am not against making code cleaner for potential threaded model\nsupport. I am just a bit sceptical about the actual switch being easy,\nor doable in the next 10-15 years.\n\n> E.g. resizing shared_buffers is difficult right now due to the use of\n> a static allocation of shared memory, but if we had access to a single\n> shared address space, it'd be easier to do any cleanup necessary for\n> dynamically increasing/decreasing its size.\n\nThis again could be done with shared memory mapping + dynamic shared memory.\n\n> Same with parallel workers - if we have a shared address space, the\n> workers can pass any sized objects around without being required to\n> move the tuples through DSM and waiting for the leader process to\n> empty that buffer when it gets full.\n\nLarger shared memory :)\n\nSame for shared plan cache and shared schema cache.\n\n> Sure, most of that is probably possible with DSM as well, it's just\n> that I see a lot more issues that you need to take care of when you\n> don't have a shared address space (such as the pointer translation we\n> do in dsa_get_address).\n\nAll of the above seem to point to the need of a single thing - having\nan option for shared memory mappings .\n\nSo let's focus on fixing things with minimal required change.\n\nAnd this would not have an adverse affect on systems that can not\nshare mapping, they just won't become faster. And thay are all welcome\nto add the option for shared mappings too if they see enough value in\nit.\n\nIt could sound like the same thing as threaded model, but should need\nmuch less changes and likely no changes for most out-of-tree\nextensions\n\n---\nCheers\nHannu\n\n\n",
"msg_date": "Thu, 8 Jun 2023 14:44:11 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 6:04 AM Hannu Krosing <[email protected]> wrote:\n> Here I was hoping to go in the opposite direction and support parallel\n> query across replicas.\n>\n> This looks much more doable based on the process model than the single\n> process / multiple threads model.\n\nI don't think this is any more or less difficult to support in one\nmodel vs. the other. The problems seem pretty much unrelated.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Jun 2023 09:38:16 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "\n\nOn 07.06.2023 3:53 PM, Robert Haas wrote:\n> I think I remember a previous conversation with Andres\n> where he opined that thread-local variables are \"really expensive\"\n> (and I apologize in advance if I'm mis-remembering this). Now, Andres\n> is not a man who accepts a tax on performance of any size without a\n> fight, so his \"really expensive\" might turn out to resemble my \"pretty\n> cheap.\" However, if widespread use of TLS is too expensive and we have\n> to start rewriting code to not depend on global variables, that's\n> going to be more of a problem. If we can get by with doing such\n> rewrites only in performance-critical places, it might not still be\n> too bad. Personally, I think the degree of dependence that PostgreSQL\n> has on global variables is pretty excessive and I don't think that a\n> certain amount of refactoring to reduce it would be a bad thing. If it\n> turns into an infinite series of hastily-written patches to rejigger\n> every source file we have, though, then I'm not really on board with\n> that.\n\nActually TLS not not more expensive then accessing struct fields (at \nleast at x86 platform), consider the following program:\n\ntypedef struct {\n int a;\n int b;\n int c;\n} ABC;\n\n__thread int a;\n__thread int b;\n__thread int c;\n\n\nvoid use_struct(ABC* abc) {\n abc->a += 1;\n abc->b += 1;\n abc->c += 1;\n}\n\nvoid use_tls(ABC* abc) {\n a += 1;\n b += 1;\n c += 1;\n}\n\n\nNow look at the generated assembler:\n\nuse_struct:\n addl $1, (%rdi)\n addl $1, 4(%rdi)\n addl $1, 8(%rdi)\n ret\n\n\nuse_tls:\n addl $1, %fs:a@tpoff\n addl $1, %fs:b@tpoff\n addl $1, %fs:c@tpoff\n ret\n\n> Heikki mentions the idea of having a central Session object and just\n> passing that around. I have a hard time believing that's going to work\n> out nicely. First, it's not extensible. Right now, if you need a bit\n> of additional session-local state, you just declare a variable and\n> you're all set. That's not a perfect system and does cause some\n> problems, but we can't go from there to a system where it's impossible\n> to add session-local state without hacking core. Second, we will be\n> sad if session.h ends up #including every other header file that\n> defines a data structure anywhere in the backend. Or at least I'll be\n> sad. I'm not actually against the idea of having some kind of session\n> object that we pass around, but I think it either needs to be limited\n> to a relatively small set of well-defined things, or else it needs to\n> be design in some kind of extensible way that doesn't require it to\n> know the full details of every sort of object that's being used as\n> session-local state anywhere in the system. I haven't really seen any\n> convincing design ideas around this yet.\n\n\nThere are about 2k static/global variables in Postgres.\nIt is almost impossible to maintain such struct.\nBut session context may be still needed for other purposes - if we want \nto support built-in connection pool.\n\nIf we are using threads, then all variables needs to be either \nthread-local, either access to them should be synchronized.\nBut If we want to save session context, then there is no need to \nsave/restore all this 2k variables.\nWe need to capture and these variables which lifetime exceeds \ntransaction boundary.\nThere are not so much such variables - tens not hundreds.\n\nThe question is how to better handle this \"session context\".\nThere are two alternatives:\n1. Save/restore this context from/to normal TLS variables.\n2. 
Replace such variables with access through the session context struct.\n\nI prefer 2) because it requires less changes in code.\nAnd performance overhead of session context store/resume is negligible \nwhen number of such variables is ~10.\n\n\n\n\n",
"msg_date": "Thu, 8 Jun 2023 16:47:48 +0300",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Wed, Jun 7, 2023 at 5:30 PM Andres Freund <[email protected]> wrote:\n> On 2023-06-05 17:51:57 +0300, Heikki Linnakangas wrote:\n> > If there are no major objections, I'm going to update the developer FAQ,\n> > removing the excuses there for why we don't use threads [1].\n>\n> I think we should do this even if there's no concensus to slowly change to\n> threads. There's clearly no concensus on the opposite either.\n\nThis is a very fair point.\n\n> One interesting bit around the transition is what tooling we ought to provide\n> to detect problems. It could e.g. be reasonably feasible to write something\n> checking how many read-write global variables an extension has on linux\n> systems.\n\nYes, this would be great.\n\n> I don't think the control file is the right place - that seems more like\n> something that should be signalled via PG_MODULE_MAGIC. We need to check this\n> not just during CREATE EXTENSION, but also during loading of libraries - think\n> of shared_preload_libraries.\n\n+1.\n\n> Yea, we definitely need the supervisor function in a separate\n> process. Presumably that means we need to split off some of the postmaster\n> responsibilities - e.g. I don't think it'd make sense to handle connection\n> establishment in the supervisor process. I wonder if this is something that\n> could end up being beneficial even in the process world.\n\nYeah, I've had similar thoughts. I'm not exactly sure what the\nadvantages of such a refactoring might be, but the current structure\nfeels pretty limiting. It works OK because we don't do anything in the\npostmaster other than fork a new backend, but I'm not sure if that's\nthe best strategy. It means, for example, that if there's a ton of new\nconnection requests, we're spawning a ton of new processes, which\nmeans that you can put a lot of load on a PostgreSQL instance even if\nyou can't authenticate. Maybe we'd be better off with a pool of\nprocesses accepting connections; if authentication fails, that\nconnection goes back into the pool and tries again. If authentication\nsucceeds, either that process transitions to being a regular backend,\nleaving the authentication pool, or perhaps hands off the connection\nto a \"real backend\" at that point and loops around to accept() the\nnext request.\n\nWhether that's a good ideal in detail or not, the point remains that\nhaving the postmaster handle this task is quite limiting. It forces us\nto hand off the connection to a new process at the earliest possible\nstage, so that the postmaster remains free to handle other duties.\nGiving the responsibility to another process would let us make\ndecisions about where to perform the hand-off based on real\narchitectural thought rather than being forced to do a certain way\nbecause nothing else will work.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Jun 2023 09:56:37 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Wed, Jun 7, 2023 at 5:37 PM Andres Freund <[email protected]> wrote:\n> I think we're starting to hit quite a few limits related to the process model,\n> particularly on bigger machines. The overhead of cross-process context\n> switches is inherently higher than switching between threads in the same\n> process - and my suspicion is that that overhead will continue to\n> increase. Once you have a significant number of connections we end up spending\n> a *lot* of time in TLB misses, and that's inherent to the process model,\n> because you can't share the TLB across processes.\n\nThis is a very good point.\n\nOur default posture on this mailing list is to try to maximize use of\nOS facilities rather than reimplementing things - well and good. But\nif a user writes a query with FOO JOIN BAR ON FOO.X = BAR.X OR FOO.Y =\nBAR.Y and then complains that the resulting query plan sucks, we don't\nslink off in embarrassment: we tell the user that there's not really\nany fast plan for that query and that if they write queries like that\nthey have to live with the consequences. But the same thing applies\nhere. To the extent that context switching between more processes is\nmore expensive than context switching between threads for\nhardware-related reasons, that's not something that the OS can fix for\nus. If we choose to do the expensive thing then we pay the overhead.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Jun 2023 10:08:57 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Wed, Jun 7, 2023 at 5:39 PM Peter Eisentraut <[email protected]> wrote:\n> On 07.06.23 23:30, Andres Freund wrote:\n> > Yea, we definitely need the supervisor function in a separate\n> > process. Presumably that means we need to split off some of the postmaster\n> > responsibilities - e.g. I don't think it'd make sense to handle connection\n> > establishment in the supervisor process. I wonder if this is something that\n> > could end up being beneficial even in the process world.\n>\n> Something to think about perhaps ... how would that be different from\n> using an existing external supervisor process like systemd or supervisord.\n\nsystemd wouldn't start individual PostgreSQL processes, right? If we\nwant a checkpointer and a wal writer and a background writer and\nwhatever we have to have our own supervisor process to spawn all those\nand keep them running. We could remove the logic to do a full system\nreset without a postmaster exit in favor of letting systemd restart\neverything from scratch, if we wanted to do that. But we'd still need\nour own supervisor to start up all of the individual threads/processes\nthat we need.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Jun 2023 10:15:02 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Wed, Jun 7, 2023 at 5:45 PM Andres Freund <[email protected]> wrote:\n> People have argued that the process model is more robust. But it turns out\n> that we have to crash-restart for just about any \"bad failure\" anyway. It used\n> to be (a long time ago) that we didn't, but that was just broken.\n\nHow hard have you thought about memory leaks as a failure mode? Or\nfile descriptor leaks?\n\nRight now, a process needs to release all of its shared resources\nbefore exiting, or trigger a crash-and-restart cycle. But it doesn't\nneed to release any process-local resources, because the OS will take\ncare of that. But that wouldn't be true any more, and that seems like\nit might require fixing quite a few things.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Jun 2023 10:17:04 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Wed, 7 Jun 2023 at 18:09, Andres Freund <[email protected]> wrote:\n> Having the same memory mapping between threads makes allows the\n> hardware to share the TLB (on x86 via process context identifiers), which\n> isn't realistically possible with different processes.\n\nAs a matter of historical interest Solaris actually did implement this\nacross different processes. It was called by the somewhat unfortunate\nname \"Intimate Shared Memory\". I don't think Linux ever implemented\nanything like it but I'm not sure.\n\nI think this was not so much about cache hit rate but about just sheer\nwasted memory in page mappings. So I guess hugepages more or less\ntarget the same issues. But I find it interesting that they were\nalready running into issues like this 20 years ago -- presumably those\nissues have only grown.\n\n-- \ngreg\n\n\n",
"msg_date": "Thu, 8 Jun 2023 10:33:26 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 8:44 AM Hannu Krosing <[email protected]> wrote:\n> > That sounds like a bad idea, dynamic shared memory is more expensive\n> > to maintain than our static shared memory systems, not in the least\n> > because DSM is not guaranteed to share the same addresses in each\n> > process' address space.\n>\n> Then this too needs to be fixed\n\nHonestly, I'm struggling to respond to this non-sarcastically. I mean,\nI was the one who implemented DSM. Do you think it works the way that\nit works because I considered doing something smart and decided to do\nsomething dumb instead?\n\nSuppose you have two PostgreSQL backends A and B. If we're not running\non Windows, each of these was forked from the postmaster, so things\nlike the text and data segments and the main shared memory segment are\ngoing to be mapped at the same address in both processes, because they\ninherit those mappings from the postmaster. However, additional things\ncan get mapped into the address space of either process later. This\ncan happen in a variety of ways. For instance, a shared library can\nget loaded into one process and not the other. Or it can get loaded\ninto both processes but at different addresses - keep in mind that\nit's the OS, not PostgreSQL, that decides what address to use when\nloading a shared library. Or, if one process allocates a bunch of\nmemory, then new address space will have to be mapped into that\nprocess to handle those memory allocations and, again, it is the OS\nthat decides where to put those mappings. So over time the memory\nmappings of these two processes can diverge arbitrarily. That means\nthat if the same DSM has to be mapped into both processes, there is no\nguarantee that it can be placed at the same address in both processes.\nThe address that gets used in one process might not be available in\nthe other process.\n\nIt's worth pointing out here that there are no portable primitives\navailable for a process to examine what memory segments are mapped\ninto its address space. I think it's probably possible on every OS,\nbut it works differently on different ones. Linux exposes such details\nthrough /proc, for example, but macOS doesn't have /proc. So if we're\nusing standard, portable primitives, we can't even TRY to put the DSM\nat the same address in every process that maps it. But even if we used\nnon-portable primitives to examine what's mapped into the address\nspace of every process, it wouldn't solve the problem. Suppose 4\nprocesses want to share a DSM, so they all run around and use\nnon-portable OS-specific interfaces to figure out where there's a free\nchunk of address space large enough to accommodate that DSM and they\nall map it there. Hooray! But then say a fifth process comes along and\nit ALSO wants to map that DSM, but in that fifth process the address\nspace that was available in the other four processes has already been\nused by something else. Well, now we're screwed.\n\nThe fact that DSM is expensive and awkward to use isn't a defect in\nthe implementation of DSM. It's a consequence of the fact that the\naddress space mappings in one PostgreSQL backend can be almost\narbitrarily different from the address space mappings in another\nPostgreSQL backend. 
If only there were some kind of OS feature\navailable that would allow us to set things up so that all of the\nPostgreSQL backends shared the same address space mappings!\n\nOh, right, there is: THREADS.\n\nThe fact that we don't use threads is the reason why DSM sucks and has\nto suck. In fact it's the reason why DSM has to exist at all. Saying\n\"fix DSM instead of using threads\" is roughly in the same category as\nsaying \"if the peasants are revolting because they have no bread, then\nlet them eat cake.\" Both statements evince a complete failure to\nunderstand the actual source of the problem.\n\nWith apologies for my grumpiness,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Jun 2023 10:56:32 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 4:56 PM Robert Haas <[email protected]> wrote:\n>\n> On Thu, Jun 8, 2023 at 8:44 AM Hannu Krosing <[email protected]> wrote:\n> > > That sounds like a bad idea, dynamic shared memory is more expensive\n> > > to maintain than our static shared memory systems, not in the least\n> > > because DSM is not guaranteed to share the same addresses in each\n> > > process' address space.\n> >\n> > Then this too needs to be fixed\n>\n> Honestly, I'm struggling to respond to this non-sarcastically. I mean,\n> I was the one who implemented DSM. Do you think it works the way that\n> it works because I considered doing something smart and decided to do\n> something dumb instead?\n\nNo, I meant that this needs to be fixed at OS level, by being able to\nuse the same mapping.\n\nWe should not shy away from asking the OS people for adding the useful\nfeatures still missing.\n\nIt was mentioned in the Unconference Kernel Hacker AMA talk and said\nkernel hacker works for Oracle, andf they also seemed to be needing\nthis :)\n\n\n",
"msg_date": "Thu, 8 Jun 2023 17:02:08 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Thu, 8 Jun 2023 at 14:44, Hannu Krosing <[email protected]> wrote:\n>\n> On Thu, Jun 8, 2023 at 2:15 PM Matthias van de Meent\n> <[email protected]> wrote:\n> >\n> > On Thu, 8 Jun 2023 at 11:54, Hannu Krosing <[email protected]> wrote:\n> > >\n> > > This part was touched in the \"AMA with a Linux Kernale Hacker\"\n> > > Unconference session where he mentioned that the had proposed a\n> > > 'mshare' syscall for this.\n> > >\n> > > So maybe a more fruitful way to fixing the perceived issues with\n> > > process model is to push for small changes in Linux to overcome these\n> > > avoiding a wholesale rewrite ?\n> >\n> > We support not just Linux, but also Windows and several (?) BSDs. I'm\n> > not against pushing Linux to make things easier for us, but Linux is\n> > an open source project, too, where someone need to put in time to get\n> > the shiny things that you want. And I'd rather see our time spent in\n> > PostgreSQL, as Linux is only used by a part of our user base.\n>\n> Do we have any statistics for the distribution of our user base ?\n>\n> My gut feeling says that for performance-critical use the non-Linux is\n> in low single digits at best.\n>\n> My fascination for OpenSource started with realisation that instead of\n> workarounds you can actually fix the problem at source. So if the\n> specific problem is that TLB is not shared then the proper fix is\n> making it shared instead of rewriting everything else to get around\n> it. None of us is limited to writing code in PostgreSQL only. If the\n> easiest and more generix fix can be done in Linux then so be it.\n\nTLB is a CPU hardware facility, not something that the OS can decide\nto share between processes. While sharing (some) OS memory management\nfacilities across threads might be possible (as you mention, that\nmshare syscall would be an example), that doesn't solve the issue of\nthe hardware not supporting sharing TLB entries across processes. We'd\nuse less kernel memory for memory management, but the CPU would still\nstall on TLB misses every time we switch processes on the CPU (unless\nwe somehow were able to use non-process-namespaced TLB entries, which\nwould make our processes not meaningfully different from threads\nw.r.t. address space).\n\n> > >\n> > > Maybe we can already remove the distinction between static and dynamic\n> > > shared memory ?\n> >\n> > That sounds like a bad idea, dynamic shared memory is more expensive\n> > to maintain than our static shared memory systems, not in the least\n> > because DSM is not guaranteed to share the same addresses in each\n> > process' address space.\n>\n> Then this too needs to be fixed\n\nThat needs kernel facilities in all (most?) 
supported OSes, and I\nthink that's much more work than moving to threads:\nAllocations from the kernel are arbitrarily random across the\navailable address space, so a DSM segment that is allocated in one\nbackend might overlap with unshared allocations of a different\nbackend, making those backends have conflicting memory address spaces.\nThe only way to make that work is to have a shared memory addressing\nspace, but some backends just not having the allocation mapped into\ntheir local address space; which seems only slightly more isolated\nthan threads and much more effort to maintain.\n\n> > > Though I already heard some complaints at the conference discussions\n> > > that having the dynamic version available has made some developers\n> > > sloppy in using it resulting in wastefulness.\n> >\n> > Do you know any examples of this wastefulness?\n>\n> No. Just somebody mentioned it in a hallway conversation and the rest\n> of the developers present mumbled approvingly :)\n\nThe only \"wastefulness\" that I know of in our use of DSM is the queue,\nand that's by design: We need to move data from a backend's private\nmemory to memory that's accessible to other backends; i.e. shared\nmemory. You can't do that without copying or exposing your private\nmemory.\n\n> > > Still we should be focusing our attention at solving the issues and\n> > > not at \"moving to threads\" and hoping this will fix the issues by\n> > > itself.\n> >\n> > I suspect that it is much easier to solve some of the issues when\n> > working in a shared address space.\n>\n> Probably. But it would come at the cost of needing to change a lot of\n> other parts of PostgreSQL.\n>\n> I am not against making code cleaner for potential threaded model\n> support. I am just a bit sceptical about the actual switch being easy,\n> or doable in the next 10-15 years.\n\nPostgreSQL only has a support cycle of 5 years. 5 years after the last\nrelease of un-threaded PostgreSQL we could drop support for \"legacy\"\nextension models that don't support threading.\n\n> > E.g. 
resizing shared_buffers is difficult right now due to the use of\n> > a static allocation of shared memory, but if we had access to a single\n> > shared address space, it'd be easier to do any cleanup necessary for\n> > dynamically increasing/decreasing its size.\n>\n> This again could be done with shared memory mapping + dynamic shared memory.\n\nYes, but as I said, that's much more difficult than lock and/or atomic\noperations on shared-between-backends static variables, because if\nthese variables aren't in shared memory you need to pass the messages\nto update the variables to all backends.\n\n> > Same with parallel workers - if we have a shared address space, the\n> > workers can pass any sized objects around without being required to\n> > move the tuples through DSM and waiting for the leader process to\n> > empty that buffer when it gets full.\n>\n> Larger shared memory :)\n>\n> Same for shared plan cache and shared schema cache.\n\nShared memory in processes is not free, if only because the TLB gets\nsaturated much faster.\n\n> > Sure, most of that is probably possible with DSM as well, it's just\n> > that I see a lot more issues that you need to take care of when you\n> > don't have a shared address space (such as the pointer translation we\n> > do in dsa_get_address).\n>\n> All of the above seem to point to the need of a single thing - having\n> an option for shared memory mappings .\n>\n> So let's focus on fixing things with minimal required change.\n\nThat seems logical, but not all kernels support dynamic shared memory\nmappings. And, as for your suggested solution, I couldn't find much\ninfo on this mshare syscall (or its successor mmap/VM_SHARED_PT), nor\non whether it would actually fix the TLB issue.\n\n> And this would not have an adverse affect on systems that can not\n> share mapping, they just won't become faster. And thay are all welcome\n> to add the option for shared mappings too if they see enough value in\n> it.\n>\n> It could sound like the same thing as threaded model, but should need\n> much less changes and likely no changes for most out-of-tree\n> extensions\n\nWe can't expect the kernel to fix everything for us - that's what we\nbuild PostgreSQL for. Where possible, we do want to rely on OS\nprimitives, but I'm not sure that it would be easy to share memory\naddress mappings across backends, for reasons including the above\n(\"That needs kernel facilities in all [...] more effort to maintain\").\n\nKind regards,\n\nMatthias van de Meent\nNeon, Inc.\n\n\n",
"msg_date": "Thu, 8 Jun 2023 17:08:16 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 2023-06-08 10:33:26 -0400, Greg Stark wrote:\n> On Wed, 7 Jun 2023 at 18:09, Andres Freund <[email protected]> wrote:\n> > Having the same memory mapping between threads makes allows the\n> > hardware to share the TLB (on x86 via process context identifiers), which\n> > isn't realistically possible with different processes.\n> \n> As a matter of historical interest Solaris actually did implement this\n> across different processes. It was called by the somewhat unfortunate\n> name \"Intimate Shared Memory\". I don't think Linux ever implemented\n> anything like it but I'm not sure.\n\nI don't think it shared the TLB - it did share page tables though.\n\n\n",
"msg_date": "Thu, 8 Jun 2023 08:54:01 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Thu, 8 Jun 2023 at 17:02, Hannu Krosing <[email protected]> wrote:\n>\n> On Thu, Jun 8, 2023 at 4:56 PM Robert Haas <[email protected]> wrote:\n> >\n> > On Thu, Jun 8, 2023 at 8:44 AM Hannu Krosing <[email protected]> wrote:\n> > > > That sounds like a bad idea, dynamic shared memory is more expensive\n> > > > to maintain than our static shared memory systems, not in the least\n> > > > because DSM is not guaranteed to share the same addresses in each\n> > > > process' address space.\n> > >\n> > > Then this too needs to be fixed\n> >\n> > Honestly, I'm struggling to respond to this non-sarcastically. I mean,\n> > I was the one who implemented DSM. Do you think it works the way that\n> > it works because I considered doing something smart and decided to do\n> > something dumb instead?\n>\n> No, I meant that this needs to be fixed at OS level, by being able to\n> use the same mapping.\n>\n> We should not shy away from asking the OS people for adding the useful\n> features still missing.\n\nWhile I agree that \"sharing page tables across processes\" is useful,\nit looks like it'd be much more effort to correctly implement for e.g.\nDSM than implementing threading.\nKonstantin's diff is \"only\" 20.1k lines [0] added and/or modified,\nwhich is a lot, but it's manageable (13k+ of which are from files that\nwere auto-generated and then committed, likely accidentally).\n\n> It was mentioned in the Unconference Kernel Hacker AMA talk and said\n> kernel hacker works for Oracle, andf they also seemed to be needing\n> this :)\n\nThough these new kernel features allowing for better performance\n(mostly in kernel memory usage, probably) would be nice to have, we\nwouldn't get performance benefits for older kernels, benefits which we\nwould get if we were to implement threading.\nI'm not on board with a policy of us twiddling thumbs and waiting for\nthe OS to fix our architectural performance issues. Sure, the kernel\ncould optimize for our usage pattern, but I think that's not something\nwe can (or should) rely on for performance ^1.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://github.com/postgrespro/postgresql.pthreads/compare/801386af...d5933309?w=1\n^1 OT: I think the same about us (ab)using the OS page cache, but\nthat's a tale for a different time and thread.\n\n\n",
"msg_date": "Thu, 8 Jun 2023 17:55:57 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 11:02 AM Hannu Krosing <[email protected]> wrote:\n> No, I meant that this needs to be fixed at OS level, by being able to\n> use the same mapping.\n>\n> We should not shy away from asking the OS people for adding the useful\n> features still missing.\n>\n> It was mentioned in the Unconference Kernel Hacker AMA talk and said\n> kernel hacker works for Oracle, andf they also seemed to be needing\n> this :)\n\nFair enough, but we aspire to work on a bunch of different operating\nsystems. To make use of an OS facility, we need something that works\non at least Linux, Windows, macOS, and a few different BSD flavors.\nIt's not as if when the PostgreSQL project asks for a new operating\nsystem facility everyone springs into action to provide it\nimmediately. And even if they did, and even if they all released an\nimplementation of whatever we requested next year, it would still be\nat least five, more realistically ten, years before systems with those\nfacilities were ubiquitous. And unless we have truly obscene amounts\nof clout in the OS community, it's likely that all of those different\noperating systems would implement different things to meet the stated\nneed, and then we'd have to have a complex bunch of platform-dependent\ncode in order to keep working on all of those systems.\n\nTo me, this is a road to nowhere. I have no problem at all with us\nexpressing our needs to the OS community, but realistically, any\nPostgreSQL feature that depends on an OS feature less than twenty\nyears old is going to have to be optional, which means that if we want\nto do anything about sharing address space mappings in the next few\nyears, it's going to need to be based on threads.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Jun 2023 11:56:13 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 2023-06-08 14:01:16 +0200, Jose Luis Tallon wrote:\n> * For \"heavyweight\" queries, the scalability of \"almost independent\"\n> processes w.r.t. NUMA is just _impossible to achieve_ (locality of\n> reference!) with a pure threaded system. When CPU+mem-bound\n> (bandwidth-wise), threads add nothing IMO.\n\nI don't think this is true in any sort of way.\n\n\n",
"msg_date": "Thu, 8 Jun 2023 08:57:15 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-08 12:15:58 +0200, Hannu Krosing wrote:\n> On Thu, Jun 8, 2023 at 11:54 AM Hannu Krosing <[email protected]> wrote:\n> >\n> > On Wed, Jun 7, 2023 at 11:37 PM Andres Freund <[email protected]> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On 2023-06-05 13:40:13 -0400, Jonathan S. Katz wrote:\n> > > > 2. While I wouldn't want to necessarily discourage a moonshot effort, I\n> > > > would ask if developer time could be better spent on tackling some of the\n> > > > other problems around vertical scalability? Per some PGCon discussions,\n> > > > there's still room for improvement in how PostgreSQL can best utilize\n> > > > resources available very large \"commodity\" machines (a 448-core / 24TB RAM\n> > > > instance comes to mind).\n> > >\n> > > I think we're starting to hit quite a few limits related to the process model,\n> > > particularly on bigger machines. The overhead of cross-process context\n> > > switches is inherently higher than switching between threads in the same\n> > > process - and my suspicion is that that overhead will continue to\n> > > increase. Once you have a significant number of connections we end up spending\n> > > a *lot* of time in TLB misses, and that's inherent to the process model,\n> > > because you can't share the TLB across processes.\n> >\n> >\n> > This part was touched in the \"AMA with a Linux Kernale Hacker\"\n> > Unconference session where he mentioned that the had proposed a\n> > 'mshare' syscall for this.\n\nAs-is that'd just lead to sharing page table, not the TLB. I don't think you\ncurrently do sharing of the TLB for parts of your address space on x86\nhardware. It's possible that something like that gets added to future\nhardware, but ...\n\n\n> Also, the *static* huge pages already let you solve this problem now\n> by sharing the page tables\n\nYou don't share the page tables with huge pages on linux.\n\n\n- Andres\n\n\n",
"msg_date": "Thu, 8 Jun 2023 09:00:02 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 8:44 AM Hannu Krosing <[email protected]> wrote:\n\n> Do we have any statistics for the distribution of our user base ?\n>\n> My gut feeling says that for performance-critical use the non-Linux is\n> in low single digits at best.\n>\n\nStats are probably not possible, but based on years of consulting, as well\nas watching places like SO, Slack, IRC, etc. over the years, IMO that's a\nvery accurate gut feeling. I'd hazard 1% or less for non-Linux systems.\n\nCheers,\nGreg\n\nOn Thu, Jun 8, 2023 at 8:44 AM Hannu Krosing <[email protected]> wrote:Do we have any statistics for the distribution of our user base ?\n\nMy gut feeling says that for performance-critical use the non-Linux is\nin low single digits at best.Stats are probably not possible, but based on years of consulting, as well as watching places like SO, Slack, IRC, etc. over the years, IMO that's a very accurate gut feeling. I'd hazard 1% or less for non-Linux systems.Cheers,Greg",
"msg_date": "Thu, 8 Jun 2023 12:05:21 -0400",
"msg_from": "Greg Sabino Mullane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-08 16:47:48 +0300, Konstantin Knizhnik wrote:\n> Actually TLS not not more expensive then accessing struct fields (at least\n> at x86 platform), consider the following program:\n\nIt really depends on the OS and the architecture, not just the\narchitecture. And even on x86-64 Linux, the fact that you're using the segment\noffset in the address calculation means you can't use the more complicated\naddressing modes for other reasons. And plenty instructions, e.g. most (?) SSE\ninstructions, won't be able to use that kind of addressing directly.\n\nEven just compiling your, example you can see that with gcc -O2 you get\nconsiderably faster code with the non-TLS version.\n\nAs a fairly extreme example, here's the mingw -O3 compiled code:\n\nuse_struct:\n movq xmm1, QWORD PTR .LC0[rip]\n movq xmm0, QWORD PTR [rcx]\n add DWORD PTR 8[rcx], 1\n paddd xmm0, xmm1\n movq QWORD PTR [rcx], xmm0\n ret\nuse_tls:\n sub rsp, 40\n lea rcx, __emutls_v.a[rip]\n call __emutls_get_address\n lea rcx, __emutls_v.b[rip]\n add DWORD PTR [rax], 1\n call __emutls_get_address\n lea rcx, __emutls_v.c[rip]\n add DWORD PTR [rax], 1\n call __emutls_get_address\n add DWORD PTR [rax], 1\n add rsp, 40\n ret\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 8 Jun 2023 09:59:58 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Wed, Jun 07, 2023 at 10:26:07AM +1200, Thomas Munro wrote:\n> On Tue, Jun 6, 2023 at 6:52???AM Andrew Dunstan <[email protected]> wrote:\n> > If we were starting out today we would probably choose a threaded implementation. But moving to threaded now seems to me like a multi-year-multi-person project with the prospect of years to come chasing bugs and the prospect of fairly modest advantages. The risk to reward doesn't look great.\n> >\n> > That's my initial reaction. I could be convinced otherwise.\n> \n> Here is one thing I often think about when contemplating threads.\n> Take a look at dsa.c. It calls itself a shared memory allocator, but\n> really it has two jobs, the second being to provide software emulation\n> of virtual memory. That???s behind dshash.c and now the stats system,\n> and various parts of the parallel executor code. It???s slow and\n> complicated, and far from the state of the art. I wrote that code\n> (building on allocator code from Robert) with the expectation that it\n> was a transitional solution to unblock a bunch of projects. I always\n> expected that we'd eventually be deleting it. When I explain that\n> subsystem to people who are not steeped in the lore of PostgreSQL, it\n> sounds completely absurd. I mean, ... it is, right? My point is\n\n Isn't all the memory operations would require nearly the same\nshared memory allocators if someone switches to a threaded imple-\nmentation?\n\n> that we???re doing pretty unreasonable and inefficient contortions to\n> develop new features -- we're not just happily chugging along without\n> threads at no cost.\n> \n\n\n",
"msg_date": "Thu, 8 Jun 2023 20:02:46 +0300",
"msg_from": "Ilya Anfimov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "I discovered this thread from a Twitter post \"PostgreSQL will finally\nbe rewritten in Rust\" :)\n\nOn Mon, Jun 5, 2023 at 5:18 PM Tom Lane <[email protected]> wrote:\n>\n> Heikki Linnakangas <[email protected]> writes:\n> > I spoke with some folks at PGCon about making PostgreSQL multi-threaded,\n> > so that the whole server runs in a single process, with multiple\n> > threads. It has been discussed many times in the past, last thread on\n> > pgsql-hackers was back in 2017 when Konstantin made some experiments [0].\n>\n> > I feel that there is now pretty strong consensus that it would be a good\n> > thing, more so than before. Lots of work to get there, and lots of\n> > details to be hashed out, but no objections to the idea at a high level.\n>\n> > The purpose of this email is to make that silent consensus explicit. If\n> > you have objections to switching from the current multi-process\n> > architecture to a single-process, multi-threaded architecture, please\n> > speak up.\n>\n> For the record, I think this will be a disaster. There is far too much\n> code that will get broken, largely silently, and much of it is not\n> under our control.\n>\n> regards, tom lane\n>\n>\n\n\n",
"msg_date": "Thu, 8 Jun 2023 19:07:48 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-08 17:02:08 +0200, Hannu Krosing wrote:\n> On Thu, Jun 8, 2023 at 4:56 PM Robert Haas <[email protected]> wrote:\n> >\n> > On Thu, Jun 8, 2023 at 8:44 AM Hannu Krosing <[email protected]> wrote:\n> > > > That sounds like a bad idea, dynamic shared memory is more expensive\n> > > > to maintain than our static shared memory systems, not in the least\n> > > > because DSM is not guaranteed to share the same addresses in each\n> > > > process' address space.\n> > >\n> > > Then this too needs to be fixed\n> >\n> > Honestly, I'm struggling to respond to this non-sarcastically. I mean,\n> > I was the one who implemented DSM. Do you think it works the way that\n> > it works because I considered doing something smart and decided to do\n> > something dumb instead?\n>\n> No, I meant that this needs to be fixed at OS level, by being able to\n> use the same mapping.\n>\n> We should not shy away from asking the OS people for adding the useful\n> features still missing.\n\nThere's a large part of this that is about hardware, not software. And\nhonestly, for most of the problems the answer is to just use threads. Adding\ncomplexity to operating systems to make odd architectures like postgres'\nbetter is a pretty dubious proposition.\n\nI don't think we have even remotely enough influence on CPU design to make\ne.g. *partial* TLB sharing across processes a thing.\n\n\n> It was mentioned in the Unconference Kernel Hacker AMA talk and said\n> kernel hacker works for Oracle, andf they also seemed to be needing\n> this :)\n\nThe proposals around that don't really help us all that much. Sharing the page\ntable will be a bit more efficient, but it won't really change anything\ndramatically. From what I understand they are primarily interested in\nchanging properties of a memory mapping across multiple processes, e.g. making\nsome memory executable and have that reflected in all processes. I don't think\nthis will help us much.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 8 Jun 2023 11:41:00 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-08 17:55:57 +0200, Matthias van de Meent wrote:\n> While I agree that \"sharing page tables across processes\" is useful,\n> it looks like it'd be much more effort to correctly implement for e.g.\n> DSM than implementing threading.\n> Konstantin's diff is \"only\" 20.1k lines [0] added and/or modified,\n> which is a lot, but it's manageable (13k+ of which are from files that\n> were auto-generated and then committed, likely accidentally).\n\nHonestly, I don't think this patch is in a good enough state to allow a\nrealistic estimation of the overall work. Making global variables TLS is the\n*easy* part. Redesigning postmaster, definining how to deal with extension\nlibraries, extension compatibility, developing tools to make developing a\nthreaded postgres feasible, dealing with freeing session lifetime memory\nallocations that previously were freed via process exit, making the change\nrealistically reviewable, portability are all much harder.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 8 Jun 2023 11:48:48 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-08 11:56:13 -0400, Robert Haas wrote:\n> On Thu, Jun 8, 2023 at 11:02 AM Hannu Krosing <[email protected]> wrote:\n> > No, I meant that this needs to be fixed at OS level, by being able to\n> > use the same mapping.\n> >\n> > We should not shy away from asking the OS people for adding the useful\n> > features still missing.\n> >\n> > It was mentioned in the Unconference Kernel Hacker AMA talk and said\n> > kernel hacker works for Oracle, andf they also seemed to be needing\n> > this :)\n> \n> Fair enough, but we aspire to work on a bunch of different operating\n> systems. To make use of an OS facility, we need something that works\n> on at least Linux, Windows, macOS, and a few different BSD flavors.\n> It's not as if when the PostgreSQL project asks for a new operating\n> system facility everyone springs into action to provide it\n> immediately. And even if they did, and even if they all released an\n> implementation of whatever we requested next year, it would still be\n> at least five, more realistically ten, years before systems with those\n> facilities were ubiquitous.\n\nI'm less concerned about this aspect - most won't have upgraded to a version\nof postgres that benefit from threaded postgres in a similar timeframe. And if\nthe benefits are large enough, people will move. But:\n\n\n> And unless we have truly obscene amounts of clout in the OS community, it's\n> likely that all of those different operating systems would implement\n> different things to meet the stated need, and then we'd have to have a\n> complex bunch of platform-dependent code in order to keep working on all of\n> those systems.\n\nAnd even more likely, they just won't do anything, because it's a model that\nlarge parts of the industry have decided isn't going anywhere. It'd be one\nthing if we had 5 kernel devs that we could deploy to work on this, but we\ndon't. So we have to convince kernel devs employed by others that somehow this\nis an urgent enough thing that they should work on it. The likely, imo\njustified, answer is just going to be: Fix your architecture, then we can\ntalk.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 8 Jun 2023 11:54:44 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Fri, Jun 9, 2023 at 5:02 AM Ilya Anfimov <[email protected]> wrote:\n> Isn't all the memory operations would require nearly the same\n> shared memory allocators if someone switches to a threaded imple-\n> mentation?\n\nIt's true that we'd need concurrency-aware MemoryContext\nimplementations (details can be debated), but we wouldn't need that\naddress translation layer, which adds a measurable cost at every\naccess.\n\n\n",
"msg_date": "Fri, 9 Jun 2023 07:10:35 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 8/6/23 15:56, Robert Haas wrote:\n> Yeah, I've had similar thoughts. I'm not exactly sure what the\n> advantages of such a refactoring might be, but the current structure\n> feels pretty limiting. It works OK because we don't do anything in the\n> postmaster other than fork a new backend, but I'm not sure if that's\n> the best strategy. It means, for example, that if there's a ton of new\n> connection requests, we're spawning a ton of new processes, which\n> means that you can put a lot of load on a PostgreSQL instance even if\n> you can't authenticate. Maybe we'd be better off with a pool of\n> processes accepting connections; if authentication fails, that\n> connection goes back into the pool and tries again.\n\n This. It's limited by connection I/O, hence a perfect use for \nthreads (minimize per-connection overhead).\n\nIMV, \"session state\" would be best stored/managed here. Would need a way \nto convey it efficiently, though.\n\n> If authentication\n> succeeds, either that process transitions to being a regular backend,\n> leaving the authentication pool, or perhaps hands off the connection\n> to a \"real backend\" at that point and loops around to accept() the\n> next request.\n\nNicely done by passing the FD around....\n\nBut at this point, we'd just get a nice reimplementation of a threaded \nconnection pool inside Postgres :\\\n\n> Whether that's a good ideal in detail or not, the point remains that\n> having the postmaster handle this task is quite limiting. It forces us\n> to hand off the connection to a new process at the earliest possible\n> stage, so that the postmaster remains free to handle other duties.\n> Giving the responsibility to another process would let us make\n> decisions about where to perform the hand-off based on real\n> architectural thought rather than being forced to do a certain way\n> because nothing else will work.\n\nAt least \"tcop\" surely feels like belonging in a separate process ....\n\n\n J.L.\n\n\n\n",
"msg_date": "Thu, 8 Jun 2023 21:30:28 +0200",
"msg_from": "Jose Luis Tallon <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Fri, Jun 9, 2023 at 4:00 AM Andres Freund <[email protected]> wrote:\n> On 2023-06-08 12:15:58 +0200, Hannu Krosing wrote:\n> > > This part was touched in the \"AMA with a Linux Kernale Hacker\"\n> > > Unconference session where he mentioned that the had proposed a\n> > > 'mshare' syscall for this.\n>\n> As-is that'd just lead to sharing page table, not the TLB. I don't think you\n> currently do sharing of the TLB for parts of your address space on x86\n> hardware. It's possible that something like that gets added to future\n> hardware, but ...\n\nI wasn't in Mathew Wilcox's unconference in Ottawa but I found an\nolder article on LWN:\n\nhttps://lwn.net/Articles/895217/\n\nFor what it's worth, FreeBSD hackers have studied this topic too (and\nit's been done in Android and no doubt other systems before):\n\nhttps://www.cs.rochester.edu/u/sandhya/papers/ispass19.pdf\n\nI've shared that paper on this list before in the context of\nsuper/huge pages and their benefits (to executable code, and to the\nbuffer pool), but a second topic in that paper is the idea of a shared\npage table: \"We find that sharing PTPs across different processes can\nreduce execution cycles by as much as 6.9%. Moreover, the combined\neffects of using superpages to map the main executable and sharing\nPTPs for the small shared libraries can reduce execution cycles up to\n18.2%.\" And that's just part of it, because those guys are more\ninterested in shared code/libraries and such so that's probably not\neven getting to the stuff like buffer pool and DSMs that we might tend\nto think of first.\n\nI'm pretty sure PostgreSQL (along with another fork-based RDBMSs\nmentioned in this thread) must be one of the worst offenders for page\ntable bloat, simply because we can have a lot of processes and touch a\nlot of memory.\n\nI'm no expert in this stuff, but it seems to be that with shared page\ntable schemes you can avoid wasting huge amounts of RAM on duplicated\npage table entries (pages * processes), and with huge/super pages you\ncan reduce the number of pages, but AFAIK you still can't escape the\nTLB shootdown cost, which is all-or-nothing (PCID level at best). The\nonly way to avoid TLB shootdowns on context switches is to have\n*exactly the same memory map*. Or, as Robert succinctly shouted,\n\"THREADS\".\n\n\n",
"msg_date": "Fri, 9 Jun 2023 07:34:49 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "> On Mon, Jun 05, 2023 at 06:43:54PM +0300, Heikki Linnakangas wrote:\n> On 05/06/2023 11:28, Tristan Partin wrote:\n> > > # Exposed PIDs\n> > >\n> > > We expose backend process PIDs to users in a few places.\n> > > pg_stat_activity.pid and pg_terminate_backend(), for example. They need\n> > > to be replaced, or we can assign a fake PID to each connection when\n> > > running in multi-threaded mode.\n> >\n> > Would it be possible to just transparently slot in the thread ID\n> > instead?\n>\n> Perhaps. It might break applications that use the PID directly with e.g.\n> 'kill <PID>', though.\n\nI think things are getting more interesting if some external resource\naccounting like cgroups is taking place. From what I know cgroup v2 has\nonly few controllers that allow threaded granularity, and memory or io\ncontrollers are not part of this list. Since Postgres is doing quite a\nlot of different things, sometimes it makes sense to put different\nlimitations on different types of activity, e.g. to give more priority\nto a certain critical internal job on the account of slowing down\nbackends. In the end it might be complicated or not possible to do that\nfor individual threads. Such cases are probably not very important from\nthe high level point of view, but could become an argument when deciding\nwhat should be a process and what should be a thread.\n\n\n",
"msg_date": "Thu, 8 Jun 2023 21:47:04 +0200",
"msg_from": "Dmitry Dolgov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Thu, 8 Jun 2023 at 13:08, Hannu Krosing <[email protected]> wrote:\n\n> I discovered this thread from a Twitter post \"PostgreSQL will finally\n> be rewritten in Rust\" :)\n>\n\nBy the time we got around to finishing this, there would be a better\nlanguage to write it in.\n\nDave\n\nOn Thu, 8 Jun 2023 at 13:08, Hannu Krosing <[email protected]> wrote:I discovered this thread from a Twitter post \"PostgreSQL will finally\nbe rewritten in Rust\" :)By the time we got around to finishing this, there would be a better language to write it in.Dave",
"msg_date": "Thu, 8 Jun 2023 15:59:29 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-09 07:34:49 +1200, Thomas Munro wrote:\n> I wasn't in Mathew Wilcox's unconference in Ottawa but I found an\n> older article on LWN:\n> \n> https://lwn.net/Articles/895217/\n> \n> For what it's worth, FreeBSD hackers have studied this topic too (and\n> it's been done in Android and no doubt other systems before):\n> \n> https://www.cs.rochester.edu/u/sandhya/papers/ispass19.pdf\n> \n> I've shared that paper on this list before in the context of\n> super/huge pages and their benefits (to executable code, and to the\n> buffer pool), but a second topic in that paper is the idea of a shared\n> page table: \"We find that sharing PTPs across different processes can\n> reduce execution cycles by as much as 6.9%. Moreover, the combined\n> effects of using superpages to map the main executable and sharing\n> PTPs for the small shared libraries can reduce execution cycles up to\n> 18.2%.\" And that's just part of it, because those guys are more\n> interested in shared code/libraries and such so that's probably not\n> even getting to the stuff like buffer pool and DSMs that we might tend\n> to think of first.\n\nI've experimented with using huge pages for executable code on linux, and the\nbenefits are quite noticable:\nhttps://www.postgresql.org/message-id/20221104212126.qfh3yzi7luvyy5d6%40awork3.anarazel.de\n\nI'm a bit dubious that sharing the page table for executable code increase the\nbenefit that much further in real workloads. I suspect the reason it was\ndifferent for the authors of the paper is:\n\n> A fixed number of back-to-back\n> transactions are performed on a 5GB database, and we use the\n> -C option of pgbench to toggle between reconnecting after\n> each transaction (reconnect mode) and using one persistent\n> connection per client (persistent connection mode). We use\n> the reconnect mode by default unless stated otherwise.\n\nUsing -C explains why you'd see a lot of benefit from sharing page tables for\nexecutable code. But I don't think -C is a particularly interesting workload\nto optimize for.\n\n\n> I'm no expert in this stuff, but it seems to be that with shared page\n> table schemes you can avoid wasting huge amounts of RAM on duplicated\n> page table entries (pages * processes), and with huge/super pages you\n> can reduce the number of pages, but AFAIK you still can't escape the\n> TLB shootdown cost, which is all-or-nothing (PCID level at best).\n\nPretty much that. While you can avoid some TLB shootdowns via PCIDs, that only\navoids flushing the TLB, it doesn't help with the TLB hit rate being much\nlower due to the number of \"redundant\" mappings with different PCIDs.\n\n\n> The only way to avoid TLB shootdowns on context switches is to have *exactly\n> the same memory map*. Or, as Robert succinctly shouted, \"THREADS\".\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 8 Jun 2023 13:26:34 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "This is an interesting message thread. I think in regards to the OP's call\nto make PG multi-threaded, there should be a clear and identifiable\nperformance target and use cases for the target. How much performance boost\ncan be expected, and if so, in which data application context? Will queries\nreturn faster for transactional use cases? analytic use cases? How much\ndata needs to be stored before one can observe the difference, or better\nyet, a difference with a measurable impact on reduced cloud compute costs\nas a % of compute cloud costs. I think if you can demonstrate for different\ntest datasets what those savings amount to you can either find momentum to\npursue it. Beyond that, even with better modern tooling for multi-threaded\ndevelopment, it's obviously a big lift (may well be worth it!). Some of us\ncagey old cats on this list (at least me) still have some work to do to\nshed the baggage that previous pain of MT dev has caused us. :-)\n\nCheers,\nSteve\n\nOn Thu, Jun 8, 2023 at 1:26 PM Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2023-06-09 07:34:49 +1200, Thomas Munro wrote:\n> > I wasn't in Mathew Wilcox's unconference in Ottawa but I found an\n> > older article on LWN:\n> >\n> > https://lwn.net/Articles/895217/\n> >\n> > For what it's worth, FreeBSD hackers have studied this topic too (and\n> > it's been done in Android and no doubt other systems before):\n> >\n> > https://www.cs.rochester.edu/u/sandhya/papers/ispass19.pdf\n> >\n> > I've shared that paper on this list before in the context of\n> > super/huge pages and their benefits (to executable code, and to the\n> > buffer pool), but a second topic in that paper is the idea of a shared\n> > page table: \"We find that sharing PTPs across different processes can\n> > reduce execution cycles by as much as 6.9%. Moreover, the combined\n> > effects of using superpages to map the main executable and sharing\n> > PTPs for the small shared libraries can reduce execution cycles up to\n> > 18.2%.\" And that's just part of it, because those guys are more\n> > interested in shared code/libraries and such so that's probably not\n> > even getting to the stuff like buffer pool and DSMs that we might tend\n> > to think of first.\n>\n> I've experimented with using huge pages for executable code on linux, and\n> the\n> benefits are quite noticable:\n>\n> https://www.postgresql.org/message-id/20221104212126.qfh3yzi7luvyy5d6%40awork3.anarazel.de\n>\n> I'm a bit dubious that sharing the page table for executable code increase\n> the\n> benefit that much further in real workloads. I suspect the reason it was\n> different for the authors of the paper is:\n>\n> > A fixed number of back-to-back\n> > transactions are performed on a 5GB database, and we use the\n> > -C option of pgbench to toggle between reconnecting after\n> > each transaction (reconnect mode) and using one persistent\n> > connection per client (persistent connection mode). We use\n> > the reconnect mode by default unless stated otherwise.\n>\n> Using -C explains why you'd see a lot of benefit from sharing page tables\n> for\n> executable code. 
But I don't think -C is a particularly interesting\n> workload\n> to optimize for.\n>\n>\n> > I'm no expert in this stuff, but it seems to be that with shared page\n> > table schemes you can avoid wasting huge amounts of RAM on duplicated\n> > page table entries (pages * processes), and with huge/super pages you\n> > can reduce the number of pages, but AFAIK you still can't escape the\n> > TLB shootdown cost, which is all-or-nothing (PCID level at best).\n>\n> Pretty much that. While you can avoid some TLB shootdowns via PCIDs, that\n> only\n> avoids flushing the TLB, it doesn't help with the TLB hit rate being much\n> lower due to the number of \"redundant\" mappings with different PCIDs.\n>\n>\n> > The only way to avoid TLB shootdowns on context switches is to have\n> *exactly\n> > the same memory map*. Or, as Robert succinctly shouted, \"THREADS\".\n>\n> +1\n>\n> Greetings,\n>\n> Andres Freund\n>\n>\n>\n\nThis is an interesting message thread. I think in regards to the OP's call to make PG multi-threaded, there should be a clear and identifiable performance target and use cases for the target. How much performance boost can be expected, and if so, in which data application context? Will queries return faster for transactional use cases? analytic use cases? How much data needs to be stored before one can observe the difference, or better yet, a difference with a measurable impact on reduced cloud compute costs as a % of compute cloud costs. I think if you can demonstrate for different test datasets what those savings amount to you can either find momentum to pursue it. Beyond that, even with better modern tooling for multi-threaded development, it's obviously a big lift (may well be worth it!). Some of us cagey old cats on this list (at least me) still have some work to do to shed the baggage that previous pain of MT dev has caused us. :-)Cheers,SteveOn Thu, Jun 8, 2023 at 1:26 PM Andres Freund <[email protected]> wrote:Hi,\n\nOn 2023-06-09 07:34:49 +1200, Thomas Munro wrote:\n> I wasn't in Mathew Wilcox's unconference in Ottawa but I found an\n> older article on LWN:\n> \n> https://lwn.net/Articles/895217/\n> \n> For what it's worth, FreeBSD hackers have studied this topic too (and\n> it's been done in Android and no doubt other systems before):\n> \n> https://www.cs.rochester.edu/u/sandhya/papers/ispass19.pdf\n> \n> I've shared that paper on this list before in the context of\n> super/huge pages and their benefits (to executable code, and to the\n> buffer pool), but a second topic in that paper is the idea of a shared\n> page table: \"We find that sharing PTPs across different processes can\n> reduce execution cycles by as much as 6.9%. Moreover, the combined\n> effects of using superpages to map the main executable and sharing\n> PTPs for the small shared libraries can reduce execution cycles up to\n> 18.2%.\" And that's just part of it, because those guys are more\n> interested in shared code/libraries and such so that's probably not\n> even getting to the stuff like buffer pool and DSMs that we might tend\n> to think of first.\n\nI've experimented with using huge pages for executable code on linux, and the\nbenefits are quite noticable:\nhttps://www.postgresql.org/message-id/20221104212126.qfh3yzi7luvyy5d6%40awork3.anarazel.de\n\nI'm a bit dubious that sharing the page table for executable code increase the\nbenefit that much further in real workloads. 
I suspect the reason it was\ndifferent for the authors of the paper is:\n\n> A fixed number of back-to-back\n> transactions are performed on a 5GB database, and we use the\n> -C option of pgbench to toggle between reconnecting after\n> each transaction (reconnect mode) and using one persistent\n> connection per client (persistent connection mode). We use\n> the reconnect mode by default unless stated otherwise.\n\nUsing -C explains why you'd see a lot of benefit from sharing page tables for\nexecutable code. But I don't think -C is a particularly interesting workload\nto optimize for.\n\n\n> I'm no expert in this stuff, but it seems to be that with shared page\n> table schemes you can avoid wasting huge amounts of RAM on duplicated\n> page table entries (pages * processes), and with huge/super pages you\n> can reduce the number of pages, but AFAIK you still can't escape the\n> TLB shootdown cost, which is all-or-nothing (PCID level at best).\n\nPretty much that. While you can avoid some TLB shootdowns via PCIDs, that only\navoids flushing the TLB, it doesn't help with the TLB hit rate being much\nlower due to the number of \"redundant\" mappings with different PCIDs.\n\n\n> The only way to avoid TLB shootdowns on context switches is to have *exactly\n> the same memory map*. Or, as Robert succinctly shouted, \"THREADS\".\n\n+1\n\nGreetings,\n\nAndres Freund",
"msg_date": "Thu, 8 Jun 2023 16:35:59 -0700",
"msg_from": "Stephan Doliov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "This is somewhat orthogonal to the topic of threading but relevant to the\nuse of resources.\n\nIf we are going to undertake some hard problems perhaps we should be\nlooking at other problems that solve other long term issues before we\ncommit to spending resources on changing the process model.\n\nOne thing I can think of is upgrading. AFAIK dump and restore is the only\nway to change the on disk format.\nPresuming that eventually we will be forced to change the on disk format it\nwould be nice to be able to do so in a manner which does not force long\ndown times\n\n Dave\n\n>\n>>\n\nThis is somewhat orthogonal to the topic of threading but relevant to the use of resources.If we are going to undertake some hard problems perhaps we should be looking at other problems that solve other long term issues before we commit to spending resources on changing the process model.One thing I can think of is upgrading. AFAIK dump and restore is the only way to change the on disk format. Presuming that eventually we will be forced to change the on disk format it would be nice to be able to do so in a manner which does not force long down times Dave",
"msg_date": "Fri, 9 Jun 2023 11:19:56 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Fri, 9 Jun 2023 at 17:20, Dave Cramer <[email protected]> wrote:\n>\n> This is somewhat orthogonal to the topic of threading but relevant to the use of resources.\n>\n> If we are going to undertake some hard problems perhaps we should be looking at other problems that solve other long term issues before we commit to spending resources on changing the process model.\n\n-1. This and that are orthogonal and effort in one does not need to\nblock the other. If someone is willing to put in the effort, let them.\nLast time I checked we, as a project, are not blocking bugfixes for\nnew features in MAIN either (or vice versa).\n\n> One thing I can think of is upgrading. AFAIK dump and restore is the only way to change the on disk format.\n> Presuming that eventually we will be forced to change the on disk format it would be nice to be able to do so in a manner which does not force long down times\n\nI agree that we should improve our upgrade process (and we had a great\ndiscussion on the topic at the PGCon Unconference last week), but in\nmy view that's not relevant to this discussion.\n\nKind regards,\n\nMatthias van de Meent\nNeon, Inc.\n\n\n",
"msg_date": "Fri, 9 Jun 2023 17:53:52 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Greetings,\n\n* Dave Cramer ([email protected]) wrote:\n> One thing I can think of is upgrading. AFAIK dump and restore is the only\n> way to change the on disk format.\n> Presuming that eventually we will be forced to change the on disk format it\n> would be nice to be able to do so in a manner which does not force long\n> down times\n\nThere is an ongoing effort moving in this direction. The $subject isn't\ngreat, but this patch set (which we are currently working on\nupdating...): https://commitfest.postgresql.org/43/3986/ attempts\nchanging a lot of currently compile-time block-size pieces to be\nrun-time which would open up the possibility to have a different page\nformat for, eg, different tablespaces. Possibly even different block\nsizes. We'd certainly welcome discussion from others who are\ninterested.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 9 Jun 2023 18:29:24 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Wed, Jun 7, 2023 at 06:38:38PM +0530, Ashutosh Bapat wrote:\n> With multiple processes, we can use all the available cores (at least\n> theoretically if all those processes are independent). But is that\n> guaranteed with single process multi-thread model? Google didn't throw\n> any definitive answer to that. Usually it depends upon the OS and\n> architecture.\n> \n> Maybe a good start is to start using threads instead of parallel\n> workers e.g. for parallel vacuum, parallel query and so on while\n> leaving the processes for connections and leaders. that itself might\n> take significant time. Based on that experience move to a completely\n> threaded model. Based on my experience with other similar products, I\n> think we will settle on a multi-process multi-thread model.\n\nI think we have a few known problem that we might be able to solve\nwithout threads, but can help us eventually move to threads if we find\nit useful:\n\n1) Use threads for background workers rather than processes\n2) Allow sessions to be stopped and started by saving their state\n\nIdeally we would solve the problem of making shared structures\nresizable, but I am not sure how that can be easily done without\nthreads.\n \n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 9 Jun 2023 19:55:16 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 11:37:00AM +1200, Thomas Munro wrote:\n> It's old, but this describes the 4 main models and which well known\n> RDBMSes use them in section 2.3:\n> \n> https://dsf.berkeley.edu/papers/fntdb07-architecture.pdf\n> \n> TL;DR DB2 is the winner, it can do process-per-connection,\n> thread-per-connection, process-pool or thread-pool.\n> \n> I understand this thread to be about thread-per-connection (= backend,\n> session, socket) for now.\n\nI am quite confused that few people seem to care about which model,\nprocesses or threads, is better for Oracle, and how having both methods\navailable can be a reasonable solution to maintain. Someone suggested\nthey abstracted the differences so the maintenance burden was minor, but\nthat seems very hard to me.\n\nDid these vendors start with processes, add threads, and then find that\nthreads had downsides so they had to keep both?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 9 Jun 2023 20:23:08 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Fri, 9 Jun 2023 at 18:29, Stephen Frost <[email protected]> wrote:\n\n> Greetings,\n>\n> * Dave Cramer ([email protected]) wrote:\n> > One thing I can think of is upgrading. AFAIK dump and restore is the only\n> > way to change the on disk format.\n> > Presuming that eventually we will be forced to change the on disk format\n> it\n> > would be nice to be able to do so in a manner which does not force long\n> > down times\n>\n> There is an ongoing effort moving in this direction. The $subject isn't\n> great, but this patch set (which we are currently working on\n> updating...): https://commitfest.postgresql.org/43/3986/ attempts\n> changing a lot of currently compile-time block-size pieces to be\n> run-time which would open up the possibility to have a different page\n> format for, eg, different tablespaces. Possibly even different block\n> sizes. We'd certainly welcome discussion from others who are\n> interested.\n>\n> Thanks,\n>\n> Stephen\n>\n\nUpgrading was just one example of difficult problems that need to be\naddressed.\nMy thought was that before we commit to something as potentially resource\nintensive as changing the threading model we compile a list of other \"big\nissues\" and prioritize.\n\nI realize open source is more of a scratch your itch kind of development\nmodel, but I'm not convinced the random walk that entails is the\nappropriate way to move forward. At the very least I'd like us to question\nit.\nDave\n\nOn Fri, 9 Jun 2023 at 18:29, Stephen Frost <[email protected]> wrote:Greetings,\n\n* Dave Cramer ([email protected]) wrote:\n> One thing I can think of is upgrading. AFAIK dump and restore is the only\n> way to change the on disk format.\n> Presuming that eventually we will be forced to change the on disk format it\n> would be nice to be able to do so in a manner which does not force long\n> down times\n\nThere is an ongoing effort moving in this direction. The $subject isn't\ngreat, but this patch set (which we are currently working on\nupdating...): https://commitfest.postgresql.org/43/3986/ attempts\nchanging a lot of currently compile-time block-size pieces to be\nrun-time which would open up the possibility to have a different page\nformat for, eg, different tablespaces. Possibly even different block\nsizes. We'd certainly welcome discussion from others who are\ninterested.\n\nThanks,\n\nStephenUpgrading was just one example of difficult problems that need to be addressed.My thought was that before we commit to something as potentially resource intensive as changing the threading model we compile a list of other \"big issues\" and prioritize.I realize open source is more of a scratch your itch kind of development model, but I'm not convinced the random walk that entails is the appropriate way to move forward. At the very least I'd like us to question it.Dave",
"msg_date": "Sat, 10 Jun 2023 07:20:53 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Mon, Jun 5, 2023 at 4:52 PM Heikki Linnakangas <[email protected]> wrote:\n>\n> If there are no major objections, I'm going to update the developer FAQ,\n> removing the excuses there for why we don't use threads [1].\n\nI think it is not wise to start the wholesale removal of the objections there.\n\nBut I think it is worthwhile to revisit the section about threads and\nmaybe split out the historic part which is no more true, and provide\nboth pros and cons for these.\n\nI started with this short summary from the discussion in this thread,\nfeel free to expand, argue, fix :)\n* is current excuse\n-- is counterargument or ack\n----------------\nAs an example, threads are not yet used instead of multiple processes\nfor backends because:\n* Historically, threads were poorly supported and buggy.\n-- yes they were, not relevant now when threads are well-supported and non-buggy\n\n* An error in one backend can corrupt other backends if they're\nthreads within a single process\n-- still valid for silent corruption\n-- for detected crash - yes, but we are restarting all backends in\ncase of crash anyway.\n\n* Speed improvements using threads are small compared to the remaining\nbackend startup time.\n-- we now have some measurements that show significant performance\nimprovements not related to startup time\n\n* The backend code would be more complex.\n-- this is still the case\n-- even more worrisome is that all extensions also need to be rewritten\n-- and many incompatibilities will be silent and take potentially years to find\n\n* Terminating backend processes allows the OS to cleanly and quickly\nfree all resources, protecting against memory and file descriptor\nleaks and making backend shutdown cheaper and faster\n-- still true\n\n* Debugging threaded programs is much harder than debugging worker\nprocesses, and core dumps are much less useful\n-- this was countered by claiming that\n -- by now we have reasonable debugger support for threads\n -- there is no direct debugger support for debugging the exact\nsystem set up like PostgreSQL processes + shared memory\n\n* Sharing of read-only executable mappings and the use of\nshared_buffers means processes, like threads, are very memory\nefficient\n-- this seems to say that the current process model is as good as threads ?\n-- there were a few counterarguments\n -- per-backend virtual memory mapping can add up to significant\namount of extra RAM usage\n -- the discussion did not yet touch various per-backend caches\n(pg_catalog cache, statement cache) which are arguably easier to\nimplement in threaded model\n -- TLB reload at each process switch is expensive and would be\nmostly avoided in case of threads\n\n* Regular creation and destruction of processes helps protect against\nmemory fragmentation, which can be hard to manage in long-running\nprocesses\n-- probably still true\n-------------------------------------\n\n\n",
"msg_date": "Sat, 10 Jun 2023 20:01:47 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "I don't have an objection, but I do wonder: can one (or perhaps a few)\nqueries/workloads be provided where threading would be significantly\nbeneficial?\n\n(some material there could help get people on-board with the idea and\npotentially guide many of the smaller questions that arise along the\nway)\n\nOn Mon, 5 Jun 2023 at 15:52, Heikki Linnakangas <[email protected]> wrote:\n>\n> I spoke with some folks at PGCon about making PostgreSQL multi-threaded,\n> so that the whole server runs in a single process, with multiple\n> threads. It has been discussed many times in the past, last thread on\n> pgsql-hackers was back in 2017 when Konstantin made some experiments [0].\n>\n> I feel that there is now pretty strong consensus that it would be a good\n> thing, more so than before. Lots of work to get there, and lots of\n> details to be hashed out, but no objections to the idea at a high level.\n>\n> The purpose of this email is to make that silent consensus explicit. If\n> you have objections to switching from the current multi-process\n> architecture to a single-process, multi-threaded architecture, please\n> speak up.\n>\n> If there are no major objections, I'm going to update the developer FAQ,\n> removing the excuses there for why we don't use threads [1]. And we can\n> start to talk about the path to get there. Below is a list of some\n> hurdles and proposed high-level solutions. This isn't an exhaustive\n> list, just some of the most obvious problems:\n>\n> # Transition period\n>\n> The transition surely cannot be done fully in one release. Even if we\n> could pull it off in core, extensions will need more time to adapt.\n> There will be a transition period of at least one release, probably\n> more, where you can choose multi-process or multi-thread model using a\n> GUC. Depending on how it goes, we can document it as experimental at first.\n>\n> # Thread per connection\n>\n> To get started, it's most straightforward to have one thread per\n> connection, just replacing backend process with a backend thread. In the\n> future, we might want to have a thread pool with some kind of a\n> scheduler to assign active queries to worker threads. Or multiple\n> threads per connection, or spawn additional helper threads for specific\n> tasks. But that's future work.\n>\n> # Global variables\n>\n> We have a lot of global and static variables:\n>\n> $ objdump -t bin/postgres | grep -e \"\\.data\" -e \"\\.bss\" | grep -v\n> \"data.rel.ro\" | wc -l\n> 1666\n>\n> Some of them are pointers to shared memory structures and can stay as\n> they are. But many of them are per-connection state. The most\n> straightforward conversion for those is to turn them into thread-local\n> variables, like Konstantin did in [0].\n>\n> It might be good to have some kind of a Session context struct that we\n> pass everywhere, or maybe have a single thread-local variable to hold\n> it. Many of the global variables would become fields in the Session. But\n> that's future work.\n>\n> # Extensions\n>\n> A lot of extensions also contain global variables or other things that\n> break in a multi-threaded environment. We need a way to label extensions\n> that support multi-threading. And in the future, also extensions that\n> *require* a multi-threaded server.\n>\n> Let's add flags to the control file to mark if the extension is\n> thread-safe and/or process-safe. 
If you try to load an extension that's\n> not compatible with the server's mode, throw an error.\n>\n> We might need new functions in addition _PG_init, called at connection\n> startup and shutdown. And background worker API probably needs some changes.\n>\n> # Exposed PIDs\n>\n> We expose backend process PIDs to users in a few places.\n> pg_stat_activity.pid and pg_terminate_backend(), for example. They need\n> to be replaced, or we can assign a fake PID to each connection when\n> running in multi-threaded mode.\n>\n> # Signals\n>\n> We use signals for communication between backends. SIGURG in latches,\n> and SIGUSR1 in procsignal, for example. Those primitives need to be\n> rewritten with some other signalling mechanism in multi-threaded mode.\n> In principle, it's possible to set per-thread signal handlers, and send\n> a signal to a particular thread (pthread_kill), but I think it's better\n> to just rewrite them.\n>\n> We also document that you can send SIGINT, SIGTERM or SIGHUP to an\n> individual backend process. I think we need to deprecate that, and maybe\n> come up with some convenient replacement. E.g. send a message with\n> backend ID to a unix domain socket, and a new pg_kill executable to send\n> those messages.\n>\n> # Restart on crash\n>\n> If a backend process crashes, postmaster terminates all other backends\n> and restarts the system. That's hard (impossible?) to do safely if\n> everything runs in one process. We can continue have a separate\n> postmaster process that just monitors the main process and restarts it\n> on crash.\n>\n> # Thread-safe libraries\n>\n> Need to switch to thread-safe versions of library functions, e.g.\n> uselocale() instead of setlocale().\n>\n> The Python interpreter has a Global Interpreter Lock. It's not possible\n> to create two completely independent Python interpreters in the same\n> process, there will be some lock contention on the GIL. Fortunately, the\n> python community just accepted https://peps.python.org/pep-0684/. That's\n> exactly what we need: it makes it possible for separate interpreters to\n> have their own GILs. It's not clear to me if that's in Python 3.12\n> already, or under development for some future version, but by the time\n> we make the switch in Postgres, there probably will be a solution in\n> cpython.\n>\n> At a quick glance, I think perl and TCL are fine, you can have multiple\n> interpreters in one process. Need to check any other libraries we use.\n>\n>\n> [0]\n> https://www.postgresql.org/message-id/flat/9defcb14-a918-13fe-4b80-a0b02ff85527%40postgrespro.ru\n>\n> [1]\n> https://wiki.postgresql.org/wiki/Developer_FAQ#Why_don.27t_you_use_raw_devices.2C_async-I.2FO.2C_.3Cinsert_your_favorite_wizz-bang_feature_here.3E.3F\n>\n> --\n> Heikki Linnakangas\n> Neon (https://neon.tech)\n>\n>\n\n\n",
"msg_date": "Sat, 10 Jun 2023 23:53:24 +0100",
"msg_from": "James Addison <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Sat, Jun 10, 2023 at 11:32 PM Hannu Krosing <[email protected]> wrote:\n>\n> On Mon, Jun 5, 2023 at 4:52 PM Heikki Linnakangas <[email protected]> wrote:\n> >\n> > If there are no major objections, I'm going to update the developer FAQ,\n> > removing the excuses there for why we don't use threads [1].\n>\n> I think it is not wise to start the wholesale removal of the objections there.\n>\n> But I think it is worthwhile to revisit the section about threads and\n> maybe split out the historic part which is no more true, and provide\n> both pros and cons for these.\n>\n> I started with this short summary from the discussion in this thread,\n> feel free to expand, argue, fix :)\n> * is current excuse\n> -- is counterargument or ack\n> ----------------\n> As an example, threads are not yet used instead of multiple processes\n> for backends because:\n> * Historically, threads were poorly supported and buggy.\n> -- yes they were, not relevant now when threads are well-supported and non-buggy\n>\n> * An error in one backend can corrupt other backends if they're\n> threads within a single process\n> -- still valid for silent corruption\n> -- for detected crash - yes, but we are restarting all backends in\n> case of crash anyway.\n>\n> * Speed improvements using threads are small compared to the remaining\n> backend startup time.\n> -- we now have some measurements that show significant performance\n> improvements not related to startup time\n>\n> * The backend code would be more complex.\n> -- this is still the case\n> -- even more worrisome is that all extensions also need to be rewritten\n> -- and many incompatibilities will be silent and take potentially years to find\n>\n> * Terminating backend processes allows the OS to cleanly and quickly\n> free all resources, protecting against memory and file descriptor\n> leaks and making backend shutdown cheaper and faster\n> -- still true\n>\n> * Debugging threaded programs is much harder than debugging worker\n> processes, and core dumps are much less useful\n> -- this was countered by claiming that\n> -- by now we have reasonable debugger support for threads\n> -- there is no direct debugger support for debugging the exact\n> system set up like PostgreSQL processes + shared memory\n>\n> * Sharing of read-only executable mappings and the use of\n> shared_buffers means processes, like threads, are very memory\n> efficient\n> -- this seems to say that the current process model is as good as threads ?\n> -- there were a few counterarguments\n> -- per-backend virtual memory mapping can add up to significant\n> amount of extra RAM usage\n> -- the discussion did not yet touch various per-backend caches\n> (pg_catalog cache, statement cache) which are arguably easier to\n> implement in threaded model\n> -- TLB reload at each process switch is expensive and would be\n> mostly avoided in case of threads\n\nI think it is worth mentioning that parallel worker infrastructure\nwill be simplified with threaded models e.g. 'parallel query', and\n'parallel vacuum'.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 12 Jun 2023 09:31:17 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "\n\nOn 6/10/23 13:20, Dave Cramer wrote:\n> \n> \n> On Fri, 9 Jun 2023 at 18:29, Stephen Frost <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> Greetings,\n> \n> * Dave Cramer ([email protected]) wrote:\n> > One thing I can think of is upgrading. AFAIK dump and restore is\n> the only\n> > way to change the on disk format.\n> > Presuming that eventually we will be forced to change the on disk\n> format it\n> > would be nice to be able to do so in a manner which does not force\n> long\n> > down times\n> \n> There is an ongoing effort moving in this direction. The $subject isn't\n> great, but this patch set (which we are currently working on\n> updating...): https://commitfest.postgresql.org/43/3986/\n> <https://commitfest.postgresql.org/43/3986/> attempts\n> changing a lot of currently compile-time block-size pieces to be\n> run-time which would open up the possibility to have a different page\n> format for, eg, different tablespaces. Possibly even different block\n> sizes. We'd certainly welcome discussion from others who are\n> interested.\n> \n> Thanks,\n> \n> Stephen\n> \n> \n> Upgrading was just one example of difficult problems that need to be \n> addressed. My thought was that before we commit to something as\n> potentially resource intensive as changing the threading model we\n> compile a list of other \"big issues\" and prioritize.\n> \n\nI doubt anyone expects the community to commit to the threading switch\nin this sense - drop everything else and just start working on this\n(pretty massive) change. Not going to happen.\n\n> I realize open source is more of a scratch your itch kind of development\n> model, but I'm not convinced the random walk that entails is the\n> appropriate way to move forward. At the very least I'd like us to\n> question it.\n\nI may be missing something, but it's not clear to me whether you argue\nfor the open source approach or against it. I personally think it's\nperfectly fine for people to work on scratching their itch and focus on\nstuff that yields value to them (or their customers).\n\nAnd I think the only way to succeed at the threading switch is within\nthis very framework - split it into (much) smaller steps that are\nbeneficial on their own and scratch some other itch.\n\nFor example, we have issues with large number of connections and we've\ndiscussed stuff like built-in connection pooling etc. for a very long\ntime (including this thread). But we have session state in various\nplaces in process private memory, which makes it borderline impossible\nand thus we don't have anything built-in. IIUC the threading would needs\nto isolate/define the session state anyway, so perhaps it could do it in\na way that'd also work for the connection pooling (with processes)?\n\nWhich would mean this particular change is immediately beneficial even\nwithout the threading switch (which I'd expect to take considerable\namount of time).\n\nIn a way, I think this \"split into independently beneficial steps\"\nstrategy is the only option with a meaningful chance of success.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 12 Jun 2023 13:53:13 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Mon, Jun 12, 2023, at 13:53, Tomas Vondra wrote:\n> In a way, I think this \"split into independently beneficial steps\"\n> strategy is the only option with a meaningful chance of success.\n\n+1\n\n/Joel\n\n\n",
"msg_date": "Mon, 12 Jun 2023 14:13:48 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Is the following true or not?\n\n1. If we switch processes to threads but leave the amount of session\nlocal variables unchanged, there would be hardly any performance gain.\n2. If we move some backend's local variables into shared memory then\nthe performance gain would be very near to what we get with threads\nhaving equal amount of session-local variables.\n\nIn other words, the overall goal in principle is to gain from less\nmemory copying wherever it doesn't add the burden of locks for\nconcurrent variables access?\n\nRegards,\nPavel Borisov,\nSupabase\n\n\n",
"msg_date": "Mon, 12 Jun 2023 16:23:14 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-12 16:23:14 +0400, Pavel Borisov wrote:\n> Is the following true or not?\n>\n> 1. If we switch processes to threads but leave the amount of session\n> local variables unchanged, there would be hardly any performance gain.\n\nFalse.\n\n\n> 2. If we move some backend's local variables into shared memory then\n> the performance gain would be very near to what we get with threads\n> having equal amount of session-local variables.\n\nFalse.\n\n\n> In other words, the overall goal in principle is to gain from less\n> memory copying wherever it doesn't add the burden of locks for\n> concurrent variables access?\n\nFalse.\n\nThose points seems pretty much unrelated to the potential gains from switching\nto a threading model. The main advantages are:\n\n1) We'd gain from being able to share state more efficiently (using normal\n pointers) and more dynamically (not needing to pre-allocate). That'd remove\n a good amount of complexity. As an example, consider the work we need to do\n to ferry tuples from one process to another. Even if we just continue to\n use shm_mq, in a threading world we could just put a pointer in the queue,\n but have the tuple data be shared between the processes etc.\n\n Eventually this could include removing the 1:1 connection<->process/thread\n model. That's possible to do with processes as well, but considerably\n harder.\n\n2) Making context switches cheaper / sharing more resources at the OS and\n hardware level.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 12 Jun 2023 12:24:30 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 10/06/2023 21:01, Hannu Krosing wrote:\n> On Mon, Jun 5, 2023 at 4:52 PM Heikki Linnakangas <[email protected]> wrote:\n>>\n>> If there are no major objections, I'm going to update the developer FAQ,\n>> removing the excuses there for why we don't use threads [1].\n> \n> I think it is not wise to start the wholesale removal of the objections there.\n> \n> But I think it is worthwhile to revisit the section about threads and\n> maybe split out the historic part which is no more true, and provide\n> both pros and cons for these.\n\n> I started with this short summary from the discussion in this thread,\n> feel free to expand, argue, fix :)\n> * is current excuse\n> -- is counterargument or ack\n\nThanks, that's a good idea.\n\n> * Speed improvements using threads are small compared to the remaining\n> backend startup time.\n> -- we now have some measurements that show significant performance\n> improvements not related to startup time\n\nAlso, I don't expect much performance gain directly from switching to \nthreads. The point is that switching to a multi-threaded model makes \npossible, or at least greatly simplifies, a lot of other development. \nWhich can then help with the backend startup time, among other things. \nFor example, a shared catalog cache.\n\n> * The backend code would be more complex.\n> -- this is still the case\n\nI don't quite buy that. A multi-threaded model isn't inherently more \ncomplex than a multi-process model. Just different. Sure, the transition \nperiod will be more complex, when we need to support both models. But in \nthe long run, if we can remove the multi-process mode, we can make a lot \nof things *simpler*.\n\n> -- even more worrisome is that all extensions also need to be rewritten\n\n\"rewritten\" is an exaggeration. Yes, extensions will need adapt, similar \nto the core code. But I hope it will be pretty mechanical work, marking \nglobal variables as thread-local and such. Many extensions will work \nwith little to no changes.\n\n> -- and many incompatibilities will be silent and take potentially years to find\n\nIMO this is the most scary part of all this. I'm optimistic that we can \nhave enough compiler support and tooling to catch most issues. But we \ndon't know for sure at this point.\n\n> * Terminating backend processes allows the OS to cleanly and quickly\n> free all resources, protecting against memory and file descriptor\n> leaks and making backend shutdown cheaper and faster\n> -- still true\n\nYep. I'm not too worried about PostgreSQL code, our memory contexts and \nresource owners are very good at stopping leaks. But 3rd party libraries \ncould pose hard problems. IIRC we still have a leak with the LLVM JIT \ncode, for example. We should fix that anyway, of course, but the \nmulti-process model is more forgiving with leaks like that.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 13 Jun 2023 00:17:15 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Mon, Jun 12, 2023 at 12:24:30PM -0700, Andres Freund wrote:\n> Those points seems pretty much unrelated to the potential gains from switching\n> to a threading model. The main advantages are:\n> \n> 1) We'd gain from being able to share state more efficiently (using normal\n> pointers) and more dynamically (not needing to pre-allocate). That'd remove\n> a good amount of complexity. As an example, consider the work we need to do\n> to ferry tuples from one process to another. Even if we just continue to\n> use shm_mq, in a threading world we could just put a pointer in the queue,\n> but have the tuple data be shared between the processes etc.\n> \n> Eventually this could include removing the 1:1 connection<->process/thread\n> model. That's possible to do with processes as well, but considerably\n> harder.\n> \n> 2) Making context switches cheaper / sharing more resources at the OS and\n> hardware level.\n\nYes. FWIW, while reading the thread, parallel workers stroke me as\nthe first area that would benefit from all that. Could it be easier\nto figure out the incremental pieces if working on a new node doing a\nGather based on threads, for instance?\n--\nMichael",
"msg_date": "Tue, 13 Jun 2023 07:24:04 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "\n\nOn 12.06.2023 3:23 PM, Pavel Borisov wrote:\n> Is the following true or not?\n>\n> 1. If we switch processes to threads but leave the amount of session\n> local variables unchanged, there would be hardly any performance gain.\n> 2. If we move some backend's local variables into shared memory then\n> the performance gain would be very near to what we get with threads\n> having equal amount of session-local variables.\n>\n> In other words, the overall goal in principle is to gain from less\n> memory copying wherever it doesn't add the burden of locks for\n> concurrent variables access?\n>\n> Regards,\n> Pavel Borisov,\n> Supabase\n>\n>\nIMHO both statements are not true.\nSwitching to threads will cause less context switch overhead (because \nall threads are sharing the same memory space and so preserve TLB.\nHow big will be this advantage? In my prototype I got ~10%. But may be \nit is possible to fin workloads when it is larger.\n\nPostgres backend is \"thick\" not because of large number of local variables.\nIt is because of local caches: catalog cache, relation cache, prepared \nstatements cache,...\nIf they are not rewritten, then backend still may consume a lot of \nmemory even if it will be thread rather then process.\nBut threads simplify development of global caches, although it can be \ndone with DSM.\n\n\n\n",
"msg_date": "Tue, 13 Jun 2023 09:55:36 +0300",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "At Tue, 13 Jun 2023 09:55:36 +0300, Konstantin Knizhnik <[email protected]> wrote in \n> Postgres backend is \"thick\" not because of large number of local\n> variables.\n> It is because of local caches: catalog cache, relation cache, prepared\n> statements cache,...\n> If they are not rewritten, then backend still may consume a lot of\n> memory even if it will be thread rather then process.\n> But threads simplify development of global caches, although it can be\n> done with DSM.\n\nWith the process model, that local stuff are flushed out upon\nreconnection. If we switch to the thread model, we will need an\nexpiration mechanism for those stuff.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 13 Jun 2023 16:55:12 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "\n\nOn 13.06.2023 10:55 AM, Kyotaro Horiguchi wrote:\n> At Tue, 13 Jun 2023 09:55:36 +0300, Konstantin Knizhnik <[email protected]> wrote in\n>> Postgres backend is \"thick\" not because of large number of local\n>> variables.\n>> It is because of local caches: catalog cache, relation cache, prepared\n>> statements cache,...\n>> If they are not rewritten, then backend still may consume a lot of\n>> memory even if it will be thread rather then process.\n>> But threads simplify development of global caches, although it can be\n>> done with DSM.\n> With the process model, that local stuff are flushed out upon\n> reconnection. If we switch to the thread model, we will need an\n> expiration mechanism for those stuff.\n\nWe already have invalidation mechanism. It will be also used in case of \nshared cache, but we do not need to send invalidations to all backends.\nI do not completely understand your point.\nRight now caches (for example catalog cache) is not limited at all.\nSo if you have very large database schema, then this cache will consume \na lot of memory (multiplied by number of\nbackends). The fact that it is flushed out upon reconnection can not \nhelp much: what if backends are not going to disconnect?\n\nIn case of shared cache we will have to address the same problem: \nwhether this cache should be limited (with some replacement discipline \nas LRU).\nOr it is unlimited. In case of shared cache, size of the cache is less \ncritical because it is not multiplied by number of backends.\nSo we can assume that catalog and relation cache should always fir in \nmemory (otherwise significant rewriting of all Postgtres code working \nwith relations will be needed).\n\nBut Postgres also have temporary tables. For them we may need local \nbackend cache in any case.\nGlobal temp table patch was not approved so we still have to deal with \nthis awful temp tables.\n\nIn any case I do not understand why do we need some expiration mechanism \nfor this caches.\nIf there is some relation than information about this relation should be \nkept in the cache as long as this relation is alive.\nIf there is not enough memory to cache information about all relations, \nthen we may need some replacement algorithm.\nBut I do not think that there is any sense to remove some item fro the \ncache just because it is too old.\n\n\n",
"msg_date": "Tue, 13 Jun 2023 11:20:56 +0300",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "At Tue, 13 Jun 2023 11:20:56 +0300, Konstantin Knizhnik <[email protected]> wrote in \n> \n> \n> On 13.06.2023 10:55 AM, Kyotaro Horiguchi wrote:\n> > At Tue, 13 Jun 2023 09:55:36 +0300, Konstantin Knizhnik\n> > <[email protected]> wrote in\n> >> Postgres backend is \"thick\" not because of large number of local\n> >> variables.\n> >> It is because of local caches: catalog cache, relation cache, prepared\n> >> statements cache,...\n> >> If they are not rewritten, then backend still may consume a lot of\n> >> memory even if it will be thread rather then process.\n> >> But threads simplify development of global caches, although it can be\n> >> done with DSM.\n> > With the process model, that local stuff are flushed out upon\n> > reconnection. If we switch to the thread model, we will need an\n> > expiration mechanism for those stuff.\n> \n> We already have invalidation mechanism. It will be also used in case\n> of shared cache, but we do not need to send invalidations to all\n> backends.\n\nInvalidation is not expiration.\n\n> I do not completely understand your point.\n> Right now caches (for example catalog cache) is not limited at all.\n> So if you have very large database schema, then this cache will\n> consume a lot of memory (multiplied by number of\n> backends). The fact that it is flushed out upon reconnection can not\n> help much: what if backends are not going to disconnect?\n\nRight now, if one out of many backends creates a huge system catalog\ncahce, it can be cleard upon disconnection. The same client can\nrepeat this process, but users can ensure such situations don't\npersist. However, with the thread model, we won't be able to clear\nparts of the cache that aren't required by the active backends\nanymore. (Of course with threads, we can avoid duplications, though.)\n\n> In case of shared cache we will have to address the same problem:\n> whether this cache should be limited (with some replacement discipline\n> as LRU).\n> Or it is unlimited. In case of shared cache, size of the cache is less\n> critical because it is not multiplied by number of backends.\n\nYes.\n\n> So we can assume that catalog and relation cache should always fir in\n> memory (otherwise significant rewriting of all Postgtres code working\n> with relations will be needed).\n\nI'm not sure that is ture.. But likely to be?\n\n> But Postgres also have temporary tables. For them we may need local\n> backend cache in any case.\n> Global temp table patch was not approved so we still have to deal with\n> this awful temp tables.\n> \n> In any case I do not understand why do we need some expiration\n> mechanism for this caches.\n\nI don't think it is efficient that PostgreSQL to consume a large\namount of memory for seldom-used content. While we may not need\nexpiration mechanism for moderate use cases, I have observed instances\nwhere a single process hogs a significant amount of memory,\nparticularly for intermittent tasks.\n\n> If there is some relation than information about this relation should\n> be kept in the cache as long as this relation is alive.\n> If there is not enough memory to cache information about all\n> relations, then we may need some replacement algorithm.\n> But I do not think that there is any sense to remove some item fro the\n> cache just because it is too old.\n\nAh. I see. I am fine with a replacement mechanishm. But the evicition\nalgorithm seems almost identical to the exparation algorithm. 
The\nalgorithm will not be simply driven by object age, but I'm not sure we\nneed more than access frequency.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 13 Jun 2023 17:46:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 6/13/23 10:20, Konstantin Knizhnik wrote:\n> The fact that it is flushed out upon reconnection can not \n> help much: what if backends are not going to disconnect?\n\nThis is why many connection pools have a maximum connection lifetime \nwhich can be configured. So in practice flushing all caches on \ndisconnect helps a lot.\n\nThe nice proper solution might very well be adding a maximum cache sizes \nand replacement but it obviously makes the cache more complex and adds \nan new GUC. Probably worth it, but flushing caches on disconnect is a \nsimple solution which works well in practice for many but no all workloads.\n\nAndreas\n\n\n\n",
"msg_date": "Tue, 13 Jun 2023 12:05:48 +0200",
"msg_from": "Andreas Karlsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "\n\nOn 13.06.2023 11:46 AM, Kyotaro Horiguchi wrote:\n> So we can assume that catalog and relation cache should always fit in \n> memory\n>> memory (otherwise significant rewriting of all Postgtres code working\n>> with relations will be needed).\n> I'm not sure that is ture.. But likely to be?\n\nSorry, looks like I was wrong.\nRight now access to sys/cat/rel caches is protected by reference counter.\nSo we can easily add some replacement algorithm for this caches.\n\n> I don't think it is efficient that PostgreSQL to consume a large\n> amount of memory for seldom-used content. While we may not need\n> expiration mechanism for moderate use cases, I have observed instances\n> where a single process hogs a significant amount of memory,\n> particularly for intermittent tasks.\n\nUsually system catalog is small enough and do not cause any problems \nwith memory consumption.\nBut partitioned and temporary tables can cause bloat of catalog.\nIn such cases some eviction mechanism will be really useful.\nBut I do not think that it is somehow related with using threads instead \nof process.\nThe question whether to use private or shared cache is not directly \nrelated to threads vs. process choice.\nYes, threads makes implementation of shared cache much easier. But it \ncan be also done using dynamic\nmemory segments, Definitely shared cache has its pros and cons, first if \nall it requires sycnhronization\nwhich may have negative impact o performance.\n\nI have made an attempt to combine both caches: use relatively small \nper-backend local cache\nand large shared cache.\nI wonder what people think about the idea to make backends less thick by \nusing shared cache.\n\n\n\n",
"msg_date": "Wed, 14 Jun 2023 08:46:05 +0300",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "At Wed, 14 Jun 2023 08:46:05 +0300, Konstantin Knizhnik <[email protected]> wrote in \n> But I do not think that it is somehow related with using threads\n> instead of process.\n> The question whether to use private or shared cache is not directly\n> related to threads vs. process choice.\n\nYeah, I unconsciously conflated the two things. We can use per-thread\ncache on multithreading.\n\n> Yes, threads makes implementation of shared cache much easier. But it\n> can be also done using dynamic\n> memory segments, Definitely shared cache has its pros and cons, first\n> if all it requires sycnhronization\n> which may have negative impact o performance.\n\nTrue.\n\n> I have made an attempt to combine both caches: use relatively small\n> per-backend local cache\n> and large shared cache.\n> I wonder what people think about the idea to make backends less thick\n> by using shared cache.\n\nI remember of a relatively old thread about that.\n\nhttps://www.postgresql.org/message-id/4E72940DA2BF16479384A86D54D0988A567B9245%40G01JPEXMBKW04\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 14 Jun 2023 16:01:33 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 6/14/23 09:01, Kyotaro Horiguchi wrote:\n> At Wed, 14 Jun 2023 08:46:05 +0300, Konstantin Knizhnik <[email protected]> wrote in\n>> But I do not think that it is somehow related with using threads\n>> instead of process.\n>> The question whether to use private or shared cache is not directly\n>> related to threads vs. process choice.\n> \n> Yeah, I unconsciously conflated the two things. We can use per-thread\n> cache on multithreading.\n\nFor sure, and we can drop the cache when dropping the memory context. \nAnd in the first versions of an imagined threaded PostgreSQL I am sure \nthat is how things will work.\n\nThen later someone will have to investigate which caches are worth \nmaking shared and what the eviction/expiration strategy should be.\n\nAndreas\n\n\n",
"msg_date": "Wed, 14 Jun 2023 09:06:05 +0200",
"msg_from": "Andreas Karlsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Mon, 12 Jun 2023 at 20:24, Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-06-12 16:23:14 +0400, Pavel Borisov wrote:\n> > Is the following true or not?\n> >\n> > 1. If we switch processes to threads but leave the amount of session\n> > local variables unchanged, there would be hardly any performance gain.\n>\n> False.\n>\n>\n> > 2. If we move some backend's local variables into shared memory then\n> > the performance gain would be very near to what we get with threads\n> > having equal amount of session-local variables.\n>\n> False.\n>\n>\n> > In other words, the overall goal in principle is to gain from less\n> > memory copying wherever it doesn't add the burden of locks for\n> > concurrent variables access?\n>\n> False.\n>\n> Those points seems pretty much unrelated to the potential gains from switching\n> to a threading model. The main advantages are:\n\nI think that they're practical performance-related questions about the\nbenefits of performing a technical migration that could involve\nsignificant development time, take years to complete, and uncover\nproblems that cause reliability issues for a stable, proven database\nmanagement system.\n\n> 1) We'd gain from being able to share state more efficiently (using normal\n> pointers) and more dynamically (not needing to pre-allocate). That'd remove\n> a good amount of complexity. As an example, consider the work we need to do\n> to ferry tuples from one process to another. Even if we just continue to\n> use shm_mq, in a threading world we could just put a pointer in the queue,\n> but have the tuple data be shared between the processes etc.\n>\n> Eventually this could include removing the 1:1 connection<->process/thread\n> model. That's possible to do with processes as well, but considerably\n> harder.\n\nThis reads like a code quality argument: that's worthwhile, but I\ndon't see how it supports your 'False' assertions. Do two queries\nrunning in separate processes spend much time allocating and waiting\non resources that could be shared within a single thread?\n\n> 2) Making context switches cheaper / sharing more resources at the OS and\n> hardware level.\n\nThat seems valid. Even so, I would expect that for many queries, I/O\naccess and row processing time is the bulk of the work, and that\ncontext-switches to/from other query processes is relatively\nnegligible.\n\n\n",
"msg_date": "Wed, 14 Jun 2023 20:15:37 +0100",
"msg_from": "James Addison <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 9:55 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Tue, 13 Jun 2023 09:55:36 +0300, Konstantin Knizhnik <[email protected]> wrote in\n> > Postgres backend is \"thick\" not because of large number of local\n> > variables.\n> > It is because of local caches: catalog cache, relation cache, prepared\n> > statements cache,...\n> > If they are not rewritten, then backend still may consume a lot of\n> > memory even if it will be thread rather then process.\n> > But threads simplify development of global caches, although it can be\n> > done with DSM.\n>\n> With the process model, that local stuff are flushed out upon\n> reconnection. If we switch to the thread model, we will need an\n> expiration mechanism for those stuff.\n\nThe part that can not be so easily solved is that \"the local stuff\"\ncan include some leakage that is not directly controlled by us.\n\nI remember a few times when memory leaks in some PostGIS packages\ncause slow memory exhaustion and the simple fix was limiting\nconnection lifetime to something between 15 min and an hour.\n\nThe main problem here is that PostGIS uses a few tens of other GPL GIS\nrelated packages which are all changing independently and thus it is\nquite hard to be sure that none of these have developed a leak. And\nyou also likely can not just stop upgrading these as they also contain\nsecurity fixes.\n\nI have no idea what the fix could be in case of threaded server.\n\n\n",
"msg_date": "Wed, 14 Jun 2023 21:45:44 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 3:16 PM James Addison <[email protected]> wrote:\n> I think that they're practical performance-related questions about the\n> benefits of performing a technical migration that could involve\n> significant development time, take years to complete, and uncover\n> problems that cause reliability issues for a stable, proven database\n> management system.\n\nI don't. I think they're reflecting confusion about what the actual,\npractical path forward is.\n\nFor a first cut at this, all of our global variables become\nthread-local. Every single last one of them. So there's no savings of\nthe type described in that email. We do each and every thing just as\nwe do it today, except that it's all in different parts of a single\naddress space instead of different address spaces with a chunk of\nshared memory mapped into each one. Syscaches don't change, catcaches\ndon't change, memory copying is not reduced, literally nothing\nchanges. The coding model is just as it is today. Except for\ndecorating global variables, virtually no backend code needs to notice\nor care about the transition. There are a few exceptions. For\ninstance, TopMemoryContext would need to be deleted explicitly, and\nthe FD caching stuff would have to be revised, because it uses up all\nthe FDs that the process can open, and having many threads doing that\nin a single process isn't going to work. There's probably some other\nthings that I'm forgetting, but the typical effect on the average bit\nof backend code should be very, very low. If it isn't, we're doing it\nwrong.\n\nSo, I think saying \"oh, this is going to destabliize PostgreSQL for\nyears\" is just fear-mongering. If someone proposes a patch that we\nthink is going to have that effect, we should (and certainly will)\nreject it. But I see no reason why we can't have a good patch for this\nwhere most code changes only in mechanical ways that are easy to\nvalidate.\n\n> This reads like a code quality argument: that's worthwhile, but I\n> don't see how it supports your 'False' assertions. Do two queries\n> running in separate processes spend much time allocating and waiting\n> on resources that could be shared within a single thread?\n\nI don't have any idea what this has to do with what Andres was talking\nabout, honestly. However, there certainly are cases of the thing\nyou're talking about here. Having many backends separately open the\nsame file means we've got a whole bunch of different file descriptors\naccessing the same file instead of just one. That does have a\nmeaningful cost on some workloads. Passing tuples between cooperating\nprocesses that are jointly executing a parallel query is costly in the\ncurrent scheme, too. There might be ways to improve on that somewhat\neven without threads, but if you don't think that the process model\nmade getting parallel query working harder and less efficient, I'm\nhere as the guy who wrote a lot of that code to tell you that it very\nmuch did.\n\n> That seems valid. Even so, I would expect that for many queries, I/O\n> access and row processing time is the bulk of the work, and that\n> context-switches to/from other query processes is relatively\n> negligible.\n\nThat's completely true, but there are ALSO many OTHER situations in\nwhich the overhead of frequent context switching is absolutely\ncrushing. You might as well argue that umbrellas don't need to exist\nbecause there are lots of sunny days.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Jun 2023 15:47:49 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-13 16:55:12 +0900, Kyotaro Horiguchi wrote:\n> At Tue, 13 Jun 2023 09:55:36 +0300, Konstantin Knizhnik <[email protected]> wrote in \n> > Postgres backend is \"thick\" not because of large number of local\n> > variables.\n> > It is because of local caches: catalog cache, relation cache, prepared\n> > statements cache,...\n> > If they are not rewritten, then backend still may consume a lot of\n> > memory even if it will be thread rather then process.\n> > But threads simplify development of global caches, although it can be\n> > done with DSM.\n> \n> With the process model, that local stuff are flushed out upon\n> reconnection. If we switch to the thread model, we will need an\n> expiration mechanism for those stuff.\n\nIsn't that just doing something like MemoryContextDelete(TopMemoryContext) at\nthe end of proc_exit() (or it's thread equivalent)?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 14 Jun 2023 12:51:39 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 3:46 PM Hannu Krosing <[email protected]> wrote:\n> I remember a few times when memory leaks in some PostGIS packages\n> cause slow memory exhaustion and the simple fix was limiting\n> connection lifetime to something between 15 min and an hour.\n>\n> The main problem here is that PostGIS uses a few tens of other GPL GIS\n> related packages which are all changing independently and thus it is\n> quite hard to be sure that none of these have developed a leak. And\n> you also likely can not just stop upgrading these as they also contain\n> security fixes.\n>\n> I have no idea what the fix could be in case of threaded server.\n\nPresumably, when a thread exits, we\nMemoryContextDelete(TopMemoryContext). If the leak is into any memory\ncontext managed by PostgreSQL, this still frees the memory. But it\nmight not be. Right now, if a library does a malloc() that it doesn't\nfree() every once in a while, it's no big deal. If it does it too\noften, it's a problem now, too. But if it does it only every now and\nthen, process exit will prevent accumulation over time. In a threaded\nmodel, that isn't true any longer: those allocations will accumulate\nuntil we OOM.\n\nAnd IMHO that's definitely a very significant downside of this\ndirection. I don't think it should be dispositive because such\nproblems are, hopefully, fixable, whereas some of the problems caused\nby the process model are basically unfixable except by not using it\nany more. However, if we lived in a world where both models were\nsupported and a particular user said, \"hey, I'm sticking with the\nprocess model because I don't trust my third-party libraries not to\nleak,\" I would be like \"yep, I totally get it.\"\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Jun 2023 15:56:44 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Wed, 14 Jun 2023 at 20:48, Robert Haas <[email protected]> wrote:\n>\n> On Wed, Jun 14, 2023 at 3:16 PM James Addison <[email protected]> wrote:\n> > I think that they're practical performance-related questions about the\n> > benefits of performing a technical migration that could involve\n> > significant development time, take years to complete, and uncover\n> > problems that cause reliability issues for a stable, proven database\n> > management system.\n>\n> I don't. I think they're reflecting confusion about what the actual,\n> practical path forward is.\n\nOk. My concern is that the balance between the downstream ecosystem\nimpact (people and processes that use PIDs to identify, monitor and\nmanage query and background processes, for example) compared to the\nbenefits (performance improvement for some -- but what kind of? --\nworkloads) seems unclear, and if it's unclear, it's less likely to be\ncompelling.\n\nPavel's message and questions seem to poke at some of the potential\nlimitations of the performance improvements, and Andres' response\nmentions reduced complexity and reduced context-switching. Elsewhere\nI also see that TLB (translation lookaside buffer?) lookups in\nparticular should see improvements. Those are good, but somewhat\nunquantified.\n\nThe benefits are less of an immediate concern if there's going to be a\nmigration/transition phase where both the process model and the thread\nmodel are available. But again, if the benefits of the threading\nmodel aren't clear, people are unlikely to want to switch, and I don't\nthink that the cost for people and systems to migrate from tooling and\nmethods built around processes will be zero. That could lead to a bad\noutcome, where the codebase includes both models and yet is unable to\nplan to simplify to one.\n\n\n",
"msg_date": "Wed, 14 Jun 2023 23:14:01 +0100",
"msg_from": "James Addison <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Tue, 13 Jun 2023 at 07:55, Konstantin Knizhnik <[email protected]> wrote:\n>\n>\n>\n> On 12.06.2023 3:23 PM, Pavel Borisov wrote:\n> > Is the following true or not?\n> >\n> > 1. If we switch processes to threads but leave the amount of session\n> > local variables unchanged, there would be hardly any performance gain.\n> > 2. If we move some backend's local variables into shared memory then\n> > the performance gain would be very near to what we get with threads\n> > having equal amount of session-local variables.\n> >\n> > In other words, the overall goal in principle is to gain from less\n> > memory copying wherever it doesn't add the burden of locks for\n> > concurrent variables access?\n> >\n> > Regards,\n> > Pavel Borisov,\n> > Supabase\n> >\n> >\n> IMHO both statements are not true.\n> Switching to threads will cause less context switch overhead (because\n> all threads are sharing the same memory space and so preserve TLB.\n> How big will be this advantage? In my prototype I got ~10%. But may be\n> it is possible to fin workloads when it is larger.\n\nHi Konstantin - do you have code/links that you can share for the\nprototype and benchmarks used to gather those results?\n\n\n",
"msg_date": "Wed, 14 Jun 2023 23:23:45 +0100",
"msg_from": "James Addison <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 15.06.2023 1:23 AM, James Addison wrote:\n> On Tue, 13 Jun 2023 at 07:55, Konstantin Knizhnik<[email protected]> wrote:\n>>\n>>\n>> On 12.06.2023 3:23 PM, Pavel Borisov wrote:\n>>> Is the following true or not?\n>>>\n>>> 1. If we switch processes to threads but leave the amount of session\n>>> local variables unchanged, there would be hardly any performance gain.\n>>> 2. If we move some backend's local variables into shared memory then\n>>> the performance gain would be very near to what we get with threads\n>>> having equal amount of session-local variables.\n>>>\n>>> In other words, the overall goal in principle is to gain from less\n>>> memory copying wherever it doesn't add the burden of locks for\n>>> concurrent variables access?\n>>>\n>>> Regards,\n>>> Pavel Borisov,\n>>> Supabase\n>>>\n>>>\n>> IMHO both statements are not true.\n>> Switching to threads will cause less context switch overhead (because\n>> all threads are sharing the same memory space and so preserve TLB.\n>> How big will be this advantage? In my prototype I got ~10%. But may be\n>> it is possible to fin workloads when it is larger.\n> Hi Konstantin - do you have code/links that you can share for the\n> prototype and benchmarks used to gather those results?\n\n\nSorry, I have already shared the link:\nhttps://github.com/postgrespro/postgresql.pthreads/\n\nAs you can see last commit was 6 years ago when I stopped work on this \nproject.\nWhy? I already tried to explain it:\n- benefits from switching to threads were not so large. May be I just \nfailed to fid proper workload, but is was more or less expected result,\nbecause most of the code was not changed - it uses the same sync \nprimitives, the same local catalog/relation caches,..\nTo take all advantage of multithreadig model it is necessary to rewrite \nmany components, especially related with interprocess communication.\nBut maintaining such fork of Postgres and synchronize it with mainstream \nrequires too much efforts and I was not able to do it myself.\n\nThere are three different but related directions of improving current \nPostgres:\n1. Replacing processes with threads\n2. Builtin connection pooler\n3. Lightweight backends (shared catalog/relation/prepared statements caches)\n\nThe motivation for such changes are also similar:\n1. Increase Postgres scalability\n2. Reduce memory consumption\n3. Make Postgres better fir cloud and serverless requirements\n\nI am not sure now which one should be addressed first or them can be \ndone together.\n\nReplacing static variables with thread-local is the first and may be the \neasiest step.\nIt requires more or less mechanical changes. More challenging thing is \nreplacing private per-backend data structures\nwith shared ones (caches, file descriptors,...)\n\n\n\n\n\n\n\n\nOn 15.06.2023 1:23 AM, James Addison\n wrote:\n\n\nOn Tue, 13 Jun 2023 at 07:55, Konstantin Knizhnik <[email protected]> wrote:\n\n\n\n\n\nOn 12.06.2023 3:23 PM, Pavel Borisov wrote:\n\n\nIs the following true or not?\n\n1. If we switch processes to threads but leave the amount of session\nlocal variables unchanged, there would be hardly any performance gain.\n2. 
If we move some backend's local variables into shared memory then\nthe performance gain would be very near to what we get with threads\nhaving equal amount of session-local variables.\n\nIn other words, the overall goal in principle is to gain from less\nmemory copying wherever it doesn't add the burden of locks for\nconcurrent variables access?\n\nRegards,\nPavel Borisov,\nSupabase\n\n\n\n\nIMHO both statements are not true.\nSwitching to threads will cause less context switch overhead (because\nall threads are sharing the same memory space and so preserve TLB.\nHow big will be this advantage? In my prototype I got ~10%. But may be\nit is possible to fin workloads when it is larger.\n\n\n\nHi Konstantin - do you have code/links that you can share for the\nprototype and benchmarks used to gather those results?\n\n\n\n\n Sorry, I have already shared the link:\nhttps://github.com/postgrespro/postgresql.pthreads/\n\n As you can see last commit was 6 years ago when I stopped work on\n this project.\n Why? I already tried to explain it:\n - benefits from switching to threads were not so large. May be I\n just failed to fid proper workload, but is was more or less expected\n result,\n because most of the code was not changed - it uses the same sync\n primitives, the same local catalog/relation caches,..\n To take all advantage of\n multithreadig model it is necessary to rewrite many components,\n especially related with interprocess communication.\n But maintaining such fork of Postgres and synchronize it with\n mainstream requires too much efforts and I was not able to do it\n myself.\n\n There are three different but related directions of improving\n current Postgres:\n 1. Replacing processes with threads\n 2. Builtin connection pooler\n 3. Lightweight backends (shared catalog/relation/prepared\n statements caches)\n\nThe\n motivation for such changes are also similar:\n 1. Increase Postgres scalability \n 2. Reduce memory consumption\n 3. Make Postgres better fir cloud and serverless requirements\n\nI am not sure now which one should be addressed first or\n them can be done together.\n\n Replacing static variables with thread-local is the first and may\n be the easiest step.\n It requires more or less mechanical changes. More challenging\n thing is replacing private per-backend data structures\n with shared ones (caches, file descriptors,...)",
"msg_date": "Thu, 15 Jun 2023 10:12:32 +0300",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Thu, 15 Jun 2023 at 08:12, Konstantin Knizhnik <[email protected]> wrote:\n>\n>\n>\n> On 15.06.2023 1:23 AM, James Addison wrote:\n>\n> On Tue, 13 Jun 2023 at 07:55, Konstantin Knizhnik <[email protected]> wrote:\n>\n>\n> On 12.06.2023 3:23 PM, Pavel Borisov wrote:\n>\n> Is the following true or not?\n>\n> 1. If we switch processes to threads but leave the amount of session\n> local variables unchanged, there would be hardly any performance gain.\n> 2. If we move some backend's local variables into shared memory then\n> the performance gain would be very near to what we get with threads\n> having equal amount of session-local variables.\n>\n> In other words, the overall goal in principle is to gain from less\n> memory copying wherever it doesn't add the burden of locks for\n> concurrent variables access?\n>\n> Regards,\n> Pavel Borisov,\n> Supabase\n>\n>\n> IMHO both statements are not true.\n> Switching to threads will cause less context switch overhead (because\n> all threads are sharing the same memory space and so preserve TLB.\n> How big will be this advantage? In my prototype I got ~10%. But may be\n> it is possible to fin workloads when it is larger.\n>\n> Hi Konstantin - do you have code/links that you can share for the\n> prototype and benchmarks used to gather those results?\n>\n>\n>\n> Sorry, I have already shared the link:\n> https://github.com/postgrespro/postgresql.pthreads/\n\nNope, my mistake for not locating the existing link - thank you.\n\nIs there a reason that parser-related files (flex/bison) are added as\npart of the changeset? (I'm trying to narrow it down to only the\nchanges necessary for the functionality. so far it looks mostly\nfairly minimal, which is good. the adjustments to progname are\nanother thing that look a bit unusual/maybe unnecessary for the\nfeature)\n\n> As you can see last commit was 6 years ago when I stopped work on this project.\n> Why? I already tried to explain it:\n> - benefits from switching to threads were not so large. May be I just failed to fid proper workload, but is was more or less expected result,\n> because most of the code was not changed - it uses the same sync primitives, the same local catalog/relation caches,..\n> To take all advantage of multithreadig model it is necessary to rewrite many components, especially related with interprocess communication.\n> But maintaining such fork of Postgres and synchronize it with mainstream requires too much efforts and I was not able to do it myself.\n\nI get the feeling that there are probably certain query types or\npatterns where a significant, order-of-magnitude speedup is possible\nwith threads - but yep, I haven't seen those described in detail yet\non the mailing list (but as hinted by my not noticing the github link\npreviously, maybe I'm not following the list closely enough).\n\nWhat workloads did you try with your version of the project?\n\n> There are three different but related directions of improving current Postgres:\n> 1. Replacing processes with threads\n> 2. Builtin connection pooler\n> 3. Lightweight backends (shared catalog/relation/prepared statements caches)\n>\n> The motivation for such changes are also similar:\n> 1. Increase Postgres scalability\n> 2. Reduce memory consumption\n> 3. 
Make Postgres better fir cloud and serverless requirements\n>\n> I am not sure now which one should be addressed first or them can be done together.\n>\n> Replacing static variables with thread-local is the first and may be the easiest step.\n> It requires more or less mechanical changes. More challenging thing is replacing private per-backend data structures\n> with shared ones (caches, file descriptors,...)\n\nThank you. Personally I think that motivation two (reducing memory\nconsumption) -- as long as it can be done without detrimentally\naffecting functionality or correctness, and without making the code\nharder to develop/understand -- could provide benefits for all three\nof the motivating cases (and, in fact, for non-cloud/serverful use\ncases too).\n\nThis is making me wonder about other performance/scalability areas\nthat might not have been considered due to focus on the details of the\nexisting codebase, but I'll save that for another thread and will try\nto learn more first.\n\n\n",
"msg_date": "Thu, 15 Jun 2023 09:41:39 +0100",
"msg_from": "James Addison <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 9:12 AM Konstantin Knizhnik <[email protected]> wrote:\n\n> There are three different but related directions of improving current Postgres:\n> 1. Replacing processes with threads\n\nHere we could likely start with making parallel query multi-threaded.\n\nThis would also remove the big blocker for parallelizing things like\nCREATE TABLE AS SELECT ... where we are currently held bac by the\nrestriction that only the leader process can write.\n\n> 2. Builtin connection pooler\n\nWould be definitely a nice thing to have. And we could even start by\nintegrating a non-threaded pooler like pgbouncer to run as a\npostgresql worker process (or two).\n\n> 3. Lightweight backends (shared catalog/relation/prepared statements caches)\n\nShared prepared statement caches (of course have to be per-user and\nper-database) would give additional benefit of lightweight connection\npoolers not needing to track these. Currently the missing support of\nnamed prepared statements is one of the main hindrances of using\npgbouncer with JDBC in transaction pooling mode (you can use it, but\nhave to turn off automatic statement preparing)\n\n>\n> The motivation for such changes are also similar:\n> 1. Increase Postgres scalability\n> 2. Reduce memory consumption\n> 3. Make Postgres better fit cloud and serverless requirements\n\nThe memory consumption reduction would be a big and clear win for many\nworkloads.\n\nAlso just moving more things in shared memory will also prepare us for\nmove to threaded server (if it will eventually happen)\n\n> I am not sure now which one should be addressed first or them can be done together.\n\nShared caches seem like a guaranteed win at least on memory usage.\nThere could be performance (and complexity) downsides for specific\nworkloads, but they would be the same as for the threaded model, so\nwould also be a good learning opportunity.\n\n> Replacing static variables with thread-local is the first and may be the easiest step.\n\nI think we got our first patch doing this (as part of patches for\nrunning PG threaded on Solaris) quite early in the OSS development ,\ncould have been even in the last century :)\n\n> It requires more or less mechanical changes. More challenging thing is replacing private per-backend data structures\n> with shared ones (caches, file descriptors,...)\n\nIndeed, sharing caches would be also part of the work that is needed\nfor the sharded model, so anyone feeling strongly about moving to\nthreads could start with this :)\n\n---\nHannu\n\n\n",
"msg_date": "Thu, 15 Jun 2023 10:50:31 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 10:41 AM James Addison <[email protected]> wrote:\n>\n> This is making me wonder about other performance/scalability areas\n> that might not have been considered due to focus on the details of the\n> existing codebase, but I'll save that for another thread and will try\n> to learn more first.\n\nA gradual move to more shared structures seems to be a way forward\n\nIt should get us all the benefits of threading minus the need for TLB\nreloading and (in some cases) reduction of per-process virtual memory\nmapping tables.\n\nIn any case we would need to implement all the locking and parallelism\nmanagement of these shared structures that are not there in the\ncurrent process architecture.\n\nSo a fair bit of work but also a clearly defined benefits of\n1) reduced memory usage\n2) no need to rebuild caches for each new connection\n3) no need to track PREPARE statements inside connection poolers.\n\nThere can be extra complexity when different connections use the same\nprepared statement name (say \"PREP001\") for different queries.\nFor this wel likely will need a good cooperation with connection\npooler where it passes some kind of client connection id along at the\ntransaction start\n\n\n",
"msg_date": "Thu, 15 Jun 2023 11:04:20 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "One more unexpected benefit of having shared caches would be easing\naccess to other databases.\n\nIf the system caches are there for all databases anyway, then it\nbecomes much easier to make queries using objects from multiple\ndatabases.\n\nNote that this does not strictly need threads, just shared caches.\n\nOn Thu, Jun 15, 2023 at 11:04 AM Hannu Krosing <[email protected]> wrote:\n>\n> On Thu, Jun 15, 2023 at 10:41 AM James Addison <[email protected]> wrote:\n> >\n> > This is making me wonder about other performance/scalability areas\n> > that might not have been considered due to focus on the details of the\n> > existing codebase, but I'll save that for another thread and will try\n> > to learn more first.\n>\n> A gradual move to more shared structures seems to be a way forward\n>\n> It should get us all the benefits of threading minus the need for TLB\n> reloading and (in some cases) reduction of per-process virtual memory\n> mapping tables.\n>\n> In any case we would need to implement all the locking and parallelism\n> management of these shared structures that are not there in the\n> current process architecture.\n>\n> So a fair bit of work but also a clearly defined benefits of\n> 1) reduced memory usage\n> 2) no need to rebuild caches for each new connection\n> 3) no need to track PREPARE statements inside connection poolers.\n>\n> There can be extra complexity when different connections use the same\n> prepared statement name (say \"PREP001\") for different queries.\n> For this wel likely will need a good cooperation with connection\n> pooler where it passes some kind of client connection id along at the\n> transaction start\n\n\n",
"msg_date": "Thu, 15 Jun 2023 11:07:30 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "\n\nOn 15.06.2023 11:41 AM, James Addison wrote:\n> On Thu, 15 Jun 2023 at 08:12, Konstantin Knizhnik <[email protected]> wrote:\n>>\n>>\n>> On 15.06.2023 1:23 AM, James Addison wrote:\n>>\n>> On Tue, 13 Jun 2023 at 07:55, Konstantin Knizhnik <[email protected]> wrote:\n>>\n>>\n>> On 12.06.2023 3:23 PM, Pavel Borisov wrote:\n>>\n>> Is the following true or not?\n>>\n>> 1. If we switch processes to threads but leave the amount of session\n>> local variables unchanged, there would be hardly any performance gain.\n>> 2. If we move some backend's local variables into shared memory then\n>> the performance gain would be very near to what we get with threads\n>> having equal amount of session-local variables.\n>>\n>> In other words, the overall goal in principle is to gain from less\n>> memory copying wherever it doesn't add the burden of locks for\n>> concurrent variables access?\n>>\n>> Regards,\n>> Pavel Borisov,\n>> Supabase\n>>\n>>\n>> IMHO both statements are not true.\n>> Switching to threads will cause less context switch overhead (because\n>> all threads are sharing the same memory space and so preserve TLB.\n>> How big will be this advantage? In my prototype I got ~10%. But may be\n>> it is possible to fin workloads when it is larger.\n>>\n>> Hi Konstantin - do you have code/links that you can share for the\n>> prototype and benchmarks used to gather those results?\n>>\n>>\n>>\n>> Sorry, I have already shared the link:\n>> https://github.com/postgrespro/postgresql.pthreads/\n> Nope, my mistake for not locating the existing link - thank you.\n>\n> Is there a reason that parser-related files (flex/bison) are added as\n> part of the changeset? (I'm trying to narrow it down to only the\n> changes necessary for the functionality. so far it looks mostly\n> fairly minimal, which is good. the adjustments to progname are\n> another thing that look a bit unusual/maybe unnecessary for the\n> feature)\n\nSorry, absolutely no reason - just my fault.\n\n>> As you can see last commit was 6 years ago when I stopped work on this project.\n>> Why? I already tried to explain it:\n>> - benefits from switching to threads were not so large. May be I just failed to fid proper workload, but is was more or less expected result,\n>> because most of the code was not changed - it uses the same sync primitives, the same local catalog/relation caches,..\n>> To take all advantage of multithreadig model it is necessary to rewrite many components, especially related with interprocess communication.\n>> But maintaining such fork of Postgres and synchronize it with mainstream requires too much efforts and I was not able to do it myself.\n> I get the feeling that there are probably certain query types or\n> patterns where a significant, order-of-magnitude speedup is possible\n> with threads - but yep, I haven't seen those described in detail yet\n> on the mailing list (but as hinted by my not noticing the github link\n> previously, maybe I'm not following the list closely enough).\n>\n> What workloads did you try with your version of the project?\n\nI do not remember now precisely (6 years passed).\nBut definitely I tried pgbench, especially read-only pgbench (to be more \nCPU rather than disk bounded)\n\n\n\n\n",
"msg_date": "Thu, 15 Jun 2023 22:36:30 +0300",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "\n\nOn 15.06.2023 12:04 PM, Hannu Krosing wrote:\n> So a fair bit of work but also a clearly defined benefits of\n> 1) reduced memory usage\n> 2) no need to rebuild caches for each new connection\n> 3) no need to track PREPARE statements inside connection poolers.\n\nShared plan cache (not only prepared statements cache) also opens way to \nmore sophisticated query optimizations.\nRight now we are not performing some optimization (like constant \nexpression folding) just because them increase time of processing normal \nqueries.\nThis is why queries generated by ORMs or wizards, which can contain a \nlot of dumb stuff, are not well simplified by Postgres.\nWith MS-Sql it is quite frequent that query execution time is much \nsmaller than query optimization time.\nHaving shared plan cache allows us to spend more time in optimization \nwithout risk to degrade performance.\n\n\n\n",
"msg_date": "Thu, 15 Jun 2023 22:49:23 +0300",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "I think planner would also benefit from threads. There are many tasks\nin planner that are independent and can be scheduled using dependency\ngraph. They are too small to be parallelized through separate backends\nbut large enough to be performed by threads. Planning queries\ninvolving partitioned tables take longer time (in seconds) esp. when\nthere are thousands of partitions. That kind of planning will get\nimmensely benefited by threading. Of course we can use backends which\ncan pull tasks from queue but sharing the PlannerInfo and its\nsubstructure is easier through the same address space rather than\nshared memory.\n\nOn Sat, Jun 10, 2023 at 5:25 AM Bruce Momjian <[email protected]> wrote:\n>\n> On Wed, Jun 7, 2023 at 06:38:38PM +0530, Ashutosh Bapat wrote:\n> > With multiple processes, we can use all the available cores (at least\n> > theoretically if all those processes are independent). But is that\n> > guaranteed with single process multi-thread model? Google didn't throw\n> > any definitive answer to that. Usually it depends upon the OS and\n> > architecture.\n> >\n> > Maybe a good start is to start using threads instead of parallel\n> > workers e.g. for parallel vacuum, parallel query and so on while\n> > leaving the processes for connections and leaders. that itself might\n> > take significant time. Based on that experience move to a completely\n> > threaded model. Based on my experience with other similar products, I\n> > think we will settle on a multi-process multi-thread model.\n>\n> I think we have a few known problem that we might be able to solve\n> without threads, but can help us eventually move to threads if we find\n> it useful:\n>\n> 1) Use threads for background workers rather than processes\n> 2) Allow sessions to be stopped and started by saving their state\n>\n> Ideally we would solve the problem of making shared structures\n> resizable, but I am not sure how that can be easily done without\n> threads.\n>\n> --\n> Bruce Momjian <[email protected]> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Only you can decide what is important to you.\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 19 Jul 2023 20:16:52 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Hi,\n\nOn 6/7/23 23:37, Andres Freund wrote:\n> I think we're starting to hit quite a few limits related to the process model,\n> particularly on bigger machines. The overhead of cross-process context\n> switches is inherently higher than switching between threads in the same\n> process - and my suspicion is that that overhead will continue to\n> increase. Once you have a significant number of connections we end up spending\n> a *lot* of time in TLB misses, and that's inherent to the process model,\n> because you can't share the TLB across processes.\n\nAnother problem I haven't seen mentioned yet is the excessive kernel \nmemory usage because every process has its own set of page table entries \n(PTEs). Without huge pages the amount of wasted memory can be huge if \nshared buffers are big.\n\nFor example with 256 GiB of used shared buffers a single process needs \nabout 256 MiB for the PTEs (for simplicity I ignored the tree structure \nof the page tables and just took the number of 4k pages times 4 bytes \nper PTE). With 512 connections, which is not uncommon for machines with \nmany cores, a total of 128 GiB of memory is just spent on page tables.\n\nWe used non-transparent huge pages to work around this limitation but \nthey come with plenty of provisioning challenges, especially in cloud \ninfrastructures where different services run next to each other on the \nsame server. Transparent huge pages have unpredictable performance \ndisadvantages. Also if some backends only use shared buffers sparsely, \nmemory is wasted for the remaining, unused range inside the huge page.\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Thu, 27 Jul 2023 15:27:57 +0200",
"msg_from": "David Geier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Thu, 15 Jun 2023 at 11:07, Hannu Krosing <[email protected]> wrote:\n>\n> One more unexpected benefit of having shared caches would be easing\n> access to other databases.\n>\n> If the system caches are there for all databases anyway, then it\n> becomes much easier to make queries using objects from multiple\n> databases.\n\nWe have several optimizations in our visibility code that allow us to\nremove dead tuples from this database when another database still has\na connection that has an old snapshot in which the deleting\ntransaction of this database has not yet committed. This is allowed\nbecause we can say with confidence that other database's connections\nwill never be able to see this database's tables. If we were to allow\ncross-database data access, that would require cross-database snapshot\nvisibility checks, and that would severely hinder these optimizations.\nAs an example, it would increase the work we need to do for snapshots:\nFor the snapshot data of tables that aren't shared catalogs, we only\nneed to consider our own database's backends for visibility. With\ncross-database visibility, we would need to consider all active\nbackends for all snapshots, and this can be significantly more work.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)\n\n\n",
"msg_date": "Fri, 28 Jul 2023 20:10:44 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Thu, Jul 27, 2023 at 8:28 AM David Geier <[email protected]> wrote:\n\n> Hi,\n>\n> On 6/7/23 23:37, Andres Freund wrote:\n> > I think we're starting to hit quite a few limits related to the process\n> model,\n> > particularly on bigger machines. The overhead of cross-process context\n> > switches is inherently higher than switching between threads in the same\n> > process - and my suspicion is that that overhead will continue to\n> > increase. Once you have a significant number of connections we end up\n> spending\n> > a *lot* of time in TLB misses, and that's inherent to the process model,\n> > because you can't share the TLB across processes.\n>\n> Another problem I haven't seen mentioned yet is the excessive kernel\n> memory usage because every process has its own set of page table entries\n> (PTEs). Without huge pages the amount of wasted memory can be huge if\n> shared buffers are big.\n\n\nHm, noted this upthread, but asking again, does this\nhelp/benefit interactions with the operating system make oom kill\nsituations less likely? These things are the bane of my existence, and\nI'm having a hard time finding a solution that prevents them other than\nrunning pgbouncer and lowering max_connections, which adds complexity. I\nsuspect I'm not the only one dealing with this. What's really scary about\nthese situations is they come without warning. Here's a pretty typical\nexample per sar -r.\n\n kbmemfree kbmemused %memused kbbuffers kbcached kbcommit\n%commit kbactive kbinact kbdirty\n 14:20:02 461612 15803476 97.16 0 11120280 12346980\n 60.35 10017820 4806356 220\n 14:30:01 378244 15886844 97.67 0 11239012 12296276\n 60.10 10003540 4909180 240\n 14:40:01 308632 15956456 98.10 0 11329516 12295892\n 60.10 10015044 4981784 200\n 14:50:01 458956 15806132 97.18 0 11383484 12101652\n 59.15 9853612 5019916 112\n 15:00:01 10592736 5672352 34.87 0 4446852 8378324\n 40.95 1602532 3473020 264 <-- reboot!\n 15:10:01 9151160 7113928 43.74 0 5298184 8968316\n 43.83 2714936 3725092 124\n 15:20:01 8629464 7635624 46.94 0 6016936 8777028\n 42.90 2881044 4102888 148\n 15:30:01 8467884 7797204 47.94 0 6285856 8653908\n 42.30 2830572 4323292 436\n 15:40:02 8077480 8187608 50.34 0 6828240 8482972\n 41.46 2885416 4671620 320\n 15:50:01 7683504 8581584 52.76 0 7226132 8511932\n 41.60 2998752 4958880 308\n 16:00:01 7239068 9026020 55.49 0 7649948 8496764\n 41.53 3032140 5358388 232\n 16:10:01 7030208 9234880 56.78 0 7899512 8461588\n 41.36 3108692 5492296 216\n\nTriggering query was heavy (maybe even runaway), server load was minimal\notherwise:\n\n CPU %user %nice %system %iowait %steal\n%idle\n 14:30:01 all 9.55 0.00 0.63 0.02 0.00\n89.81\n\n 14:40:01 all 9.95 0.00 0.69 0.02 0.00\n89.33\n\n 14:50:01 all 10.22 0.00 0.83 0.02 0.00\n88.93\n\n 15:00:01 all 10.62 0.00 1.63 0.76 0.00\n86.99\n\n 15:10:01 all 8.55 0.00 0.72 0.12 0.00\n90.61\n\nThe conjecture here is that lots of idle connections make the server appear\nto have less memory available than it looks, and sudden transient demands\ncan cause it to destabilize.\n\nJust throwing it out there, if it can be shown to help it may be supportive\nof moving forward with something like this, either instead of, or along\nwith, O_DIRECT or other internalized database memory management\nstrategies. 
Lowering context switches, faster page access, etc. are of course nice, but\nwould not be a game changer for the workloads we see, which are pretty\nvaried (OLTP, analytics), although we don't have extremely high\ntransaction rates.\n\nmerlin",
"msg_date": "Fri, 11 Aug 2023 07:05:17 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On Mon, Jun 12, 2023 at 5:17 PM Heikki Linnakangas <[email protected]> wrote:\n\n> On 10/06/2023 21:01, Hannu Krosing wrote:\n> > On Mon, Jun 5, 2023 at 4:52 PM Heikki Linnakangas <[email protected]>\n> wrote:\n>\n> <<<SNIP>>>\n\n>\n> > * The backend code would be more complex.\n> > -- this is still the case\n>\n> I don't quite buy that. A multi-threaded model isn't inherently more\n> complex than a multi-process model. Just different. Sure, the transition\n> period will be more complex, when we need to support both models. But in\n> the long run, if we can remove the multi-process mode, we can make a lot\n> of things *simpler*.\n>\n\nIf I may weigh in here:\nMaking a previously unthreaded process able to handle multiple threads, is\na tedious process.\n\n>\n> > -- even more worrisome is that all extensions also need to be rewritten\n>\n> \"rewritten\" is an exaggeration. Yes, extensions will need adapt, similar\n> to the core code. But I hope it will be pretty mechanical work, marking\n> global variables as thread-local and such. Many extensions will work\n> with little to no changes.\n>\n\nI can tell you from experience it isn't that easy. In my career I have\ntaken a few \"old\" technologies and made them multithreaded and it is really\na complex and laborious undertaking.\nMany operations that you do just fine without threads will break in a\nmultithreaded system. You need to make sure every function in every library\nthat you use is \"thread safe.\" Take a file handle, if you read, seek, or\nwrite a file handle you are fine in a single process, but this breaks in a\nmultithreaded environment if the file handle is shared. That's a very\nsimple example. Openssl operations will almost certainly break and you will\nneed to rewrite your ssl stuff and protect some things with mutexes. When\nyou fork() a lot is essentially duplicated (COW) between the parent and\nchild that will ultimately be shared in a threaded model. Decades old\nassumptions in the design and architecture will break and you will need to\nrethink what you are doing and how it is done. You will need to change file\nhandling to get beyond the 1024 file limit in calls like \"select.\" There is\na LOT of this kind of stuff, it is not mechanical. I even call into\nquestion \"Many extensions will work with little to no changes\" as those too\nwill need to be audited for thread safety. Think about loading extensions,\nextensions are typically not loaded until they are used. In a\nmulti-threaded model, a shared library will only be loaded once. Think\nabout memory management, you will have multiple threads fighting over the\nglobal heap as they allocate memory. The list is virtually endless.\n\n\n>\n> > -- and many incompatibilities will be silent and take potentially years\n> to find\n>\n> IMO this is the most scary part of all this. I'm optimistic that we can\n> have enough compiler support and tooling to catch most issues. But we\n> don't know for sure at this point.\n>\n\nWe absolutely do not know and it *is* very scary.\n\n\n>\n> > * Terminating backend processes allows the OS to cleanly and quickly\n> > free all resources, protecting against memory and file descriptor\n> > leaks and making backend shutdown cheaper and faster\n> > -- still true\n>\n> Yep. I'm not too worried about PostgreSQL code, our memory contexts and\n> resource owners are very good at stopping leaks. But 3rd party libraries\n> could pose hard problems. IIRC we still have a leak with the LLVM JIT\n> code, for example. 
We should fix that anyway, of course, but the\n> multi-process model is more forgiving with leaks like that.\n>\n> Again, we believe that this is true.\n\n\n> --\n> Heikki Linnakangas\n> Neon (https://neon.tech)\n>\n>\n>\n>",
"msg_date": "Wed, 23 Aug 2023 16:42:27 -0400",
"msg_from": "Mark Woodward <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Hi,\n\nOn 8/11/23 14:05, Merlin Moncure wrote:\n> On Thu, Jul 27, 2023 at 8:28 AM David Geier <[email protected]> wrote:\n>\n> Hi,\n>\n> On 6/7/23 23:37, Andres Freund wrote:\n> > I think we're starting to hit quite a few limits related to the\n> process model,\n> > particularly on bigger machines. The overhead of cross-process\n> context\n> > switches is inherently higher than switching between threads in\n> the same\n> > process - and my suspicion is that that overhead will continue to\n> > increase. Once you have a significant number of connections we\n> end up spending\n> > a *lot* of time in TLB misses, and that's inherent to the\n> process model,\n> > because you can't share the TLB across processes.\n>\n> Another problem I haven't seen mentioned yet is the excessive kernel\n> memory usage because every process has its own set of page table\n> entries\n> (PTEs). Without huge pages the amount of wasted memory can be huge if\n> shared buffers are big.\n>\n>\n> Hm, noted this upthread, but asking again, does this \n> help/benefit interactions with the operating system make oom kill \n> situations less likely? These things are the bane of my existence, \n> and I'm having a hard time finding a solution that prevents them other \n> than running pgbouncer and lowering max_connections, which adds \n> complexity. I suspect I'm not the only one dealing with this. \n> What's really scary about these situations is they come without \n> warning. Here's a pretty typical example per sar -r.\n>\n> The conjecture here is that lots of idle connections make the server \n> appear to have less memory available than it looks, and sudden \n> transient demands can cause it to destabilize.\n\nIt does in the sense that your server will have more memory available in \ncase you have many long living connections around. Every connection has \nless kernel memory overhead if you will. Of course even then a runaway \nquery will be able to invoke the OOM killer. The unfortunate thing with \nthe OOM killer is that, in my experience, it often kills the \ncheckpointer. That's because the checkpointer will touch all of shared \nbuffers over time which makes it likely to get selected by the OOM \nkiller. Have you tried disabling memory overcommit?\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Fri, 25 Aug 2023 14:01:23 +0200",
"msg_from": "David Geier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Greetings,\n\n* David Geier ([email protected]) wrote:\n> On 8/11/23 14:05, Merlin Moncure wrote:\n> > Hm, noted this upthread, but asking again, does this\n> > help/benefit interactions with the operating system make oom kill\n> > situations less likely? These things are the bane of my existence, and\n> > I'm having a hard time finding a solution that prevents them other than\n> > running pgbouncer and lowering max_connections, which adds complexity. \n> > I suspect I'm not the only one dealing with this. What's really scary\n> > about these situations is they come without warning. Here's a pretty\n> > typical example per sar -r.\n> > \n> > The conjecture here is that lots of idle connections make the server\n> > appear to have less memory available than it looks, and sudden transient\n> > demands can cause it to destabilize.\n> \n> It does in the sense that your server will have more memory available in\n> case you have many long living connections around. Every connection has less\n> kernel memory overhead if you will. Of course even then a runaway query will\n> be able to invoke the OOM killer. The unfortunate thing with the OOM killer\n> is that, in my experience, it often kills the checkpointer. That's because\n> the checkpointer will touch all of shared buffers over time which makes it\n> likely to get selected by the OOM killer. Have you tried disabling memory\n> overcommit?\n\nThis is getting a bit far afield in terms of this specific thread, but\nthere's an ongoing effort to give PG administrators knobs to be able to\ncontrol how much actual memory is used rather than depending on the\nkernel to actually tell us when we're \"out\" of memory. There'll be new\npatches for the September commitfest posted soon. If you're interested\nin this issue, it'd be great to get more folks involved in review and\ntesting.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 25 Aug 2023 09:35:00 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
}
] |
[
{
"msg_contents": "This patch fixes a rare parsing bug with unicode characters on Mac OS X.\nThe problem is that isspace() on Mac OS X changes its behaviour with the\nlocale. Use scanner_isspace instead, which only returns true for ASCII\nwhitespace. It appears other places in the Postgres code have already run\ninto this, since a number of places use scanner_isspace instead. However,\nthere are still a lot of other calls to isspace(). I'll try to take a quick\nlook to see if there might be other instances of this bug.\n\nThe bug is that in the following hstore value, the unicode character\n\"disappears\", and is replaced with \"key\\xc4\", because it is parsed\nincorrectly:\n\nselect E'keyą=>value'::hstore;\n hstore\n-----------------\n \"keyą\"=>\"value\"\n(1 row)\n\nselect 'keyą=>value'::hstore::text::bytea;\n bytea\n----------------------------------\n \\x226b6579c4223d3e2276616c756522\n(1 row)\n\nThe correct result should be:\n\n hstore\n-----------------\n \"keyą\"=>\"value\"\n(1 row)\n\nThat query is added to the regression test. The query works on Linux, but\nfailed on Mac OS X.\n\nFor a more detailed explanation of how isspace() works, on Mac OS X, see:\nhttps://github.com/evanj/isspace_locale\n\nThanks!\n\nEvan Jones",
"msg_date": "Mon, 5 Jun 2023 11:26:56 -0400",
"msg_from": "Evan Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] hstore: Fix parsing on Mac OS X: isspace() is locale specific"
},
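The bug reported above comes down to isspace() consulting the process locale, so a single byte taken from inside a multi-byte UTF-8 sequence can be misclassified as whitespace on some platforms (macOS in the report). The standalone sketch below is not taken from the patch; it only contrasts the locale-aware call with an ASCII-only test in the spirit of scanner_isspace(), using byte 0x85 (the second byte of U+0105 'ą' in UTF-8) mentioned in the report. Whether the first line prints 1 depends on the platform's libc and the active locale.

```c
/*
 * Illustration only: how a locale-aware isspace() can disagree with an
 * ASCII-only whitespace test for a byte that is part of a UTF-8 sequence.
 * isspace(0x85) may return true on macOS under a UTF-8 locale (per the
 * report above) and false elsewhere; the ASCII-only test never matches it.
 */
#include <ctype.h>
#include <locale.h>
#include <stdbool.h>
#include <stdio.h>

/* ASCII-only test, in the spirit of PostgreSQL's scanner_isspace() */
static bool
ascii_isspace(char ch)
{
	return ch == ' ' || ch == '\t' || ch == '\n' || ch == '\r' || ch == '\f';
}

int
main(void)
{
	unsigned char b = 0x85;		/* second byte of U+0105 'ą' in UTF-8 */

	setlocale(LC_ALL, "");		/* use the environment's locale */

	printf("isspace(0x%02X)       = %d\n", (unsigned) b, isspace(b) != 0);
	printf("ascii_isspace(0x%02X) = %d\n", (unsigned) b, ascii_isspace((char) b));
	return 0;
}
```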
{
"msg_contents": "On Mon, Jun 05, 2023 at 11:26:56AM -0400, Evan Jones wrote:\n> This patch fixes a rare parsing bug with unicode characters on Mac OS X.\n> The problem is that isspace() on Mac OS X changes its behaviour with the\n> locale. Use scanner_isspace instead, which only returns true for ASCII\n> whitespace. It appears other places in the Postgres code have already run\n> into this, since a number of places use scanner_isspace instead. However,\n> there are still a lot of other calls to isspace(). I'll try to take a quick\n> look to see if there might be other instances of this bug.\n\nIndeed. It looks like 9ae2661 missed this spot.\n--\nMichael",
"msg_date": "Tue, 6 Jun 2023 20:37:45 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] hstore: Fix parsing on Mac OS X: isspace() is locale\n specific"
},
{
"msg_contents": "On Tue, Jun 6, 2023 at 7:37 AM Michael Paquier <[email protected]> wrote:\n\n> Indeed. It looks like 9ae2661 missed this spot.\n>\n\nI didn't think to look for a previous fix, thanks for finding this commit\nid!\n\nI did a quick look at the places found with \"git grep isspace\" yesterday. I\nagree with the comment from commit 9ae2661: \"I've left alone isspace()\ncalls in places that aren't really expecting any non-ASCII input\ncharacters, such as float8in().\" There are a number of other calls where I\nthink it would likely be safe, and possibly even a good idea, to replace\nisspace() with scanner_isspace(). However, I couldn't find any where I\ncould cause a bug like the one I hit in hstore parsing.\n\nOriginal mailing list post for commit 9ae2661 in case it is helpful for\nothers: https://www.postgresql.org/message-id/[email protected]\n\nOn Tue, Jun 6, 2023 at 7:37 AM Michael Paquier <[email protected]> wrote:\nIndeed. It looks like 9ae2661 missed this spot.I didn't think to look for a previous fix, thanks for finding this commit id!I did a quick look at the places found with \"git grep isspace\" yesterday. I agree with the comment from commit 9ae2661: \"I've left alone isspace() calls in places that aren't really expecting any non-ASCII input characters, such as float8in().\" There are a number of other calls where I think it would likely be safe, and possibly even a good idea, to replace isspace() with scanner_isspace(). However, I couldn't find any where I could cause a bug like the one I hit in hstore parsing.Original mailing list post for commit 9ae2661 in case it is helpful for others: https://www.postgresql.org/message-id/[email protected]",
"msg_date": "Tue, 6 Jun 2023 10:16:09 -0400",
"msg_from": "Evan Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] hstore: Fix parsing on Mac OS X: isspace() is locale\n specific"
},
{
"msg_contents": "On Tue, Jun 06, 2023 at 10:16:09AM -0400, Evan Jones wrote:\n> I did a quick look at the places found with \"git grep isspace\" yesterday. I\n> agree with the comment from commit 9ae2661: \"I've left alone isspace()\n> calls in places that aren't really expecting any non-ASCII input\n> characters, such as float8in().\" There are a number of other calls where I\n> think it would likely be safe, and possibly even a good idea, to replace\n> isspace() with scanner_isspace(). However, I couldn't find any where I\n> could cause a bug like the one I hit in hstore parsing.\n\nYes, I agree with this feeling. Like 9ae2661, I can't get really\nexcited about plastering more of that, especially if it were for\ntimezone value input or dictionary options. One area with a new\nisspace() since 2017 is multirangetypes.c, but it is just a copy of\nrangetypes.c.\n\n> Original mailing list post for commit 9ae2661 in case it is helpful for\n> others: https://www.postgresql.org/message-id/[email protected]\n\nI have reproduced the original problem reported on macOS 13.4, which\nis close to the top of what's available.\n\nPassing to pg_regress some options to use something else than UTF-8\nleads to a failure in the tests, so we need a split like\nfussyztrmatch to test that:\nREGRESS_OPTS='--encoding=SQL_ASCII --no-locale' make check\n\nAn other error pattern without a split could be found on Windows, as\nof:\n select E'key\\u0105=>value'::hstore;\n- hstore \n------------------\n- \"keyÄ…\"=>\"value\"\n-(1 row)\n-\n+ERROR: character with byte sequence 0xc4 0x85 in encoding \"UTF8\" has\nno equivalent in encoding \"WIN1252\"\n+LINE 1: select E'key\\u0105=>value'::hstore;\n\nWe don't do that for unaccent, actually, leading to similar failures..\nI'll launch a separate thread about that shortly.\n\nWith that fixed, the fix has been applied and backpatched. Thanks for\nthe report, Evan!\n--\nMichael",
"msg_date": "Mon, 12 Jun 2023 09:17:53 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] hstore: Fix parsing on Mac OS X: isspace() is locale\n specific"
},
{
"msg_contents": "Unfortunately I just noticed a possible \"bug\" with this change. The\nscanner_isspace() function only recognizes *five* ASCII space characters: '\n' \\t \\n \\r \\f. It *excludes* VTAB \\v, which the C standard function\nisspace() includes. This means this patch changed the behavior of hstore\nparsing for some \"unusual\" cases where the \\v character was previously\nignored, and now is not, such as: \"select 'k=>\\vv'::hstore\" . It seems\nunlikely to me that anyone would be depending on this. The\napplication/programming language library would need to be explicitly\ndepending on VTAB being ignored as leading/trailing characters for hstore\nkey/values. I am hopeful that most implementations encode hstore values the\nsame way Postgres does: always using quoted strings, which avoids this\nproblem.\n\nHowever, if we think this change could be a problem, one fix would be to\nswitch scanner_isspace() to array_isspace(), which returns true for these\n*six* ASCII characters. I am happy to submit a patch to do this.\n\nHowever, I am now wondering if the fact that scanner_isspace() and\narray_isspace() disagree with each other could be problematic somewhere,\nbut so far I haven found anything.\n\n\nProblematic example before my hstore change:\n\n$ printf \"select 'k=>\\vv'::hstore\" | psql\n hstore\n----------\n \"k\"=>\"v\"\n(1 row)\n\nSame example after my hstore change on postgres master commit a14e75eb0b\nfrom 2023-06-16:\n\n$ printf \"select 'k=>\\vv'::hstore\" | psql\n hstore\n--------------\n \"k\"=>\"\\x0Bv\"\n(1 row)\n\n\n\n\nOn Sun, Jun 11, 2023 at 8:18 PM Michael Paquier <[email protected]> wrote:\n\n> On Tue, Jun 06, 2023 at 10:16:09AM -0400, Evan Jones wrote:\n> > I did a quick look at the places found with \"git grep isspace\"\n> yesterday. I\n> > agree with the comment from commit 9ae2661: \"I've left alone isspace()\n> > calls in places that aren't really expecting any non-ASCII input\n> > characters, such as float8in().\" There are a number of other calls where\n> I\n> > think it would likely be safe, and possibly even a good idea, to replace\n> > isspace() with scanner_isspace(). However, I couldn't find any where I\n> > could cause a bug like the one I hit in hstore parsing.\n>\n> Yes, I agree with this feeling. Like 9ae2661, I can't get really\n> excited about plastering more of that, especially if it were for\n> timezone value input or dictionary options. 
One area with a new\n> isspace() since 2017 is multirangetypes.c, but it is just a copy of\n> rangetypes.c.\n>\n> > Original mailing list post for commit 9ae2661 in case it is helpful for\n> > others:\n> https://www.postgresql.org/message-id/[email protected]\n>\n> I have reproduced the original problem reported on macOS 13.4, which\n> is close to the top of what's available.\n>\n> Passing to pg_regress some options to use something else than UTF-8\n> leads to a failure in the tests, so we need a split like\n> fussyztrmatch to test that:\n> REGRESS_OPTS='--encoding=SQL_ASCII --no-locale' make check\n>\n> An other error pattern without a split could be found on Windows, as\n> of:\n> select E'key\\u0105=>value'::hstore;\n> - hstore\n> ------------------\n> - \"keyÄ…\"=>\"value\"\n> -(1 row)\n> -\n> +ERROR: character with byte sequence 0xc4 0x85 in encoding \"UTF8\" has\n> no equivalent in encoding \"WIN1252\"\n> +LINE 1: select E'key\\u0105=>value'::hstore;\n>\n> We don't do that for unaccent, actually, leading to similar failures..\n> I'll launch a separate thread about that shortly.\n>\n> With that fixed, the fix has been applied and backpatched. Thanks for\n> the report, Evan!\n> --\n> Michael\n>\n\nUnfortunately I just noticed a possible \"bug\" with this change. The scanner_isspace() function only recognizes *five* ASCII space characters: ' ' \\t \\n \\r \\f. It *excludes* VTAB \\v, which the C standard function isspace() includes. This means this patch changed the behavior of hstore parsing for some \"unusual\" cases where the \\v character was previously ignored, and now is not, such as: \"select 'k=>\\vv'::hstore\" . It seems unlikely to me that anyone would be depending on this. The application/programming language library would need to be explicitly depending on VTAB being ignored as leading/trailing characters for hstore key/values. I am hopeful that most implementations encode hstore values the same way Postgres does: always using quoted strings, which avoids this problem.However, if we think this change could be a problem, one fix would be to switch scanner_isspace() to array_isspace(), which returns true for these *six* ASCII characters. I am happy to submit a patch to do this.However, I am now wondering if the fact that scanner_isspace() and array_isspace() disagree with each other could be problematic somewhere, but so far I haven found anything. Problematic example before my hstore change:$ printf \"select 'k=>\\vv'::hstore\" | psql hstore ---------- \"k\"=>\"v\"(1 row)Same example after my hstore change on postgres master commit a14e75eb0b from 2023-06-16:$ printf \"select 'k=>\\vv'::hstore\" | psql hstore -------------- \"k\"=>\"\\x0Bv\"(1 row)On Sun, Jun 11, 2023 at 8:18 PM Michael Paquier <[email protected]> wrote:On Tue, Jun 06, 2023 at 10:16:09AM -0400, Evan Jones wrote:\n> I did a quick look at the places found with \"git grep isspace\" yesterday. I\n> agree with the comment from commit 9ae2661: \"I've left alone isspace()\n> calls in places that aren't really expecting any non-ASCII input\n> characters, such as float8in().\" There are a number of other calls where I\n> think it would likely be safe, and possibly even a good idea, to replace\n> isspace() with scanner_isspace(). However, I couldn't find any where I\n> could cause a bug like the one I hit in hstore parsing.\n\nYes, I agree with this feeling. Like 9ae2661, I can't get really\nexcited about plastering more of that, especially if it were for\ntimezone value input or dictionary options. 
One area with a new\nisspace() since 2017 is multirangetypes.c, but it is just a copy of\nrangetypes.c.\n\n> Original mailing list post for commit 9ae2661 in case it is helpful for\n> others: https://www.postgresql.org/message-id/[email protected]\n\nI have reproduced the original problem reported on macOS 13.4, which\nis close to the top of what's available.\n\nPassing to pg_regress some options to use something else than UTF-8\nleads to a failure in the tests, so we need a split like\nfussyztrmatch to test that:\nREGRESS_OPTS='--encoding=SQL_ASCII --no-locale' make check\n\nAn other error pattern without a split could be found on Windows, as\nof:\n select E'key\\u0105=>value'::hstore;\n- hstore \n------------------\n- \"keyÄ…\"=>\"value\"\n-(1 row)\n-\n+ERROR: character with byte sequence 0xc4 0x85 in encoding \"UTF8\" has\nno equivalent in encoding \"WIN1252\"\n+LINE 1: select E'key\\u0105=>value'::hstore;\n\nWe don't do that for unaccent, actually, leading to similar failures..\nI'll launch a separate thread about that shortly.\n\nWith that fixed, the fix has been applied and backpatched. Thanks for\nthe report, Evan!\n--\nMichael",
"msg_date": "Sat, 17 Jun 2023 10:57:05 -0400",
"msg_from": "Evan Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] hstore: Fix parsing on Mac OS X: isspace() is locale\n specific"
},
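To make the \v point above easy to see at a glance: the two helpers discussed differ only in whether vertical tab counts as whitespace. The sketch below paraphrases scanner_isspace() (src/backend/parser/scansup.c) and the static array_isspace() in arrayfuncs.c; it is a simplified illustration, not a verbatim copy of either function.

```c
/*
 * Simplified paraphrase of the two whitespace tests discussed above.
 * The scanner-style test mirrors scan.l's {space} rule and does NOT
 * include '\v'; the array-style test does.  Not verbatim copies.
 */
#include <stdbool.h>
#include <stdio.h>

/* ' ', '\t', '\n', '\r', '\f' -- no vertical tab */
static bool
scanner_isspace_like(char ch)
{
	return ch == ' ' || ch == '\t' || ch == '\n' || ch == '\r' || ch == '\f';
}

/* same set plus '\v' */
static bool
array_isspace_like(char ch)
{
	return ch == ' ' || ch == '\t' || ch == '\n' || ch == '\r' ||
		   ch == '\v' || ch == '\f';
}

int
main(void)
{
	printf("'\\v': scanner-style=%d array-style=%d\n",
		   scanner_isspace_like('\v'), array_isspace_like('\v'));
	return 0;
}
```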
{
"msg_contents": "On Sat, Jun 17, 2023 at 10:57:05AM -0400, Evan Jones wrote:\n> However, if we think this change could be a problem, one fix would be to\n> switch scanner_isspace() to array_isspace(), which returns true for these\n> *six* ASCII characters. I am happy to submit a patch to do this.\n\nThe difference between scanner_isspace() and array_isspace() is that\nthe former matches with what scan.l stores as rules for whitespace\ncharacters, but the latter works on values. For hstore, we want the\nlatter, with something that works on values. To keep the change\nlocale to hstore, I think that we should just introduce an\nhstore_isspace() which is a copy of array_isspace. That's a\nduplication, sure, but I think that we may want to think harder about\n\\v in the flex scanner, and that's just a few extra lines for \nsomething that has not changed in 13 years for arrays. That's also\neasier to think about for stable branches. If you can send a patch,\nthat helps a lot, for sure!\n\nWorth noting that the array part has been changed in 2010, with\n95cacd1, for the same reason as what you've proposed for hstore.\nThread is here, and it does not mention our flex rules, either:\nhttps://www.postgresql.org/message-id/[email protected]\n\nPerhaps we could consider \\v as a whitespace in the flex scanner\nitself, but I am scared to do that in any stable branch. Perhaps\nwe could consider that for HEAD in 17~? That's a lot to work around\nan old BSD bug that macOS has inherited, though.\n--\nMichael",
"msg_date": "Sun, 18 Jun 2023 10:50:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] hstore: Fix parsing on Mac OS X: isspace() is locale\n specific"
},
{
"msg_contents": "On Sun, Jun 18, 2023 at 10:50:16AM +0900, Michael Paquier wrote:\n> The difference between scanner_isspace() and array_isspace() is that\n> the former matches with what scan.l stores as rules for whitespace\n> characters, but the latter works on values. For hstore, we want the\n> latter, with something that works on values. To keep the change\n> locale to hstore, I think that we should just introduce an\n> hstore_isspace() which is a copy of array_isspace. That's a\n> duplication, sure, but I think that we may want to think harder about\n> \\v in the flex scanner, and that's just a few extra lines for \n> something that has not changed in 13 years for arrays. That's also\n> easier to think about for stable branches. If you can send a patch,\n> that helps a lot, for sure!\n\nAt the end, no need to do that. I have been able to hack the\nattached, that shows the difference of treatment for \\v when running\nin macOS. Evan, what do you think?\n--\nMichael",
"msg_date": "Sun, 18 Jun 2023 17:32:15 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] hstore: Fix parsing on Mac OS X: isspace() is locale\n specific"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> At the end, no need to do that. I have been able to hack the\n> attached, that shows the difference of treatment for \\v when running\n> in macOS. Evan, what do you think?\n\nFWIW, I think the status quo is fine. Having hstore do something that\nis neither its historical behavior nor aligned with the core parser\ndoesn't seem like a great idea. I don't buy this argument that\nsomebody might be depending on the handling of \\v in particular. It's\nnot any stronger than the argument that they might be depending on,\nsay, recognizing no-break space (0xA0) in LATIN1, which the old code\ndid (probably, depending on platform) and scanner_isspace will not.\n\nIf anything, the answer for these concerns is that d522b05c8\nshould not have been back-patched. But I'm okay with where we are.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 18 Jun 2023 12:38:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] hstore: Fix parsing on Mac OS X: isspace() is locale\n specific"
},
{
"msg_contents": "On Sun, Jun 18, 2023 at 12:38:12PM -0400, Tom Lane wrote:\n> FWIW, I think the status quo is fine. Having hstore do something that\n> is neither its historical behavior nor aligned with the core parser\n> doesn't seem like a great idea.\n\nOkay. Fine by me.\n\n> I don't buy this argument that\n> somebody might be depending on the handling of \\v in particular. It's\n> not any stronger than the argument that they might be depending on,\n> say, recognizing no-break space (0xA0) in LATIN1, which the old code\n> did (probably, depending on platform) and scanner_isspace will not.\n\nAnother thing that I was wondering, though.. Do you think that there\nwould be an argument in being stricter in the hstore code regarding\nthe handling of multi-byte characters with some checks based on\nIS_HIGHBIT_SET() when parsing the keys and values?\n--\nMichael",
"msg_date": "Mon, 19 Jun 2023 08:28:43 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] hstore: Fix parsing on Mac OS X: isspace() is locale\n specific"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> Another thing that I was wondering, though.. Do you think that there\n> would be an argument in being stricter in the hstore code regarding\n> the handling of multi-byte characters with some checks based on\n> IS_HIGHBIT_SET() when parsing the keys and values?\n\nWhat have you got in mind? We should already have validated encoding\ncorrectness before the text ever gets to hstore_in, and I'm not clear\nwhat additional checks would be useful.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 18 Jun 2023 21:10:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] hstore: Fix parsing on Mac OS X: isspace() is locale\n specific"
},
{
"msg_contents": "On Sun, Jun 18, 2023 at 09:10:59PM -0400, Tom Lane wrote:\n> What have you got in mind? We should already have validated encoding\n> correctness before the text ever gets to hstore_in, and I'm not clear\n> what additional checks would be useful.\n\nI was staring at the hstore parsing code and got the impression that\nmulti-byte character handling could be improved, but looking closer it\nseems that I got that wrong. Apologies for the noise.\n--\nMichael",
"msg_date": "Tue, 20 Jun 2023 15:02:02 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] hstore: Fix parsing on Mac OS X: isspace() is locale\n specific"
},
{
"msg_contents": "Thanks for the detailed discussion. To confirm that I've understood\neverything:\n\n* Michael's proposed patch to add hstore_isspace() would be a potential\nfix: it resolves my original bug, and does not change the behavior of '\\v'.\n* We believe the change to '\\v' is not a problem, and may be an improvement\nbecause it now follows the \"core\" Postgres parser.\n\nIn conclusion: we don't need to make an additional change. Thank you all\nfor investigating!\n\nMy one last suggestion: We *could* revert the backpatching if we are\nconcerned about this change, but I'm not personally sure that is necessary.\nAs we discussed, this is an unusual corner case in an \"extension\" type that\nmany won't even have enabled.\n\nEvan\n\n\nOn Tue, Jun 20, 2023 at 2:02 AM Michael Paquier <[email protected]> wrote:\n\n> On Sun, Jun 18, 2023 at 09:10:59PM -0400, Tom Lane wrote:\n> > What have you got in mind? We should already have validated encoding\n> > correctness before the text ever gets to hstore_in, and I'm not clear\n> > what additional checks would be useful.\n>\n> I was staring at the hstore parsing code and got the impression that\n> multi-byte character handling could be improved, but looking closer it\n> seems that I got that wrong. Apologies for the noise.\n> --\n> Michael\n>\n\nThanks for the detailed discussion. To confirm that I've understood everything:* Michael's proposed patch to add hstore_isspace() would be a potential fix: it resolves my original bug, and does not change the behavior of '\\v'.* We believe the change to '\\v' is not a problem, and may be an improvement because it now follows the \"core\" Postgres parser.In conclusion: we don't need to make an additional change. Thank you all for investigating!My one last suggestion: We *could* revert the backpatching if we are concerned about this change, but I'm not personally sure that is necessary. As we discussed, this is an unusual corner case in an \"extension\" type that many won't even have enabled.EvanOn Tue, Jun 20, 2023 at 2:02 AM Michael Paquier <[email protected]> wrote:On Sun, Jun 18, 2023 at 09:10:59PM -0400, Tom Lane wrote:\n> What have you got in mind? We should already have validated encoding\n> correctness before the text ever gets to hstore_in, and I'm not clear\n> what additional checks would be useful.\n\nI was staring at the hstore parsing code and got the impression that\nmulti-byte character handling could be improved, but looking closer it\nseems that I got that wrong. Apologies for the noise.\n--\nMichael",
"msg_date": "Tue, 20 Jun 2023 09:04:26 -0400",
"msg_from": "Evan Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] hstore: Fix parsing on Mac OS X: isspace() is locale\n specific"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 09:04:26AM -0400, Evan Jones wrote:\n> My one last suggestion: We *could* revert the backpatching if we are\n> concerned about this change, but I'm not personally sure that is necessary.\n> As we discussed, this is an unusual corner case in an \"extension\" type that\n> many won't even have enabled.\n\nAs a whole, I'd like to think that this is an improvement even for\nstable branches with these weird isspace() handlings, so I'm OK with\nthe current status in all the branches. There's an argument about \\v,\nIMO, but I won't fight hard for it either even if it would be more\nconsistent with the way array values are handled.\n--\nMichael",
"msg_date": "Wed, 21 Jun 2023 09:02:35 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] hstore: Fix parsing on Mac OS X: isspace() is locale\n specific"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> As a whole, I'd like to think that this is an improvement even for\n> stable branches with these weird isspace() handlings, so I'm OK with\n> the current status in all the branches.\n\nSounds like we're all content with that.\n\n> There's an argument about \\v,\n> IMO, but I won't fight hard for it either even if it would be more\n> consistent with the way array values are handled.\n\nI'd be okay with adding \\v to the set of whitespace characters in\nscan.l and scanner_isspace (and other affected places) for v17.\nDon't want to back-patch it though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Jun 2023 23:39:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] hstore: Fix parsing on Mac OS X: isspace() is locale\n specific"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 11:39:31PM -0400, Tom Lane wrote:\n> I'd be okay with adding \\v to the set of whitespace characters in\n> scan.l and scanner_isspace (and other affected places) for v17.\n> Don't want to back-patch it though.\n\nOkay. No idea where this will lead, but for now I have sent a patch\nthat adds \\v to the parser paths where it would be needed, as far as I\nchecked:\nhttps://www.postgresql.org/message-id/[email protected]\n--\nMichael",
"msg_date": "Wed, 21 Jun 2023 15:49:06 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] hstore: Fix parsing on Mac OS X: isspace() is locale\n specific"
},
{
"msg_contents": "FTR I ran into a benign case of the phenomenon in this thread when\ndealing with row types. In rowtypes.c, we double-quote stuff\ncontaining spaces, but we detect them by passing individual bytes of\nUTF-8 sequences to isspace(). Like macOS, Windows thinks that 0xa0 is\na space when you do that, so for example the Korean character '점'\n(code point C810, UTF-8 sequence EC A0 90) gets quotes on Windows but\nnot on Linux. That confused a migration/diff tool while comparing\nWindows and Linux database servers using that representation. Not a\nbig deal, I guess no one ever promised that the format was stable\nacross platforms, and I don't immediately see a way for anything more\nserious to go wrong (though I may lack imagination). It does seem a\nbit weird to be using locale-aware tokenising for a machine-readable\nformat, and then making sure its behaviour is undefined by feeding it\nchopped up bytes.\n\n\n",
"msg_date": "Tue, 10 Oct 2023 16:17:57 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] hstore: Fix parsing on Mac OS X: isspace() is locale\n specific"
},
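To make the rowtypes.c observation above concrete: the needs-quoting decision walks the output value byte by byte and asks the locale-aware isspace() about each byte, so a byte such as 0xA0 inside a UTF-8 sequence can flip the answer on platforms whose libc classifies it as a space. The sketch below shows only the general shape of that decision; it is not the actual record_out() code, and the quoting character set is paraphrased.

```c
/*
 * Illustration of the quoting decision discussed above: scanning a value
 * byte by byte with a locale-aware isspace().  Bytes like 0xA0, which occur
 * inside multi-byte UTF-8 sequences, may or may not count as "space"
 * depending on the platform's libc and locale -- which is why the same row
 * value can come out quoted on one OS and unquoted on another.
 * (Not the actual record_out() code; just the general shape.)
 */
#include <ctype.h>
#include <locale.h>
#include <stdbool.h>
#include <stdio.h>

static bool
needs_quoting(const char *value)
{
	const char *p;

	if (value[0] == '\0')
		return true;			/* empty string must be quoted */
	for (p = value; *p; p++)
	{
		char		ch = *p;

		if (ch == '"' || ch == '\\' || ch == '(' || ch == ')' ||
			ch == ',' || isspace((unsigned char) ch))
			return true;
	}
	return false;
}

int
main(void)
{
	setlocale(LC_ALL, "");
	/* UTF-8 for U+C810 ('점') is EC A0 90 -- the middle byte is 0xA0 */
	printf("needs_quoting = %d\n", needs_quoting("\xEC\xA0\x90"));
	return 0;
}
```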
{
"msg_contents": "Thanks for bringing this up! I just looked at the uses if isspace() in that\nfile. It looks like it is the usual thing: it is allowing leading or\ntrailing whitespace when parsing values, or for this \"needs quoting\" logic\non output. The fix would be the same: this *should* be\nusing scanner_isspace. This has the same disadvantage: it would change\nPostgres's results for some inputs that contain these non-ASCII \"space\"\ncharacters.\n\n\nHere is a quick demonstration of this issue, showing that the quoting\nbehavior is different between these two. Mac OS X with the \"default\" locale\nincludes quotes because ą includes 0x85 in its UTF-8 encoding:\n\npostgres=# SELECT ROW('keyą');\n row\n----------\n (\"keyą\")\n(1 row)\n\nOn Mac OS X with the LANG=C environment variable set, it does not include\nquotes:\n\npostgres=# SELECT ROW('keyą');\n row\n--------\n (keyą)\n(1 row)\n\n\nOn Mon, Oct 9, 2023 at 11:18 PM Thomas Munro <[email protected]> wrote:\n\n> FTR I ran into a benign case of the phenomenon in this thread when\n> dealing with row types. In rowtypes.c, we double-quote stuff\n> containing spaces, but we detect them by passing individual bytes of\n> UTF-8 sequences to isspace(). Like macOS, Windows thinks that 0xa0 is\n> a space when you do that, so for example the Korean character '점'\n> (code point C810, UTF-8 sequence EC A0 90) gets quotes on Windows but\n> not on Linux. That confused a migration/diff tool while comparing\n> Windows and Linux database servers using that representation. Not a\n> big deal, I guess no one ever promised that the format was stable\n> across platforms, and I don't immediately see a way for anything more\n> serious to go wrong (though I may lack imagination). It does seem a\n> bit weird to be using locale-aware tokenising for a machine-readable\n> format, and then making sure its behaviour is undefined by feeding it\n> chopped up bytes.\n>\n\nThanks for bringing this up! I just looked at the uses if isspace() in that file. It looks like it is the usual thing: it is allowing leading or trailing whitespace when parsing values, or for this \"needs quoting\" logic on output. The fix would be the same: this *should* be using scanner_isspace. This has the same disadvantage: it would change Postgres's results for some inputs that contain these non-ASCII \"space\" characters.Here is a quick demonstration of this issue, showing that the quoting behavior is different between these two. Mac OS X with the \"default\" locale includes quotes because ą includes 0x85 in its UTF-8 encoding:postgres=# SELECT ROW('keyą'); row ---------- (\"keyą\")(1 row)On Mac OS X with the LANG=C environment variable set, it does not include quotes:postgres=# SELECT ROW('keyą'); row -------- (keyą)(1 row) On Mon, Oct 9, 2023 at 11:18 PM Thomas Munro <[email protected]> wrote:FTR I ran into a benign case of the phenomenon in this thread when\ndealing with row types. In rowtypes.c, we double-quote stuff\ncontaining spaces, but we detect them by passing individual bytes of\nUTF-8 sequences to isspace(). Like macOS, Windows thinks that 0xa0 is\na space when you do that, so for example the Korean character '점'\n(code point C810, UTF-8 sequence EC A0 90) gets quotes on Windows but\nnot on Linux. That confused a migration/diff tool while comparing\nWindows and Linux database servers using that representation. 
Not a\nbig deal, I guess no one ever promised that the format was stable\nacross platforms, and I don't immediately see a way for anything more\nserious to go wrong (though I may lack imagination). It does seem a\nbit weird to be using locale-aware tokenising for a machine-readable\nformat, and then making sure its behaviour is undefined by feeding it\nchopped up bytes.",
"msg_date": "Tue, 10 Oct 2023 10:51:10 -0400",
"msg_from": "Evan Jones <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] hstore: Fix parsing on Mac OS X: isspace() is locale\n specific"
},
{
"msg_contents": "On Tue, Oct 10, 2023 at 10:51:10AM -0400, Evan Jones wrote:\n> Here is a quick demonstration of this issue, showing that the quoting\n> behavior is different between these two. Mac OS X with the \"default\" locale\n> includes quotes because ą includes 0x85 in its UTF-8 encoding:\n\nUgh. rowtypes.c has reminded me as well of gistfuncs.c in pageinspect\nwhere included columns are printed in a ROW-like fashion. And it also\nuses isspace() when we check if double quotes are needed or not. So\nthe use of the quotes would equally depend on what macos thinks is\na correct space in this case.\n--\nMichael",
"msg_date": "Wed, 11 Oct 2023 08:34:33 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] hstore: Fix parsing on Mac OS X: isspace() is locale\n specific"
}
] |
[
{
"msg_contents": "Everyone,\n After recently deep diving on some readline features and optimizing my\nbash environment to have a static set of \"snippets\" that I can always\nfind...\n\n it takes just a couple of history API calls to add some interesting\nfeatures for those that want them. The result of adding 3-4 such commands\n(all under \\history, and with compatible flags):\n\n- Saving your current history without exiting (currently doable as \\s\n:HISTFILE)\n- Reloading your history file (so you can easily share something across\nsessions) w/o exiting.\n- Stack Loading of specific history (like a shared snippets library, and a\npersonal snippets library) [clearing your history, then loading them in a\ncustom order]\n\n The upside is really about clearly identifying and sharing permanent\nsnippets, while having that list be editable externally. Again, bringing\nteams online who don't always know the PG way of doing things (Waits,\nLocks, Space, High CPU queries, Running Queries).\n\n My intention is to leverage the way PSQL keeps the Comment above the SQL\nwith the SQL.\nThen I can step backwards searching for \"context\" markers (Ctrl-R) or\n-- <CONTEXT> [F8] {history-search-backward}\n\n To rip through my snippets\n\nKirk...\nPS: I could do all of this under \\s [options] [filename] it's just less\nclear...\n\nEveryone, After recently deep diving on some readline features and optimizing my bash environment to have a static set of \"snippets\" that I can always find... it takes just a couple of history API calls to add some interesting features for those that want them. The result of adding 3-4 such commands (all under \\history, and with compatible flags):- Saving your current history without exiting (currently doable as \\s :HISTFILE)- Reloading your history file (so you can easily share something across sessions) w/o exiting.- Stack Loading of specific history (like a shared snippets library, and a personal snippets library) [clearing your history, then loading them in a custom order] The upside is really about clearly identifying and sharing permanent snippets, while having that list be editable externally. Again, bringing teams online who don't always know the PG way of doing things (Waits, Locks, Space, High CPU queries, Running Queries). My intention is to leverage the way PSQL keeps the Comment above the SQL with the SQL.Then I can step backwards searching for \"context\" markers (Ctrl-R) or-- <CONTEXT> [F8] {history-search-backward} To rip through my snippetsKirk...PS: I could do all of this under \\s [options] [filename] it's just less clear...",
"msg_date": "Mon, 5 Jun 2023 11:50:04 -0400",
"msg_from": "Kirk Wolak <[email protected]>",
"msg_from_op": true,
"msg_subject": "RFC: Adding \\history [options] [filename] to psql (Snippets and\n Shared Queries)"
},
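Since the proposal above leans on readline's history API, here is a minimal sketch of the kinds of calls involved (using_history, clear_history, read_history, add_history, write_history from GNU readline). It is not psql code: the file names, the load order, and the sample entry are invented purely to illustrate the "save without exiting", "reload", and "stack load" ideas from the message.

```c
/*
 * Hedged sketch only: the GNU readline history calls that a
 * save/reload/stack-load feature like the one proposed above would sit on.
 * File names and contents are made up; build with -lreadline.
 */
#include <stdio.h>
#include <readline/history.h>

int
main(void)
{
	using_history();

	/* "Stack load": clear, then load shared snippets before personal ones */
	clear_history();
	read_history("shared_snippets.sql");	/* hypothetical snippet library */
	read_history("personal_snippets.sql");	/* hypothetical snippet library */

	/* ... an interactive session would add entries here ... */
	add_history("-- LOCKS\nSELECT * FROM pg_locks;");

	/* "Save your current history without exiting" */
	if (write_history(".psql_history") != 0)
		perror("write_history");
	return 0;
}
```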
{
"msg_contents": "On Mon, Jun 5, 2023 at 8:58 AM Kirk Wolak <[email protected]> wrote:\n>\n> Everyone,\n> After recently deep diving on some readline features and optimizing my bash environment to have a static set of \"snippets\" that I can always find...\n>\n> it takes just a couple of history API calls to add some interesting features for those that want them. The result of adding 3-4 such commands (all under \\history, and with compatible flags):\n>\n> - Saving your current history without exiting (currently doable as \\s :HISTFILE)\n> - Reloading your history file (so you can easily share something across sessions) w/o exiting.\n> - Stack Loading of specific history (like a shared snippets library, and a personal snippets library) [clearing your history, then loading them in a custom order]\n>\n> The upside is really about clearly identifying and sharing permanent snippets, while having that list be editable externally. Again, bringing teams online who don't always know the PG way of doing things (Waits, Locks, Space, High CPU queries, Running Queries).\n>\n> My intention is to leverage the way PSQL keeps the Comment above the SQL with the SQL.\n> Then I can step backwards searching for \"context\" markers (Ctrl-R) or\n> -- <CONTEXT> [F8] {history-search-backward}\n>\n> To rip through my snippets\n>\n> Kirk...\n> PS: I could do all of this under \\s [options] [filename] it's just less clear...\n\nUnderstandably, there doesn't seem to be a lot of enthusiasm for this.\nIf you could show others a sample/demo session of what the UI and UX\nwould look like, maybe others can chime in with either their opinion\nof the behaviour, or perhaps a better/different way of achieving that.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Mon, 12 Jun 2023 22:58:54 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RFC: Adding \\history [options] [filename] to psql (Snippets and\n Shared Queries)"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 1:59 AM Gurjeet Singh <[email protected]> wrote:\n\n> On Mon, Jun 5, 2023 at 8:58 AM Kirk Wolak <[email protected]> wrote:\n> >\n> > Everyone,\n> > After recently deep diving on some readline features and optimizing my\n> bash environment to have a static set of \"snippets\" that I can always\n> find...\n> >\n> > it takes just a couple of history API calls to add some interesting\n> features for those that want them. The result of adding 3-4 such commands\n> (all under \\history, and with compatible flags):\n> >\n> > - Saving your current history without exiting (currently doable as \\s\n> :HISTFILE)\n> > - Reloading your history file (so you can easily share something across\n> sessions) w/o exiting.\n> > - Stack Loading of specific history (like a shared snippets library, and\n> a personal snippets library) [clearing your history, then loading them in a\n> custom order]\n> >\n> > The upside is really about clearly identifying and sharing permanent\n> snippets, while having that list be editable externally. Again, bringing\n> teams online who don't always know the PG way of doing things (Waits,\n> Locks, Space, High CPU queries, Running Queries).\n> >\n> > My intention is to leverage the way PSQL keeps the Comment above the\n> SQL with the SQL.\n> > Then I can step backwards searching for \"context\" markers (Ctrl-R) or\n> > -- <CONTEXT> [F8] {history-search-backward}\n> >\n> > To rip through my snippets\n> >\n> > Kirk...\n> > PS: I could do all of this under \\s [options] [filename] it's just less\n> clear...\n>\n> Understandably, there doesn't seem to be a lot of enthusiasm for this.\n> If you could show others a sample/demo session of what the UI and UX\n> would look like, maybe others can chime in with either their opinion\n> of the behaviour, or perhaps a better/different way of achieving that.\n>\n> Gurjeet,\n I agree. I've decided to do an implementation, and then explain its\nusage. There are 2-3 different use cases.\nLike pasting a huge script of one liners into \\e and then executing them.\nBut not wanting them in your history.\n\\s -c -- Clear the history\n\\s -r :HISTFILE\n\n The magic I want for snippets is (inside .psqlrc):\n\\s -r snippets.sql\nand then let the normal histfile load.\nor use\n\\s -r :HISTFILE and force it to load.\n\n Then the hard examples and the invocation will make more sense.\n\nThanks for the feedback!\n\n\n\n\n> Best regards,\n> Gurjeet\n> http://Gurje.et\n>\n\nOn Tue, Jun 13, 2023 at 1:59 AM Gurjeet Singh <[email protected]> wrote:On Mon, Jun 5, 2023 at 8:58 AM Kirk Wolak <[email protected]> wrote:\n>\n> Everyone,\n> After recently deep diving on some readline features and optimizing my bash environment to have a static set of \"snippets\" that I can always find...\n>\n> it takes just a couple of history API calls to add some interesting features for those that want them. The result of adding 3-4 such commands (all under \\history, and with compatible flags):\n>\n> - Saving your current history without exiting (currently doable as \\s :HISTFILE)\n> - Reloading your history file (so you can easily share something across sessions) w/o exiting.\n> - Stack Loading of specific history (like a shared snippets library, and a personal snippets library) [clearing your history, then loading them in a custom order]\n>\n> The upside is really about clearly identifying and sharing permanent snippets, while having that list be editable externally. 
Again, bringing teams online who don't always know the PG way of doing things (Waits, Locks, Space, High CPU queries, Running Queries).\n>\n> My intention is to leverage the way PSQL keeps the Comment above the SQL with the SQL.\n> Then I can step backwards searching for \"context\" markers (Ctrl-R) or\n> -- <CONTEXT> [F8] {history-search-backward}\n>\n> To rip through my snippets\n>\n> Kirk...\n> PS: I could do all of this under \\s [options] [filename] it's just less clear...\n\nUnderstandably, there doesn't seem to be a lot of enthusiasm for this.\nIf you could show others a sample/demo session of what the UI and UX\nwould look like, maybe others can chime in with either their opinion\nof the behaviour, or perhaps a better/different way of achieving that.\nGurjeet, I agree. I've decided to do an implementation, and then explain its usage. There are 2-3 different use cases.Like pasting a huge script of one liners into \\e and then executing them. But not wanting them in your history.\\s -c -- Clear the history\\s -r :HISTFILE The magic I want for snippets is (inside .psqlrc):\\s -r snippets.sqland then let the normal histfile load.or use\\s -r :HISTFILE and force it to load. Then the hard examples and the invocation will make more sense.Thanks for the feedback! \nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Sun, 25 Jun 2023 23:26:38 -0400",
"msg_from": "Kirk Wolak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RFC: Adding \\history [options] [filename] to psql (Snippets and\n Shared Queries)"
}
] |
[
{
"msg_contents": "On 05/06/2023 11:18, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka(at)iki(dot)fi> writes:\n>> I spoke with some folks at PGCon about making PostgreSQL multi-threaded,\n>> so that the whole server runs in a single process, with multiple\n>> threads. It has been discussed many times in the past, last thread on\n>> pgsql-hackers was back in 2017 when Konstantin made some experiments [0].\n>\n>> I feel that there is now pretty strong consensus that it would be a good\n>> thing, more so than before. Lots of work to get there, and lots of\n>> details to be hashed out, but no objections to the idea at a high level.\n>\n>> The purpose of this email is to make that silent consensus explicit. If\n>> you have objections to switching from the current multi-process\n>> architecture to a single-process, multi-threaded architecture, please\n>> speak up.\n>\n> For the record, I think this will be a disaster. There is far too much\n> code that will get broken, largely silently, and much of it is not\n> under our control.\n\nI fully agreed with Tom.\n\nFirst, it is not clear what are the benefits of architecture change?\n\nPerformance?\n\nDevelopment becomes much more complicated and error-prone.\n\nThere are still many low-hanging fruit to be had that can improve\nperformance.\nAnd the code can gradually and safely remove multithreading barriers.\n\n1. gradual reduction of global variables\n2. introduction of local context structures\n3. shrink current structures (to fit in 32, 64 boundaries)\n\n4. scope reduction\n\nMy 2c.\n\nregards,\n\nRanier Vilela\n\n\nOn 05/06/2023 11:18, Tom Lane wrote:> Heikki Linnakangas <hlinnaka(at)iki(dot)fi> writes:>> I spoke with some folks at PGCon about making PostgreSQL multi-threaded,>> so that the whole server runs in a single process, with multiple>> threads. It has been discussed many times in the past, last thread on>> pgsql-hackers was back in 2017 when Konstantin made some experiments [0].> >> I feel that there is now pretty strong consensus that it would be a good>> thing, more so than before. Lots of work to get there, and lots of>> details to be hashed out, but no objections to the idea at a high level.> >> The purpose of this email is to make that silent consensus explicit. If>> you have objections to switching from the current multi-process>> architecture to a single-process, multi-threaded architecture, please>> speak up.> > For the record, I think this will be a disaster. There is far too much> code that will get broken, largely silently, and much of it is not> under our control.I fully agreed with Tom.First, it is not clear what are the benefits of architecture change?Performance?Development becomes much more complicated and error-prone.There are still many low-hanging fruit to be had that can improve performance.And the code can gradually and safely remove multithreading barriers.1. gradual reduction of global variables2. introduction of local context structures3. shrink current structures (to fit in 32, 64 boundaries)4. scope reduction\nMy 2c.regards,Ranier Vilela",
"msg_date": "Mon, 5 Jun 2023 13:26:00 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
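Steps 1 and 2 in the message above (fewer globals, explicit context structures) are incremental refactorings that can be pictured in isolation. The sketch below is a generic before/after example with invented names; it is not code from the PostgreSQL tree.

```c
/*
 * Generic illustration of "reduce globals / introduce a context struct"
 * (steps 1-2 above).  Names are invented; this is not PostgreSQL code.
 */
#include <stdio.h>

/* Before: state lives in a file-level global, which is hostile to threads. */
static int	counter_global;

static void
bump_global(void)
{
	counter_global++;
}

/* After: the same state travels in an explicit, per-session context. */
typedef struct SessionContext
{
	int			counter;
} SessionContext;

static void
bump(SessionContext *cxt)
{
	cxt->counter++;
}

int
main(void)
{
	SessionContext cxt = {0};

	bump_global();
	bump(&cxt);
	printf("global=%d context=%d\n", counter_global, cxt.counter);
	return 0;
}
```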
{
"msg_contents": "On Mon, Jun 5, 2023 at 01:26:00PM -0300, Ranier Vilela wrote:\n> On 05/06/2023 11:18, Tom Lane wrote:\n> > For the record, I think this will be a disaster. There is far too much\n> > code that will get broken, largely silently, and much of it is not\n> > under our control.\n> \n> I fully agreed with Tom.\n> \n> First, it is not clear what are the benefits of architecture change?\n> \n> Performance?\n> \n> Development becomes much more complicated and error-prone.\n\nI agree the costs of going threaded have been reduced with compiler and\nlibrary improvements, but I don't know if they are reduced enough for\nthe change to be a net benefit, except on Windows where the process\ncreation overhead is high.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 5 Jun 2023 12:42:06 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Em seg., 5 de jun. de 2023 às 13:42, Bruce Momjian <[email protected]>\nescreveu:\n\n> On Mon, Jun 5, 2023 at 01:26:00PM -0300, Ranier Vilela wrote:\n> > On 05/06/2023 11:18, Tom Lane wrote:\n> > > For the record, I think this will be a disaster. There is far too much\n> > > code that will get broken, largely silently, and much of it is not\n> > > under our control.\n> >\n> > I fully agreed with Tom.\n> >\n> > First, it is not clear what are the benefits of architecture change?\n> >\n> > Performance?\n> >\n> > Development becomes much more complicated and error-prone.\n>\n> I agree the costs of going threaded have been reduced with compiler and\n> library improvements, but I don't know if they are reduced enough for\n> the change to be a net benefit, except on Windows where the process\n> creation overhead is high.\n>\nYeah, but process creation, even on windows, is a tiny part of response\ntime.\nSGDB has one connection per user, so one process or thread.\n\nUnlike a webserver like Nginx, with hundreds of thousands connections.\nFor the record, Nginx is multithread and uses -Werror for default. (Make\nall warnings into errors)\n\nregards,\nRanier Vilela\n\nEm seg., 5 de jun. de 2023 às 13:42, Bruce Momjian <[email protected]> escreveu:On Mon, Jun 5, 2023 at 01:26:00PM -0300, Ranier Vilela wrote:\n> On 05/06/2023 11:18, Tom Lane wrote:\n> > For the record, I think this will be a disaster. There is far too much\n> > code that will get broken, largely silently, and much of it is not\n> > under our control.\n> \n> I fully agreed with Tom.\n> \n> First, it is not clear what are the benefits of architecture change?\n> \n> Performance?\n> \n> Development becomes much more complicated and error-prone.\n\nI agree the costs of going threaded have been reduced with compiler and\nlibrary improvements, but I don't know if they are reduced enough for\nthe change to be a net benefit, except on Windows where the process\ncreation overhead is high.Yeah, but process creation, even on windows, is a tiny part of response time.SGDB has one connection per user, so one process or thread.Unlike a webserver like Nginx, with hundreds of thousands connections.For the record, Nginx is multithread and uses -Werror for default. (Make all warnings into errors)regards,Ranier Vilela",
"msg_date": "Mon, 5 Jun 2023 14:05:10 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 05/06/2023 12:26, Ranier Vilela wrote:\n> First, it is not clear what are the benefits of architecture change?\n> \n> Performance?\n\nI doubt it makes much performance difference, at least not initially. It \nmight help a little with backend startup time, and maybe some other \nthings. And might reduce the overhead of context switches and TLB cache \nmisses.\n\nIn the long run, a single-process architecture makes it easier to have \nshared catalog caches, plan cache, etc. which can improve performance. \nAnd it can make it easier to launch helper threads for things where \nworker processes would be too heavy-weight. But those benefits will \nrequire more work, they won't happen just by replacing processes with \nthreads.\n\nThe ease of developing things like that is my motivation.\n\n> Development becomes much more complicated and error-prone.\n\nI don't agree with that.\n\nWe currently bend over backwards to make all allocations fixed-sized in \nshared memory. You learn to live with that, but a lot of things would be \nsimpler if you could allocate and free in shared memory more freely. \nIt's no panacea, you still need to be careful with locking and \nconcurrency. But a lot simpler.\n\nWe have built dynamic shared memory etc. over the years to work around \nthe limitations of shared memory. But it's still a lot more complicated.\n\nCode that doesn't need to communicate with other processes/threads is \nsimple to write in either model.\n\n> There are still many low-hanging fruit to be had that can improve \n> performance.\n> And the code can gradually and safely remove multithreading barriers.\n> \n> 1. gradual reduction of global variables\n> 2. introduction of local context structures\n> 3. shrink current structures (to fit in 32, 64 boundaries)\n> \n> 4. scope reduction\n\nRight, the reason I started this thread is to explicitly note that it is \na worthy goal. If it's not, the above steps would be pointless. But if \nwe agree that it is a worthy goal, we can start to incrementally work \ntowards it.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 5 Jun 2023 20:25:05 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
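As a concrete reference for the "fixed-sized allocations in shared memory" point above, the sketch below paraphrases the usual extension boilerplate: decide the size before startup, reserve it, then carve it out of the single shared segment. Names are invented, hook registration (shmem_request_hook / shmem_startup_hook in _PG_init) is omitted, and it is not taken from any particular module.

```c
/*
 * Paraphrase of today's "size it all up front" shared-memory pattern that
 * the message above refers to.  Invented names; hook registration omitted.
 * A threaded server could allocate and free such state on demand instead.
 */
#include "postgres.h"

#include "storage/ipc.h"
#include "storage/shmem.h"

#define MY_NSLOTS 128			/* capacity must be fixed before startup */

typedef struct MySharedState
{
	int			counters[MY_NSLOTS];
} MySharedState;

static MySharedState *my_state = NULL;

/* would be installed as shmem_request_hook in _PG_init() */
static void
my_shmem_request(void)
{
	RequestAddinShmemSpace(sizeof(MySharedState));
}

/* would be installed as shmem_startup_hook in _PG_init() */
static void
my_shmem_startup(void)
{
	bool		found;

	my_state = (MySharedState *)
		ShmemInitStruct("my_shared_state", sizeof(MySharedState), &found);
	if (!found)
		memset(my_state, 0, sizeof(MySharedState));
}
```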
{
"msg_contents": "On Mon, Jun 5, 2023 at 12:25 PM Heikki Linnakangas <[email protected]> wrote:\n\n> We currently bend over backwards to make all allocations fixed-sized in\n> shared memory. You learn to live with that, but a lot of things would be\n> simpler if you could allocate and free in shared memory more freely.\n> It's no panacea, you still need to be careful with locking and\n> concurrency. But a lot simpler.\n\n\nWould this help with oom killer in linux?\n\nIsn't it true that pgbouncer provides a lot of the same benefits?\n\nmerlin\n\nOn Mon, Jun 5, 2023 at 12:25 PM Heikki Linnakangas <[email protected]> wrote:\nWe currently bend over backwards to make all allocations fixed-sized in \nshared memory. You learn to live with that, but a lot of things would be \nsimpler if you could allocate and free in shared memory more freely. \nIt's no panacea, you still need to be careful with locking and \nconcurrency. But a lot simpler.Would this help with oom killer in linux?Isn't it true that pgbouncer provides a lot of the same benefits?merlin",
"msg_date": "Mon, 5 Jun 2023 12:32:34 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "On 05/06/2023 13:32, Merlin Moncure wrote:\n> Would this help with oom killer in linux?\n\nHmm, I guess the OOM killer would better understand what Postgres is \ndoing, it's not very smart about accounting shared memory. You still \nwouldn't want the OOM killer to kill Postgres, though, so I think you'd \nstill want to disable it in production systems.\n\n> Isn't it true that pgbouncer provides a lot of the same benefits?\n\nI guess there is some overlap, although I don't really think of it that \nway. Firstly, pgbouncer has its own set of problems. Secondly, switching \nto threads would not make connection poolers obsolete. Maybe in the \ndistant future, Postgres could handle thousands of connections with \nease, and threads would make that easier to achieve that, but that would \nneed a lot of more work.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 5 Jun 2023 20:43:17 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
},
{
"msg_contents": "Hi\n\nIn the long run, a single-process architecture makes it easier to have\n> shared catalog caches, plan cache, etc. which can improve performance.\n> And it can make it easier to launch helper threads for things where\n> worker processes would be too heavy-weight. But those benefits will\n> require more work, they won't happen just by replacing processes with\n> threads.\n>\n\nThe shared plan cache is not a silver bullet. The good management of shared\nplan cache is very very difficult. Our heuristic about custom plans in\nprepared statements is nothing, and you should reduce the usage of custom\nplans too.\n\nThere are a lot of issues known from Oracle. The benefits can be just for\nvery primitive very fast queries, or extra complex queries where generic\nplan is used. Current implementation of local plan caches has lot of\nissues (that cannot be fixed), but shared plan cache is another level of\ncomplexity\n\nRegards\n\nPavel\n\n\n\n>\n>\n\nHi\nIn the long run, a single-process architecture makes it easier to have \nshared catalog caches, plan cache, etc. which can improve performance. \nAnd it can make it easier to launch helper threads for things where \nworker processes would be too heavy-weight. But those benefits will \nrequire more work, they won't happen just by replacing processes with \nthreads.The shared plan cache is not a silver bullet. The good management of shared plan cache is very very difficult. Our heuristic about custom plans in prepared statements is nothing, and you should reduce the usage of custom plans too.There are a lot of issues known from Oracle. The benefits can be just for very primitive very fast queries, or extra complex queries where generic plan is used. Current implementation of local plan caches has lot of issues (that cannot be fixed), but shared plan cache is another level of complexityRegardsPavel",
"msg_date": "Mon, 5 Jun 2023 21:03:37 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Let's make PostgreSQL multi-threaded"
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently the libc collation version for Windows has two components\ncoming from the NLSVERSIONINFOEX structure [1]\ndwNLSVersion and dwDefinedVersion\n\nSo we get version numbers looking like this (with 16 beta1):\n\npostgres=# select collversion,count(*) from pg_collation group by\ncollversion;\n collversion | count \n---------------+-------\n\t | 5\n 1539.5,1539.5 | 1457\n(2 rows)\n\nAccording to [1] the second number is obsolete, and AFAICS we should\nexpose only the first.\n\n<quote>\ndwDefinedVersion\n\n Defined version. This value is used to track changes in the repertoire\n of Unicode code points. The value increments when the Unicode\n repertoire is extended, for example, if more characters are defined.\n\n Starting with Windows 8: Deprecated. Use dwNLSVersion instead.\n</quote>\n\nPFA a patch implementing that suggestion.\n\n\n[1]\nhttps://learn.microsoft.com/en-us/windows/win32/api/winnls/ns-winnls-nlsversioninfoex\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite",
"msg_date": "Mon, 05 Jun 2023 18:55:56 +0200",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Simplify pg_collation.collversion for Windows libc"
},
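For reference, the two numbers in the report above come straight out of GetNLSVersionEx(). The sketch below is an illustration only, not the proposed patch: it queries the version for one locale and prints both the current two-component string (built from dwNLSVersion and dwDefinedVersion, roughly as pg_locale.c formats it) and the single-component form the patch proposes; error handling is minimal and the locale name is arbitrary.

```c
/*
 * Illustration only (not the proposed patch): query the collation version
 * Windows reports for a locale and format just dwNLSVersion, leaving out
 * the deprecated dwDefinedVersion component.
 */
#include <windows.h>
#include <stdio.h>

int
main(void)
{
	NLSVERSIONINFOEX version;

	ZeroMemory(&version, sizeof(version));
	version.dwNLSVersionInfoSize = sizeof(version);

	if (!GetNLSVersionEx(COMPARE_STRING, L"en-US", &version))
	{
		fprintf(stderr, "GetNLSVersionEx failed: %lu\n", GetLastError());
		return 1;
	}

	/* Current format: both fields, e.g. "1539.5,1539.5" */
	printf("old style: %lu.%lu,%lu.%lu\n",
		   (version.dwNLSVersion >> 8) & 0xFFFF,
		   version.dwNLSVersion & 0xFF,
		   (version.dwDefinedVersion >> 8) & 0xFFFF,
		   version.dwDefinedVersion & 0xFF);

	/* Proposed: dwNLSVersion only, e.g. "1539.5" */
	printf("new style: %lu.%lu\n",
		   (version.dwNLSVersion >> 8) & 0xFFFF,
		   version.dwNLSVersion & 0xFF);
	return 0;
}
```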
{
"msg_contents": "On Mon, Jun 5, 2023 at 12:56 PM Daniel Verite <[email protected]> wrote:\n> postgres=# select collversion,count(*) from pg_collation group by\n> collversion;\n> collversion | count\n> ---------------+-------\n> | 5\n> 1539.5,1539.5 | 1457\n> (2 rows)\n>\n> According to [1] the second number is obsolete, and AFAICS we should\n> expose only the first.\n\nWould it be a good idea to remove or ignore the trailing /,*$/\nsomewhere, perhaps during pg_upgrade, to avoid bogus version mismatch\nwarnings?\n\n\n",
"msg_date": "Tue, 6 Jun 2023 13:21:06 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simplify pg_collation.collversion for Windows libc"
},
{
"msg_contents": "On 06.06.23 03:21, Thomas Munro wrote:\n> On Mon, Jun 5, 2023 at 12:56 PM Daniel Verite <[email protected]> wrote:\n>> postgres=# select collversion,count(*) from pg_collation group by\n>> collversion;\n>> collversion | count\n>> ---------------+-------\n>> | 5\n>> 1539.5,1539.5 | 1457\n>> (2 rows)\n>>\n>> According to [1] the second number is obsolete, and AFAICS we should\n>> expose only the first.\n> \n> Would it be a good idea to remove or ignore the trailing /,*$/\n> somewhere, perhaps during pg_upgrade, to avoid bogus version mismatch\n> warnings?\n\nI wonder whether it's worth dealing with this, versus just leaving it \nall alone.\n\n\n",
"msg_date": "Tue, 6 Jun 2023 13:09:00 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simplify pg_collation.collversion for Windows libc"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile trying pg16beta1 libc collations on Windows, I noticed that UTF-8\ntext sorts sometimes differently across invocations with the same\nlocales, which is wrong since these collations are deterministic.\n\nThe OS is Windows 10 Home, version 10.0.19045 Build 19045,\nself-built 16beta1 with VS Community 2022, without ICU, default\nconfiguration in postgresql.conf.\n\nIt seems to occur more or less randomly with all libc locales except\nC/POSIX, with the probability of getting differences being seemingly\nmuch higher when the data gets larger in number of rows and uses\nhigher codepoints (like if all character are in [U+0001,U+0400] the\nsorts never differ with 40k rows, but they do if there are much more\nrows or if the range is [U+0001,U+2000]).\n\nAlso, it does not occur at all if parallel scan is disabled.\n\nI've come up with a self-contained script that generates random words\nand repeatedly sorts and feed them to md5sum. It takes the number of\nrows and the highest Unicode codepoint as arguments, and shows when the\nchecksums differ across consecutive invocations.\n\nHere's a typical run showing how it goes wrong after the 14th sort:\n\n$ bash repro-coll-windows.sh 40000 16383\nNOTICE: relation \"random_words\" already exists, skipping\nCREATE TABLE\nTRUNCATE TABLE\nCREATE FUNCTION\nDROP COLLATION\nCREATE COLLATION\nINSERT 0 40000\nANALYZE\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 \n35050d858f4c590788132627e74f62c8 -> e746b626fcc848cbbc67570a7dde03bb\n(iter=15)\n16 \ne746b626fcc848cbbc67570a7dde03bb -> 35050d858f4c590788132627e74f62c8\n(iter=16)\n17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 \n35050d858f4c590788132627e74f62c8 -> 6bf38563d1267339122154bd7d4fbfce\n(iter=38)\n39 \n6bf38563d1267339122154bd7d4fbfce -> 35050d858f4c590788132627e74f62c8\n(iter=39)\n40 41 42 43 44 45 46 47 48 49 50 51 \n35050d858f4c590788132627e74f62c8 -> 3d2072698054d0bd57beefea0248b7e6\n(iter=51)\n52 \n3d2072698054d0bd57beefea0248b7e6 -> 35050d858f4c590788132627e74f62c8\n(iter=52)\n53 54 55 56 57 58 59 ^C\n\nWould anyone be able to reproduce this? That might be a local problem\nalthough there's nothing special installed AFAICS.\nInitially I saw this with a larger dataset that I can't share, and the diffs\nbetween outputs showed that only a few lines out of 2 million lines\nwere getting displaced across sorts.\nIt also happens on the same OS\twith Pg15.3 (EDB build) and the default\nlibc collation, so I would not immediately suspect new code in Pg16.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite",
"msg_date": "Tue, 06 Jun 2023 00:07:58 +0200",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inconsistent results with libc sorting on Windows"
},
{
"msg_contents": "On 6/5/23 18:07, Daniel Verite wrote:\n> While trying pg16beta1 libc collations on Windows, I noticed that UTF-8\n> text sorts sometimes differently across invocations with the same\n> locales, which is wrong since these collations are deterministic.\n\n<snip>\n\n> Also, it does not occur at all if parallel scan is disabled.\n\nThis is a wild shot in the dark, but I wonder if somehow the locale is \nbeing initialized (i.e. setlocale) differently in the parallel workers \nthan the backend due to some Windows specific behavior differences?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Mon, 5 Jun 2023 18:32:42 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent results with libc sorting on Windows"
},
{
"msg_contents": "On Tue, Jun 6, 2023 at 10:08 AM Daniel Verite <[email protected]> wrote:\n> Also, it does not occur at all if parallel scan is disabled.\n\nCould this be a clue that it is failing to be transitive?\n\n\n",
"msg_date": "Tue, 6 Jun 2023 13:30:17 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent results with libc sorting on Windows"
},
{
"msg_contents": "On Tue, Jun 6, 2023 at 1:30 PM Thomas Munro <[email protected]> wrote:\n> On Tue, Jun 6, 2023 at 10:08 AM Daniel Verite <[email protected]> wrote:\n> > Also, it does not occur at all if parallel scan is disabled.\n>\n> Could this be a clue that it is failing to be transitive?\n\nThat vaguely rang a bell for me... and then I remembered this thread:\n\nhttps://www.postgresql.org/message-id/flat/20191206063401.GB1629883%40rfd.leadboat.com\n\n\n",
"msg_date": "Tue, 6 Jun 2023 13:52:58 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent results with libc sorting on Windows"
},
{
"msg_contents": "\tThomas Munro wrote:\n\n> > > Also, it does not occur at all if parallel scan is disabled.\n> >\n> > Could this be a clue that it is failing to be transitive?\n> \n> That vaguely rang a bell for me... and then I remembered this thread:\n> \n> https://www.postgresql.org/message-id/flat/20191206063401.GB1629883%40rfd.leadboat.com\n\nThanks for the pointer, non-transitive comparisons seem a likely cause\nindeed.\n\nThe parallel scan appears to imply some randomness in the sequence of\ncomparisons, which makes the problem more visible.\nAfter changing the test to shuffle the rows before each sort,\nnon-parallel scans also produce outputs that differ, proving that\nparallelism is not a root cause.\n\nRunning the test with all the libc collations with collencoding in\n(-1,6) shows that the only ones not affected are C/POSIX/ucs_basic.\nOtherwise the 569 other pre-created libc collations that can be used\nwith UTF-8 are affected, plus the default collation\n(French_France.1252 in my case).\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Wed, 07 Jun 2023 13:58:13 +0200",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistent results with libc sorting on Windows"
},
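A brute-force way to hunt for such violations directly is sketched below; the locale, code-point range and string count are arbitrary choices, and error returns from CompareStringEx() are ignored for brevity. The test simply looks for triples with a <= b and b <= c but a > c.

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

#define NSTR 200
#define LEN  8

static int
cmp(const wchar_t *a, const wchar_t *b)
{
    /* CSTR_LESS_THAN/EQUAL/GREATER_THAN map to -1/0/1; 0 (error) ignored */
    return CompareStringEx(L"en-US", 0, a, -1, b, -1, NULL, NULL, 0) - CSTR_EQUAL;
}

int
main(void)
{
    static wchar_t s[NSTR][LEN + 1];

    for (int i = 0; i < NSTR; i++)
    {
        for (int j = 0; j < LEN; j++)
            s[i][j] = (wchar_t) (1 + rand() % 0x2000);  /* U+0001 .. U+2000 */
        s[i][LEN] = 0;
    }

    for (int a = 0; a < NSTR; a++)
        for (int b = 0; b < NSTR; b++)
            for (int c = 0; c < NSTR; c++)
                if (cmp(s[a], s[b]) <= 0 && cmp(s[b], s[c]) <= 0 &&
                    cmp(s[a], s[c]) > 0)
                    printf("non-transitive triple: %d %d %d\n", a, b, c);
    return 0;
}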
{
"msg_contents": "On 6/7/23 07:58, Daniel Verite wrote:\n> \tThomas Munro wrote:\n> \n>> > > Also, it does not occur at all if parallel scan is disabled.\n>> >\n>> > Could this be a clue that it is failing to be transitive?\n>> \n>> That vaguely rang a bell for me... and then I remembered this thread:\n>> \n>> https://www.postgresql.org/message-id/flat/20191206063401.GB1629883%40rfd.leadboat.com\n> \n> Thanks for the pointer, non-transitive comparisons seem a likely cause\n> indeed.\n> \n> The parallel scan appears to imply some randomness in the sequence of\n> comparisons, which makes the problem more visible.\n> After changing the test to shuffle the rows before each sort,\n> non-parallel scans also produce outputs that differ, proving that\n> parallelism is not a root cause.\n> \n> Running the test with all the libc collations with collencoding in\n> (-1,6) shows that the only ones not affected are C/POSIX/ucs_basic.\n> Otherwise the 569 other pre-created libc collations that can be used\n> with UTF-8 are affected, plus the default collation\n> (French_France.1252 in my case).\n\n\nWow, that sounds pretty horrid\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Wed, 7 Jun 2023 08:42:13 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent results with libc sorting on Windows"
},
{
"msg_contents": "> On 6/7/23 07:58, Daniel Verite wrote:\n> > Thomas Munro wrote:\n> >\n> >> > > Also, it does not occur at all if parallel scan is disabled.\n> >> >\n> >> > Could this be a clue that it is failing to be transitive?\n> >>\n> >> That vaguely rang a bell for me... and then I remembered this thread:\n> >>\n> >>\n> https://www.postgresql.org/message-id/flat/20191206063401.GB1629883%40rfd.leadboat.com\n> >\n> > Thanks for the pointer, non-transitive comparisons seem a likely cause\n> > indeed.\n>\n\nJust to make sure we are all seeing the same problem, does the attached\npatch fix your test?\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Thu, 8 Jun 2023 15:45:33 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent results with libc sorting on Windows"
},
{
"msg_contents": "\tJuan José Santamaría Flecha wrote:\n\n> Just to make sure we are all seeing the same problem, does the attached\n> patch fix your test?\n\nThe problem of the random changes in sorting disappears for all libc\nlocales in pg_collation, so this is very promising.\n\nHowever it persists for the default collation.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Fri, 09 Jun 2023 11:18:41 +0200",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistent results with libc sorting on Windows"
},
{
"msg_contents": "On Fri, Jun 9, 2023 at 11:18 AM Daniel Verite <[email protected]>\nwrote:\n\n> Juan José Santamaría Flecha wrote:\n>\n> > Just to make sure we are all seeing the same problem, does the attached\n> > patch fix your test?\n>\n> The problem of the random changes in sorting disappears for all libc\n> locales in pg_collation, so this is very promising.\n>\n\nGreat to hear, I'll try to push the patch to something reviewable as per\nattached.\n\nHowever it persists for the default collation.\n>\n\nWe can apply a similar approach there.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Mon, 12 Jun 2023 18:12:36 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent results with libc sorting on Windows"
},
{
"msg_contents": "Trying to follow along here... you're doing the moral equivalent of\nstrxfrm(), so sort keys have the transitive property but direct string\ncomparisons don't? Or is this because LCIDs reach a different\nalgorithm somehow (or otherwise why do you need to use LCIDs for this,\nwhen there is a non-LCID version of that function, with a warning not\nto use the older LCID version[1]?)\n\n[1] https://learn.microsoft.com/en-us/windows/win32/api/winnls/nf-winnls-lcmapstringw\n\n\n",
"msg_date": "Wed, 14 Jun 2023 12:58:37 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent results with libc sorting on Windows"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 5:59 PM Thomas Munro <[email protected]> wrote:\n> Trying to follow along here... you're doing the moral equivalent of\n> strxfrm(), so sort keys have the transitive property but direct string\n> comparisons don't? Or is this because LCIDs reach a different\n> algorithm somehow (or otherwise why do you need to use LCIDs for this,\n> when there is a non-LCID version of that function, with a warning not\n> to use the older LCID version[1]?)\n\nI'm reminded of the fact that the abbreviated keys strxfrm() debacle\n(back when 9.5 was released) was caused by a bug in strcoll() -- not a\nbug in strxfrm() itself. From our point of view the problem was that\nstrxfrm() failed to be bug compatible with strcoll() due to a buggy\nstrcoll() optimization.\n\nI believe that strxfrm() is generally less likely to have bugs than\nstrcoll(). There are far fewer opportunities to dodge unnecessary work\nin the case of strxfrm()-like algorithms (offering something like\nICU's pg_strnxfrm_prefix_icu() prefix optimization is the only one).\nOn the other hand, collation library implementers are likely to\nheavily optimize strcoll() for typical use-cases such as sorting and\nbinary search. Using strxfrm() for everything is discouraged [1].\n\nThere is an important difference between this issue and the various\nglibc collation related bugs that I've come across, though: to the\nbest of my knowledge there was never a glibc bug that caused strcoll()\nto violate transitive consistency -- it always agreed with itself. So\nthis is a new one on me. Seems like that might make \"always use\nstrxfrm()\" (or whatever it's actually called on this platform)\nacceptable; strxfrm() can't really violate transitive consistency in\nthe same way. (I think -- I'm assuming that it'll always produce a\nconditioned binary string in a deterministic fashion, since AFAICT\neven this buggy strcoll()-like function won't ever give an\ninconsistent answer when it compares the same two strings.)\n\n[1] https://unicode-org.github.io/icu/userguide/collation/concepts#sortkeys-vs-comparison\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 13 Jun 2023 19:13:17 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent results with libc sorting on Windows"
},
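To keep the two APIs being contrasted straight, here is a tiny POSIX-level sketch (locale and strings are arbitrary): the direct strcoll() path versus transforming to binary keys with strxfrm() and comparing those with strcmp(). The key-based path is transitive by construction, since byte-wise comparison is.

#include <locale.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
    const char *a = "a-b";
    const char *b = "ab";
    char        ka[256], kb[256];

    setlocale(LC_COLLATE, "en_US.UTF-8");   /* illustrative locale */

    /* Path 1: direct locale-aware comparison */
    printf("strcoll: %d\n", strcoll(a, b));

    /* Path 2: transform to binary sort keys, then compare bytes */
    strxfrm(ka, a, sizeof(ka));             /* truncation not checked here */
    strxfrm(kb, b, sizeof(kb));
    printf("strxfrm keys: %d\n", strcmp(ka, kb));
    return 0;
}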
{
"msg_contents": "On Wed, Jun 14, 2023 at 4:13 AM Peter Geoghegan <[email protected]> wrote:\n\n> On Tue, Jun 13, 2023 at 5:59 PM Thomas Munro <[email protected]>\n> wrote:\n> > Trying to follow along here... you're doing the moral equivalent of\n> > strxfrm(), so sort keys have the transitive property but direct string\n> > comparisons don't? Or is this because LCIDs reach a different\n> > algorithm somehow (or otherwise why do you need to use LCIDs for this,\n> > when there is a non-LCID version of that function, with a warning not\n> > to use the older LCID version[1]?)\n>\n> I'm reminded of the fact that the abbreviated keys strxfrm() debacle\n> (back when 9.5 was released) was caused by a bug in strcoll() -- not a\n> bug in strxfrm() itself. From our point of view the problem was that\n> strxfrm() failed to be bug compatible with strcoll() due to a buggy\n> strcoll() optimization.\n>\n> I believe that strxfrm() is generally less likely to have bugs than\n> strcoll(). There are far fewer opportunities to dodge unnecessary work\n> in the case of strxfrm()-like algorithms (offering something like\n> ICU's pg_strnxfrm_prefix_icu() prefix optimization is the only one).\n> On the other hand, collation library implementers are likely to\n> heavily optimize strcoll() for typical use-cases such as sorting and\n> binary search. Using strxfrm() for everything is discouraged [1].\n>\n\nYes, I think the situation is quite similar to what you describe, with its\nWIN32 peculiarities. Take for example the attached program, it'll output:\n\ns1 = s2\ns2 = s3\ns1 > s3\nc1 > c2\nc2 > c3\nc1 > c3\n\nAs you can see the test for CompareStringEx() is broken, but we get a sane\nanswer with LCMapStringEx().\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Wed, 14 Jun 2023 12:50:28 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent results with libc sorting on Windows"
},
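The attached program is not reproduced here, but the two code paths it contrasts look roughly like this (locale name and strings are placeholders, error handling trimmed): a direct CompareStringEx() call versus LCMAP_SORTKEY sort keys compared with memcmp(), the latter being the approach the proposed patch moves to.

#include <windows.h>
#include <stdio.h>
#include <string.h>

static int
compare_direct(const wchar_t *a, const wchar_t *b)
{
    /* returns <0, 0, >0; the 0-on-error case is ignored in this sketch */
    return CompareStringEx(L"en-US", 0, a, -1, b, -1, NULL, NULL, 0) - CSTR_EQUAL;
}

static int
compare_sortkeys(const wchar_t *a, const wchar_t *b)
{
    BYTE    ka[1024], kb[1024];
    int     la, lb, r;

    /* With LCMAP_SORTKEY the destination is a byte buffer, length in bytes */
    la = LCMapStringEx(L"en-US", LCMAP_SORTKEY, a, -1,
                       (LPWSTR) ka, sizeof(ka), NULL, NULL, 0);
    lb = LCMapStringEx(L"en-US", LCMAP_SORTKEY, b, -1,
                       (LPWSTR) kb, sizeof(kb), NULL, NULL, 0);
    if (la == 0 || lb == 0)
        return 0;               /* error handling elided */

    r = memcmp(ka, kb, la < lb ? la : lb);
    return r != 0 ? r : la - lb;
}

int
main(void)
{
    const wchar_t *s1 = L"example1";    /* placeholder strings */
    const wchar_t *s2 = L"example2";

    printf("CompareStringEx: %d\n", compare_direct(s1, s2));
    printf("sort keys:       %d\n", compare_sortkeys(s1, s2));
    return 0;
}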
{
"msg_contents": "On Wed, Jun 14, 2023 at 10:50 PM Juan José Santamaría Flecha\n<[email protected]> wrote:\n> Yes, I think the situation is quite similar to what you describe, with its WIN32 peculiarities. Take for example the attached program, it'll output:\n>\n> s1 = s2\n> s2 = s3\n> s1 > s3\n> c1 > c2\n> c2 > c3\n> c1 > c3\n>\n> As you can see the test for CompareStringEx() is broken, but we get a sane answer with LCMapStringEx().\n\nGiven that the documented behaviour is that \".. the sort key produces\nthe same order as when the source string is used in CompareString or\nCompareStringEx\"[1], this seems like a reportable bug, unless perhaps\nyour test program is hiding an error with that default case you have.\n\n[1] https://learn.microsoft.com/en-us/windows/win32/intl/handling-sorting-in-your-applications\n\n\n",
"msg_date": "Thu, 15 Jun 2023 11:56:55 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent results with libc sorting on Windows"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 1:57 AM Thomas Munro <[email protected]> wrote:\n\n>\n> Given that the documented behaviour is that \".. the sort key produces\n> the same order as when the source string is used in CompareString or\n> CompareStringEx\"[1], this seems like a reportable bug, unless perhaps\n> your test program is hiding an error with that default case you have.\n>\n> [1]\n> https://learn.microsoft.com/en-us/windows/win32/intl/handling-sorting-in-your-applications\n\n\nOk, let's see where the report goes:\n\nhttps://developercommunity.visualstudio.com/t/CompareStringEx-non-transitive/10393003?q=comparestringex\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Thu, Jun 15, 2023 at 1:57 AM Thomas Munro <[email protected]> wrote:\nGiven that the documented behaviour is that \".. the sort key produces\nthe same order as when the source string is used in CompareString or\nCompareStringEx\"[1], this seems like a reportable bug, unless perhaps\nyour test program is hiding an error with that default case you have.\n\n[1] https://learn.microsoft.com/en-us/windows/win32/intl/handling-sorting-in-your-applicationsOk, let's see where the report goes:https://developercommunity.visualstudio.com/t/CompareStringEx-non-transitive/10393003?q=comparestringexRegards,Juan José Santamaría Flecha",
"msg_date": "Mon, 19 Jun 2023 09:42:04 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent results with libc sorting on Windows"
},
{
"msg_contents": "On 2023-Jun-19, Juan José Santamaría Flecha wrote:\n\n> Ok, let's see where the report goes:\n> \n> https://developercommunity.visualstudio.com/t/CompareStringEx-non-transitive/10393003?q=comparestringex\n\nHm, so this appears to have been marked as solved by Microsoft. Can you\nrecheck? Also, what does the resolution mean for Postgres, in practical\nterms?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"If you have nothing to say, maybe you need just the right tool to help you\nnot say it.\" (New York Times, about Microsoft PowerPoint)\n\n\n",
"msg_date": "Mon, 3 Jul 2023 17:42:49 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent results with libc sorting on Windows"
},
{
"msg_contents": "El lun, 3 jul 2023, 17:42, Alvaro Herrera <[email protected]>\nescribió:\n\n> On 2023-Jun-19, Juan José Santamaría Flecha wrote:\n>\n> > Ok, let's see where the report goes:\n> >\n> >\n> https://developercommunity.visualstudio.com/t/CompareStringEx-non-transitive/10393003?q=comparestringex\n>\n> Hm, so this appears to have been marked as solved by Microsoft. Can you\n> recheck? Also, what does the resolution mean for Postgres, in practical\n> terms?\n>\n\n\nIt's not really solved, they have just pushed the issue to another team\nthat's only reachable through the Windows feedback hub. I've already\nprovided the feedback, but it's only visible through the proprietary app.\n\nI would say there haven't been any real progress on that front so far.\n\nRegards,\n\nJuan José Santamaría Flecha\n\n>\n\nEl lun, 3 jul 2023, 17:42, Alvaro Herrera <[email protected]> escribió:On 2023-Jun-19, Juan José Santamaría Flecha wrote:\n\n> Ok, let's see where the report goes:\n> \n> https://developercommunity.visualstudio.com/t/CompareStringEx-non-transitive/10393003?q=comparestringex\n\nHm, so this appears to have been marked as solved by Microsoft. Can you\nrecheck? Also, what does the resolution mean for Postgres, in practical\nterms?It's not really solved, they have just pushed the issue to another team that's only reachable through the Windows feedback hub. I've already provided the feedback, but it's only visible through the proprietary app.I would say there haven't been any real progress on that front so far.Regards,Juan José Santamaría Flecha",
"msg_date": "Mon, 3 Jul 2023 18:37:05 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent results with libc sorting on Windows"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 12:50:28PM +0200, Juan José Santamaría Flecha wrote:\n> On Wed, Jun 14, 2023 at 4:13 AM Peter Geoghegan <[email protected]> wrote:\n> \n> > On Tue, Jun 13, 2023 at 5:59 PM Thomas Munro <[email protected]>\n> > wrote:\n> > > Trying to follow along here... you're doing the moral equivalent of\n> > > strxfrm(), so sort keys have the transitive property but direct string\n> > > comparisons don't? Or is this because LCIDs reach a different\n> > > algorithm somehow (or otherwise why do you need to use LCIDs for this,\n> > > when there is a non-LCID version of that function, with a warning not\n> > > to use the older LCID version[1]?)\n> >\n> > I'm reminded of the fact that the abbreviated keys strxfrm() debacle\n> > (back when 9.5 was released) was caused by a bug in strcoll() -- not a\n> > bug in strxfrm() itself. From our point of view the problem was that\n> > strxfrm() failed to be bug compatible with strcoll() due to a buggy\n> > strcoll() optimization.\n> >\n> > I believe that strxfrm() is generally less likely to have bugs than\n> > strcoll(). There are far fewer opportunities to dodge unnecessary work\n> > in the case of strxfrm()-like algorithms (offering something like\n> > ICU's pg_strnxfrm_prefix_icu() prefix optimization is the only one).\n> > On the other hand, collation library implementers are likely to\n> > heavily optimize strcoll() for typical use-cases such as sorting and\n> > binary search. Using strxfrm() for everything is discouraged [1].\n> >\n> \n> Yes, I think the situation is quite similar to what you describe, with its\n> WIN32 peculiarities. Take for example the attached program, it'll output:\n> \n> s1 = s2\n> s2 = s3\n> s1 > s3\n> c1 > c2\n> c2 > c3\n> c1 > c3\n> \n> As you can see the test for CompareStringEx() is broken, but we get a sane\n> answer with LCMapStringEx().\n\nThe LCMapStringEx() solution is elegant. I do see\nhttps://learn.microsoft.com/en-us/windows/win32/intl/handling-sorting-in-your-applications\nsays, \"If an application calls the function to create a sort key for a string\ncontaining an Arabic kashida, the function creates no sort key value.\" That's\naggravating.\n\n\n",
"msg_date": "Thu, 10 Aug 2023 22:29:44 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent results with libc sorting on Windows"
},
{
"msg_contents": "On Fri, Aug 11, 2023 at 7:29 AM Noah Misch <[email protected]> wrote:\n\n>\n> The LCMapStringEx() solution is elegant. I do see\n>\n> https://learn.microsoft.com/en-us/windows/win32/intl/handling-sorting-in-your-applications\n> says, \"If an application calls the function to create a sort key for a\n> string\n> containing an Arabic kashida, the function creates no sort key value.\"\n> That's\n> aggravating.\n>\n\nI think the problem there is that it is just poorly explained.\nTake for example the attached program that compares \"normal\" and \"kashida\"\n'Raħīm' taken from [1], you'll get:\n\nc1 = c2\nc2 = c1\n\nmeaning that \"normal\" and \"kashida\" are the same string. So, probably that\nphrase should read something like: \"If an application calls the function to\ncreate a sort key for a string containing an Arabic kashida character, the\nfunction will ignore that character and no sort key value will be generated\nfor it.\"\n\n[1] https://en.wikipedia.org/wiki/Kashida\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Fri, 11 Aug 2023 11:48:18 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent results with libc sorting on Windows"
},
{
"msg_contents": "On Fri, Aug 11, 2023 at 11:48:18AM +0200, Juan José Santamaría Flecha wrote:\n> On Fri, Aug 11, 2023 at 7:29 AM Noah Misch <[email protected]> wrote:\n> \n> >\n> > The LCMapStringEx() solution is elegant. I do see\n> >\n> > https://learn.microsoft.com/en-us/windows/win32/intl/handling-sorting-in-your-applications\n> > says, \"If an application calls the function to create a sort key for a\n> > string\n> > containing an Arabic kashida, the function creates no sort key value.\"\n> > That's\n> > aggravating.\n> >\n> \n> I think the problem there is that it is just poorly explained.\n> Take for example the attached program that compares \"normal\" and \"kashida\"\n> 'Raħīm' taken from [1], you'll get:\n> \n> c1 = c2\n> c2 = c1\n> \n> meaning that \"normal\" and \"kashida\" are the same string. So, probably that\n> phrase should read something like: \"If an application calls the function to\n> create a sort key for a string containing an Arabic kashida character, the\n> function will ignore that character and no sort key value will be generated\n> for it.\"\n\nGood. That sounds fine. Thanks for clarifying.\n\n\n",
"msg_date": "Fri, 11 Aug 2023 06:18:48 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistent results with libc sorting on Windows"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nWhen a transaction reads tuples from the heap, I would like to keep track\nof the maximum commit LSN of the xmin transactions of tuples as they are\nread. For e.g., consider a transaction T that reads tuples [t1, t2, t3]\nwith respective xmins [700, 705, 702] and respective commit LSNs [8000,\n9000, 10000]. I would like to record that T has read a max commit LSN of\n10000.\n\nCurrently, I have changed TransactionIdCommitTree() so that all\ntransactions have their commit LSN stored (and not just synchornous_commit\n= off ones). I am also able to read a recent transaction's commit LSN using\nTransactionIdGetCommitLSN().\n\nI have looked at heapam.c, and sort of understand where this logic needs to\ngo. However, I'm not fully sure how to keep track of this \"maxLSN\" among\nall commit LSNs of tuple xmins. My first attempt was to update a new member\n\"maxLSN\" of the SnapshotData struct as tuples are read, and I get their\ncommit LSNs. However, this slowed down reads by a lot.\n\nAs I understand, looking up the clog for every tuple's xmin is an expensive\noperation. So I was thinking of adding a new member to the HeapTupleHeader\ncalled 'commitLSN' which gets updated as hint bits are set. I'm not sure,\nhowever, if this is the right way to go. Then, there's also the question of\nwhat to use to update the 'maxLSN' -- should I update SnapshotData? Or do\nsomething else?\n\nI would greatly appreciate any pointers on how to go about this :)\n\nSincerely,\nTej Kashi\nMMath CS (thesis) @ UWaterloo\nWaterloo, Canada\n\nP.S. I am not trying to submit a patch/feature. I am modifying Postgres for\nmy master's thesis.\n\nHi everyone,When a transaction reads tuples from the heap, I would like to keep track of the maximum commit LSN of the xmin transactions of tuples as they are read. For e.g., consider a transaction T that reads tuples [t1, t2, t3] with respective xmins [700, 705, 702] and respective commit LSNs [8000, 9000, 10000]. I would like to record that T has read a max commit LSN of 10000.Currently, I have changed TransactionIdCommitTree() so that all transactions have their commit LSN stored (and not just synchornous_commit = off ones). I am also able to read a recent transaction's commit LSN using TransactionIdGetCommitLSN().I have looked at heapam.c, and sort of understand where this logic needs to go. However, I'm not fully sure how to keep track of this \"maxLSN\" among all commit LSNs of tuple xmins. My first attempt was to update a new member \"maxLSN\" of the SnapshotData struct as tuples are read, and I get their commit LSNs. However, this slowed down reads by a lot.As I understand, looking up the clog for every tuple's xmin is an expensive operation. So I was thinking of adding a new member to the HeapTupleHeader called 'commitLSN' which gets updated as hint bits are set. I'm not sure, however, if this is the right way to go. Then, there's also the question of what to use to update the 'maxLSN' -- should I update SnapshotData? Or do something else?I would greatly appreciate any pointers on how to go about this :)Sincerely,Tej KashiMMath CS (thesis) @ UWaterlooWaterloo, CanadaP.S. I am not trying to submit a patch/feature. I am modifying Postgres for my master's thesis.",
"msg_date": "Mon, 5 Jun 2023 18:18:01 -0400",
"msg_from": "Tejasvi Kashi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tracking commit LSNs of tuple xmins for read txns"
}
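A bare-bones sketch of the per-tuple bookkeeping being asked about, with the tracker shown as a file-level variable purely for brevity (the field name and its eventual home -- SnapshotData, executor state or elsewhere -- are hypothetical). TransactionIdGetCommitLSN() and the tuple/transam macros are the existing backend APIs mentioned above; as the message notes, doing this lookup for every tuple is what made the first attempt slow.

#include "postgres.h"

#include "access/htup_details.h"
#include "access/transam.h"

/* hypothetical per-backend tracker; would really live in snapshot/executor state */
static XLogRecPtr read_max_commit_lsn = InvalidXLogRecPtr;

static void
track_xmin_commit_lsn(HeapTupleHeader tuple)
{
    TransactionId   xmin = HeapTupleHeaderGetXmin(tuple);
    XLogRecPtr      commit_lsn;

    if (!TransactionIdIsNormal(xmin) || !TransactionIdDidCommit(xmin))
        return;

    commit_lsn = TransactionIdGetCommitLSN(xmin);
    if (commit_lsn > read_max_commit_lsn)
        read_max_commit_lsn = commit_lsn;
}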
] |
[
{
"msg_contents": "The Standard defines time zone conversion as follows:\n\n<datetime factor> ::=\n <datetime primary> [ <time zone> ]\n\n<time zone> ::=\n AT <time zone specifier>\n\n<time zone specifier> ::=\n LOCAL\n | TIME ZONE <interval primary>\n\n\nWhile looking at something else, I noticed we do not support AT LOCAL. \nThe local time zone is defined as that of *the session*, not the server, \nwhich can make this quite interesting in views where the view will \nautomatically adjust to the session's time zone.\n\nPatch against 3f1aaaa180 attached.\n-- \nVik Fearing",
"msg_date": "Mon, 5 Jun 2023 23:13:52 -0400",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add support for AT LOCAL"
},
{
"msg_contents": "On Mon, 2023-06-05 at 23:13 -0400, Vik Fearing wrote:\n> The Standard defines time zone conversion as follows:\n> \n> <datetime factor> ::=\n> <datetime primary> [ <time zone> ]\n> \n> <time zone> ::=\n> AT <time zone specifier>\n> \n> <time zone specifier> ::=\n> LOCAL\n> | TIME ZONE <interval primary>\n> \n> \n> While looking at something else, I noticed we do not support AT LOCAL. \n> The local time zone is defined as that of *the session*, not the server, \n> which can make this quite interesting in views where the view will \n> automatically adjust to the session's time zone.\n> \n> Patch against 3f1aaaa180 attached.\n\n+1 on the idea; it should be faily trivial, if not very useful.\n\nAt a quick glance, it looks like you resolve \"timezone\" at the time\nthe query is parsed. Shouldn't the resolution happen at query\nexecution time?\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 06 Jun 2023 09:56:42 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On 6/6/23 03:56, Laurenz Albe wrote:\n> On Mon, 2023-06-05 at 23:13 -0400, Vik Fearing wrote:\n>> The Standard defines time zone conversion as follows:\n>>\n>> <datetime factor> ::=\n>> <datetime primary> [ <time zone> ]\n>>\n>> <time zone> ::=\n>> AT <time zone specifier>\n>>\n>> <time zone specifier> ::=\n>> LOCAL\n>> | TIME ZONE <interval primary>\n>>\n>>\n>> While looking at something else, I noticed we do not support AT LOCAL.\n>> The local time zone is defined as that of *the session*, not the server,\n>> which can make this quite interesting in views where the view will\n>> automatically adjust to the session's time zone.\n>>\n>> Patch against 3f1aaaa180 attached.\n> \n> +1 on the idea; it should be faily trivial, if not very useful.\n\nThanks.\n\n> At a quick glance, it looks like you resolve \"timezone\" at the time\n> the query is parsed. Shouldn't the resolution happen at query\n> execution time?\n\ncurrent_setting(text) is stable, and my tests show that it is calculated \nat execution time.\n\n\npostgres=# prepare x as values (now() at local);\nPREPARE\npostgres=# set timezone to 'UTC';\nSET\npostgres=# execute x;\n column1\n----------------------------\n 2023-06-06 08:23:02.088634\n(1 row)\n\npostgres=# set timezone to 'Asia/Pyongyang';\nSET\npostgres=# execute x;\n column1\n----------------------------\n 2023-06-06 17:23:14.837219\n(1 row)\n\n\nAm I missing something?\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 6 Jun 2023 04:24:55 -0400",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Tue, 2023-06-06 at 04:24 -0400, Vik Fearing wrote:\n> > At a quick glance, it looks like you resolve \"timezone\" at the time\n> > the query is parsed. Shouldn't the resolution happen at query\n> > execution time?\n> \n> current_setting(text) is stable, and my tests show that it is calculated \n> at execution time.\n\nAh, ok, then sorry for the noise. I misread the code then.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 06 Jun 2023 19:02:26 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On 2023-Jun-06, Laurenz Albe wrote:\n\n> At a quick glance, it looks like you resolve \"timezone\" at the time\n> the query is parsed. Shouldn't the resolution happen at query\n> execution time?\n\nSounds like it -- consider the case where the timestamp value is a\npartition key and one of the partition boundaries falls in between two\ntimezone offsets for some particular ts value; then you use a prepared\nquery to read from a view defined with AT LOCAL. Partition pruning\nwould need to compute partitions to read from at runtime, not plan time.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 12 Jun 2023 17:37:07 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On 6/12/23 17:37, Alvaro Herrera wrote:\n> On 2023-Jun-06, Laurenz Albe wrote:\n> \n>> At a quick glance, it looks like you resolve \"timezone\" at the time\n>> the query is parsed. Shouldn't the resolution happen at query\n>> execution time?\n> \n> Sounds like it -- consider the case where the timestamp value is a\n> partition key and one of the partition boundaries falls in between two\n> timezone offsets for some particular ts value; then you use a prepared\n> query to read from a view defined with AT LOCAL. Partition pruning\n> would need to compute partitions to read from at runtime, not plan time.\n\n\nCan you show me an example of that happening with my patch?\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Sun, 25 Jun 2023 02:14:52 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "> On 6 Jun 2023, at 05:13, Vik Fearing <[email protected]> wrote:\n\n> Patch against 3f1aaaa180 attached.\n\nThis patch fails to compile, the declaration of variables in the switch block\nneeds to be scoped within a { } block. I've fixed this trivial error in the\nattached v2 and also reflowed the comments which now no longer fit.\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 3 Jul 2023 15:42:44 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On 7/3/23 15:42, Daniel Gustafsson wrote:\n>> On 6 Jun 2023, at 05:13, Vik Fearing <[email protected]> wrote:\n> \n>> Patch against 3f1aaaa180 attached.\n> \n> This patch fails to compile, the declaration of variables in the switch block\n> needs to be scoped within a { } block.\n\n\nInteresting. It compiles for me.\n\n\n> I've fixed this trivial error in the\n> attached v2 and also reflowed the comments which now no longer fit.\n\n\nThank you.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 16:23:50 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nHello\r\n\r\nI think this feature can be a useful addition in dealing with time zones. I have applied and tried out the patch, The feature works as described and seems promising. The problem with compilation failure was probably reported on CirrusCI when compiled on different platforms. I have run the latest patch on my own Cirrus CI environment and everything checked out fine. \r\n\r\nThank you\r\n\r\nCary Huang\r\n------------------\r\nHighgo Software Canada\r\nwww.highgo.ca",
"msg_date": "Fri, 22 Sep 2023 21:46:41 +0000",
"msg_from": "cary huang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On 9/22/23 23:46, cary huang wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: tested, passed\n> \n> Hello\n> \n> I think this feature can be a useful addition in dealing with time zones. I have applied and tried out the patch, The feature works as described and seems promising. The problem with compilation failure was probably reported on CirrusCI when compiled on different platforms. I have run the latest patch on my own Cirrus CI environment and everything checked out fine.\n> \n> Thank you\n\nThank you for reviewing!\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Sat, 23 Sep 2023 00:54:01 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Sat, Sep 23, 2023 at 12:54:01AM +0200, Vik Fearing wrote:\n> On 9/22/23 23:46, cary huang wrote:\n>> I think this feature can be a useful addition in dealing with time\n>> zones. I have applied and tried out the patch, The feature works as\n>> described and seems promising. The problem with compilation failure\n>> was probably reported on CirrusCI when compiled on different\n>> platforms. I have run the latest patch on my own Cirrus CI environment\n>> and everything checked out fine. \n> \n> Thank you for reviewing!\n\n+ | a_expr AT LOCAL %prec AT\n+ {\n+ /* Use the value of the session's time zone */\n+ FuncCall *tz = makeFuncCall(SystemFuncName(\"current_setting\"),\n+ list_make1(makeStringConst(\"TimeZone\", -1)),\n+ COERCE_SQL_SYNTAX,\n+ -1);\n+ $$ = (Node *) makeFuncCall(SystemFuncName(\"timezone\"),\n+ list_make2(tz, $1),\n+ COERCE_SQL_SYNTAX,\n+ @2);\n\nAs the deparsing code introduced by this patch is showing, this leads\nto a lot of extra complexity. And, actually, this can be quite\nexpensive as well with these two layers of functions. Note also that\nin comparison to SQLValueFunctions, COERCE_SQL_SYNTAX does less\ninlining. So here comes my question: why doesn't this stuff just use \none underlying function to do this job?\n--\nMichael",
"msg_date": "Fri, 29 Sep 2023 16:27:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On 9/29/23 09:27, Michael Paquier wrote:\n> On Sat, Sep 23, 2023 at 12:54:01AM +0200, Vik Fearing wrote:\n>> On 9/22/23 23:46, cary huang wrote:\n>>> I think this feature can be a useful addition in dealing with time\n>>> zones. I have applied and tried out the patch, The feature works as\n>>> described and seems promising. The problem with compilation failure\n>>> was probably reported on CirrusCI when compiled on different\n>>> platforms. I have run the latest patch on my own Cirrus CI environment\n>>> and everything checked out fine.\n>>\n>> Thank you for reviewing!\n> \n> + | a_expr AT LOCAL %prec AT\n> + {\n> + /* Use the value of the session's time zone */\n> + FuncCall *tz = makeFuncCall(SystemFuncName(\"current_setting\"),\n> + list_make1(makeStringConst(\"TimeZone\", -1)),\n> + COERCE_SQL_SYNTAX,\n> + -1);\n> + $$ = (Node *) makeFuncCall(SystemFuncName(\"timezone\"),\n> + list_make2(tz, $1),\n> + COERCE_SQL_SYNTAX,\n> + @2);\n> \n> As the deparsing code introduced by this patch is showing, this leads\n> to a lot of extra complexity. And, actually, this can be quite\n> expensive as well with these two layers of functions. Note also that\n> in comparison to SQLValueFunctions, COERCE_SQL_SYNTAX does less\n> inlining. So here comes my question: why doesn't this stuff just use\n> one underlying function to do this job?\n\nI guess I don't understand the question. Why do you think a single \nfunction that repeats what these functions do would be preferable? I am \nnot sure how doing it a different way would be better.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 3 Oct 2023 02:10:48 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Tue, Oct 03, 2023 at 02:10:48AM +0200, Vik Fearing wrote:\n> On 9/29/23 09:27, Michael Paquier wrote:\n>> As the deparsing code introduced by this patch is showing, this leads\n>> to a lot of extra complexity. And, actually, this can be quite\n>> expensive as well with these two layers of functions. Note also that\n>> in comparison to SQLValueFunctions, COERCE_SQL_SYNTAX does less\n>> inlining. So here comes my question: why doesn't this stuff just use\n>> one underlying function to do this job?\n> \n> I guess I don't understand the question. Why do you think a single function\n> that repeats what these functions do would be preferable? I am not sure how\n> doing it a different way would be better.\n\nLeaving aside the ruleutils.c changes introduced by the patch that are\nquite confusing, having one function in the executor stack is going to\nbe more efficient than two (aka less ExecInitFunc), and this syntax\ncould be used in SQL queries where the operations is repeated a lot.\nThis patch introduces two COERCE_SQL_SYNTAX, meaning that we would do\ntwice the ACL check, twice the function hook, etc, so this could lead\nto significant differences. I think that we should be careful with\nthe approach taken, and do benchmarks to choose an efficient approach\nfrom the start. See for example:\nhttps://www.postgresql.org/message-id/[email protected]\n--\nMichael",
"msg_date": "Wed, 4 Oct 2023 08:28:02 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
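To make the single-underlying-function idea concrete, a sketch of what the C side could look like (the function name is hypothetical): since the existing timestamptz-to-timestamp cast already rotates into the session's TimeZone, AT LOCAL can hand off to it directly, leaving the executor with one function call instead of current_setting() nested inside timezone().

#include "postgres.h"

#include "fmgr.h"
#include "utils/fmgrprotos.h"

Datum
timestamptz_at_local(PG_FUNCTION_ARGS)      /* hypothetical name */
{
    /* timestamptz -> timestamp already applies the session's TimeZone GUC */
    return timestamptz_timestamp(fcinfo);
}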
{
"msg_contents": "On 9/29/23 09:27, Michael Paquier wrote:\n> On Sat, Sep 23, 2023 at 12:54:01AM +0200, Vik Fearing wrote:\n>> On 9/22/23 23:46, cary huang wrote:\n>>> I think this feature can be a useful addition in dealing with time\n>>> zones. I have applied and tried out the patch, The feature works as\n>>> described and seems promising. The problem with compilation failure\n>>> was probably reported on CirrusCI when compiled on different\n>>> platforms. I have run the latest patch on my own Cirrus CI environment\n>>> and everything checked out fine.\n>>\n>> Thank you for reviewing!\n> \n> + | a_expr AT LOCAL %prec AT\n> + {\n> + /* Use the value of the session's time zone */\n> + FuncCall *tz = makeFuncCall(SystemFuncName(\"current_setting\"),\n> + list_make1(makeStringConst(\"TimeZone\", -1)),\n> + COERCE_SQL_SYNTAX,\n> + -1);\n> + $$ = (Node *) makeFuncCall(SystemFuncName(\"timezone\"),\n> + list_make2(tz, $1),\n> + COERCE_SQL_SYNTAX,\n> + @2);\n> \n> As the deparsing code introduced by this patch is showing, this leads\n> to a lot of extra complexity. And, actually, this can be quite\n> expensive as well with these two layers of functions. Note also that\n> in comparison to SQLValueFunctions, COERCE_SQL_SYNTAX does less\n> inlining. So here comes my question: why doesn't this stuff just use\n> one underlying function to do this job?\n\nOkay. Here is a v3 using that approach.\n-- \nVik Fearing",
"msg_date": "Wed, 4 Oct 2023 15:49:03 +0100",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Wed, Oct 04, 2023 at 03:49:03PM +0100, Vik Fearing wrote:\n> Okay. Here is a v3 using that approach.\n\nYou have not posted any numbers to show if there's a difference in\nperformance, so I have run a simple test:\nPREPARE test AS SELECT TIMESTAMP '1978-07-07 19:38' AT LOCAL;\nDO $$ BEGIN\n FOR i IN 1..1000000 LOOP\n EXECUTE 'EXECUTE test';\n END LOOP;\nEND $$;\n\nOn a medium-ish benchmark machine I have (16 vCPUs, 32GB of memory,\n-O2, no asserts), this DO block takes in average 4.3s to run with v2,\nversus 3.6s with v3. So yes, that's faster.\n\nI haven't yet finished my review of the patch, still looking at it.\n--\nMichael",
"msg_date": "Fri, 6 Oct 2023 15:05:12 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On 10/6/23 07:05, Michael Paquier wrote:\n> \n> I haven't yet finished my review of the patch, still looking at it.\n\nI realized that my regression tests are not exercising what I originally \nintended them to after this change. They are supposed to show that \ncalling the function explicitly or using AT LOCAL is correctly \nreproduced by ruleutils.c.\n\nThe attached v4 changes the regression tests (and nothing else).\n-- \nVik Fearing",
"msg_date": "Sat, 7 Oct 2023 02:35:06 +0100",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Sat, Oct 07, 2023 at 02:35:06AM +0100, Vik Fearing wrote:\n> I realized that my regression tests are not exercising what I originally\n> intended them to after this change. They are supposed to show that calling\n> the function explicitly or using AT LOCAL is correctly reproduced by\n> ruleutils.c.\n\nYes, I don't really see the point in adding more tests for the\ndeparsing of AT TIME ZONE in this context. I would not expect one to\ncall directly timezone() with the grammar in place, but I have no\nobjections either to do that in the view for the regression tests as\nyou are suggesting in v4. The minimal set of changes to test is to\nmake sure that both paths (ts->tstz and tstz->tz) are exercised, and\nthat's what you do.\n\nAnyway, upon review, I have a few issues with this patch. First is\nthe documentation that I find too light:\n- There is no description for AT LOCAL and what kind of result one\ngets back depending on the input given, while AT TIME ZONE has more\ndetails about the arguments that can be used and the results\nobtained.\n- The function timezone(zone, timestamp) is mentioned, and I think\nthat we should do the same with timezone(timestamp) for AT LOCAL.\n\nAnother thing that I have been surprised with is the following, which\nis inconsistent with AT TIME ZONE because we are lacking one system\nfunction:\n=# select time with time zone '05:34:17-05' at local;\nERROR: 42883: function pg_catalog.timezone(time with time zone) does not exist\n\nI think that we should include that to have a full set of operations\nsupported, similar to AT TIME ZONE (see \\df+ timezone). It looks like\nthis would need one extra timetz_at_local(), which would require a bit\nmore refactoring in date.c so as an equivalent of timetz_zone() could\nfeed on the current session's TimeZone instead. I guess that you\ncould just have an timetz_zone_internal() that optionally takes a\ntimezone provided by the user or gets the current session's Timezone\n(session_timezone). Am I missing something?\n\nI am attaching a v5 that addresses the documentation bits, could you\nlook at the business with date.c?\n--\nMichael",
"msg_date": "Tue, 10 Oct 2023 13:34:32 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On 10/10/23 05:34, Michael Paquier wrote:\n> I am attaching a v5 that addresses the documentation bits, could you\n> look at the business with date.c?\n\nHere is a v6 which hopefully addresses all of your concerns.\n-- \nVik Fearing",
"msg_date": "Fri, 13 Oct 2023 02:20:59 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Fri, Oct 13, 2023 at 02:20:59AM +0200, Vik Fearing wrote:\n> On 10/10/23 05:34, Michael Paquier wrote:\n> > I am attaching a v5 that addresses the documentation bits, could you\n> > look at the business with date.c?\n> \n> Here is a v6\n\nThanks for the new version.\n\n> which hopefully addresses all of your concerns.\n\nMostly ;)\n\nThe first thing I did was to extract the doc bits about timezone(zone,\ntime) for AT TIME ZONE from v6 and applied it independently.\n\nI have then looked at the rest and it looked mostly OK to me,\nincluding the extra description you have added for the fifth example\nin the docs. I have tweaked a few things: the regression tests to\nmake the views a bit more appealing to the eye, an indentation to not\nhave koel complain and did a catalog bump. Then applied it.\n--\nMichael",
"msg_date": "Fri, 13 Oct 2023 13:07:00 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On 10/13/23 05:07, Michael Paquier wrote:\n> On Fri, Oct 13, 2023 at 02:20:59AM +0200, Vik Fearing wrote:\n>> On 10/10/23 05:34, Michael Paquier wrote:\n>>> I am attaching a v5 that addresses the documentation bits, could you\n>>> look at the business with date.c?\n>>\n>> Here is a v6\n> \n> Thanks for the new version.\n> \n>> which hopefully addresses all of your concerns.\n> \n> Mostly ;)\n> \n> The first thing I did was to extract the doc bits about timezone(zone,\n> time) for AT TIME ZONE from v6 and applied it independently.\n> \n> I have then looked at the rest and it looked mostly OK to me,\n> including the extra description you have added for the fifth example\n> in the docs. I have tweaked a few things: the regression tests to\n> make the views a bit more appealing to the eye, an indentation to not\n> have koel complain and did a catalog bump. Then applied it.\n\n\nThank you, Michael君!\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Fri, 13 Oct 2023 07:03:20 +0200",
"msg_from": "Vik Fearing <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Fri, Oct 13, 2023 at 07:03:20AM +0200, Vik Fearing wrote:\n> Thank you, Michael君!\n\nNo pb, ヴィックさん。\n--\nMichael",
"msg_date": "Fri, 13 Oct 2023 14:07:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "One of the AIX animals gave a strange result here:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2023-10-15%2011%3A40%3A01\n\nIf you ignore the diffs due to change in column width, the interesting\nchange seems to be:\n\n- 23:59:00-07 | 06:59:00+00 | 06:59:00+00 | 06:59:00+00\n- 23:59:59.99-07 | 06:59:59.99+00 | 06:59:59.99+00 | 06:59:59.99+00\n+ 23:59:00-07 | 4294966103:4294967295:00+00 |\n4294966103:4294967295:00+00 | 4294966103:4294967295:00+00\n+ 23:59:59.99-07 | 4294966103:00:00.01+00 |\n4294966103:00:00.01+00 | 4294966103:00:00.01+00\n\nBut the other AIX animal 'sungazer' was OK with it. They're both on\nthe same AIX7.1 host IIRC, both 64 bit builds, but the former is using\nxlc and the latter gcc. I don't immediately see what would cause that\nunderflow on that old compiler but not elsewhere. I have a shell\nthere (cfarm111) if someone has an idea...\n\n\n",
"msg_date": "Mon, 16 Oct 2023 10:16:47 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> One of the AIX animals gave a strange result here:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2023-10-15%2011%3A40%3A01\n\n> If you ignore the diffs due to change in column width, the interesting\n> change seems to be:\n\n> - 23:59:00-07 | 06:59:00+00 | 06:59:00+00 | 06:59:00+00\n> - 23:59:59.99-07 | 06:59:59.99+00 | 06:59:59.99+00 | 06:59:59.99+00\n> + 23:59:00-07 | 4294966103:4294967295:00+00 |\n> 4294966103:4294967295:00+00 | 4294966103:4294967295:00+00\n> + 23:59:59.99-07 | 4294966103:00:00.01+00 |\n> 4294966103:00:00.01+00 | 4294966103:00:00.01+00\n\n> But the other AIX animal 'sungazer' was OK with it. They're both on\n> the same AIX7.1 host IIRC, both 64 bit builds, but the former is using\n> xlc and the latter gcc. I don't immediately see what would cause that\n> underflow on that old compiler but not elsewhere. I have a shell\n> there (cfarm111) if someone has an idea...\n\nHmm. Seems like the error has to be creeping in during this part\nof timetz_zone():\n\n\tresult->time = t->time + (t->zone - tz) * USECS_PER_SEC;\n\twhile (result->time < INT64CONST(0))\n\t\tresult->time += USECS_PER_DAY;\n\twhile (result->time >= USECS_PER_DAY)\n\t\tresult->time -= USECS_PER_DAY;\n\nAccording to my machine, the initial computation of result->time\n(for the '23:59:59.99-07' input) yields 111599990000, and then we\niterate the second loop once to get 25199990000, which is the right\nanswer. If I force a second iteration to get -61200010000, I get\n\n# select '23:59:59.99-07'::timetz at local;\n timezone \n------------------------\n 4294967279:00:00.01+00\n(1 row)\n\nwhich doesn't quite match hornet's result but it seems\nsuggestively close.\n\nAnother line of thought is that while the time fields are int64,\nt->zone and tz are only int32. Multiplying by the INT64CONST\nUSECS_PER_SEC ought to be enough to make the compiler widen\nthe subtraction result to int64, but maybe that's screwing up?\nI'm tempted to wonder if this helps:\n\n-\tresult->time = t->time + (t->zone - tz) * USECS_PER_SEC;\n+\tresult->time = t->time + (int64) (t->zone - tz) * USECS_PER_SEC;\n\nForcing the wrong thing to happen there doesn't produce a match\nto hornet's result either, so I don't have a lot of hope for that\ntheory, but it seems like the explanation has to be somewhere here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 15 Oct 2023 17:57:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
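The arithmetic walked through above can be checked in isolation; here is a standalone sketch using the same numbers (time = 86399990000 us for 23:59:59.99, zone = +25200 for the -07 input as implied by the 111599990000 intermediate, target tz = 0), including the explicit int64 cast being considered:

#include <stdio.h>
#include <stdint.h>

#define USECS_PER_SEC   INT64_C(1000000)
#define USECS_PER_DAY   INT64_C(86400000000)

int
main(void)
{
    int64_t t_time = INT64_C(86399990000);  /* 23:59:59.99 */
    int32_t zone = 25200;                   /* the '-07' input, per the numbers above */
    int32_t tz = 0;                         /* UTC */
    int64_t result;

    result = t_time + (int64_t) (zone - tz) * USECS_PER_SEC;
    printf("raw:        %lld\n", (long long) result);   /* 111599990000 */

    while (result < 0)
        result += USECS_PER_DAY;
    while (result >= USECS_PER_DAY)
        result -= USECS_PER_DAY;
    printf("normalized: %lld\n", (long long) result);   /* 25199990000 = 06:59:59.99 */
    return 0;
}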
{
"msg_contents": "On Mon, Oct 16, 2023 at 10:57 AM Tom Lane <[email protected]> wrote:\n> I'm tempted to wonder if this helps:\n>\n> - result->time = t->time + (t->zone - tz) * USECS_PER_SEC;\n> + result->time = t->time + (int64) (t->zone - tz) * USECS_PER_SEC;\n\nI wanted to be able to try this and any other theories and managed to\nbuild the tip of master on cfarm111 with the same CC and CFLAGS as\nNoah used, but the problem didn't reproduce! Hmm, I didn't enable any\nextra options, so now I'm wondering if something in some random header\nsomewhere is involved here... trying again with more stuff turned\non...\n\n\n",
"msg_date": "Mon, 16 Oct 2023 11:24:13 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "Another possibly interesting factoid: it appears that before\n97957fdba, we had zero regression test coverage of timetz_zone ---\nand we still have none of timetz_izone, which contains essentially\nthe same code. So if there is a problem here, whether it's ours or\nthe compiler's, it's not hard to see why we didn't notice.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 15 Oct 2023 18:47:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Mon, Oct 16, 2023 at 11:24 AM Thomas Munro <[email protected]> wrote:\n> On Mon, Oct 16, 2023 at 10:57 AM Tom Lane <[email protected]> wrote:\n> > I'm tempted to wonder if this helps:\n> >\n> > - result->time = t->time + (t->zone - tz) * USECS_PER_SEC;\n> > + result->time = t->time + (int64) (t->zone - tz) * USECS_PER_SEC;\n>\n> I wanted to be able to try this and any other theories and managed to\n> build the tip of master on cfarm111 with the same CC and CFLAGS as\n> Noah used, but the problem didn't reproduce! Hmm, I didn't enable any\n> extra options, so now I'm wondering if something in some random header\n> somewhere is involved here... trying again with more stuff turned\n> on...\n\nOh, I can't use any of the handrolled packages in ~nm due to\npermissions. I tried enabling perl from /opt/freeware (perl is my\nusual first guess for who is !@#$ing with the system headers), but the\ntest passes.\n\n\n",
"msg_date": "Mon, 16 Oct 2023 11:50:08 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Sun, Oct 15, 2023 at 06:47:04PM -0400, Tom Lane wrote:\n> Another possibly interesting factoid: it appears that before\n> 97957fdba, we had zero regression test coverage of timetz_zone ---\n> and we still have none of timetz_izone, which contains essentially\n> the same code. So if there is a problem here, whether it's ours or\n> the compiler's, it's not hard to see why we didn't notice.\n\nRight. This one is just a lucky, or say unlucky find. I didn't\nnotice that this path was entirely missing coverage, planting an\nassertion in the middle of timetz_zone() passes check-world.\n--\nMichael",
"msg_date": "Mon, 16 Oct 2023 10:28:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Mon, Oct 16, 2023 at 11:50:08AM +1300, Thomas Munro wrote:\n> On Mon, Oct 16, 2023 at 11:24 AM Thomas Munro <[email protected]> wrote:\n>> On Mon, Oct 16, 2023 at 10:57 AM Tom Lane <[email protected]> wrote:\n>>> I'm tempted to wonder if this helps:\n>>>\n>>> - result->time = t->time + (t->zone - tz) * USECS_PER_SEC;\n>>> + result->time = t->time + (int64) (t->zone - tz) * USECS_PER_SEC;\n\nAll that should use TZNAME_FIXED_OFFSET as timezone type, and I don't\nreally see why this would overflow..\n\nPerhaps a more aggressive (int64) ((t->zone - (int64) tz) *\nUSECS_PER_SEC) would help?\n\n>> I wanted to be able to try this and any other theories and managed to\n>> build the tip of master on cfarm111 with the same CC and CFLAGS as\n>> Noah used, but the problem didn't reproduce! Hmm, I didn't enable any\n>> extra options, so now I'm wondering if something in some random header\n>> somewhere is involved here... trying again with more stuff turned\n>> on...\n> \n> Oh, I can't use any of the handrolled packages in ~nm due to\n> permissions. I tried enabling perl from /opt/freeware (perl is my\n> usual first guess for who is !@#$ing with the system headers), but the\n> test passes.\n\nAnother theory would be one of these weird compiler optimization issue\nfrom xlc? In recent history, there was 8d2a01ae12cd.\n--\nMichael",
"msg_date": "Mon, 16 Oct 2023 10:58:03 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Mon, Oct 16, 2023 at 2:58 PM Michael Paquier <[email protected]> wrote:\n> Another theory would be one of these weird compiler optimization issue\n> from xlc? In recent history, there was 8d2a01ae12cd.\n\nYeah, there are more like that too. xlc 12.1 is dead (like the OS\nversion it shipped with). New versions are available on cfarm if we\ncare about this target. But I am conscious of the cosmic law that if\nyou blame the compiler too soon you can cause the bug to move into\nyour code...\n\n\n",
"msg_date": "Mon, 16 Oct 2023 15:53:56 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> On Mon, Oct 16, 2023 at 2:58 PM Michael Paquier <[email protected]> wrote:\n>> Another theory would be one of these weird compiler optimization issue\n>> from xlc? In recent history, there was 8d2a01ae12cd.\n\n> Yeah, there are more like that too. xlc 12.1 is dead (like the OS\n> version it shipped with). New versions are available on cfarm if we\n> care about this target. But I am conscious of the cosmic law that if\n> you blame the compiler too soon you can cause the bug to move into\n> your code...\n\nI'm having a hard time not believing that this is a compiler bug.\nLooking back at 8d2a01ae12cd and its speculation that xlc is overly\nliberal about reordering code around sequence points ... I wonder\nif it'd help to do this calculation in a local variable, and only\nassign the final value to result->time ? But we have to reproduce\nthe problem first.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 15 Oct 2023 23:02:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Mon, Oct 16, 2023 at 4:02 PM Tom Lane <[email protected]> wrote:\n> I'm having a hard time not believing that this is a compiler bug.\n> Looking back at 8d2a01ae12cd and its speculation that xlc is overly\n> liberal about reordering code around sequence points ... I wonder\n> if it'd help to do this calculation in a local variable, and only\n> assign the final value to result->time ? But we have to reproduce\n> the problem first.\n\nIf that can be shown I would vote for switching to /opt/IBM/xlc/16.1.0\nand not changing a single bit of PostgreSQL.\n\n\n",
"msg_date": "Mon, 16 Oct 2023 16:07:20 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> On Mon, Oct 16, 2023 at 4:02 PM Tom Lane <[email protected]> wrote:\n>> I'm having a hard time not believing that this is a compiler bug.\n>> Looking back at 8d2a01ae12cd and its speculation that xlc is overly\n>> liberal about reordering code around sequence points ... I wonder\n>> if it'd help to do this calculation in a local variable, and only\n>> assign the final value to result->time ? But we have to reproduce\n>> the problem first.\n\n> If that can be shown I would vote for switching to /opt/IBM/xlc/16.1.0\n> and not changing a single bit of PostgreSQL.\n\nIf switching to 16.1 removes the failure, I'd agree. It's hard\nto believe that any significant number of users still care about\nbuilding PG with xlc 12.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 15 Oct 2023 23:30:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Sun, Oct 15, 2023 at 11:30:17PM -0400, Tom Lane wrote:\n> Thomas Munro <[email protected]> writes:\n>> If that can be shown I would vote for switching to /opt/IBM/xlc/16.1.0\n>> and not changing a single bit of PostgreSQL.\n> \n> If switching to 16.1 removes the failure, I'd agree. It's hard\n> to believe that any significant number of users still care about\n> building PG with xlc 12.\n\nFWIW, I really wish that we were less conservative here and just drop\nthat rather than waste resources in debugging things.\n\nNow, I'm also OK to put this one aside and put a WHERE clause to\ntimetz_local_view to only fetch one value, as the test has the same\nvalue as long as we check that AT LOCAL converts the result to UTC for\nthe three expression patterns tested.\n--\nMichael",
"msg_date": "Mon, 16 Oct 2023 13:22:54 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Sun, Oct 15, 2023 at 11:30:17PM -0400, Tom Lane wrote:\n> Thomas Munro <[email protected]> writes:\n> > On Mon, Oct 16, 2023 at 4:02 PM Tom Lane <[email protected]> wrote:\n> >> I'm having a hard time not believing that this is a compiler bug.\n> >> Looking back at 8d2a01ae12cd and its speculation that xlc is overly\n> >> liberal about reordering code around sequence points ... I wonder\n> >> if it'd help to do this calculation in a local variable, and only\n> >> assign the final value to result->time ? But we have to reproduce\n> >> the problem first.\n> \n> > If that can be shown I would vote for switching to /opt/IBM/xlc/16.1.0\n> > and not changing a single bit of PostgreSQL.\n> \n> If switching to 16.1 removes the failure, I'd agree. It's hard\n> to believe that any significant number of users still care about\n> building PG with xlc 12.\n\nWorks for me. I've started a test run with the xlc version change.\n\n\n",
"msg_date": "Sun, 15 Oct 2023 21:58:04 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Sun, Oct 15, 2023 at 09:58:04PM -0700, Noah Misch wrote:\n> On Sun, Oct 15, 2023 at 11:30:17PM -0400, Tom Lane wrote:\n> > Thomas Munro <[email protected]> writes:\n> > > On Mon, Oct 16, 2023 at 4:02 PM Tom Lane <[email protected]> wrote:\n> > >> I'm having a hard time not believing that this is a compiler bug.\n> > >> Looking back at 8d2a01ae12cd and its speculation that xlc is overly\n> > >> liberal about reordering code around sequence points ... I wonder\n> > >> if it'd help to do this calculation in a local variable, and only\n> > >> assign the final value to result->time ? But we have to reproduce\n> > >> the problem first.\n> > \n> > > If that can be shown I would vote for switching to /opt/IBM/xlc/16.1.0\n> > > and not changing a single bit of PostgreSQL.\n> > \n> > If switching to 16.1 removes the failure, I'd agree. It's hard\n> > to believe that any significant number of users still care about\n> > building PG with xlc 12.\n> \n> Works for me. I've started a test run with the xlc version change.\n\nIt failed similarly:\n\n+ 23:59:00-07 | 4294966103:4294967295:00+00 | 4294966103:4294967295:00+00 | 4294966103:4294967295:00+00\n+ 23:59:59.99-07 | 4294966103:00:00.01+00 | 4294966103:00:00.01+00 | 4294966103:00:00.01+00\n\n\n",
"msg_date": "Sun, 15 Oct 2023 22:50:51 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "Noah Misch <[email protected]> writes:\n> On Sun, Oct 15, 2023 at 09:58:04PM -0700, Noah Misch wrote:\n>> Works for me. I've started a test run with the xlc version change.\n\n> It failed similarly:\n\n> + 23:59:00-07 | 4294966103:4294967295:00+00 | 4294966103:4294967295:00+00 | 4294966103:4294967295:00+00\n> + 23:59:59.99-07 | 4294966103:00:00.01+00 | 4294966103:00:00.01+00 | 4294966103:00:00.01+00\n\nUgh. So if the failure is robust enough to persist across\nseveral major xlc versions, why couldn't Thomas reproduce it?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Oct 2023 01:54:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Mon, Oct 16, 2023 at 01:54:23AM -0400, Tom Lane wrote:\n> Noah Misch <[email protected]> writes:\n> > On Sun, Oct 15, 2023 at 09:58:04PM -0700, Noah Misch wrote:\n> >> Works for me. I've started a test run with the xlc version change.\n> \n> > It failed similarly:\n> \n> > + 23:59:00-07 | 4294966103:4294967295:00+00 | 4294966103:4294967295:00+00 | 4294966103:4294967295:00+00\n> > + 23:59:59.99-07 | 4294966103:00:00.01+00 | 4294966103:00:00.01+00 | 4294966103:00:00.01+00\n> \n> Ugh. So if the failure is robust enough to persist across\n> several major xlc versions, why couldn't Thomas reproduce it?\n\nBeats me. hornet wipes its starting environment down to OBJECT_MODE=32_64\nPERL5LIB=/home/nm/sw/cpan64/lib/perl5 SPECIES=xlc64 PATH=/usr/bin, then\napplies all the environment settings seen in buildfarm logs.\n\n\n",
"msg_date": "Sun, 15 Oct 2023 23:05:10 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Sun, Oct 15, 2023 at 11:05:10PM -0700, Noah Misch wrote:\n> On Mon, Oct 16, 2023 at 01:54:23AM -0400, Tom Lane wrote:\n>> Ugh. So if the failure is robust enough to persist across\n>> several major xlc versions, why couldn't Thomas reproduce it?\n> \n> Beats me. hornet wipes its starting environment down to OBJECT_MODE=32_64\n> PERL5LIB=/home/nm/sw/cpan64/lib/perl5 SPECIES=xlc64 PATH=/usr/bin, then\n> applies all the environment settings seen in buildfarm logs.\n\nPerhaps that's a stupid question.. But a server running under this\nenvironment fails the two following queries even for older branches,\nright?\nselect timezone('UTC', '23:59:59.99-07'::timetz);\nselect timezone('UTC', '23:59:00-07'::timetz);\n--\nMichael",
"msg_date": "Mon, 16 Oct 2023 16:29:56 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> Perhaps that's a stupid question.. But a server running under this\n> environment fails the two following queries even for older branches,\n> right?\n> select timezone('UTC', '23:59:59.99-07'::timetz);\n> select timezone('UTC', '23:59:00-07'::timetz);\n\nOne would expect, since the AT LOCAL syntax is just sugar for that.\n\nI'm mighty tempted though to (a) add coverage for timetz_izone\nto HEAD, and (b) backpatch the new tests, sans the AT LOCAL case,\nto the back branches (maybe not v11).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Oct 2023 09:54:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Mon, Oct 16, 2023 at 09:54:41AM -0400, Tom Lane wrote:\n> I'm mighty tempted though to (a) add coverage for timetz_izone\n> to HEAD, and (b) backpatch the new tests, sans the AT LOCAL case,\n> to the back branches (maybe not v11).\n\nI see that you've already done (a) with 2f04720307. I'd be curious to\nsee what happens for (b), as well, once (a) is processed on hornet\nonce..\n--\nMichael",
"msg_date": "Tue, 17 Oct 2023 08:25:29 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> On Mon, Oct 16, 2023 at 09:54:41AM -0400, Tom Lane wrote:\n>> I'm mighty tempted though to (a) add coverage for timetz_izone\n>> to HEAD, and (b) backpatch the new tests, sans the AT LOCAL case,\n>> to the back branches (maybe not v11).\n\n> I see that you've already done (a) with 2f04720307. I'd be curious to\n> see what happens for (b), as well, once (a) is processed on hornet\n> once..\n\nSure enough, timetz_izone has exactly the same behavior [1].\n\nI'd kind of decided that back-patching wouldn't be worth the trouble;\ndo you foresee that it'd teach us anything new?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2023-10-17%2000%3A07%3A36\n\n\n",
"msg_date": "Mon, 16 Oct 2023 22:11:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "Noah Misch <[email protected]> writes:\n> On Mon, Oct 16, 2023 at 01:54:23AM -0400, Tom Lane wrote:\n>> Ugh. So if the failure is robust enough to persist across\n>> several major xlc versions, why couldn't Thomas reproduce it?\n\n> Beats me. hornet wipes its starting environment down to OBJECT_MODE=32_64\n> PERL5LIB=/home/nm/sw/cpan64/lib/perl5 SPECIES=xlc64 PATH=/usr/bin, then\n> applies all the environment settings seen in buildfarm logs.\n\nI was able to reproduce the failure on cfarm111 after adopting\nthese settings from hornet's configuration:\n\nexport OBJECT_MODE=64\nexport CC='xlc_r -D_LARGE_FILES=1 '\nexport CFLAGS='-O2 -qmaxmem=33554432 -qsuppress=1500-010:1506-995 -qsuppress=1506-010:1506-416:1506-450:1506-480:1506-481:1506-492:1506-944:1506-1264 -qinfo=all:nocnd:noeff:noext:nogot:noini:noord:nopar:noppc:norea:nouni:nouse -qsuppress=1506-374:1506-419:1506-434:1506-438:1506-451:1506-452:1506-453:1506-495:1506-786'\n\nand doing\n\n./configure --enable-cassert --enable-debug --without-icu --prefix=/home/tgl/testversion\n\netc etc.\n\nIt is absolutely, gold-platedly, a compiler bug, because inserting\na debug printf into the loop\n\n\twhile (result->time >= USECS_PER_DAY)\n\t\tresult->time -= USECS_PER_DAY;\n\nmakes the failure go away. Unfortunately, I've not yet found another\nway to make it go away :-(. My upthread idea of using a local variable\ninstead of result->time is no help, and some other random code\nalterations didn't change the results either.\n\nNot sure where we go from here. While I don't have much hesitation\nabout blowing off xlc_r 12.1, it would be sad if their latest\ntoolchain doesn't work either. (I didn't try permuting the code\nwhile using the newer compiler, though.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Oct 2023 01:40:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 01:40:18AM -0400, Tom Lane wrote:\n> makes the failure go away. Unfortunately, I've not yet found another\n> way to make it go away :-(. My upthread idea of using a local variable\n> instead of result->time is no help, and some other random code\n> alterations didn't change the results either.\n\nThat may be a long shot, but even a modulo? Say in these two code\npaths:\n- while (result->time >= USECS_PER_DAY)\n- result->time -= USECS_PER_DAY;\n+ if (result->time >= USECS_PER_DAY)\n+ result->time %= USECS_PER_DAY;\n\n> Not sure where we go from here. While I don't have much hesitation\n> about blowing off xlc_r 12.1, it would be sad if their latest\n> toolchain doesn't work either. (I didn't try permuting the code\n> while using the newer compiler, though.)\n\nWe've spent a lot of time on that. I'm OK to just give up, trim the\nvalues of the view with a qual, and call it a day.\n--\nMichael",
"msg_date": "Tue, 17 Oct 2023 15:25:14 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> On Tue, Oct 17, 2023 at 01:40:18AM -0400, Tom Lane wrote:\n>> makes the failure go away. Unfortunately, I've not yet found another\n>> way to make it go away :-(. My upthread idea of using a local variable\n>> instead of result->time is no help, and some other random code\n>> alterations didn't change the results either.\n\n> That may be a long shot, but even a modulo?\n\nYeah, the same thing occurred to me in the shower this morning, and it\ndoes seem to work! We can replace both loops with a %= operator, at\nleast if we're willing to assume C99 division semantics, which seems\npretty safe in 2023. Your idea of doing a range check to skip the\ndivision in typical cases is a refinement I'd not thought of, but\nit seems like a good idea for performance.\n\n(I see that the negative-starting-point case isn't covered in the\ncurrent regression tests, so maybe we better add a test for that.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Oct 2023 11:31:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "I wrote:\n> Yeah, the same thing occurred to me in the shower this morning, and it\n> does seem to work! We can replace both loops with a %= operator, at\n> least if we're willing to assume C99 division semantics, which seems\n> pretty safe in 2023.\n\nWhoops, no: for negative starting values we'd need truncate-towards-\nminus-infinity division whereas C99 specifies truncate-towards-zero.\nHowever, the attached does pass for me on cfarm111 as well as my\nusual dev machine.\n\nPresumably this is a pre-existing bug that also appears in back\nbranches. But in the interests of science I propose that we\nback-patch only the test case and see which machine(s) fail it\nbefore back-patching the code change.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 17 Oct 2023 12:45:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "Hmm, I guess I must have missed some important flag or environment\nvariable when trying to reproduce it, sorry.\n\nGiven that IBM describes xlc as \"legacy\" (replaced by xlclang, but\nstill supported for some unspecified period of time for the benefit of\npeople who need C++ ABI compatibility with old code), I wonder how\nlong we plan to support it... Anecdotally, from a time 1-2 decades\nago when I used AIX daily, I can report that vast amounts of open\nsource stuff couldn't build with xlc, so gcc was used for pretty much\nanything that didn't have a C++ ABI requirement. I kinda wonder if a\nsingle person in the entire world appreciates that we support this.\n\n\n",
"msg_date": "Wed, 18 Oct 2023 11:38:58 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n\n> Given that IBM describes xlc as \"legacy\" (replaced by xlclang, but\n> still supported for some unspecified period of time for the benefit of\n> people who need C++ ABI compatibility with old code), I wonder how\n> long we plan to support it...\n\nShould we be testing against xlclang instead?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Oct 2023 18:54:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Wed, Oct 18, 2023 at 11:54 AM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n>\n> > Given that IBM describes xlc as \"legacy\" (replaced by xlclang, but\n> > still supported for some unspecified period of time for the benefit of\n> > people who need C++ ABI compatibility with old code), I wonder how\n> > long we plan to support it...\n>\n> Should we be testing against xlclang instead?\n\nI hesitated to suggest it because it's not my animal/time we're\ntalking about but it seems to make more sense. It appears to be IBM's\nanswer to the nothing-builds-with-this-thing phenomenon, since it\naccepts a lot of GCCisms via Clang's adoption of them. From a quick\nglance at [1], it lacks the atomics builtins but we have our own\nassembler magic for POWER. So maybe it'd all just work™.\n\n[1] https://www.ibm.com/docs/en/xl-c-and-cpp-aix/16.1?topic=migration-checklist-when-moving-from-xl-based-front-end-clang-based-front-end\n\n\n",
"msg_date": "Wed, 18 Oct 2023 12:11:35 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> On Wed, Oct 18, 2023 at 11:54 AM Tom Lane <[email protected]> wrote:\n>> Should we be testing against xlclang instead?\n\n> I hesitated to suggest it because it's not my animal/time we're\n> talking about but it seems to make more sense. It appears to be IBM's\n> answer to the nothing-builds-with-this-thing phenomenon, since it\n> accepts a lot of GCCisms via Clang's adoption of them. From a quick\n> glance at [1], it lacks the atomics builtins but we have our own\n> assembler magic for POWER. So maybe it'd all just work™.\n\nDiscounting the Windows animals, it looks like the xlc animals are\nour only remaining ones that use anything except gcc or clang.\nThat feels uncomfortably like a compiler monoculture to me, so\nI can understand the reasoning for keeping hornet/mandrill going.\nStill, maybe we should just accept the fact that gcc/clang have\noutcompeted everything else in the C compiler universe. It's\ngetting hard to imagine that anyone would bring out some new product\nthat didn't try to be bug-compatible with gcc, for precisely the\nreason you mention.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Oct 2023 19:32:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 12:45:28PM -0400, Tom Lane wrote:\n> Whoops, no: for negative starting values we'd need truncate-towards-\n> minus-infinity division whereas C99 specifies truncate-towards-zero.\n> However, the attached does pass for me on cfarm111 as well as my\n> usual dev machine.\n\nI guess that the following trick could be used for the negative case,\nwith one modulo followed by one extra addition: \nif (result->time < INT64CONST(0))\n{\n result->time %= USECS_PER_DAY;\n result->time += USECS_PER_DAY;\n}\n\n> Presumably this is a pre-existing bug that also appears in back\n> branches. But in the interests of science I propose that we\n> back-patch only the test case and see which machine(s) fail it\n> before back-patching the code change.\n\nSure, as you see fit.\n--\nMichael",
"msg_date": "Wed, 18 Oct 2023 09:02:13 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> On Wed, Oct 18, 2023 at 11:54 AM Tom Lane <[email protected]> wrote:\n>> Should we be testing against xlclang instead?\n\n> I hesitated to suggest it because it's not my animal/time we're\n> talking about but it seems to make more sense. It appears to be IBM's\n> answer to the nothing-builds-with-this-thing phenomenon, since it\n> accepts a lot of GCCisms via Clang's adoption of them. From a quick\n> glance at [1], it lacks the atomics builtins but we have our own\n> assembler magic for POWER. So maybe it'd all just work™.\n\nFWIW, I tried a test build with xlclang 16.1 on cfarm111, and\nit does seem like it Just Works, modulo a couple of oddities:\n\n* <netinet/tcp.h> fails to compile, due to references to struct\nin6_addr, unless <netinet/in.h> is included first. Most of our\nreferences to tcp.h already do that, but not libpq-be.h and\nfe-protocol3.c. I'm a bit at a loss why we've not seen this\nwith the existing BF animals on this machine, because AFAICS\nthey're all using the same /usr/include tree.\n\n* configure recognizes this as gcc but not Clang, which may or may\nnot be fine:\n...\nchecking whether we are using the GNU C compiler... yes\n...\nchecking whether xlclang is Clang... no\n...\nThis doesn't seem to break anything, but it struck me as odd.\nconfigure seems to pick a sane set of compiler options anyway.\n\nInterestingly, xlclang shows the same failure with the pre-19fa97731\nversions of timetz_zone/timetz_izone as plain xlc does. I guess\nthis is not so astonishing since they presumably share the same\ncodegen backend. But maybe somebody ought to file a bug with IBM?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Oct 2023 23:15:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 7:35 PM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > On Wed, Oct 18, 2023 at 11:54 AM Tom Lane <[email protected]> wrote:\n> >> Should we be testing against xlclang instead?\n>\n> > I hesitated to suggest it because it's not my animal/time we're\n> > talking about but it seems to make more sense. It appears to be IBM's\n> > answer to the nothing-builds-with-this-thing phenomenon, since it\n> > accepts a lot of GCCisms via Clang's adoption of them. From a quick\n> > glance at [1], it lacks the atomics builtins but we have our own\n> > assembler magic for POWER. So maybe it'd all just work™.\n>\n> Discounting the Windows animals, it looks like the xlc animals are\n> our only remaining ones that use anything except gcc or clang.\n> That feels uncomfortably like a compiler monoculture to me, so\n> I can understand the reasoning for keeping hornet/mandrill going.\n> Still, maybe we should just accept the fact that gcc/clang have\n> outcompeted everything else in the C compiler universe. It's\n> getting hard to imagine that anyone would bring out some new product\n> that didn't try to be bug-compatible with gcc, for precisely the\n> reason you mention.\n\nAfter some research I determined that the release date for xlc 12.1\nseems to be June 1, 2012. At that time, clang 3.1 was current and just\nafter, GCC release version 4.7.1 was released. The oldest version of\nclang that I find in the buildfarm is 3.9, and the oldest version of\ngcc I find in the buildfarm is 4.6.3. So, somewhat to my surprise, xlc\nis not the oldest compiler that we're still supporting in the\nbuildfarm. But it is very old, and it seems like that gcc and clang\nare going to continue to gain ground against gcc and other proprietary\ncompilers for some time to come. I think it's reasonable to ask\nourselves whether we really want to go to the trouble of maintaining\nsomething that is likely to get so little real-world usage.\n\nTo be honest, I'm not particularly concerned about the need to adjust\ncompiler and linker options from time to time, even though I know that\nprobably annoys Andres. What does concern me is finding and coding\naround compiler bugs. 19fa977311b9da9c6c84f0108600e78213751a38 is just\nridiculous, IMHO. If an end-of-life compiler for an end-of-life\noperating system has bugs that mean that C code that's doing nothing\nmore than a bit of arithmetic isn't compiling properly, it's time to\npull the plug. Nor is this the first example of working around a bug\nthat only manifests in ancient xlc.\n\nI think that, when there was more real diversity in the software\necosystem, testing on a lot of platforms was a good way of finding out\nwhether you'd done something that was in general correct or just\nsomething that happened to work on the machine you had in front of\nyou. But hornet and mandrill are not telling us about things we've\ndone that are incorrect in general yet happen to work on gcc and\nclang. What they seem to be telling us about, in this case and some\nothers, are things that are CORRECT in general yet happen NOT to work\non ancient xlc. And that's an important difference, because if we were\nfinding real mistakes for which other platforms were not punishing us,\nthen we could hope that fixing those mistakes would improve\ncompatibility with other, equally niche platforms, potentially\nincluding future platforms that haven't come along yet. 
As it is, it's\nhard to believe that any work we put into this is going to have any\nbenefit on any system other than ancient AIX. If there are other niche\nsystems out there that have a similar number of bugs, they'll probably\nbe *different* bugs.\n\nSources for release dates:\n\nhttps://www.ibm.com/support/pages/fix-list-xl-c-aix\nhttps://releases.llvm.org/\nhttps://gcc.gnu.org/releases.html\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 18 Oct 2023 11:38:18 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Tue, Oct 17, 2023 at 7:35 PM Tom Lane <[email protected]> wrote:\n>> Discounting the Windows animals, it looks like the xlc animals are\n>> our only remaining ones that use anything except gcc or clang.\n\n> After some research I determined that the release date for xlc 12.1\n> seems to be June 1, 2012. At that time, clang 3.1 was current and just\n> after, GCC release version 4.7.1 was released. The oldest version of\n> clang that I find in the buildfarm is 3.9, and the oldest version of\n> gcc I find in the buildfarm is 4.6.3. So, somewhat to my surprise, xlc\n> is not the oldest compiler that we're still supporting in the\n> buildfarm. But it is very old, and it seems like that gcc and clang\n> are going to continue to gain ground against gcc and other proprietary\n> compilers for some time to come.\n\nProbably. Independent of that, it's fair to ask why we're still\ntesting against xlc 12.1 and not the considerably-more-recent xlclang,\nor at least xlc 16.1. (I also wonder why we're still testing AIX 7.1\nrather than an OS version that's not EOL.)\n\n> What does concern me is finding and coding\n> around compiler bugs. 19fa977311b9da9c6c84f0108600e78213751a38 is just\n> ridiculous, IMHO.\n\nI would agree, except for the downthread discovery that the bug is\nstill present in current xlc and xlclang. Short of blowing off AIX\naltogether, it seems like we need to do something about it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Oct 2023 12:02:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Wed, Oct 18, 2023 at 12:02 PM Tom Lane <[email protected]> wrote:\n> Probably. Independent of that, it's fair to ask why we're still\n> testing against xlc 12.1 and not the considerably-more-recent xlclang,\n> or at least xlc 16.1. (I also wonder why we're still testing AIX 7.1\n> rather than an OS version that's not EOL.)\n\nWell, according to Wikipedia, AIX 7.3 (released in 2021) requires\nPOWER8. AIX 7.2 (released 2015) only requires POWER7, and according to\nthe buildfarm page, this machine is POWER7. So it could possibly be\nupgraded from 7.1 to 7.2, supposing that it is indeed compatible with\nthat release and that Noah's willing to do it and that there's not an\nexorbitant fee and so on, but that still leaves you running an OS\nversion that is almost certainly closer to EOL than it is to the\noriginal release date. Anything newer would require buying new\nhardware, or so I guess.\n\nPut otherwise, I think the reason we're testing on this AIX rather\nthan anything else is probably that there is exactly 1 person\nassociated with the project who has >0 pieces of hardware that can run\nAIX, and that person has one, so we're testing on that one. That might\nbe a reason to question whether that particular strain of hardware has\na bright future, at least in terms of PostgreSQL support.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 18 Oct 2023 13:01:42 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Wed, Oct 18, 2023 at 12:02 PM Tom Lane <[email protected]> wrote:\n>> Probably. Independent of that, it's fair to ask why we're still\n>> testing against xlc 12.1 and not the considerably-more-recent xlclang,\n>> or at least xlc 16.1. (I also wonder why we're still testing AIX 7.1\n>> rather than an OS version that's not EOL.)\n\n> Well, according to Wikipedia, AIX 7.3 (released in 2021) requires\n> POWER8. AIX 7.2 (released 2015) only requires POWER7, and according to\n> the buildfarm page, this machine is POWER7. So it could possibly be\n> upgraded from 7.1 to 7.2, supposing that it is indeed compatible with\n> that release and that Noah's willing to do it and that there's not an\n> exorbitant fee and so on, but that still leaves you running an OS\n> version that is almost certainly closer to EOL than it is to the\n> original release date. Anything newer would require buying new\n> hardware, or so I guess.\n\nThe machine belongs to OSU (via the gcc compile farm), and I see\nthat they have another one that's POWER8 and is running AIX 7.3 [1].\nSo in principle the buildfarm animals could just be moved over\nto that one.\n\nPerhaps Noah has some particular attachment to 7.1, but now that that's\nEOL it seems like we shouldn't be paying so much attention to it.\nMy guess is that it's still there in the compile farm because the gcc\npeople think it's still useful to have access to POWER7 hardware; but\nI doubt there's enough difference for our purposes to be worth dealing\nwith a dead OS and ancient compiler.\n\n\t\t\tregards, tom lane\n\n[1] https://portal.cfarm.net/machines/list/\n\n\n",
"msg_date": "Wed, 18 Oct 2023 16:45:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Wed, Oct 18, 2023 at 04:45:46PM -0400, Tom Lane wrote:\n> > On Wed, Oct 18, 2023 at 12:02 PM Tom Lane <[email protected]> wrote:\n> >> Probably. Independent of that, it's fair to ask why we're still\n> >> testing against xlc 12.1 and not the considerably-more-recent xlclang,\n> >> or at least xlc 16.1. (I also wonder why we're still testing AIX 7.1\n> >> rather than an OS version that's not EOL.)\n\n> The machine belongs to OSU (via the gcc compile farm), and I see\n> that they have another one that's POWER8 and is running AIX 7.3 [1].\n> So in principle the buildfarm animals could just be moved over\n> to that one.\n> \n> Perhaps Noah has some particular attachment to 7.1, but now that that's\n> EOL it seems like we shouldn't be paying so much attention to it.\n> My guess is that it's still there in the compile farm because the gcc\n> people think it's still useful to have access to POWER7 hardware; but\n> I doubt there's enough difference for our purposes to be worth dealing\n> with a dead OS and ancient compiler.\n\nNo particular attachment. From 2019 to 2023-08, hoverfly tested xlc16 on AIX\n7.2; its run ended when cfarm119's owner replaced cfarm119 with an AIX 7.3,\nibm-clang v17.1.1 machine. Since 2015, hornet and mandrill have tested xlc12\non AIX 7.1. That said, given your finding that later xlc versions have the\nsame code generation bug, the choice of version is a side issue. A migration\nto ibm-clang wouldn't have prevented this week's xlc-prompted commits.\n\nI feel the gravity and longevity of xlc bugs has been out of proportion with\nthe compiler's contribution to PostgreSQL. I would find it reasonable to\nrevoke xlc support in v17+, leaving AIX gcc support in place. The main\ncontribution of AIX has been to find the bug behind commit a1b8aa1. That\nbenefited from the AIX kernel, not from any particular compiler. hornet and\nmandrill would continue to test v16-.\n\nBy the way, I once tried to report an xlc bug. Their system was tailored to\naccept bugs from paid support customers only. I submitted it via some sales\ninquiry form, just in case, but never heard back.\n\n\n",
"msg_date": "Wed, 18 Oct 2023 16:33:20 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "On Wed, Oct 18, 2023 at 7:33 PM Noah Misch <[email protected]> wrote:\n> I feel the gravity and longevity of xlc bugs has been out of proportion with\n> the compiler's contribution to PostgreSQL. I would find it reasonable to\n> revoke xlc support in v17+, leaving AIX gcc support in place.\n\n+1 for this proposal. I just think this is getting silly. We're saying\nthat we only have access to 1 or 2 AIX machines, and most of us have\naccess to none, and the compiler has serious code generation bugs that\nare present in both a release 11 years old and also a release current\nrelease, meaning they went unfixed for 10 years, and we can't report\nbugs or get them fixed when we find them, and the use of this\nparticular compiler in the buildfarm isn't finding any issues that\nmatter anywhere else.\n\nTo be honest, I'm not entirely sure that even AIX gcc support is\ndelivering enough value per unit work to justify keeping it around.\nBut the xlc situation is worse.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 19 Oct 2023 10:38:14 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Wed, Oct 18, 2023 at 7:33 PM Noah Misch <[email protected]> wrote:\n>> I feel the gravity and longevity of xlc bugs has been out of proportion with\n>> the compiler's contribution to PostgreSQL. I would find it reasonable to\n>> revoke xlc support in v17+, leaving AIX gcc support in place.\n\n> +1 for this proposal.\n\nWFM, too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Oct 2023 10:46:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "Hi,\n\nOn 2023-10-19 10:38:14 -0400, Robert Haas wrote:\n> On Wed, Oct 18, 2023 at 7:33 PM Noah Misch <[email protected]> wrote:\n> > I feel the gravity and longevity of xlc bugs has been out of proportion with\n> > the compiler's contribution to PostgreSQL. I would find it reasonable to\n> > revoke xlc support in v17+, leaving AIX gcc support in place.\n> \n> +1 for this proposal. I just think this is getting silly. We're saying\n> that we only have access to 1 or 2 AIX machines, and most of us have\n> access to none, and the compiler has serious code generation bugs that\n> are present in both a release 11 years old and also a release current\n> release, meaning they went unfixed for 10 years, and we can't report\n> bugs or get them fixed when we find them, and the use of this\n> particular compiler in the buildfarm isn't finding any issues that\n> matter anywhere else.\n\n+1.\n\n\n> To be honest, I'm not entirely sure that even AIX gcc support is\n> delivering enough value per unit work to justify keeping it around.\n> But the xlc situation is worse.\n\nAgreed with both. If it were just a platform that didn't need special casing\nin a bunch of places, it'd be one thing, but it's linkage model is so odd that\nit makes no sense to keep AIX support around. But I'll take what I can get...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 20 Oct 2023 22:18:33 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-10-19 10:38:14 -0400, Robert Haas wrote:\n>> To be honest, I'm not entirely sure that even AIX gcc support is\n>> delivering enough value per unit work to justify keeping it around.\n>> But the xlc situation is worse.\n\n> Agreed with both. If it were just a platform that didn't need special casing\n> in a bunch of places, it'd be one thing, but it's linkage model is so odd that\n> it makes no sense to keep AIX support around. But I'll take what I can get...\n\nThe other thread recently referred to:\n\nhttps://www.postgresql.org/message-id/flat/20220702183354.a6uhja35wta7agew%40alap3.anarazel.de\n\nwas mostly about how AIX's choice that alignof(double) < alignof(int64)\nbreaks a whole bunch of assumptions in our code. AFAICS we've done\nnothing to resolve that, and nobody really wants to deal with it,\nand there's no good reason to think that fixing it would improve\nportability to any other platform. So maybe there's an adequate\ncase for just nuking AIX support altogether? I can't recall the\nlast time I saw a report from an actual AIX end user.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 21 Oct 2023 01:27:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add support for AT LOCAL"
}
] |
[
{
"msg_contents": "Hi All,\n\nAt present, pg_promote() returns true to the caller on successful\npromotion of standby, however it returns false in multiple scenarios\nwhich includes:\n\n1) The SIGUSR1 signal could not be sent to the postmaster process.\n2) The postmaster died during standby promotion.\n3) Standby couldn't be promoted within the specified wait time.\n\nFor an application calling this function, if pg_promote returns false,\nit is hard to interpret the reason behind it. So I think we should\n*only* allow pg_promote to return false when the server could not be\npromoted in the given wait time and in other scenarios it should just\nthrow an error (FATAL, ERROR ... depending on the type of failure that\noccurred). Please let me know your thoughts on this change. thanks.!\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n",
"msg_date": "Tue, 6 Jun 2023 16:35:58 +0530",
"msg_from": "Ashutosh Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Return value of pg_promote()"
},
{
"msg_contents": "On Tue, 2023-06-06 at 16:35 +0530, Ashutosh Sharma wrote:\n> At present, pg_promote() returns true to the caller on successful\n> promotion of standby, however it returns false in multiple scenarios\n> which includes:\n> \n> 1) The SIGUSR1 signal could not be sent to the postmaster process.\n> 2) The postmaster died during standby promotion.\n> 3) Standby couldn't be promoted within the specified wait time.\n> \n> For an application calling this function, if pg_promote returns false,\n> it is hard to interpret the reason behind it. So I think we should\n> *only* allow pg_promote to return false when the server could not be\n> promoted in the given wait time and in other scenarios it should just\n> throw an error (FATAL, ERROR ... depending on the type of failure that\n> occurred). Please let me know your thoughts on this change. thanks.!\n\nAs the original author, I'd say that that sounds reasonable, particularly\nin case #1. If the postmaster dies, we are going to die too, so it\nprobably doesn't matter much. But I think an error is certainly also\ncorrect in that case.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 06 Jun 2023 19:00:57 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Return value of pg_promote()"
},
{
"msg_contents": "\n\nOn 2023/06/07 2:00, Laurenz Albe wrote:\n> On Tue, 2023-06-06 at 16:35 +0530, Ashutosh Sharma wrote:\n>> At present, pg_promote() returns true to the caller on successful\n>> promotion of standby, however it returns false in multiple scenarios\n>> which includes:\n>>\n>> 1) The SIGUSR1 signal could not be sent to the postmaster process.\n>> 2) The postmaster died during standby promotion.\n>> 3) Standby couldn't be promoted within the specified wait time.\n>>\n>> For an application calling this function, if pg_promote returns false,\n>> it is hard to interpret the reason behind it. So I think we should\n>> *only* allow pg_promote to return false when the server could not be\n>> promoted in the given wait time and in other scenarios it should just\n>> throw an error (FATAL, ERROR ... depending on the type of failure that\n>> occurred). Please let me know your thoughts on this change. thanks.!\n> \n> As the original author, I'd say that that sounds reasonable, particularly\n> in case #1. If the postmaster dies, we are going to die too, so it\n> probably doesn't matter much. But I think an error is certainly also\n> correct in that case.\n\n+1\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 8 Jun 2023 01:25:38 +0900",
"msg_from": "Fujii Masao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Return value of pg_promote()"
},
{
"msg_contents": "On Wed, Jun 7, 2023 at 9:55 PM Fujii Masao <[email protected]> wrote:\n>\n>\n>\n> On 2023/06/07 2:00, Laurenz Albe wrote:\n> > On Tue, 2023-06-06 at 16:35 +0530, Ashutosh Sharma wrote:\n> >> At present, pg_promote() returns true to the caller on successful\n> >> promotion of standby, however it returns false in multiple scenarios\n> >> which includes:\n> >>\n> >> 1) The SIGUSR1 signal could not be sent to the postmaster process.\n> >> 2) The postmaster died during standby promotion.\n> >> 3) Standby couldn't be promoted within the specified wait time.\n> >>\n> >> For an application calling this function, if pg_promote returns false,\n> >> it is hard to interpret the reason behind it. So I think we should\n> >> *only* allow pg_promote to return false when the server could not be\n> >> promoted in the given wait time and in other scenarios it should just\n> >> throw an error (FATAL, ERROR ... depending on the type of failure that\n> >> occurred). Please let me know your thoughts on this change. thanks.!\n> >\n> > As the original author, I'd say that that sounds reasonable, particularly\n> > in case #1. If the postmaster dies, we are going to die too, so it\n> > probably doesn't matter much. But I think an error is certainly also\n> > correct in that case.\n>\n> +1\n>\n\nThanks for sharing your thoughts, Laurenz and Fujii-san. I've prepared\na patch that makes pg_promote error out if it couldn't send SIGUSR1 to\nthe postmaster or if the postmaster died in the middle of standby\npromotion. PFA. Please note that now (with this patch) pg_promote only\nreturns false if the standby could not be promoted within the given\nwait time. In case of any kind of failure, it just reports an error\nbased on the type of failure that occurred.\n\n--\nWith Regards,\nAshutosh Sharma.",
"msg_date": "Thu, 8 Jun 2023 16:53:50 +0530",
"msg_from": "Ashutosh Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Return value of pg_promote()"
},
{
"msg_contents": "On Thu, Jun 08, 2023 at 04:53:50PM +0530, Ashutosh Sharma wrote:\n> Thanks for sharing your thoughts, Laurenz and Fujii-san. I've prepared\n> a patch that makes pg_promote error out if it couldn't send SIGUSR1 to\n> the postmaster or if the postmaster died in the middle of standby\n> promotion. PFA. Please note that now (with this patch) pg_promote only\n> returns false if the standby could not be promoted within the given\n> wait time. In case of any kind of failure, it just reports an error\n> based on the type of failure that occurred.\n\n if (kill(PostmasterPid, SIGUSR1) != 0)\n {\n- ereport(WARNING,\n- (errmsg(\"failed to send signal to postmaster: %m\")));\n (void) unlink(PROMOTE_SIGNAL_FILE);\n- PG_RETURN_BOOL(false);\n+ ereport(ERROR,\n+ (errmsg(\"failed to send signal to postmaster: %m\")));\n }\n\nShouldn't you assign an error code to this one rather than the\ndefault one for internal errors, like ERRCODE_SYSTEM_ERROR?\n\n /* return immediately if waiting was not requested */\n@@ -744,7 +743,9 @@ pg_promote(PG_FUNCTION_ARGS)\n * necessity for manual cleanup of all postmaster children.\n */\n if (rc & WL_POSTMASTER_DEATH)\n- PG_RETURN_BOOL(false);\n+ ereport(FATAL,\n+ (errcode(ERRCODE_ADMIN_SHUTDOWN),\n+ errmsg(\"terminating connection due to unexpected postmaster exit\")));\n\nI would add an errcontext here, to let somebody know that the\nconnection died while waiting for the promotion to be processed, say\n\"while waiting on promotion\".\n--\nMichael",
"msg_date": "Wed, 16 Aug 2023 17:02:09 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Return value of pg_promote()"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 05:02:09PM +0900, Michael Paquier wrote:\n> if (kill(PostmasterPid, SIGUSR1) != 0)\n> {\n> - ereport(WARNING,\n> - (errmsg(\"failed to send signal to postmaster: %m\")));\n> (void) unlink(PROMOTE_SIGNAL_FILE);\n> - PG_RETURN_BOOL(false);\n> + ereport(ERROR,\n> + (errmsg(\"failed to send signal to postmaster: %m\")));\n> }\n> \n> Shouldn't you assign an error code to this one rather than the\n> default one for internal errors, like ERRCODE_SYSTEM_ERROR?\n> \n> /* return immediately if waiting was not requested */\n> @@ -744,7 +743,9 @@ pg_promote(PG_FUNCTION_ARGS)\n> * necessity for manual cleanup of all postmaster children.\n> */\n> if (rc & WL_POSTMASTER_DEATH)\n> - PG_RETURN_BOOL(false);\n> + ereport(FATAL,\n> + (errcode(ERRCODE_ADMIN_SHUTDOWN),\n> + errmsg(\"terminating connection due to unexpected postmaster exit\")));\n> \n> I would add an errcontext here, to let somebody know that the\n> connection died while waiting for the promotion to be processed, say\n> \"while waiting on promotion\".\n\nI have just noticed that we do not have a CF entry for this proposal,\nso I have added one with Laurenz as author:\nhttps://commitfest.postgresql.org/44/4504/\n\nFor now the patch is waiting on author. Could you address my\nlast review?\n--\nMichael",
"msg_date": "Thu, 17 Aug 2023 09:37:36 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Return value of pg_promote()"
},
{
"msg_contents": "Hi Michael,\n\nOn Thu, Aug 17, 2023 at 6:07 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Aug 16, 2023 at 05:02:09PM +0900, Michael Paquier wrote:\n> > if (kill(PostmasterPid, SIGUSR1) != 0)\n> > {\n> > - ereport(WARNING,\n> > - (errmsg(\"failed to send signal to postmaster: %m\")));\n> > (void) unlink(PROMOTE_SIGNAL_FILE);\n> > - PG_RETURN_BOOL(false);\n> > + ereport(ERROR,\n> > + (errmsg(\"failed to send signal to postmaster: %m\")));\n> > }\n> >\n> > Shouldn't you assign an error code to this one rather than the\n> > default one for internal errors, like ERRCODE_SYSTEM_ERROR?\n> >\n> > /* return immediately if waiting was not requested */\n> > @@ -744,7 +743,9 @@ pg_promote(PG_FUNCTION_ARGS)\n> > * necessity for manual cleanup of all postmaster children.\n> > */\n> > if (rc & WL_POSTMASTER_DEATH)\n> > - PG_RETURN_BOOL(false);\n> > + ereport(FATAL,\n> > + (errcode(ERRCODE_ADMIN_SHUTDOWN),\n> > + errmsg(\"terminating connection due to unexpected postmaster exit\")));\n> >\n> > I would add an errcontext here, to let somebody know that the\n> > connection died while waiting for the promotion to be processed, say\n> > \"while waiting on promotion\".\n>\n> I have just noticed that we do not have a CF entry for this proposal,\n> so I have added one with Laurenz as author:\n> https://commitfest.postgresql.org/44/4504/\n>\n> For now the patch is waiting on author. Could you address my\n> last review?\n\nThanks for reviewing the patch and adding a CF entry for it. PFA patch\nthat addresses your review comments.\n\nAnd... Sorry for the delayed response. I totally missed it.\n\n--\nWith Regards,\nAshutosh Sharma.",
"msg_date": "Mon, 28 Aug 2023 11:50:45 +0530",
"msg_from": "Ashutosh Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Return value of pg_promote()"
},
{
"msg_contents": "On Mon, Aug 28, 2023 at 11:50:45AM +0530, Ashutosh Sharma wrote:\n> Thanks for reviewing the patch and adding a CF entry for it. PFA patch\n> that addresses your review comments.\n\nThat looks OK seen from here. Perhaps others have more comments?\n\n> And... Sorry for the delayed response. I totally missed it.\n\nNo problem.\n--\nMichael",
"msg_date": "Mon, 28 Aug 2023 18:09:44 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Return value of pg_promote()"
},
{
"msg_contents": "On Thu, 2023-08-17 at 09:37 +0900, Michael Paquier wrote:\n> I have just noticed that we do not have a CF entry for this proposal,\n> so I have added one with Laurenz as author:\n> https://commitfest.postgresql.org/44/4504/\n\nI have changed the author to Fujii Masao.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 28 Aug 2023 14:09:37 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Return value of pg_promote()"
},
{
"msg_contents": "On Mon, Aug 28, 2023 at 02:09:37PM +0200, Laurenz Albe wrote:\n> On Thu, 2023-08-17 at 09:37 +0900, Michael Paquier wrote:\n> > I have just noticed that we do not have a CF entry for this proposal,\n> > so I have added one with Laurenz as author:\n> > https://commitfest.postgresql.org/44/4504/\n> \n> I have changed the author to Fujii Masao.\n\nStill incorrect, as the author is Ashutosh Sharma. Fujii-san has\nprovided some feedback, though.\n--\nMichael",
"msg_date": "Tue, 29 Aug 2023 08:47:29 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Return value of pg_promote()"
}
] |
[
{
"msg_contents": "Hello,\n\nToday, I compiled the master branch of Postgres with the following GCC\nversion:\n\ngcc (GCC) 13.1.1 20230511 (Red Hat 13.1.1-2)\n\nI got the following warning:\n\n[701/2058] Compiling C object src/backend/postgres_lib.a.p/access_transam_xlogrecovery.c.o\nIn function ‘recoveryStopsAfter’,\n inlined from ‘PerformWalRecovery’ at ../src/backend/access/transam/xlogrecovery.c:1749:8:\n../src/backend/access/transam/xlogrecovery.c:2756:42: warning: ‘recordXtime’ may be used uninitialized [-Wmaybe-uninitialized]\n 2756 | recoveryStopTime = recordXtime;\n | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~\n../src/backend/access/transam/xlogrecovery.c: In function ‘PerformWalRecovery’:\n../src/backend/access/transam/xlogrecovery.c:2647:21: note: ‘recordXtime’ was declared here\n 2647 | TimestampTz recordXtime;\n | ^~~~~~~~~~~\n\nInvestigating this issue I see a potential assignment in\nxlogrecovery.c:2715. Best I can tell the warning looks real. Similar\nfunctions in this file seem to initialize recordXtime to 0. Attached is\na patch which does just that.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Tue, 06 Jun 2023 09:24:56 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Potential us of initialized memory in xlogrecovery.c"
},
{
"msg_contents": "On 06/06/2023 10:24, Tristan Partin wrote:\n> Hello,\n> \n> Today, I compiled the master branch of Postgres with the following GCC\n> version:\n> \n> gcc (GCC) 13.1.1 20230511 (Red Hat 13.1.1-2)\n> \n> I got the following warning:\n> \n> [701/2058] Compiling C object src/backend/postgres_lib.a.p/access_transam_xlogrecovery.c.o\n> In function ‘recoveryStopsAfter’,\n> inlined from ‘PerformWalRecovery’ at ../src/backend/access/transam/xlogrecovery.c:1749:8:\n> ../src/backend/access/transam/xlogrecovery.c:2756:42: warning: ‘recordXtime’ may be used uninitialized [-Wmaybe-uninitialized]\n> 2756 | recoveryStopTime = recordXtime;\n> | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~\n> ../src/backend/access/transam/xlogrecovery.c: In function ‘PerformWalRecovery’:\n> ../src/backend/access/transam/xlogrecovery.c:2647:21: note: ‘recordXtime’ was declared here\n> 2647 | TimestampTz recordXtime;\n> | ^~~~~~~~~~~\n> \n> Investigating this issue I see a potential assignment in\n> xlogrecovery.c:2715. Best I can tell the warning looks real. Similar\n> functions in this file seem to initialize recordXtime to 0. Attached is\n> a patch which does just that.\n\nThank you! My refactoring in commit c945af80cf introduced this. Looking \nat getRecordTimestamp(), it will always return true and set recordXtime \nfor the commit and abort records, and some compilers can deduce that.\n\nInitializing to 0 makes sense, I'll commit that fix later tonight.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 6 Jun 2023 19:31:14 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential us of initialized memory in xlogrecovery.c"
}
] |
[
{
"msg_contents": "Hi,\n\nWe recently upgraded to Postgres 15.3. When running ANALYZE, we get the\nfollowing message:\nERROR: could not determine which collation to use for string comparison\nHINT: Use the COLLATE clause to set the collation explicitly.\n\nWe have never seen this before. Could this be a bug?\n\nRegards,\n\nRômulo Coutinho.\n\nHi,We recently upgraded to Postgres 15.3. When running ANALYZE, we get the following message:ERROR: could not determine which collation to use for string comparisonHINT: Use the COLLATE clause to set the collation explicitly.We have never seen this before. Could this be a bug?Regards,Rômulo Coutinho.",
"msg_date": "Tue, 6 Jun 2023 11:42:36 -0300",
"msg_from": "=?UTF-8?Q?R=C3=B4mulo_Coutinho?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "ERROR: could not determine which collation to use for string\n comparison"
},
{
"msg_contents": "On Tue, 2023-06-06 at 11:42 -0300, Rômulo Coutinho wrote:\n> We recently upgraded to Postgres 15.3. When running ANALYZE, we get the following message:\n> ERROR: could not determine which collation to use for string comparison\n> HINT: Use the COLLATE clause to set the collation explicitly.\n> \n> We have never seen this before. Could this be a bug?\n\nImpossible to say without a way to reproduce.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 06 Jun 2023 18:57:57 +0200",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: could not determine which collation to use for string\n comparison"
}
] |
[
{
"msg_contents": "Hi\n\nHere:\n\n https://www.postgresql.org/docs/current/fdw-functions.html\n\nthe enumeration of OIDs which might be passed as the FDW validator function's\nsecond argument omits \"AttributeRelationId\" (as passed when altering\na foreign table's column options).\n\nAttached v1 patch adds this to this list of OIDs.\n\nThe alternative v2 patch adds this to this list of OIDs, and also\nformats it as an\nSGML list, which IMHO is easier to read.\n\nLooks like this has been missing since 9.3.\n\n\nRegards\n\nIan Barwick",
"msg_date": "Wed, 7 Jun 2023 09:08:51 +0900",
"msg_from": "Ian Lawrence Barwick <[email protected]>",
"msg_from_op": true,
"msg_subject": "doc patch: note AttributeRelationId passed to FDW validator function"
},
{
"msg_contents": "2023年6月7日(水) 9:08 Ian Lawrence Barwick <[email protected]>:\n>\n> Hi\n>\n> Here:\n>\n> https://www.postgresql.org/docs/current/fdw-functions.html\n>\n> the enumeration of OIDs which might be passed as the FDW validator function's\n> second argument omits \"AttributeRelationId\" (as passed when altering\n> a foreign table's column options).\n>\n> Attached v1 patch adds this to this list of OIDs.\n>\n> The alternative v2 patch adds this to this list of OIDs, and also\n> formats it as an\n> SGML list, which IMHO is easier to read.\n>\n> Looks like this has been missing since 9.3.\n\nForgot to add this to a CF; done: https://commitfest.postgresql.org/46/4730/\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Thu, 28 Dec 2023 13:55:27 +0900",
"msg_from": "Ian Lawrence Barwick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: doc patch: note AttributeRelationId passed to FDW validator\n function"
},
{
"msg_contents": "On Thu, Dec 28, 2023 at 01:55:27PM +0900, Ian Lawrence Barwick wrote:\n> 2023年6月7日(水) 9:08 Ian Lawrence Barwick <[email protected]>:\n>> The alternative v2 patch adds this to this list of OIDs, and also\n>> formats it as an\n>> SGML list, which IMHO is easier to read.\n>>\n>> Looks like this has been missing since 9.3.\n> \n> Forgot to add this to a CF; done: https://commitfest.postgresql.org/46/4730/\n\nAgreed that a list is cleaner. Looking around I can see that the\ncatalogs going through the validator functions are limited to the five\nyou are listing in your patch. Will apply in a bit, thanks!\n--\nMichael",
"msg_date": "Thu, 28 Dec 2023 15:37:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: doc patch: note AttributeRelationId passed to FDW validator\n function"
},
{
"msg_contents": "2023年12月28日(木) 15:37 Michael Paquier <[email protected]>:\n>\n> On Thu, Dec 28, 2023 at 01:55:27PM +0900, Ian Lawrence Barwick wrote:\n> > 2023年6月7日(水) 9:08 Ian Lawrence Barwick <[email protected]>:\n> >> The alternative v2 patch adds this to this list of OIDs, and also\n> >> formats it as an\n> >> SGML list, which IMHO is easier to read.\n> >>\n> >> Looks like this has been missing since 9.3.\n> >\n> > Forgot to add this to a CF; done: https://commitfest.postgresql.org/46/4730/\n>\n> Agreed that a list is cleaner. Looking around I can see that the\n> catalogs going through the validator functions are limited to the five\n> you are listing in your patch. Will apply in a bit, thanks!\n\nThanks for taking care of that!\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Thu, 28 Dec 2023 21:46:12 +0900",
"msg_from": "Ian Lawrence Barwick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: doc patch: note AttributeRelationId passed to FDW validator\n function"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm testing the ability to have a logical replica subscribed from a standby.\n\nOf course, I'm doing this in a laboratory with no activity so\neverything get stuck after creating the subscription (the main slot).\nThis is clearly because every time it will create a temp slot for copy\na table it needs the running xacts from the primary.\n\nNow, I was solving this by executing CHECKPOINT on the primary, and\nalso noted that pg_switch_wal() works too. After that, I read about\npg_log_standby_snapshot().\n\nSo, I wonder if that function is really needed because as I said I\nsolved it with already existing functionality. Or if it is really\nneeded maybe it is a bug that a CHECKPOINT and pg_switch_wal() have\nthe same effect?\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSYSTEMGUARDS - Consultores de PostgreSQL\n\n\n",
"msg_date": "Wed, 7 Jun 2023 00:32:22 -0500",
"msg_from": "Jaime Casanova <[email protected]>",
"msg_from_op": true,
"msg_subject": "is pg_log_standby_snapshot() really needed?"
},
{
"msg_contents": "Hi,\n\nOn 6/7/23 7:32 AM, Jaime Casanova wrote:\n> Hi,\n> \n> I'm testing the ability to have a logical replica subscribed from a standby.\n> \n> Of course, I'm doing this in a laboratory with no activity so\n> everything get stuck after creating the subscription (the main slot).\n> This is clearly because every time it will create a temp slot for copy\n> a table it needs the running xacts from the primary.\n> \n> Now, I was solving this by executing CHECKPOINT on the primary, and\n> also noted that pg_switch_wal() works too. After that, I read about\n> pg_log_standby_snapshot().\n> \n> So, I wonder if that function is really needed because as I said I\n> solved it with already existing functionality. Or if it is really\n> needed maybe it is a bug that a CHECKPOINT and pg_switch_wal() have\n> the same effect?\n> \n\nEven if CHECKPOINT and pg_switch_wal() do produce the same effect, I think\nthey are expensive (as compare to pg_log_standby_snapshot() which does nothing but\nemit a xl_running_xacts).\n\nFor this reason, I think pg_log_standby_snapshot() is worth to have/keep.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 7 Jun 2023 12:19:31 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: is pg_log_standby_snapshot() really needed?"
},
{
"msg_contents": "On Wed, Jun 7, 2023 at 5:19 AM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On 6/7/23 7:32 AM, Jaime Casanova wrote:\n> >\n> > So, I wonder if that function is really needed because as I said I\n> > solved it with already existing functionality. Or if it is really\n> > needed maybe it is a bug that a CHECKPOINT and pg_switch_wal() have\n> > the same effect?\n> >\n>\n> Even if CHECKPOINT and pg_switch_wal() do produce the same effect, I think\n> they are expensive (as compare to pg_log_standby_snapshot() which does nothing but\n> emit a xl_running_xacts).\n>\n> For this reason, I think pg_log_standby_snapshot() is worth to have/keep.\n>\n\nCHECKPOINT could be expensive in a busy system, but the problem\npg_log_standby_snapshot() is solving is about a no-activity system,\nand in a no-activity system CHECKPOINT is very fast.\nEven with very low activity SUBSCRIPTION flows fine. As an example I\nput an INSERT happening every 10s and SUBSCRIPTION never stuck no\nCHECKPOINT nor pg_log_standby_snapshot() needed.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSYSTEMGUARDS - Consultores de PostgreSQL\n\n\n",
"msg_date": "Wed, 7 Jun 2023 13:50:30 -0500",
"msg_from": "Jaime Casanova <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: is pg_log_standby_snapshot() really needed?"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-07 13:50:30 -0500, Jaime Casanova wrote:\n> CHECKPOINT could be expensive in a busy system, but the problem\n> pg_log_standby_snapshot() is solving is about a no-activity system,\n> and in a no-activity system CHECKPOINT is very fast.\n\nThere's no easy way for the subscriber to know if the system is active or\nnot. The only realistic way is to unconditionally issue the relevant\ncommand. And that's not at all ok for CHECKPOINT.\n\n\n> Even with very low activity SUBSCRIPTION flows fine. As an example I\n> put an INSERT happening every 10s and SUBSCRIPTION never stuck no\n> CHECKPOINT nor pg_log_standby_snapshot() needed.\n\nNobody forces you to issue pg_log_standby_snapshot(). If things work fine\nwithout it for you, cool. But it's pretty trivial to see that it doesn't\nalways:\n\nWith pg_log_standby_snapshot() as normal,\nrecovery/035_standby_logical_decoding takes 15.63s on my machine. Without it I\nlost patience after 2 minutes. And the test was only at the start (8 out of 78\nsubtests).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Jun 2023 16:02:49 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: is pg_log_standby_snapshot() really needed?"
},
{
"msg_contents": "Hi,\n\nOn 6/7/23 8:50 PM, Jaime Casanova wrote:\n> On Wed, Jun 7, 2023 at 5:19 AM Drouvot, Bertrand\n> <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> On 6/7/23 7:32 AM, Jaime Casanova wrote:\n>>>\n>>> So, I wonder if that function is really needed because as I said I\n>>> solved it with already existing functionality. Or if it is really\n>>> needed maybe it is a bug that a CHECKPOINT and pg_switch_wal() have\n>>> the same effect?\n>>>\n>>\n>> Even if CHECKPOINT and pg_switch_wal() do produce the same effect, I think\n>> they are expensive (as compare to pg_log_standby_snapshot() which does nothing but\n>> emit a xl_running_xacts).\n>>\n>> For this reason, I think pg_log_standby_snapshot() is worth to have/keep.\n>>\n> \n> CHECKPOINT could be expensive in a busy system, but the problem\n> pg_log_standby_snapshot() is solving is about a no-activity system,\n> and in a no-activity system CHECKPOINT is very fast.\n\na no-activity system at the time the logical replication slot is being created.\nMeans at the time the system is \"non active\" it may be possible that the checkpoint\nwould still have a lot to do.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 8 Jun 2023 07:12:17 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: is pg_log_standby_snapshot() really needed?"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 12:12 AM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> a no-activity system at the time the logical replication slot is being created.\n> Means at the time the system is \"non active\" it may be possible that the checkpoint\n> would still have a lot to do.\n>\n\nok, this doesn't deserve that much attention anyway...\nIt doesn't seem to do any harm, so I'm not spending more time on it\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSYSTEMGUARDS - Consultores de PostgreSQL\n\n\n",
"msg_date": "Thu, 8 Jun 2023 00:27:23 -0500",
"msg_from": "Jaime Casanova <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: is pg_log_standby_snapshot() really needed?"
}
] |
[
{
"msg_contents": "Continuing the work started with 208bf364a9, this patch removes md5() \nfunction calls from these test suites:\n\n- bloom\n- test_decoding\n- isolation\n- recovery\n- subscription\n\nThis covers all remaining test suites where md5() calls were just used \nto generate some random data and can be replaced by appropriately \nadapted sha256() calls. Unlike for the main regression tests, I didn't \nwrite a fipshash() wrapper here, because that would have been too \nrepetitive and wouldn't really save much here. In some cases it was \neasier to remove one layer of indirection by changing column types from \ntext to bytea.",
"msg_date": "Wed, 7 Jun 2023 08:59:14 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Remove incidental md5() function uses from several tests"
},
{
"msg_contents": "> On 7 Jun 2023, at 08:59, Peter Eisentraut <[email protected]> wrote:\n> \n> Continuing the work started with 208bf364a9, this patch removes md5() function calls from these test suites:\n> \n> - bloom\n> - test_decoding\n> - isolation\n> - recovery\n> - subscription\n> \n> This covers all remaining test suites where md5() calls were just used to generate some random data and can be replaced by appropriately adapted sha256() calls.\n\nLGTM from a skim.\n\n> Unlike for the main regression tests, I didn't write a fipshash() wrapper here, because that would have been too repetitive and wouldn't really save much here. In some cases it was easier to remove one layer of indirection by changing column types from text to bytea.\n\nAgreed. Since the commit message mentions 208bf364a9 it would probably be a\ngood idea to add some version of the above fipshash clarification to the commit\nmessage.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 7 Jun 2023 10:19:30 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove incidental md5() function uses from several tests"
},
{
"msg_contents": "On 07.06.23 10:19, Daniel Gustafsson wrote:\n>> Unlike for the main regression tests, I didn't write a fipshash() wrapper here, because that would have been too repetitive and wouldn't really save much here. In some cases it was easier to remove one layer of indirection by changing column types from text to bytea.\n> \n> Agreed. Since the commit message mentions 208bf364a9 it would probably be a\n> good idea to add some version of the above fipshash clarification to the commit\n> message.\n\nCommitted with that addition, thanks.\n\n\n\n",
"msg_date": "Tue, 4 Jul 2023 14:51:55 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove incidental md5() function uses from several tests"
}
] |
[
{
"msg_contents": "Fixed typo in SQL.\n\nCurrent: ALTER FOREIGN TABLE myschema.distributors OPTIONS (ADD opt1\n'value', SET opt2 'value2', DROP opt3 'value3');\n\nFixed: ALTER FOREIGN TABLE myschema.distributors OPTIONS (ADD opt1 'value',\nSET opt2 'value2', DROP opt3);\n\nDrop options do not get a value.\n\n-- \nMEHMET EMİN KARAKAŞ",
"msg_date": "Wed, 7 Jun 2023 17:25:28 +0300",
"msg_from": "=?UTF-8?Q?Mehmet_Emin_KARAKA=C5=9E?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "[DOCS] alter_foreign_table.sgml typo"
},
{
"msg_contents": "\n\nOn 2023/06/07 23:25, Mehmet Emin KARAKAŞ wrote:\n> Fixed typo in SQL.\n> \n> Current: ALTER FOREIGN TABLE myschema.distributors OPTIONS (ADD opt1 'value', SET opt2 'value2', DROP opt3 'value3');\n> \n> Fixed: ALTER FOREIGN TABLE myschema.distributors OPTIONS (ADD opt1 'value', SET opt2 'value2', DROP opt3);\n> \n> Drop options do not get a value.\n\nThanks for the report! I agree with your findings and the patch looks good to me.\nI will commit the patch barring any objection.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 8 Jun 2023 00:53:56 +0900",
"msg_from": "Fujii Masao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] alter_foreign_table.sgml typo"
},
{
"msg_contents": "\n\nOn 2023/06/08 0:53, Fujii Masao wrote:\n> \n> \n> On 2023/06/07 23:25, Mehmet Emin KARAKAŞ wrote:\n>> Fixed typo in SQL.\n>>\n>> Current: ALTER FOREIGN TABLE myschema.distributors OPTIONS (ADD opt1 'value', SET opt2 'value2', DROP opt3 'value3');\n>>\n>> Fixed: ALTER FOREIGN TABLE myschema.distributors OPTIONS (ADD opt1 'value', SET opt2 'value2', DROP opt3);\n>>\n>> Drop options do not get a value.\n> \n> Thanks for the report! I agree with your findings and the patch looks good to me.\n> I will commit the patch barring any objection.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 8 Jun 2023 20:19:11 +0900",
"msg_from": "Fujii Masao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [DOCS] alter_foreign_table.sgml typo"
},
{
"msg_contents": "Thank you.\n\nFujii Masao <[email protected]>, 8 Haz 2023 Per, 14:19 tarihinde\nşunu yazdı:\n\n>\n>\n> On 2023/06/08 0:53, Fujii Masao wrote:\n> >\n> >\n> > On 2023/06/07 23:25, Mehmet Emin KARAKAŞ wrote:\n> >> Fixed typo in SQL.\n> >>\n> >> Current: ALTER FOREIGN TABLE myschema.distributors OPTIONS (ADD opt1\n> 'value', SET opt2 'value2', DROP opt3 'value3');\n> >>\n> >> Fixed: ALTER FOREIGN TABLE myschema.distributors OPTIONS (ADD opt1\n> 'value', SET opt2 'value2', DROP opt3);\n> >>\n> >> Drop options do not get a value.\n> >\n> > Thanks for the report! I agree with your findings and the patch looks\n> good to me.\n> > I will commit the patch barring any objection.\n>\n> Pushed. Thanks!\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n>\n\n\n-- \nMEHMET EMİN KARAKAŞ\n\nThank you. Fujii Masao <[email protected]>, 8 Haz 2023 Per, 14:19 tarihinde şunu yazdı:\n\nOn 2023/06/08 0:53, Fujii Masao wrote:\n> \n> \n> On 2023/06/07 23:25, Mehmet Emin KARAKAŞ wrote:\n>> Fixed typo in SQL.\n>>\n>> Current: ALTER FOREIGN TABLE myschema.distributors OPTIONS (ADD opt1 'value', SET opt2 'value2', DROP opt3 'value3');\n>>\n>> Fixed: ALTER FOREIGN TABLE myschema.distributors OPTIONS (ADD opt1 'value', SET opt2 'value2', DROP opt3);\n>>\n>> Drop options do not get a value.\n> \n> Thanks for the report! I agree with your findings and the patch looks good to me.\n> I will commit the patch barring any objection.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n-- MEHMET EMİN KARAKAŞ",
"msg_date": "Thu, 8 Jun 2023 14:49:09 +0300",
"msg_from": "=?UTF-8?Q?Mehmet_Emin_KARAKA=C5=9E?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [DOCS] alter_foreign_table.sgml typo"
}
] |
[
{
"msg_contents": "This patch is really not necessary from a functional point of view. It\nis only necessary if we want to silence a compiler warning.\n\nTested on `gcc (GCC) 13.1.1 20230511 (Red Hat 13.1.1-2)`.\n\nAfter silencing this warning, all I am left with (given my build\nconfiguration) is:\n\n[1667/2280] Compiling C object src/pl/plpgsql/src/plpgsql.so.p/pl_exec.c.o\nIn file included from ../src/include/access/htup_details.h:22,\n from ../src/pl/plpgsql/src/pl_exec.c:21:\nIn function ‘assign_simple_var’,\n inlined from ‘exec_set_found’ at ../src/pl/plpgsql/src/pl_exec.c:8349:2:\n../src/include/varatt.h:230:36: warning: array subscript 0 is outside array bounds of ‘char[0]’ [-Warray-bounds=]\n 230 | (((varattrib_1b_e *) (PTR))->va_tag)\n | ^\n../src/include/varatt.h:94:12: note: in definition of macro ‘VARTAG_IS_EXPANDED’\n 94 | (((tag) & ~1) == VARTAG_EXPANDED_RO)\n | ^~~\n../src/include/varatt.h:284:57: note: in expansion of macro ‘VARTAG_1B_E’\n 284 | #define VARTAG_EXTERNAL(PTR) VARTAG_1B_E(PTR)\n | ^~~~~~~~~~~\n../src/include/varatt.h:301:57: note: in expansion of macro ‘VARTAG_EXTERNAL’\n 301 | (VARATT_IS_EXTERNAL(PTR) && !VARTAG_IS_EXPANDED(VARTAG_EXTERNAL(PTR)))\n | ^~~~~~~~~~~~~~~\n../src/pl/plpgsql/src/pl_exec.c:8537:17: note: in expansion of macro ‘VARATT_IS_EXTERNAL_NON_EXPANDED’\n 8537 | VARATT_IS_EXTERNAL_NON_EXPANDED(DatumGetPointer(newvalue)))\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n From my perspective, this warning definitely seems like a false\npositive, but I don't know the code well-enough to say that for certain.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Wed, 07 Jun 2023 09:31:10 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix last unitialized memory warning"
},
{
"msg_contents": "On Wed, Jun 7, 2023 at 7:31 AM Tristan Partin <[email protected]> wrote:\n>\n> This patch is really not necessary from a functional point of view. It\n> is only necessary if we want to silence a compiler warning.\n>\n> Tested on `gcc (GCC) 13.1.1 20230511 (Red Hat 13.1.1-2)`.\n\n...\n\n> From my perspective, this warning definitely seems like a false\n> positive, but I don't know the code well-enough to say that for certain.\n\nIt was a bit confusing to see a patch for src/bin/pgbench/pgbench.c,\nbut that file not mentioned in the warning messages you quoted. I take\nit that your patch silences a warning in pgbench.c; it would've been\nnice to see the actual warning.\n\n- PgBenchValue vargs[MAX_FARGS];\n+ PgBenchValue vargs[MAX_FARGS] = { 0 };\n\nIf I'm right about what kind of warning this might've caused (use of\npossibly uninitialized variable), you're correct that it is benign.\nThe for loop after declarations initializes all the elements of this\narray using evaluateExpr(), and if the initialization fails for some\nreason, the loop ends prematurely and returns from the function.\n\nI analyzed a few code-paths that return true from evaluateExpr(), and\nI'd like to believe that _all_ code paths that return true also\ninitialize the array element passed. But because there are so many\nbranches and function calls beneath evaluateExpr(), I think it's\nbetter to be paranoid and initialize all the array elements to 0.\n\nAlso, it is better to initialize/clear an array at the point of\ndefinition, like your patch does. So, +1 to the patch.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Thu, 8 Jun 2023 20:16:18 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix last unitialized memory warning"
},
{
"msg_contents": "On 07.06.23 16:31, Tristan Partin wrote:\n> This patch is really not necessary from a functional point of view. It\n> is only necessary if we want to silence a compiler warning.\n> \n> Tested on `gcc (GCC) 13.1.1 20230511 (Red Hat 13.1.1-2)`.\n> \n> After silencing this warning, all I am left with (given my build\n> configuration) is:\n> \n> [1667/2280] Compiling C object src/pl/plpgsql/src/plpgsql.so.p/pl_exec.c.o\n> In file included from ../src/include/access/htup_details.h:22,\n> from ../src/pl/plpgsql/src/pl_exec.c:21:\n> In function ‘assign_simple_var’,\n> inlined from ‘exec_set_found’ at ../src/pl/plpgsql/src/pl_exec.c:8349:2:\n> ../src/include/varatt.h:230:36: warning: array subscript 0 is outside array bounds of ‘char[0]’ [-Warray-bounds=]\n> 230 | (((varattrib_1b_e *) (PTR))->va_tag)\n> | ^\n> ../src/include/varatt.h:94:12: note: in definition of macro ‘VARTAG_IS_EXPANDED’\n> 94 | (((tag) & ~1) == VARTAG_EXPANDED_RO)\n> | ^~~\n> ../src/include/varatt.h:284:57: note: in expansion of macro ‘VARTAG_1B_E’\n> 284 | #define VARTAG_EXTERNAL(PTR) VARTAG_1B_E(PTR)\n> | ^~~~~~~~~~~\n> ../src/include/varatt.h:301:57: note: in expansion of macro ‘VARTAG_EXTERNAL’\n> 301 | (VARATT_IS_EXTERNAL(PTR) && !VARTAG_IS_EXPANDED(VARTAG_EXTERNAL(PTR)))\n> | ^~~~~~~~~~~~~~~\n> ../src/pl/plpgsql/src/pl_exec.c:8537:17: note: in expansion of macro ‘VARATT_IS_EXTERNAL_NON_EXPANDED’\n> 8537 | VARATT_IS_EXTERNAL_NON_EXPANDED(DatumGetPointer(newvalue)))\n> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> \n> From my perspective, this warning definitely seems like a false\n> positive, but I don't know the code well-enough to say that for certain.\n\nI cannot reproduce this warning with gcc-13. Are you using any \nnon-standard optimization options. Could you give your full configure \nand build commands and the OS?\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 08:19:05 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix last unitialized memory warning"
},
{
"msg_contents": "On Mon Jul 3, 2023 at 1:19 AM CDT, Peter Eisentraut wrote:\n> On 07.06.23 16:31, Tristan Partin wrote:\n> > This patch is really not necessary from a functional point of view. It\n> > is only necessary if we want to silence a compiler warning.\n> > \n> > Tested on `gcc (GCC) 13.1.1 20230511 (Red Hat 13.1.1-2)`.\n> > \n> > After silencing this warning, all I am left with (given my build\n> > configuration) is:\n> > \n> > [1667/2280] Compiling C object src/pl/plpgsql/src/plpgsql.so.p/pl_exec.c.o\n> > In file included from ../src/include/access/htup_details.h:22,\n> > from ../src/pl/plpgsql/src/pl_exec.c:21:\n> > In function ‘assign_simple_var’,\n> > inlined from ‘exec_set_found’ at ../src/pl/plpgsql/src/pl_exec.c:8349:2:\n> > ../src/include/varatt.h:230:36: warning: array subscript 0 is outside array bounds of ‘char[0]’ [-Warray-bounds=]\n> > 230 | (((varattrib_1b_e *) (PTR))->va_tag)\n> > | ^\n> > ../src/include/varatt.h:94:12: note: in definition of macro ‘VARTAG_IS_EXPANDED’\n> > 94 | (((tag) & ~1) == VARTAG_EXPANDED_RO)\n> > | ^~~\n> > ../src/include/varatt.h:284:57: note: in expansion of macro ‘VARTAG_1B_E’\n> > 284 | #define VARTAG_EXTERNAL(PTR) VARTAG_1B_E(PTR)\n> > | ^~~~~~~~~~~\n> > ../src/include/varatt.h:301:57: note: in expansion of macro ‘VARTAG_EXTERNAL’\n> > 301 | (VARATT_IS_EXTERNAL(PTR) && !VARTAG_IS_EXPANDED(VARTAG_EXTERNAL(PTR)))\n> > | ^~~~~~~~~~~~~~~\n> > ../src/pl/plpgsql/src/pl_exec.c:8537:17: note: in expansion of macro ‘VARATT_IS_EXTERNAL_NON_EXPANDED’\n> > 8537 | VARATT_IS_EXTERNAL_NON_EXPANDED(DatumGetPointer(newvalue)))\n> > | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> > \n> > From my perspective, this warning definitely seems like a false\n> > positive, but I don't know the code well-enough to say that for certain.\n>\n> I cannot reproduce this warning with gcc-13. Are you using any \n> non-standard optimization options. Could you give your full configure \n> and build commands and the OS?\n\nThanks for following up. My system is Fedora 38. I can confirm this is\nstill happening on master.\n\n$ gcc --version\ngcc (GCC) 13.1.1 20230614 (Red Hat 13.1.1-4)\nCopyright (C) 2023 Free Software Foundation, Inc.\nThis is free software; see the source for copying conditions. There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n$ meson setup build --buildtype=release\n$ ninja -C build\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 05 Jul 2023 16:06:46 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix last unitialized memory warning"
},
{
"msg_contents": "On 05.07.23 23:06, Tristan Partin wrote:\n> Thanks for following up. My system is Fedora 38. I can confirm this is\n> still happening on master.\n> \n> $ gcc --version\n> gcc (GCC) 13.1.1 20230614 (Red Hat 13.1.1-4)\n> Copyright (C) 2023 Free Software Foundation, Inc.\n> This is free software; see the source for copying conditions. There is NO\n> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n> $ meson setup build --buildtype=release\n\nThis buildtype turns on -O3 warnings. We have usually opted against \nchasing warnings in -O3 level because there are often some \nfalse-positive uninitialized variable warnings with every new compiler.\n\nNote that we have set the default build type to debugoptimized, for that \nreason.\n\n\n",
"msg_date": "Thu, 6 Jul 2023 10:21:44 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix last unitialized memory warning"
},
{
"msg_contents": "On Thu Jul 6, 2023 at 3:21 AM CDT, Peter Eisentraut wrote:\n> On 05.07.23 23:06, Tristan Partin wrote:\n> > Thanks for following up. My system is Fedora 38. I can confirm this is\n> > still happening on master.\n> > \n> > $ gcc --version\n> > gcc (GCC) 13.1.1 20230614 (Red Hat 13.1.1-4)\n> > Copyright (C) 2023 Free Software Foundation, Inc.\n> > This is free software; see the source for copying conditions. There is NO\n> > warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n> > $ meson setup build --buildtype=release\n>\n> This buildtype turns on -O3 warnings. We have usually opted against \n> chasing warnings in -O3 level because there are often some \n> false-positive uninitialized variable warnings with every new compiler.\n>\n> Note that we have set the default build type to debugoptimized, for that \n> reason.\n\nGood to know, thanks.\n\nRegarding the original patch, do you think it is good to be applied?\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 06 Jul 2023 08:41:50 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix last unitialized memory warning"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-06 10:21:44 +0200, Peter Eisentraut wrote:\n> On 05.07.23 23:06, Tristan Partin wrote:\n> > Thanks for following up. My system is Fedora 38. I can confirm this is\n> > still happening on master.\n> > \n> > $ gcc --version\n> > gcc (GCC) 13.1.1 20230614 (Red Hat 13.1.1-4)\n> > Copyright (C) 2023 Free Software Foundation, Inc.\n> > This is free software; see the source for copying conditions. There is NO\n> > warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n> > $ meson setup build --buildtype=release\n> \n> This buildtype turns on -O3 warnings. We have usually opted against chasing\n> warnings in -O3 level because there are often some false-positive\n> uninitialized variable warnings with every new compiler.\n\nOTOH, -O3 is substantially faster IME in cpu bound tests than -O2. It doesn't\nseem wise to me for the project to basically say that that's not advisable due\nto the level of warnings created.\n\nI've also found bugs with -O3 that -O2 didn't find. And often -O3 warnings end\nup showing up with -O2 a compiler major version or three down the line, so\nit's often just deferring work.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 Jul 2023 12:15:26 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix last unitialized memory warning"
},
{
"msg_contents": "On 06.07.23 15:41, Tristan Partin wrote:\n> On Thu Jul 6, 2023 at 3:21 AM CDT, Peter Eisentraut wrote:\n>> On 05.07.23 23:06, Tristan Partin wrote:\n>>> Thanks for following up. My system is Fedora 38. I can confirm this is\n>>> still happening on master.\n>>>\n>>> $ gcc --version\n>>> gcc (GCC) 13.1.1 20230614 (Red Hat 13.1.1-4)\n>>> Copyright (C) 2023 Free Software Foundation, Inc.\n>>> This is free software; see the source for copying conditions. There is NO\n>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n>>> $ meson setup build --buildtype=release\n>>\n>> This buildtype turns on -O3 warnings. We have usually opted against\n>> chasing warnings in -O3 level because there are often some\n>> false-positive uninitialized variable warnings with every new compiler.\n>>\n>> Note that we have set the default build type to debugoptimized, for that\n>> reason.\n> \n> Good to know, thanks.\n> \n> Regarding the original patch, do you think it is good to be applied?\n\nThat patch looks reasonable. But I can't actually reproduce the \nwarning, even with gcc-13. I do get the warning from plpgsql. Can you \nshow the warning you are seeing?\n\n\n\n",
"msg_date": "Sun, 9 Jul 2023 09:23:24 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix last unitialized memory warning"
},
{
"msg_contents": "On Sun Jul 9, 2023 at 2:23 AM CDT, Peter Eisentraut wrote:\n> On 06.07.23 15:41, Tristan Partin wrote:\n> > On Thu Jul 6, 2023 at 3:21 AM CDT, Peter Eisentraut wrote:\n> >> On 05.07.23 23:06, Tristan Partin wrote:\n> >>> Thanks for following up. My system is Fedora 38. I can confirm this is\n> >>> still happening on master.\n> >>>\n> >>> $ gcc --version\n> >>> gcc (GCC) 13.1.1 20230614 (Red Hat 13.1.1-4)\n> >>> Copyright (C) 2023 Free Software Foundation, Inc.\n> >>> This is free software; see the source for copying conditions. There is NO\n> >>> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n> >>> $ meson setup build --buildtype=release\n> >>\n> >> This buildtype turns on -O3 warnings. We have usually opted against\n> >> chasing warnings in -O3 level because there are often some\n> >> false-positive uninitialized variable warnings with every new compiler.\n> >>\n> >> Note that we have set the default build type to debugoptimized, for that\n> >> reason.\n> > \n> > Good to know, thanks.\n> > \n> > Regarding the original patch, do you think it is good to be applied?\n>\n> That patch looks reasonable. But I can't actually reproduce the \n> warning, even with gcc-13. I do get the warning from plpgsql. Can you \n> show the warning you are seeing?\n\nHere is the full warning that the original patch suppresses.\n\n[1360/1876] Compiling C object src/bin/pgbench/pgbench.p/pgbench.c.o\nIn function ‘coerceToInt’,\n inlined from ‘evalStandardFunc’ at ../src/bin/pgbench/pgbench.c:2607:11:\n../src/bin/pgbench/pgbench.c:2032:17: warning: ‘vargs[0].type’ may be used uninitialized [-Wmaybe-uninitialized]\n 2032 | if (pval->type == PGBT_INT)\n | ~~~~^~~~~~\n../src/bin/pgbench/pgbench.c: In function ‘evalStandardFunc’:\n../src/bin/pgbench/pgbench.c:2240:22: note: ‘vargs’ declared here\n 2240 | PgBenchValue vargs[MAX_FARGS];\n | ^~~~~\nIn function ‘coerceToInt’,\n inlined from ‘evalStandardFunc’ at ../src/bin/pgbench/pgbench.c:2607:11:\n../src/bin/pgbench/pgbench.c:2034:32: warning: ‘vargs[0].u.ival’ may be used uninitialized [-Wmaybe-uninitialized]\n 2034 | *ival = pval->u.ival;\n | ~~~~~~~^~~~~\n../src/bin/pgbench/pgbench.c: In function ‘evalStandardFunc’:\n../src/bin/pgbench/pgbench.c:2240:22: note: ‘vargs’ declared here\n 2240 | PgBenchValue vargs[MAX_FARGS];\n | ^~~~~\nIn function ‘coerceToInt’,\n inlined from ‘evalStandardFunc’ at ../src/bin/pgbench/pgbench.c:2607:11:\n../src/bin/pgbench/pgbench.c:2039:40: warning: ‘vargs[0].u.dval’ may be used uninitialized [-Wmaybe-uninitialized]\n 2039 | double dval = rint(pval->u.dval);\n | ^~~~~~~~~~~~~~~~~~\n../src/bin/pgbench/pgbench.c: In function ‘evalStandardFunc’:\n../src/bin/pgbench/pgbench.c:2240:22: note: ‘vargs’ declared here\n 2240 | PgBenchValue vargs[MAX_FARGS];\n | ^~~~~\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 19 Jul 2023 12:15:31 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix last unitialized memory warning"
},
{
"msg_contents": "On 19.07.23 19:15, Tristan Partin wrote:\n> On Sun Jul 9, 2023 at 2:23 AM CDT, Peter Eisentraut wrote:\n>> On 06.07.23 15:41, Tristan Partin wrote:\n>> > On Thu Jul 6, 2023 at 3:21 AM CDT, Peter Eisentraut wrote:\n>> >> On 05.07.23 23:06, Tristan Partin wrote:\n>> >>> Thanks for following up. My system is Fedora 38. I can confirm \n>> this is\n>> >>> still happening on master.\n>> >>>\n>> >>> $ gcc --version\n>> >>> gcc (GCC) 13.1.1 20230614 (Red Hat 13.1.1-4)\n>> >>> Copyright (C) 2023 Free Software Foundation, Inc.\n>> >>> This is free software; see the source for copying conditions. \n>> There is NO\n>> >>> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR \n>> PURPOSE.\n>> >>> $ meson setup build --buildtype=release\n>> >>\n>> >> This buildtype turns on -O3 warnings. We have usually opted against\n>> >> chasing warnings in -O3 level because there are often some\n>> >> false-positive uninitialized variable warnings with every new \n>> compiler.\n>> >>\n>> >> Note that we have set the default build type to debugoptimized, for \n>> that\n>> >> reason.\n>> > > Good to know, thanks.\n>> > > Regarding the original patch, do you think it is good to be applied?\n>>\n>> That patch looks reasonable. But I can't actually reproduce the \n>> warning, even with gcc-13. I do get the warning from plpgsql. Can \n>> you show the warning you are seeing?\n> \n> Here is the full warning that the original patch suppresses.\n\nI was able to reproduce the warning now on Fedora. I agree with the patch\n\n- PgBenchValue vargs[MAX_FARGS];\n+ PgBenchValue vargs[MAX_FARGS] = { 0 };\n\nI suggest to also do\n\n typedef enum\n {\n- PGBT_NO_VALUE,\n+ PGBT_NO_VALUE = 0,\n\nto make clear that the initialization value is meant to be invalid.\n\nI also got the plpgsql warning that you showed earlier, but I couldn't \nthink of a reasonable way to fix that.\n\n\n\n",
"msg_date": "Tue, 8 Aug 2023 12:20:24 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix last unitialized memory warning"
},
{
"msg_contents": "On Tue Aug 8, 2023 at 5:20 AM CDT, Peter Eisentraut wrote:\n> On 19.07.23 19:15, Tristan Partin wrote:\n> > On Sun Jul 9, 2023 at 2:23 AM CDT, Peter Eisentraut wrote:\n> >> On 06.07.23 15:41, Tristan Partin wrote:\n> >> > On Thu Jul 6, 2023 at 3:21 AM CDT, Peter Eisentraut wrote:\n> >> >> On 05.07.23 23:06, Tristan Partin wrote:\n> >> >>> Thanks for following up. My system is Fedora 38. I can confirm \n> >> this is\n> >> >>> still happening on master.\n> >> >>>\n> >> >>> $ gcc --version\n> >> >>> gcc (GCC) 13.1.1 20230614 (Red Hat 13.1.1-4)\n> >> >>> Copyright (C) 2023 Free Software Foundation, Inc.\n> >> >>> This is free software; see the source for copying conditions. \n> >> There is NO\n> >> >>> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR \n> >> PURPOSE.\n> >> >>> $ meson setup build --buildtype=release\n> >> >>\n> >> >> This buildtype turns on -O3 warnings. We have usually opted against\n> >> >> chasing warnings in -O3 level because there are often some\n> >> >> false-positive uninitialized variable warnings with every new \n> >> compiler.\n> >> >>\n> >> >> Note that we have set the default build type to debugoptimized, for \n> >> that\n> >> >> reason.\n> >> > > Good to know, thanks.\n> >> > > Regarding the original patch, do you think it is good to be applied?\n> >>\n> >> That patch looks reasonable. But I can't actually reproduce the \n> >> warning, even with gcc-13. I do get the warning from plpgsql. Can \n> >> you show the warning you are seeing?\n> > \n> > Here is the full warning that the original patch suppresses.\n>\n> I was able to reproduce the warning now on Fedora. I agree with the patch\n>\n> - PgBenchValue vargs[MAX_FARGS];\n> + PgBenchValue vargs[MAX_FARGS] = { 0 };\n>\n> I suggest to also do\n>\n> typedef enum\n> {\n> - PGBT_NO_VALUE,\n> + PGBT_NO_VALUE = 0,\n>\n> to make clear that the initialization value is meant to be invalid.\n>\n> I also got the plpgsql warning that you showed earlier, but I couldn't \n> think of a reasonable way to fix that.\n\nApplied in v2.\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Tue, 08 Aug 2023 10:14:57 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix last unitialized memory warning"
},
{
"msg_contents": "On 08.08.23 17:14, Tristan Partin wrote:\n>> I was able to reproduce the warning now on Fedora. I agree with the \n>> patch\n>>\n>> - PgBenchValue vargs[MAX_FARGS];\n>> + PgBenchValue vargs[MAX_FARGS] = { 0 };\n>>\n>> I suggest to also do\n>>\n>> typedef enum\n>> {\n>> - PGBT_NO_VALUE,\n>> + PGBT_NO_VALUE = 0,\n>>\n>> to make clear that the initialization value is meant to be invalid.\n>>\n>> I also got the plpgsql warning that you showed earlier, but I couldn't \n>> think of a reasonable way to fix that.\n> \n> Applied in v2.\n\ncommitted\n\n\n\n",
"msg_date": "Wed, 9 Aug 2023 10:07:08 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix last unitialized memory warning"
},
{
"msg_contents": "On 09.08.23 10:07, Peter Eisentraut wrote:\n> On 08.08.23 17:14, Tristan Partin wrote:\n>>> I was able to reproduce the warning now on Fedora. I agree with the \n>>> patch\n>>>\n>>> - PgBenchValue vargs[MAX_FARGS];\n>>> + PgBenchValue vargs[MAX_FARGS] = { 0 };\n>>>\n>>> I suggest to also do\n>>>\n>>> typedef enum\n>>> {\n>>> - PGBT_NO_VALUE,\n>>> + PGBT_NO_VALUE = 0,\n>>>\n>>> to make clear that the initialization value is meant to be invalid.\n>>>\n>>> I also got the plpgsql warning that you showed earlier, but I \n>>> couldn't think of a reasonable way to fix that.\n>>\n>> Applied in v2.\n> \n> committed\n\nThis patch has apparently upset one buildfarm member with a very old \ncompiler: \nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=lapwing&br=HEAD\n\nAny thoughts?\n\n\n\n",
"msg_date": "Wed, 9 Aug 2023 17:02:46 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix last unitialized memory warning"
},
{
"msg_contents": "On Wed Aug 9, 2023 at 10:02 AM CDT, Peter Eisentraut wrote:\n> On 09.08.23 10:07, Peter Eisentraut wrote:\n> > On 08.08.23 17:14, Tristan Partin wrote:\n> >>> I was able to reproduce the warning now on Fedora. I agree with the \n> >>> patch\n> >>>\n> >>> - PgBenchValue vargs[MAX_FARGS];\n> >>> + PgBenchValue vargs[MAX_FARGS] = { 0 };\n> >>>\n> >>> I suggest to also do\n> >>>\n> >>> typedef enum\n> >>> {\n> >>> - PGBT_NO_VALUE,\n> >>> + PGBT_NO_VALUE = 0,\n> >>>\n> >>> to make clear that the initialization value is meant to be invalid.\n> >>>\n> >>> I also got the plpgsql warning that you showed earlier, but I \n> >>> couldn't think of a reasonable way to fix that.\n> >>\n> >> Applied in v2.\n> > \n> > committed\n>\n> This patch has apparently upset one buildfarm member with a very old \n> compiler: \n> https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=lapwing&br=HEAD\n>\n> Any thoughts?\n\nBest I could find is SO question[0] which links out to[1]. Try this \npatch. Otherwise, a memset() would probably do too.\n\n[0]: https://stackoverflow.com/questions/13746033/how-to-repair-warning-missing-braces-around-initializer\n[1]: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53119\n\n-- \nTristan Partin\nNeon (https://neon.tech)",
"msg_date": "Wed, 09 Aug 2023 10:29:56 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix last unitialized memory warning"
},
{
"msg_contents": "On Wed, Aug 09, 2023 at 10:29:56AM -0500, Tristan Partin wrote:\n> On Wed Aug 9, 2023 at 10:02 AM CDT, Peter Eisentraut wrote:\n> >\n> > This patch has apparently upset one buildfarm member with a very old\n> > compiler:\n> > https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=lapwing&br=HEAD\n> >\n> > Any thoughts?\n>\n> Best I could find is SO question[0] which links out to[1]. Try this patch.\n> Otherwise, a memset() would probably do too.\n\nYes, it's a buggy warning that came up in the past a few times as I recall, for\nwhich we previously used the {{...}} approach to silence it.\n\nAs there have been previous complaints about it, I removed the -Werror from\nlapwing and forced a new run to make it green again.\n\n\n",
"msg_date": "Thu, 10 Aug 2023 08:56:43 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix last unitialized memory warning"
},
{
"msg_contents": "On Thu, Aug 10, 2023 at 8:57 AM Julien Rouhaud <[email protected]> wrote:\n\n> On Wed, Aug 09, 2023 at 10:29:56AM -0500, Tristan Partin wrote:\n> > On Wed Aug 9, 2023 at 10:02 AM CDT, Peter Eisentraut wrote:\n> > >\n> > > This patch has apparently upset one buildfarm member with a very old\n> > > compiler:\n> > >\n> https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=lapwing&br=HEAD\n> > >\n> > > Any thoughts?\n> >\n> > Best I could find is SO question[0] which links out to[1]. Try this\n> patch.\n> > Otherwise, a memset() would probably do too.\n>\n> Yes, it's a buggy warning that came up in the past a few times as I\n> recall, for\n> which we previously used the {{...}} approach to silence it.\n\n\nI came across this warning too on one of my VMs, with gcc 4.8.5. +1 to\nsilence it with {{...}}. We did that in d937904 and 6392f2a (and maybe\nmore).\n\nIn case it helps, here is the GCC I'm on.\n\n$ gcc --version\ngcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)\n\nThanks\nRichard\n\nOn Thu, Aug 10, 2023 at 8:57 AM Julien Rouhaud <[email protected]> wrote:On Wed, Aug 09, 2023 at 10:29:56AM -0500, Tristan Partin wrote:\n> On Wed Aug 9, 2023 at 10:02 AM CDT, Peter Eisentraut wrote:\n> >\n> > This patch has apparently upset one buildfarm member with a very old\n> > compiler:\n> > https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=lapwing&br=HEAD\n> >\n> > Any thoughts?\n>\n> Best I could find is SO question[0] which links out to[1]. Try this patch.\n> Otherwise, a memset() would probably do too.\n\nYes, it's a buggy warning that came up in the past a few times as I recall, for\nwhich we previously used the {{...}} approach to silence it.I came across this warning too on one of my VMs, with gcc 4.8.5. +1 tosilence it with {{...}}. We did that in d937904 and 6392f2a (and maybemore).In case it helps, here is the GCC I'm on.$ gcc --versiongcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)ThanksRichard",
"msg_date": "Thu, 10 Aug 2023 16:06:59 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix last unitialized memory warning"
},
{
"msg_contents": "On 09.08.23 17:29, Tristan Partin wrote:\n> On Wed Aug 9, 2023 at 10:02 AM CDT, Peter Eisentraut wrote:\n>> On 09.08.23 10:07, Peter Eisentraut wrote:\n>> > On 08.08.23 17:14, Tristan Partin wrote:\n>> >>> I was able to reproduce the warning now on Fedora. I agree with \n>> the >>> patch\n>> >>>\n>> >>> - PgBenchValue vargs[MAX_FARGS];\n>> >>> + PgBenchValue vargs[MAX_FARGS] = { 0 };\n>> >>>\n>> >>> I suggest to also do\n>> >>>\n>> >>> typedef enum\n>> >>> {\n>> >>> - PGBT_NO_VALUE,\n>> >>> + PGBT_NO_VALUE = 0,\n>> >>>\n>> >>> to make clear that the initialization value is meant to be invalid.\n>> >>>\n>> >>> I also got the plpgsql warning that you showed earlier, but I >>> \n>> couldn't think of a reasonable way to fix that.\n>> >>\n>> >> Applied in v2.\n>> > > committed\n>>\n>> This patch has apparently upset one buildfarm member with a very old \n>> compiler: \n>> https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=lapwing&br=HEAD\n>>\n>> Any thoughts?\n> \n> Best I could find is SO question[0] which links out to[1]. Try this \n> patch.\n\ncommitted this fix\n\n\n\n",
"msg_date": "Thu, 10 Aug 2023 16:59:52 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix last unitialized memory warning"
}
] |
[
{
"msg_contents": "The usual question is “why did DELETE not release disk space?”, and I\nunderstand why that is and something about how to get the space back\n(VACUUM).\n\nI have a database which hosts multiple applications in various schemas and\nI’m trying to make test/sample data files by starting with a restored copy\nof production and then dropping all schemas except for the ones I need for\na particular application.\n\nThe total size of all relations after the drop operations is just a few MB:\n\nodyssey=# select sum (pg_total_relation_size (oid)) from pg_class;\n sum\n----------\n 13877248\n(1 row)\n\nYet the database size is still large (although much smaller than in the\noriginal database):\n\nodyssey=# select datname, pg_database_size (oid) from pg_database;\n datname | pg_database_size\n-----------+------------------\n postgres | 8930083\n _repmgr | 654934531\n template0 | 8643075\n template1 | 8864547\n odyssey | 14375453475\n(5 rows)\n\nThe only change made after starting from a basebackup of production was to\nset all the passwords to NULL in pg_authid, and to delete most of the\nschemas. In particular, I wouldn’t expect VACUUM to do anything.\n\nDoes anybody know what could be holding all that space?\n\nThe usual question is “why did DELETE not release disk space?”, and I understand why that is and something about how to get the space back (VACUUM).I have a database which hosts multiple applications in various schemas and I’m trying to make test/sample data files by starting with a restored copy of production and then dropping all schemas except for the ones I need for a particular application.The total size of all relations after the drop operations is just a few MB:odyssey=# select sum (pg_total_relation_size (oid)) from pg_class; sum ---------- 13877248(1 row)Yet the database size is still large (although much smaller than in the original database):odyssey=# select datname, pg_database_size (oid) from pg_database; datname | pg_database_size -----------+------------------ postgres | 8930083 _repmgr | 654934531 template0 | 8643075 template1 | 8864547 odyssey | 14375453475(5 rows)The only change made after starting from a basebackup of production was to set all the passwords to NULL in pg_authid, and to delete most of the schemas. In particular, I wouldn’t expect VACUUM to do anything.Does anybody know what could be holding all that space?",
"msg_date": "Wed, 7 Jun 2023 11:48:32 -0400",
"msg_from": "Isaac Morland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Disk space not released after schema deletion"
}
] |
[
{
"msg_contents": "Visual Studio 2015 version \"14.0.25431.01 Update 3\" has an apparent compiler\nbug that causes the build to fail with \"readfuncs.switch.c(522): fatal error\nC1026: parser stack overflow, program too complex (compiling source file\nsrc/backend/nodes/readfuncs.c)\". While I wouldn't mind revoking support for\nVisual Studio 2015, changing the code to cope is easy. See attached.\nhttps://stackoverflow.com/a/34266725/16371536 asserts having confirmation of\nthe compiler bug, but its reference is now a dead link. This became a problem\nin v16 due to the 135% increase in node types known to parseNodeString().\n\nhttps:/postgr.es/m/[email protected]\npreviously reported this error message when overriding USE_READLINE in a\nVisual Studio build, for tab-complete.c. If that combination ever becomes\nsupported, we may face a similar decision there.",
"msg_date": "Wed, 7 Jun 2023 11:54:58 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": true,
"msg_subject": "v16 fails to build w/ Visual Studio 2015"
},
{
"msg_contents": "Noah Misch <[email protected]> writes:\n> Visual Studio 2015 version \"14.0.25431.01 Update 3\" has an apparent compiler\n> bug that causes the build to fail with \"readfuncs.switch.c(522): fatal error\n> C1026: parser stack overflow, program too complex (compiling source file\n> src/backend/nodes/readfuncs.c)\". While I wouldn't mind revoking support for\n> Visual Studio 2015, changing the code to cope is easy. See attached.\n\n+1, I think this reads better anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Jun 2023 17:16:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16 fails to build w/ Visual Studio 2015"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-07 11:54:58 -0700, Noah Misch wrote:\n> Visual Studio 2015 version \"14.0.25431.01 Update 3\" has an apparent compiler\n> bug that causes the build to fail with \"readfuncs.switch.c(522): fatal error\n> C1026: parser stack overflow, program too complex (compiling source file\n> src/backend/nodes/readfuncs.c)\". While I wouldn't mind revoking support for\n> Visual Studio 2015, changing the code to cope is easy. See attached.\n\nI don't see a point in trying to keep Visual Studio 2015 working. We have no\nautomated testing for it, as evidenced by this issue. It seems quite possible\nwe're going to hit such issues in other places.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Jun 2023 14:21:05 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16 fails to build w/ Visual Studio 2015"
},
{
"msg_contents": "On 07.06.23 23:21, Andres Freund wrote:\n> On 2023-06-07 11:54:58 -0700, Noah Misch wrote:\n>> Visual Studio 2015 version \"14.0.25431.01 Update 3\" has an apparent compiler\n>> bug that causes the build to fail with \"readfuncs.switch.c(522): fatal error\n>> C1026: parser stack overflow, program too complex (compiling source file\n>> src/backend/nodes/readfuncs.c)\". While I wouldn't mind revoking support for\n>> Visual Studio 2015, changing the code to cope is easy. See attached.\n> \n> I don't see a point in trying to keep Visual Studio 2015 working. We have no\n> automated testing for it, as evidenced by this issue. It seems quite possible\n> we're going to hit such issues in other places.\n\nApparently, nobody has used it between Sat Jul 9 08:52:19 2022 and now?\n\n\n\n",
"msg_date": "Wed, 7 Jun 2023 23:34:09 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16 fails to build w/ Visual Studio 2015"
},
{
"msg_contents": "On 07.06.23 23:16, Tom Lane wrote:\n> Noah Misch <[email protected]> writes:\n>> Visual Studio 2015 version \"14.0.25431.01 Update 3\" has an apparent compiler\n>> bug that causes the build to fail with \"readfuncs.switch.c(522): fatal error\n>> C1026: parser stack overflow, program too complex (compiling source file\n>> src/backend/nodes/readfuncs.c)\". While I wouldn't mind revoking support for\n>> Visual Studio 2015, changing the code to cope is easy. See attached.\n> \n> +1, I think this reads better anyway.\n\nI kind of like the style where there is only one return at the end, \nbecause it makes it easier to inject debugging code that inspects the \nreturn value.\n\n\n",
"msg_date": "Wed, 7 Jun 2023 23:35:34 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16 fails to build w/ Visual Studio 2015"
},
{
"msg_contents": "On Wed, Jun 07, 2023 at 11:34:09PM +0200, Peter Eisentraut wrote:\n> On 07.06.23 23:21, Andres Freund wrote:\n> >On 2023-06-07 11:54:58 -0700, Noah Misch wrote:\n> >>Visual Studio 2015 version \"14.0.25431.01 Update 3\" has an apparent compiler\n> >>bug that causes the build to fail with \"readfuncs.switch.c(522): fatal error\n> >>C1026: parser stack overflow, program too complex (compiling source file\n> >>src/backend/nodes/readfuncs.c)\". While I wouldn't mind revoking support for\n> >>Visual Studio 2015, changing the code to cope is easy. See attached.\n> >\n> >I don't see a point in trying to keep Visual Studio 2015 working. We have no\n> >automated testing for it, as evidenced by this issue. It seems quite possible\n> >we're going to hit such issues in other places.\n> \n> Apparently, nobody has used it between Sat Jul 9 08:52:19 2022 and now?\n\nEssentially. I assume you're referring to commit 964d01a \"Automatically\ngenerate node support functions\". I bet it actually broke a few days later,\nat ff33a8c \"Remove artificial restrictions on which node types have out/read\nfuncs.\"\n\n\n",
"msg_date": "Wed, 7 Jun 2023 15:04:16 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: v16 fails to build w/ Visual Studio 2015"
},
{
"msg_contents": "On Wed, Jun 07, 2023 at 11:35:34PM +0200, Peter Eisentraut wrote:\n> I kind of like the style where there is only one return at the end, because\n> it makes it easier to inject debugging code that inspects the return value.\n\nI kind of disagree here, the previous style is a bit ugly-ish, with\nthe code generated by gen_node_support.pl being dependent on this\nlocal call because it is necessary to know about return_value:\n- if (false)\n- ;\n #include \"readfuncs.switch.c\"\n\nSo +1 for what's proposed.\n--\nMichael",
"msg_date": "Mon, 12 Jun 2023 09:40:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16 fails to build w/ Visual Studio 2015"
},
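To make the shape change under discussion concrete, here is a schematic C sketch (the node names, reader functions and MATCH macro are simplified stand-ins, not the real generated readfuncs.switch.c): the old arrangement funneled every node type through one if/else-if chain assigning a single return_value, which is the construct very old MSVC front ends can reject with C1026 once the chain grows long enough, while the reworked arrangement emits an independent if with an early return per node type, so the chain never gets deep.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-ins for PostgreSQL's node machinery. */
typedef struct Node { int tag; } Node;
static Node foo_node = {1};
static Node bar_node = {2};
static Node *read_foo(void) { return &foo_node; }
static Node *read_bar(void) { return &bar_node; }
#define MATCH(tokname, namelen) \
	(length == (namelen) && memcmp(token, (tokname), (namelen)) == 0)

/* Old shape: one long if/else-if chain feeding a single return_value;
 * with hundreds of arms, an ancient MSVC parser can fail with C1026. */
static Node *
parse_node_old(const char *token, int length)
{
	Node	   *return_value;

	if (false)
		;
	else if (MATCH("FOO", 3))
		return_value = read_foo();
	else if (MATCH("BAR", 3))
		return_value = read_bar();
	/* ... imagine hundreds more else-if arms here ... */
	else
		return_value = NULL;

	return return_value;
}

/* New shape: independent ifs with early returns, so the nesting stays
 * flat no matter how many node types are generated. */
static Node *
parse_node_new(const char *token, int length)
{
	if (MATCH("FOO", 3))
		return read_foo();
	if (MATCH("BAR", 3))
		return read_bar();
	/* ... hundreds more, each one flat ... */
	return NULL;
}

int
main(void)
{
	printf("%d %d\n", parse_node_old("FOO", 3)->tag,
		   parse_node_new("BAR", 3)->tag);
	return 0;
}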
{
"msg_contents": "On Wed, Jun 07, 2023 at 03:04:16PM -0700, Noah Misch wrote:\n> On Wed, Jun 07, 2023 at 11:34:09PM +0200, Peter Eisentraut wrote:\n>> Apparently, nobody has used it between Sat Jul 9 08:52:19 2022 and now?\n\nOne week close enough. I have run checks on VS 2015 back when working\non 6203583, but I don't have this environment at hand anymore.\n\n> Essentially. I assume you're referring to commit 964d01a \"Automatically\n> generate node support functions\". I bet it actually broke a few days later,\n> at ff33a8c \"Remove artificial restrictions on which node types have out/read\n> funcs.\"\n\nNote that the last version-dependent checks of _MSC_VER have been\nremoved in the commit I am mentioning above, so the gain in removing\nVS 2015 is marginal. Even less once src/tools/msvc/ gets removed.\nBut perhaps it makes a few things easier with meson in mind?\n\nI don't think that's a reason enough to officially remove support for\nVS 2015 on 17~ and let it be for v16, though. It seems like my old\nWindows env was one bug in the Matrix, and I've moved one to newer\nversions already.\n--\nMichael",
"msg_date": "Mon, 12 Jun 2023 09:50:58 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16 fails to build w/ Visual Studio 2015"
}
] |
[
{
"msg_contents": "Greetings,\n\nAt pgcon last week I was speaking to some people about the problem we have\nwith connection pools and named prepared statements.\n\nFor context pgjdbc (and others) use un-named statements and then switch to\nnamed statements after using the statement N (default 5) times. In session\nmode this is not a problem. When the connection is closed by the\napplication the pools generally issue \"DISCARD ALL\" and close all prepared\nstatements. The next time the connection is opened the statement is\nprepared and all works as it should.\n\nHowever one of the more interesting use cases for pgbouncer is to use\n\"TRANSACTION MODE\" to manage idle sessions. In transaction mode the\nconnection is returned to the pool after each transaction. There are usage\npatterns in large applications where clients have client pools and\nsubsequently have large numbers of connections open. Sometimes in the\nthousands, unfortunately many of these are idle connections. Using\ntransaction mode reduces the number of real connections to the database in\nmany cases by orders of magnitude.\n\nUnfortunately this is incompatible with named prepared statements. From the\nclient's point of view they have one session and named prepared statements\nare session objects. From one transaction to the next the physical\nconnection can change along with the attached prepared statements.\n\nThe idea that was discussed is when we prepare the statement we cache it in\na statement cache and return a queryid much like the queryid used in\npg_stat_statements. Instead of executing the statement name we would\nexecute the queryid.\n\nIf the queryid did not exist, attempting to execute it would cause an error\nand cause the running transaction to fail. Retrieving the statement from\nthe query cache would have to happen before the attempt to execute it and\nreturn an error to the client subsequently the client could re-prepare the\nstatement and execute. This would have to happen in such a way as to not\ncause the transaction to fail.\n\nThe one other idea that was proposed was to cache the statements in the\nclient. However this does nothing to address the issue of managing idle\nconnections.\n\nRegards,\nDave Cramer\n\nGreetings,At pgcon last week I was speaking to some people about the problem we have with connection pools and named prepared statements.For context pgjdbc (and others) use un-named statements and then switch to named statements after using the statement N (default 5) times. In session mode this is not a problem. When the connection is closed by the application the pools generally issue \"DISCARD ALL\" and close all prepared statements. The next time the connection is opened the statement is prepared and all works as it should.However one of the more interesting use cases for pgbouncer is to use \"TRANSACTION MODE\" to manage idle sessions. In transaction mode the connection is returned to the pool after each transaction. There are usage patterns in large applications where clients have client pools and subsequently have large numbers of connections open. Sometimes in the thousands, unfortunately many of these are idle connections. Using transaction mode reduces the number of real connections to the database in many cases by orders of magnitude.Unfortunately this is incompatible with named prepared statements. From the client's point of view they have one session and named prepared statements are session objects. 
From one transaction to the next the physical connection can change along with the attached prepared statements.The idea that was discussed is when we prepare the statement we cache it in a statement cache and return a queryid much like the queryid used in pg_stat_statements. Instead of executing the statement name we would execute the queryid. If the queryid did not exist, attempting to execute it would cause an error and cause the running transaction to fail. Retrieving the statement from the query cache would have to happen before the attempt to execute it and return an error to the client subsequently the client could re-prepare the statement and execute. This would have to happen in such a way as to not cause the transaction to fail.The one other idea that was proposed was to cache the statements in the client. However this does nothing to address the issue of managing idle connections.Regards,Dave Cramer",
"msg_date": "Wed, 7 Jun 2023 15:48:18 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Named Prepared statement problems and possible solutions"
},
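To make the client-side fallback described above concrete, here is a minimal libpq sketch of "execute by name, and re-prepare if this backend does not know the statement". The connection string, statement name and query are placeholders, and the simple retry shown here is only safe outside an explicit transaction; inside one, the first error still aborts the transaction, which is exactly the difficulty noted above.

#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

/*
 * Execute a named prepared statement; if the backend we are currently
 * talking to (for example through a transaction-mode pooler) does not
 * know the name, re-prepare it and try once more.  SQLSTATE 26000 is
 * invalid_sql_statement_name.  When this happens inside an explicit
 * transaction the first error still aborts that transaction.
 */
static PGresult *
exec_with_reprepare(PGconn *conn, const char *name, const char *sql,
					int nparams, const char *const *values)
{
	PGresult   *res = PQexecPrepared(conn, name, nparams, values,
									 NULL, NULL, 0);

	if (PQresultStatus(res) == PGRES_FATAL_ERROR)
	{
		const char *sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);

		if (sqlstate && strcmp(sqlstate, "26000") == 0)
		{
			PQclear(res);
			res = PQprepare(conn, name, sql, nparams, NULL);
			if (PQresultStatus(res) != PGRES_COMMAND_OK)
				return res;		/* give up; caller reports the error */
			PQclear(res);
			res = PQexecPrepared(conn, name, nparams, values,
								 NULL, NULL, 0);
		}
	}
	return res;
}

int
main(void)
{
	/* placeholder connection string, e.g. pointing at a pooler port */
	PGconn	   *conn = PQconnectdb("host=localhost port=6432 dbname=postgres");
	PGresult   *res;

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "%s", PQerrorMessage(conn));
		return 1;
	}
	res = exec_with_reprepare(conn, "s1", "SELECT now()", 0, NULL);
	if (PQresultStatus(res) == PGRES_TUPLES_OK)
		printf("%s\n", PQgetvalue(res, 0, 0));
	else
		fprintf(stderr, "%s", PQresultErrorMessage(res));
	PQclear(res);
	PQfinish(conn);
	return 0;
}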
{
"msg_contents": "On 07.06.2023 10:48 PM, Dave Cramer wrote:\n> Greetings,\n>\n> At pgcon last week I was speaking to some people about the problem we \n> have with connection pools and named prepared statements.\n>\n> For context pgjdbc (and others) use un-named statements and then \n> switch to named statements after using the statement N (default 5) \n> times. In session mode this is not a problem. When the connection is \n> closed by the application the pools generally issue \"DISCARD ALL\" and \n> close all prepared statements. The next time the connection is opened \n> the statement is prepared and all works as it should.\n>\n> However one of the more interesting use cases for pgbouncer is to use \n> \"TRANSACTION MODE\" to manage idle sessions. In transaction mode the \n> connection is returned to the pool after each transaction. There are \n> usage patterns in large applications where clients have client pools \n> and subsequently have large numbers of connections open. Sometimes in \n> the thousands, unfortunately many of these are idle connections. Using \n> transaction mode reduces the number of real connections to the \n> database in many cases by orders of magnitude.\n>\n> Unfortunately this is incompatible with named prepared statements. \n> From the client's point of view they have one session and named \n> prepared statements are session objects. From one transaction to the \n> next the physical connection can change along with the attached \n> prepared statements.\n>\n> The idea that was discussed is when we prepare the statement we cache \n> it in a statement cache and return a queryid much like the queryid \n> used in pg_stat_statements. Instead of executing the statement name we \n> would execute the queryid.\n>\n> If the queryid did not exist, attempting to execute it would cause an \n> error and cause the running transaction to fail. Retrieving the \n> statement from the query cache would have to happen before the attempt \n> to execute it and return an error to the client subsequently the \n> client could re-prepare the statement and execute. This would have to \n> happen in such a way as to not cause the transaction to fail.\n>\n> The one other idea that was proposed was to cache the statements in \n> the client. However this does nothing to address the issue of managing \n> idle connections.\n>\n> Regards,\n> Dave Cramer\n\n\nThere is a PR with support of prepared statement support to pgbouncer:\nhttps://github.com/pgbouncer/pgbouncer/pull/845\nany feedback, reviews and suggestions are welcome.\n\n\n\n\n\n\n\nOn 07.06.2023 10:48 PM, Dave Cramer\n wrote:\n\n\n\nGreetings,\n \n\nAt pgcon last week I was speaking to some people about the\n problem we have with connection pools and named prepared\n statements.\n\n\nFor context pgjdbc (and others) use un-named statements and\n then switch to named statements after using the statement N\n (default 5) times. In session mode this is not a problem. When\n the connection is closed by the application the pools\n generally issue \"DISCARD ALL\" and close all prepared\n statements. The next time the connection is opened the\n statement is prepared and all works as it should.\n\n\nHowever one of the more interesting use cases for pgbouncer\n is to use \"TRANSACTION MODE\" to manage idle sessions. In\n transaction mode the connection is returned to the pool after\n each transaction. There are usage patterns in large\n applications where clients have client pools and subsequently\n have large numbers of connections open. 
Sometimes in the\n thousands, unfortunately many of these are idle connections.\n Using transaction mode reduces the number of real connections\n to the database in many cases by orders of magnitude.\n\n\nUnfortunately this is incompatible with named prepared\n statements. From the client's point of view they have one\n session and named prepared statements are session objects.\n From one transaction to the next the physical connection can\n change along with the attached prepared statements.\n\n\nThe idea that was discussed is when we prepare the\n statement we cache it in a statement cache and return a\n queryid much like the queryid used in pg_stat_statements. \n Instead of executing the statement name we would execute the\n queryid. \n\n\nIf the queryid did not exist, attempting to execute it\n would cause an error and cause the running transaction to\n fail. Retrieving the statement from the query cache would have\n to happen before the attempt to execute it and return an error\n to the client subsequently the client could re-prepare the\n statement and execute. This would have to happen in such a way\n as to not cause the transaction to fail.\n\n\nThe one other idea that was proposed was to cache the\n statements in the client. However this does nothing to address\n the issue of managing idle connections.\n\n\nRegards,\n\n\nDave Cramer\n\n\n\n\n\n\n There is a PR with support of prepared statement support to\n pgbouncer:\nhttps://github.com/pgbouncer/pgbouncer/pull/845\n any feedback, reviews and suggestions are welcome.",
"msg_date": "Thu, 8 Jun 2023 09:15:04 +0300",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Named Prepared statement problems and possible solutions"
},
{
"msg_contents": "Hi Konstantin,\n\nYes, I ran into Euler at pgcon and he mentioned this. I intend to test it.\nI'd still like to see my proposal in the server.\n\nDave Cramer\n\n\nOn Thu, 8 Jun 2023 at 02:15, Konstantin Knizhnik <[email protected]> wrote:\n\n>\n>\n> On 07.06.2023 10:48 PM, Dave Cramer wrote:\n>\n> Greetings,\n>\n> At pgcon last week I was speaking to some people about the problem we have\n> with connection pools and named prepared statements.\n>\n> For context pgjdbc (and others) use un-named statements and then switch to\n> named statements after using the statement N (default 5) times. In session\n> mode this is not a problem. When the connection is closed by the\n> application the pools generally issue \"DISCARD ALL\" and close all prepared\n> statements. The next time the connection is opened the statement is\n> prepared and all works as it should.\n>\n> However one of the more interesting use cases for pgbouncer is to use\n> \"TRANSACTION MODE\" to manage idle sessions. In transaction mode the\n> connection is returned to the pool after each transaction. There are usage\n> patterns in large applications where clients have client pools and\n> subsequently have large numbers of connections open. Sometimes in the\n> thousands, unfortunately many of these are idle connections. Using\n> transaction mode reduces the number of real connections to the database in\n> many cases by orders of magnitude.\n>\n> Unfortunately this is incompatible with named prepared statements. From\n> the client's point of view they have one session and named prepared\n> statements are session objects. From one transaction to the next the\n> physical connection can change along with the attached prepared statements.\n>\n> The idea that was discussed is when we prepare the statement we cache it\n> in a statement cache and return a queryid much like the queryid used in\n> pg_stat_statements. Instead of executing the statement name we would\n> execute the queryid.\n>\n> If the queryid did not exist, attempting to execute it would cause an\n> error and cause the running transaction to fail. Retrieving the statement\n> from the query cache would have to happen before the attempt to execute it\n> and return an error to the client subsequently the client could re-prepare\n> the statement and execute. This would have to happen in such a way as to\n> not cause the transaction to fail.\n>\n> The one other idea that was proposed was to cache the statements in the\n> client. However this does nothing to address the issue of managing idle\n> connections.\n>\n> Regards,\n> Dave Cramer\n>\n>\n>\n> There is a PR with support of prepared statement support to pgbouncer:\n> https://github.com/pgbouncer/pgbouncer/pull/845\n> any feedback, reviews and suggestions are welcome.\n>\n\nHi Konstantin,Yes, I ran into Euler at pgcon and he mentioned this. I intend to test it. I'd still like to see my proposal in the server. Dave CramerOn Thu, 8 Jun 2023 at 02:15, Konstantin Knizhnik <[email protected]> wrote:\n\n\n\nOn 07.06.2023 10:48 PM, Dave Cramer\n wrote:\n\n\nGreetings,\n \n\nAt pgcon last week I was speaking to some people about the\n problem we have with connection pools and named prepared\n statements.\n\n\nFor context pgjdbc (and others) use un-named statements and\n then switch to named statements after using the statement N\n (default 5) times. In session mode this is not a problem. When\n the connection is closed by the application the pools\n generally issue \"DISCARD ALL\" and close all prepared\n statements. 
The next time the connection is opened the\n statement is prepared and all works as it should.\n\n\nHowever one of the more interesting use cases for pgbouncer\n is to use \"TRANSACTION MODE\" to manage idle sessions. In\n transaction mode the connection is returned to the pool after\n each transaction. There are usage patterns in large\n applications where clients have client pools and subsequently\n have large numbers of connections open. Sometimes in the\n thousands, unfortunately many of these are idle connections.\n Using transaction mode reduces the number of real connections\n to the database in many cases by orders of magnitude.\n\n\nUnfortunately this is incompatible with named prepared\n statements. From the client's point of view they have one\n session and named prepared statements are session objects.\n From one transaction to the next the physical connection can\n change along with the attached prepared statements.\n\n\nThe idea that was discussed is when we prepare the\n statement we cache it in a statement cache and return a\n queryid much like the queryid used in pg_stat_statements. \n Instead of executing the statement name we would execute the\n queryid. \n\n\nIf the queryid did not exist, attempting to execute it\n would cause an error and cause the running transaction to\n fail. Retrieving the statement from the query cache would have\n to happen before the attempt to execute it and return an error\n to the client subsequently the client could re-prepare the\n statement and execute. This would have to happen in such a way\n as to not cause the transaction to fail.\n\n\nThe one other idea that was proposed was to cache the\n statements in the client. However this does nothing to address\n the issue of managing idle connections.\n\n\nRegards,\n\n\nDave Cramer\n\n\n\n\n\n\n There is a PR with support of prepared statement support to\n pgbouncer:\nhttps://github.com/pgbouncer/pgbouncer/pull/845\n any feedback, reviews and suggestions are welcome.",
"msg_date": "Thu, 8 Jun 2023 05:53:11 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Named Prepared statement problems and possible solutions"
},
{
"msg_contents": "On 6/8/23 02:15, Konstantin Knizhnik wrote:\n\n> There is a PR with support of prepared statement support to pgbouncer:\n> https://github.com/pgbouncer/pgbouncer/pull/845\n> any feedback, reviews and suggestions are welcome.\n\nI was about to say that the support would have to come from the pooler \nas it is possible to have multiple applications in different languages \nconnecting to the same pool(s).\n\nI can certainly give this a try, possibly over the weekend. I have a \nTPC-C that can use prepared statements plus pause/resume. That might be \na good stress for it.\n\n\nBest Regards, Jan\n\n\n",
"msg_date": "Thu, 8 Jun 2023 08:43:35 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Named Prepared statement problems and possible solutions"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 8:43 AM Jan Wieck <[email protected]> wrote:\n\n> On 6/8/23 02:15, Konstantin Knizhnik wrote:\n>\n> > There is a PR with support of prepared statement support to pgbouncer:\n> > https://github.com/pgbouncer/pgbouncer/pull/845\n> > any feedback, reviews and suggestions are welcome.\n>\n> I was about to say that the support would have to come from the pooler\n> as it is possible to have multiple applications in different languages\n> connecting to the same pool(s).\n\n\nWhy from the pooler? If it were done at the server every client could use\nit?\n\n>\n> Dave\n\n>\n> --\nDave Cramer\n\nOn Thu, Jun 8, 2023 at 8:43 AM Jan Wieck <[email protected]> wrote:On 6/8/23 02:15, Konstantin Knizhnik wrote:\n\n> There is a PR with support of prepared statement support to pgbouncer:\n> https://github.com/pgbouncer/pgbouncer/pull/845\n> any feedback, reviews and suggestions are welcome.\n\nI was about to say that the support would have to come from the pooler \nas it is possible to have multiple applications in different languages \nconnecting to the same pool(s).Why from the pooler? If it were done at the server every client could use it?Dave\n-- Dave Cramer",
"msg_date": "Thu, 8 Jun 2023 09:21:07 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Named Prepared statement problems and possible solutions"
},
{
"msg_contents": "On 08.06.2023 3:43 PM, Jan Wieck wrote:\n> On 6/8/23 02:15, Konstantin Knizhnik wrote:\n>\n>> There is a PR with support of prepared statement support to pgbouncer:\n>> https://github.com/pgbouncer/pgbouncer/pull/845\n>> any feedback, reviews and suggestions are welcome.\n>\n> I was about to say that the support would have to come from the pooler \n> as it is possible to have multiple applications in different languages \n> connecting to the same pool(s)\n\nIdeally, support should be provided by both sides: only pooler knows \nmapping between clients and postgres backends and only server knows\nwhich queries require session semantic and which not (in principle it is \npossible to make connection pooler to determine it, but it is very \nnon-trivial).\n> .\n>\n> I can certainly give this a try, possibly over the weekend. I have a \n> TPC-C that can use prepared statements plus pause/resume. That might \n> be a good stress for it.\n>\n\nBy the way, I have done some small benchmarking of different connection \npoolers for Postgres.\nBenchmark was very simple: I just create small pgbench database with \nscale 10 and then\nrun read-only queries with 100 clients:\n\npgbench -c 100 -P 10 -T 100 -S -M prepared postgres\n\n\nNumber of connections to the database was limited in an all pooler\nconfigurations to 10. I have tested only transaction mode. If pooler \nsupports prepared statements, I have also tested them.\nJust for reference I also include results with direct connection to \nPostgres.\nAll benchamrking was done at my notebook, so it is not quite \nrepresentative scenario.\n\n\nDirect:\nConnections Prepared TPS\n10 yes 135507\n10 no 73218\n100 yes 79042\n100 no 59245\n\nPooler: (100 client connections, 10 server connections, transaction mode)\nPooler Prepared TPS\npgbouncer no 65029\npgbouncer-ps no 65570\npgbouncer-ps yes 65825\nodyssey yes 18351\nodyssey no 21299\npgagrol no 29673\npgcat no 23247\n\n\n\n\n\n\n\n\nOn 08.06.2023 3:43 PM, Jan Wieck wrote:\n\nOn\n 6/8/23 02:15, Konstantin Knizhnik wrote:\n \n\nThere is a PR with support of prepared\n statement support to pgbouncer:\n \nhttps://github.com/pgbouncer/pgbouncer/pull/845\n\n any feedback, reviews and suggestions are welcome.\n \n\n\n I was about to say that the support would have to come from the\n pooler as it is possible to have multiple applications in\n different languages connecting to the same pool(s)\n\n Ideally, support should be provided by both sides: only pooler knows\n mapping between clients and postgres backends and only server knows\n \n which queries require session semantic and which not (in principle\n it is possible to make connection pooler to determine it, but it is\n very non-trivial).\n.\n \n\n I can certainly give this a try, possibly over the weekend. I have\n a TPC-C that can use prepared statements plus pause/resume. That\n might be a good stress for it.\n \n\n\n\n By the way, I have done some small benchmarking of different\n connection poolers for Postgres.\n Benchmark was very simple: I just create small pgbench database with\n scale 10 and then\n run read-only queries with 100 clients:\n\npgbench -c 100 -P 10 -T 100 -S -M prepared postgres\n\n\n Number of connections to the database was limited in an all pooler\n configurations to 10. I have tested only transaction mode. 
If pooler\n supports prepared statements, I have also tested them.\n Just for reference I also include results with direct connection to\n Postgres.\n All benchamrking was done at my notebook, so it is not quite\n representative scenario.\n\n \nDirect:\nConnections Prepared TPS\n10 yes 135507\n10 no 73218\n100 yes 79042\n100 no 59245\n\nPooler: (100 client connections, 10 server connections, transaction mode)\nPooler Prepared TPS\npgbouncer no 65029\npgbouncer-ps no 65570\npgbouncer-ps yes 65825\nodyssey yes 18351\nodyssey no 21299\npgagrol no 29673\npgcat no 23247",
"msg_date": "Thu, 8 Jun 2023 16:27:47 +0300",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Named Prepared statement problems and possible solutions"
},
{
"msg_contents": "On 6/8/23 09:21, Dave Cramer wrote:\n> \n> \n> On Thu, Jun 8, 2023 at 8:43 AM Jan Wieck <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> On 6/8/23 02:15, Konstantin Knizhnik wrote:\n> \n> > There is a PR with support of prepared statement support to\n> pgbouncer:\n> > https://github.com/pgbouncer/pgbouncer/pull/845\n> <https://github.com/pgbouncer/pgbouncer/pull/845>\n> > any feedback, reviews and suggestions are welcome.\n> \n> I was about to say that the support would have to come from the pooler\n> as it is possible to have multiple applications in different languages\n> connecting to the same pool(s).\n> \n> \n> Why from the pooler? If it were done at the server every client could \n> use it?\n\nThe server doesn't know about all the clients of the pooler, does it? It \nhas no way of telling if/when a client disconnects from the pooler.\n\n\nJan\n\n\n",
"msg_date": "Thu, 8 Jun 2023 09:53:02 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Named Prepared statement problems and possible solutions"
},
{
"msg_contents": "On 6/8/23 09:53, Jan Wieck wrote:\n> On 6/8/23 09:21, Dave Cramer wrote:\n> The server doesn't know about all the clients of the pooler, does it? It\n> has no way of telling if/when a client disconnects from the pooler.\n\nAnother problem that complicates doing it in the server is that the \ninformation require to (re-)prepare a statement in a backend that \ncurrently doesn't have it needs to be kept in shared memory. This \nincludes the query string itself. Doing that without shared memory in a \npooler that is multi-threaded or based on async-IO is much simpler and \nallows for easy ballooning.\n\n\nJan\n\n\n\n",
"msg_date": "Thu, 8 Jun 2023 10:31:05 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Named Prepared statement problems and possible solutions"
},
{
"msg_contents": "On Thu, 8 Jun 2023 at 09:53, Jan Wieck <[email protected]> wrote:\n\n> On 6/8/23 09:21, Dave Cramer wrote:\n> >\n> >\n> > On Thu, Jun 8, 2023 at 8:43 AM Jan Wieck <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > On 6/8/23 02:15, Konstantin Knizhnik wrote:\n> >\n> > > There is a PR with support of prepared statement support to\n> > pgbouncer:\n> > > https://github.com/pgbouncer/pgbouncer/pull/845\n> > <https://github.com/pgbouncer/pgbouncer/pull/845>\n> > > any feedback, reviews and suggestions are welcome.\n> >\n> > I was about to say that the support would have to come from the\n> pooler\n> > as it is possible to have multiple applications in different\n> languages\n> > connecting to the same pool(s).\n> >\n> >\n> > Why from the pooler? If it were done at the server every client could\n> > use it?\n>\n> The server doesn't know about all the clients of the pooler, does it? It\n> has no way of telling if/when a client disconnects from the pooler.\n>\n\nWhy does it have to know if the client disconnects ? It just keeps a cache\nof prepared statements.\nIn large apps it is very likely there will be another client wanting to use\nthe statement\n\nDave\n\n>\n>\n\nOn Thu, 8 Jun 2023 at 09:53, Jan Wieck <[email protected]> wrote:On 6/8/23 09:21, Dave Cramer wrote:\n> \n> \n> On Thu, Jun 8, 2023 at 8:43 AM Jan Wieck <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> On 6/8/23 02:15, Konstantin Knizhnik wrote:\n> \n> > There is a PR with support of prepared statement support to\n> pgbouncer:\n> > https://github.com/pgbouncer/pgbouncer/pull/845\n> <https://github.com/pgbouncer/pgbouncer/pull/845>\n> > any feedback, reviews and suggestions are welcome.\n> \n> I was about to say that the support would have to come from the pooler\n> as it is possible to have multiple applications in different languages\n> connecting to the same pool(s).\n> \n> \n> Why from the pooler? If it were done at the server every client could \n> use it?\n\nThe server doesn't know about all the clients of the pooler, does it? It \nhas no way of telling if/when a client disconnects from the pooler.Why does it have to know if the client disconnects ? It just keeps a cache of prepared statements. In large apps it is very likely there will be another client wanting to use the statementDave",
"msg_date": "Thu, 8 Jun 2023 10:55:02 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Named Prepared statement problems and possible solutions"
},
{
"msg_contents": "On Thu, 8 Jun 2023 at 10:31, Jan Wieck <[email protected]> wrote:\n\n> On 6/8/23 09:53, Jan Wieck wrote:\n> > On 6/8/23 09:21, Dave Cramer wrote:\n> > The server doesn't know about all the clients of the pooler, does it? It\n> > has no way of telling if/when a client disconnects from the pooler.\n>\n> Another problem that complicates doing it in the server is that the\n> information require to (re-)prepare a statement in a backend that\n> currently doesn't have it needs to be kept in shared memory. This\n> includes the query string itself. Doing that without shared memory in a\n> pooler that is multi-threaded or based on async-IO is much simpler and\n> allows for easy ballooning.\n>\n>\nI don't expect the server to re-prepare the statement. If the server\nresponds with \"statement doesn't exist\" the client would send a prepare.\n\nDave\n\nOn Thu, 8 Jun 2023 at 10:31, Jan Wieck <[email protected]> wrote:On 6/8/23 09:53, Jan Wieck wrote:\n> On 6/8/23 09:21, Dave Cramer wrote:\n> The server doesn't know about all the clients of the pooler, does it? It\n> has no way of telling if/when a client disconnects from the pooler.\n\nAnother problem that complicates doing it in the server is that the \ninformation require to (re-)prepare a statement in a backend that \ncurrently doesn't have it needs to be kept in shared memory. This \nincludes the query string itself. Doing that without shared memory in a \npooler that is multi-threaded or based on async-IO is much simpler and \nallows for easy ballooning.\nI don't expect the server to re-prepare the statement. If the server responds with \"statement doesn't exist\" the client would send a prepare.Dave",
"msg_date": "Thu, 8 Jun 2023 10:56:16 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Named Prepared statement problems and possible solutions"
},
{
"msg_contents": "On 6/8/23 10:56, Dave Cramer wrote:\n> \n> \n> \n> \n> On Thu, 8 Jun 2023 at 10:31, Jan Wieck <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> On 6/8/23 09:53, Jan Wieck wrote:\n> > On 6/8/23 09:21, Dave Cramer wrote:\n> > The server doesn't know about all the clients of the pooler, does\n> it? It\n> > has no way of telling if/when a client disconnects from the pooler.\n> \n> Another problem that complicates doing it in the server is that the\n> information require to (re-)prepare a statement in a backend that\n> currently doesn't have it needs to be kept in shared memory. This\n> includes the query string itself. Doing that without shared memory in a\n> pooler that is multi-threaded or based on async-IO is much simpler and\n> allows for easy ballooning.\n> \n> \n> I don't expect the server to re-prepare the statement. If the server \n> responds with \"statement doesn't exist\" the client would send a prepare.\n\nAre you proposing a new libpq protocol version?\n\n\nJan\n\n\n",
"msg_date": "Thu, 8 Jun 2023 11:15:24 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Named Prepared statement problems and possible solutions"
},
{
"msg_contents": "On Thu, 8 Jun 2023 at 11:15, Jan Wieck <[email protected]> wrote:\n\n> On 6/8/23 10:56, Dave Cramer wrote:\n> >\n> >\n> >\n> >\n> > On Thu, 8 Jun 2023 at 10:31, Jan Wieck <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > On 6/8/23 09:53, Jan Wieck wrote:\n> > > On 6/8/23 09:21, Dave Cramer wrote:\n> > > The server doesn't know about all the clients of the pooler, does\n> > it? It\n> > > has no way of telling if/when a client disconnects from the\n> pooler.\n> >\n> > Another problem that complicates doing it in the server is that the\n> > information require to (re-)prepare a statement in a backend that\n> > currently doesn't have it needs to be kept in shared memory. This\n> > includes the query string itself. Doing that without shared memory\n> in a\n> > pooler that is multi-threaded or based on async-IO is much simpler\n> and\n> > allows for easy ballooning.\n> >\n> >\n> > I don't expect the server to re-prepare the statement. If the server\n> > responds with \"statement doesn't exist\" the client would send a prepare.\n>\n> Are you proposing a new libpq protocol version?\n>\n\nI believe we would need to add this to the protocol, yes.\n\nDave\n\n>\n>\n> Jan\n>\n\nOn Thu, 8 Jun 2023 at 11:15, Jan Wieck <[email protected]> wrote:On 6/8/23 10:56, Dave Cramer wrote:\n> \n> \n> \n> \n> On Thu, 8 Jun 2023 at 10:31, Jan Wieck <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> On 6/8/23 09:53, Jan Wieck wrote:\n> > On 6/8/23 09:21, Dave Cramer wrote:\n> > The server doesn't know about all the clients of the pooler, does\n> it? It\n> > has no way of telling if/when a client disconnects from the pooler.\n> \n> Another problem that complicates doing it in the server is that the\n> information require to (re-)prepare a statement in a backend that\n> currently doesn't have it needs to be kept in shared memory. This\n> includes the query string itself. Doing that without shared memory in a\n> pooler that is multi-threaded or based on async-IO is much simpler and\n> allows for easy ballooning.\n> \n> \n> I don't expect the server to re-prepare the statement. If the server \n> responds with \"statement doesn't exist\" the client would send a prepare.\n\nAre you proposing a new libpq protocol version?I believe we would need to add this to the protocol, yes.Dave \n\n\nJan",
"msg_date": "Thu, 8 Jun 2023 11:18:34 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Named Prepared statement problems and possible solutions"
},
{
"msg_contents": "On 08.06.2023 6:18 PM, Dave Cramer wrote:\n>\n>\n> On Thu, 8 Jun 2023 at 11:15, Jan Wieck <[email protected]> wrote:\n>\n> On 6/8/23 10:56, Dave Cramer wrote:\n> >\n> >\n> >\n> >\n> > On Thu, 8 Jun 2023 at 10:31, Jan Wieck <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > On 6/8/23 09:53, Jan Wieck wrote:\n> > > On 6/8/23 09:21, Dave Cramer wrote:\n> > > The server doesn't know about all the clients of the\n> pooler, does\n> > it? It\n> > > has no way of telling if/when a client disconnects from\n> the pooler.\n> >\n> > Another problem that complicates doing it in the server is\n> that the\n> > information require to (re-)prepare a statement in a backend\n> that\n> > currently doesn't have it needs to be kept in shared memory.\n> This\n> > includes the query string itself. Doing that without shared\n> memory in a\n> > pooler that is multi-threaded or based on async-IO is much\n> simpler and\n> > allows for easy ballooning.\n> >\n> >\n> > I don't expect the server to re-prepare the statement. If the\n> server\n> > responds with \"statement doesn't exist\" the client would send a\n> prepare.\n>\n> Are you proposing a new libpq protocol version?\n>\n>\n> I believe we would need to add this to the protocol, yes.\n\n\nSo it will be responsibility of client to remember text of prepared \nquery to be able to resend it when statement doesn't exists at server?\nIMHO very strange decision. Why not to handle it in connection pooler \n(doesn't matter - external or embedded)?\n\n\n\n\n\n\n\n\nOn 08.06.2023 6:18 PM, Dave Cramer\n wrote:\n\n\n\n\n\n\n\n\nOn Thu, 8 Jun 2023 at 11:15,\n Jan Wieck <[email protected]>\n wrote:\n\nOn 6/8/23 10:56, Dave\n Cramer wrote:\n > \n > \n > \n > \n > On Thu, 8 Jun 2023 at 10:31, Jan Wieck <[email protected]\n\n > <mailto:[email protected]>>\n wrote:\n > \n > On 6/8/23 09:53, Jan Wieck wrote:\n > > On 6/8/23 09:21, Dave Cramer wrote:\n > > The server doesn't know about all the clients\n of the pooler, does\n > it? It\n > > has no way of telling if/when a client\n disconnects from the pooler.\n > \n > Another problem that complicates doing it in the\n server is that the\n > information require to (re-)prepare a statement in\n a backend that\n > currently doesn't have it needs to be kept in\n shared memory. This\n > includes the query string itself. Doing that\n without shared memory in a\n > pooler that is multi-threaded or based on async-IO\n is much simpler and\n > allows for easy ballooning.\n > \n > \n > I don't expect the server to re-prepare the statement.\n If the server \n > responds with \"statement doesn't exist\" the client\n would send a prepare.\n\n Are you proposing a new libpq protocol version?\n\n\n\nI believe we would need to add this to the protocol, yes.\n\n\n\n\n\n So it will be responsibility of client to remember text of prepared\n query to be able to resend it when statement doesn't exists at\n server?\n IMHO very strange decision. Why not to handle it in connection\n pooler (doesn't matter - external or embedded)?",
"msg_date": "Thu, 8 Jun 2023 18:22:36 +0300",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Named Prepared statement problems and possible solutions"
},
{
"msg_contents": "On Thu, 8 Jun 2023 at 11:22, Konstantin Knizhnik <[email protected]> wrote:\n\n>\n>\n> On 08.06.2023 6:18 PM, Dave Cramer wrote:\n>\n>\n>\n> On Thu, 8 Jun 2023 at 11:15, Jan Wieck <[email protected]> wrote:\n>\n>> On 6/8/23 10:56, Dave Cramer wrote:\n>> >\n>> >\n>> >\n>> >\n>> > On Thu, 8 Jun 2023 at 10:31, Jan Wieck <[email protected]\n>> > <mailto:[email protected]>> wrote:\n>> >\n>> > On 6/8/23 09:53, Jan Wieck wrote:\n>> > > On 6/8/23 09:21, Dave Cramer wrote:\n>> > > The server doesn't know about all the clients of the pooler, does\n>> > it? It\n>> > > has no way of telling if/when a client disconnects from the\n>> pooler.\n>> >\n>> > Another problem that complicates doing it in the server is that the\n>> > information require to (re-)prepare a statement in a backend that\n>> > currently doesn't have it needs to be kept in shared memory. This\n>> > includes the query string itself. Doing that without shared memory\n>> in a\n>> > pooler that is multi-threaded or based on async-IO is much simpler\n>> and\n>> > allows for easy ballooning.\n>> >\n>> >\n>> > I don't expect the server to re-prepare the statement. If the server\n>> > responds with \"statement doesn't exist\" the client would send a prepare.\n>>\n>> Are you proposing a new libpq protocol version?\n>>\n>\n> I believe we would need to add this to the protocol, yes.\n>\n>\n>\n> So it will be responsibility of client to remember text of prepared query\n> to be able to resend it when statement doesn't exists at server?\n> IMHO very strange decision. Why not to handle it in connection pooler\n> (doesn't matter - external or embedded)?\n>\n\nI may be myopic but in the JDBC world and I assume others we have a\n`PreparedStatement` object which has the text of the query.\nThe text is readily available to us.\n\nAlso again from the JDBC point of view we have use un-named statements\nnormally and then name them after 5 uses so we already have embedded logic\non how to deal with PreparedStatements\n\nDave\n\nOn Thu, 8 Jun 2023 at 11:22, Konstantin Knizhnik <[email protected]> wrote:\n\n\n\nOn 08.06.2023 6:18 PM, Dave Cramer\n wrote:\n\n\n\n\n\n\n\nOn Thu, 8 Jun 2023 at 11:15,\n Jan Wieck <[email protected]>\n wrote:\n\nOn 6/8/23 10:56, Dave\n Cramer wrote:\n > \n > \n > \n > \n > On Thu, 8 Jun 2023 at 10:31, Jan Wieck <[email protected]\n\n > <mailto:[email protected]>>\n wrote:\n > \n > On 6/8/23 09:53, Jan Wieck wrote:\n > > On 6/8/23 09:21, Dave Cramer wrote:\n > > The server doesn't know about all the clients\n of the pooler, does\n > it? It\n > > has no way of telling if/when a client\n disconnects from the pooler.\n > \n > Another problem that complicates doing it in the\n server is that the\n > information require to (re-)prepare a statement in\n a backend that\n > currently doesn't have it needs to be kept in\n shared memory. This\n > includes the query string itself. Doing that\n without shared memory in a\n > pooler that is multi-threaded or based on async-IO\n is much simpler and\n > allows for easy ballooning.\n > \n > \n > I don't expect the server to re-prepare the statement.\n If the server \n > responds with \"statement doesn't exist\" the client\n would send a prepare.\n\n Are you proposing a new libpq protocol version?\n\n\n\nI believe we would need to add this to the protocol, yes.\n\n\n\n\n\n So it will be responsibility of client to remember text of prepared\n query to be able to resend it when statement doesn't exists at\n server?\n IMHO very strange decision. 
Why not to handle it in connection\n pooler (doesn't matter - external or embedded)?I may be myopic but in the JDBC world and I assume others we have a `PreparedStatement` object which has the text of the query.The text is readily available to us.Also again from the JDBC point of view we have use un-named statements normally and then name them after 5 uses so we already have embedded logic on how to deal with PreparedStatementsDave",
"msg_date": "Thu, 8 Jun 2023 13:31:32 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Named Prepared statement problems and possible solutions"
},
{
"msg_contents": "On 6/8/23 13:31, Dave Cramer wrote:\n> \n> On Thu, 8 Jun 2023 at 11:22, Konstantin Knizhnik <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n\n> So it will be responsibility of client to remember text of prepared\n> query to be able to resend it when statement doesn't exists at server?\n> IMHO very strange decision. Why not to handle it in connection\n> pooler (doesn't matter - external or embedded)?\n> \n> \n> I may be myopic but in the JDBC world and I assume others we have a \n> `PreparedStatement` object which has the text of the query.\n> The text is readily available to us.\n> \n> Also again from the JDBC point of view we have use un-named statements \n> normally and then name them after 5 uses so we already have embedded \n> logic on how to deal with PreparedStatements\n\nThe entire problem only surfaces when using a connection pool of one \nsort or another. Without one the session is persistent to the client.\n\nAt some point I created a \"functional\" proof of concept for a connection \npool that did a mapping of the client side name to a pool managed server \nside name. It kept track of which query was known by a server. It kept a \nhashtable of poolname+username+query MD5 sums. On each prepare request \nit would look up if that query is known, add a query-client reference in \nanother hashtable and so on. On a Bind/Exec message it would check that \nthe server has the query prepared and issue a P message if not. What was \nmissing was to keep track of no longer needed queries and deallocate them.\n\nAs said, it was a POC. Since it was implemented in Tcl it performed \nmiserable, but I got it to the point of being able to pause & resume and \nthe whole thing did work with prepared statements on the transaction \nlevel. So it was a full functioning POC.\n\nWhat makes this design appealing to me is that it is completely \ntransparent to every existing client that uses the extended query \nprotocol for server side prepared statements.\n\n\nJan\n\n\n\n",
"msg_date": "Thu, 8 Jun 2023 15:49:55 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Named Prepared statement problems and possible solutions"
},
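A minimal, self-contained sketch of the name-mapping idea Jan describes above (the fixed arrays and the FNV-style hash are simplified stand-ins for the hashtables and MD5 sums of his POC, and this is not code from the pgbouncer pull request): the pooler translates a client-visible statement name into a pool-managed, content-derived server-side name, and consults a per-server-connection set of already-prepared digests to decide whether a Parse message has to be replayed first.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_STMTS 16

typedef struct ClientStmt
{
	char		name[64];		/* name the client used in Parse */
	char		query[256];		/* saved query text */
} ClientStmt;

typedef struct ServerConn
{
	uint64_t	prepared[MAX_STMTS];	/* digests already prepared here */
	int			nprepared;
} ServerConn;

/* FNV-1a over user name plus query text; stands in for the MD5 sums. */
static uint64_t
query_digest(const char *user, const char *query)
{
	uint64_t	h = 1469598103934665603ULL;

	for (; *user; user++)
	{
		h ^= (unsigned char) *user;
		h *= 1099511628211ULL;
	}
	for (; *query; query++)
	{
		h ^= (unsigned char) *query;
		h *= 1099511628211ULL;
	}
	return h;
}

static bool
server_has(const ServerConn *s, uint64_t d)
{
	for (int i = 0; i < s->nprepared; i++)
		if (s->prepared[i] == d)
			return true;
	return false;
}

/*
 * Called when a Bind/Execute for a client statement is routed to server
 * connection "s": fills "buf" with the server-side name to substitute and
 * sets *need_parse when the pooler must first replay a Parse message with
 * the saved query text.
 */
static void
map_statement(ServerConn *s, const char *user, const ClientStmt *cs,
			  char *buf, size_t buflen, bool *need_parse)
{
	uint64_t	d = query_digest(user, cs->query);

	*need_parse = !server_has(s, d);
	if (*need_parse && s->nprepared < MAX_STMTS)
		s->prepared[s->nprepared++] = d;
	snprintf(buf, buflen, "pooler_%016llx", (unsigned long long) d);
}

int
main(void)
{
	ClientStmt	cs = {"S_1", "SELECT * FROM t WHERE id = $1"};
	ServerConn	srv = {{0}, 0};
	char		name[64];
	bool		need_parse;

	map_statement(&srv, "app_user", &cs, name, sizeof(name), &need_parse);
	printf("%s need_parse=%d\n", name, need_parse);		/* 1: replay Parse */
	map_statement(&srv, "app_user", &cs, name, sizeof(name), &need_parse);
	printf("%s need_parse=%d\n", name, need_parse);		/* 0: already known */
	return 0;
}

The deallocation bookkeeping Jan mentions as missing would sit on top of this, for example a reference count per digest that is dropped when clients Close their statements.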
{
"msg_contents": "On Thu, 8 Jun 2023 at 15:49, Jan Wieck <[email protected]> wrote:\n\n> On 6/8/23 13:31, Dave Cramer wrote:\n> >\n> > On Thu, 8 Jun 2023 at 11:22, Konstantin Knizhnik <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n>\n> > So it will be responsibility of client to remember text of prepared\n> > query to be able to resend it when statement doesn't exists at\n> server?\n> > IMHO very strange decision. Why not to handle it in connection\n> > pooler (doesn't matter - external or embedded)?\n> >\n> >\n> > I may be myopic but in the JDBC world and I assume others we have a\n> > `PreparedStatement` object which has the text of the query.\n> > The text is readily available to us.\n> >\n> > Also again from the JDBC point of view we have use un-named statements\n> > normally and then name them after 5 uses so we already have embedded\n> > logic on how to deal with PreparedStatements\n>\n> The entire problem only surfaces when using a connection pool of one\n> sort or another. Without one the session is persistent to the client.\n>\n> At some point I created a \"functional\" proof of concept for a connection\n> pool that did a mapping of the client side name to a pool managed server\n> side name. It kept track of which query was known by a server. It kept a\n> hashtable of poolname+username+query MD5 sums. On each prepare request\n> it would look up if that query is known, add a query-client reference in\n> another hashtable and so on. On a Bind/Exec message it would check that\n> the server has the query prepared and issue a P message if not. What was\n> missing was to keep track of no longer needed queries and deallocate them.\n>\n> As said, it was a POC. Since it was implemented in Tcl it performed\n> miserable, but I got it to the point of being able to pause & resume and\n> the whole thing did work with prepared statements on the transaction\n> level. So it was a full functioning POC.\n>\n> What makes this design appealing to me is that it is completely\n> transparent to every existing client that uses the extended query\n> protocol for server side prepared statements.\n>\n\nApparently this is coming in pgbouncer Support of prepared statements by\nknizhnik · Pull Request #845 · pgbouncer/pgbouncer (github.com)\n<https://github.com/pgbouncer/pgbouncer/pull/845>\n\nDave\n\n>\n>\n> Jan\n>\n>\n\nOn Thu, 8 Jun 2023 at 15:49, Jan Wieck <[email protected]> wrote:On 6/8/23 13:31, Dave Cramer wrote:\n> \n> On Thu, 8 Jun 2023 at 11:22, Konstantin Knizhnik <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n\n> So it will be responsibility of client to remember text of prepared\n> query to be able to resend it when statement doesn't exists at server?\n> IMHO very strange decision. Why not to handle it in connection\n> pooler (doesn't matter - external or embedded)?\n> \n> \n> I may be myopic but in the JDBC world and I assume others we have a \n> `PreparedStatement` object which has the text of the query.\n> The text is readily available to us.\n> \n> Also again from the JDBC point of view we have use un-named statements \n> normally and then name them after 5 uses so we already have embedded \n> logic on how to deal with PreparedStatements\n\nThe entire problem only surfaces when using a connection pool of one \nsort or another. Without one the session is persistent to the client.\n\nAt some point I created a \"functional\" proof of concept for a connection \npool that did a mapping of the client side name to a pool managed server \nside name. 
It kept track of which query was known by a server. It kept a \nhashtable of poolname+username+query MD5 sums. On each prepare request \nit would look up if that query is known, add a query-client reference in \nanother hashtable and so on. On a Bind/Exec message it would check that \nthe server has the query prepared and issue a P message if not. What was \nmissing was to keep track of no longer needed queries and deallocate them.\n\nAs said, it was a POC. Since it was implemented in Tcl it performed \nmiserable, but I got it to the point of being able to pause & resume and \nthe whole thing did work with prepared statements on the transaction \nlevel. So it was a full functioning POC.\n\nWhat makes this design appealing to me is that it is completely \ntransparent to every existing client that uses the extended query \nprotocol for server side prepared statements.Apparently this is coming in pgbouncer Support of prepared statements by knizhnik · Pull Request #845 · pgbouncer/pgbouncer (github.com)Dave \n\n\nJan",
"msg_date": "Thu, 8 Jun 2023 15:57:51 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Named Prepared statement problems and possible solutions"
},
{
"msg_contents": "On 6/8/23 15:57, Dave Cramer wrote:\n> \n> Apparently this is coming in pgbouncer Support of prepared statements by \n> knizhnik · Pull Request #845 · pgbouncer/pgbouncer (github.com) \n> <https://github.com/pgbouncer/pgbouncer/pull/845>\n\nI am quite interested in that patch. Considering how pgbouncer works \ninternally I am very curious.\n\n\nJan\n\n\n\n",
"msg_date": "Thu, 8 Jun 2023 17:16:31 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Named Prepared statement problems and possible solutions"
}
] |
[
{
"msg_contents": "A postgres.exe built with meson, ninja, and MSVC lacks the version metadata\nthat postgres.exe gets under non-meson build systems. Patch attached.",
"msg_date": "Wed, 7 Jun 2023 16:14:07 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": true,
"msg_subject": "win32ver data in meson-built postgres.exe"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-07 16:14:07 -0700, Noah Misch wrote:\n> A postgres.exe built with meson, ninja, and MSVC lacks the version metadata\n> that postgres.exe gets under non-meson build systems. Patch attached.\n\nI dimly recall that we discussed that and basically decided that it doesn't\nreally make sense to attach this information to postgres.exe.\n\n\n> This preserves two quirks of the older build systems. First,\n> postgres.exe is icon-free.\n\nWe could also just change that.\n\n\n> Second, the resources object is not an input\n> to postgres.def.\n\nI don't see what negative effects that could have: postgres.def is used to\nprovide symbol \"thunks\" to extension libraries, which don't need to access\nthis information. If we used the rc compiler to inject binary data (say\nbootstrap.bki) into binaries, it'd possibly be a different story (although I\nthink it might work anyway) - but if we wanted to do something like that, we'd\nbuild portable infrastructure anyway.\n\n\n> - rcgen_bin_args = rcgen_base_args + [\n> + rcgen_server_args = rcgen_base_args + [\n> '--VFT_TYPE', 'VFT_APP',\n> - '--FILEENDING', 'exe',\n> + '--FILEENDING', 'exe'\n> + ]\n> +\n> + rcgen_bin_args = rcgen_server_args + [\n> '--ICO', pg_ico\n> ]\n\nSomehow it seems a bit wrong to derive non-server from server, but ... ;)\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Jun 2023 16:47:26 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: win32ver data in meson-built postgres.exe"
},
{
"msg_contents": "On Wed, Jun 07, 2023 at 04:47:26PM -0700, Andres Freund wrote:\n> On 2023-06-07 16:14:07 -0700, Noah Misch wrote:\n> > A postgres.exe built with meson, ninja, and MSVC lacks the version metadata\n> > that postgres.exe gets under non-meson build systems. Patch attached.\n> \n> I dimly recall that we discussed that and basically decided that it doesn't\n> really make sense to attach this information to postgres.exe.\n\nI looked for a discussion behind that, but I didn't find it. A key\nuser-visible consequence is whether the task manager \"Name\" column shows (1)\n\"PostgreSQL Server\" (version data present) vs. (2) \"postgres.exe\" (no version\ndata). While (2) is not terrible, (1) is more typical on Windows. I don't\nsee cause to migrate to (2) after N years of sending (1). Certainly this part\nof the user experience should not depend on one's choice of build system.\n\n> > This preserves two quirks of the older build systems. First,\n> > postgres.exe is icon-free.\n> \n> We could also just change that.\n\nI would be +1 for that (only if done for all build systems). Showing the\nelephant in task manager feels better than showing the generic-exe icon.\n\n> > Second, the resources object is not an input\n> > to postgres.def.\n> \n> I don't see what negative effects that could have:\n\nAgreed. I wrote that sentence for archaeologists of the future who might\nwonder why this change didn't use generated_backend_sources.\n\n\n",
"msg_date": "Wed, 7 Jun 2023 18:45:07 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: win32ver data in meson-built postgres.exe"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 3:45 AM Noah Misch <[email protected]> wrote:\n>\n> On Wed, Jun 07, 2023 at 04:47:26PM -0700, Andres Freund wrote:\n> > On 2023-06-07 16:14:07 -0700, Noah Misch wrote:\n> > > A postgres.exe built with meson, ninja, and MSVC lacks the version metadata\n> > > that postgres.exe gets under non-meson build systems. Patch attached.\n> >\n> > I dimly recall that we discussed that and basically decided that it doesn't\n> > really make sense to attach this information to postgres.exe.\n>\n> I looked for a discussion behind that, but I didn't find it. A key\n> user-visible consequence is whether the task manager \"Name\" column shows (1)\n> \"PostgreSQL Server\" (version data present) vs. (2) \"postgres.exe\" (no version\n> data). While (2) is not terrible, (1) is more typical on Windows. I don't\n> see cause to migrate to (2) after N years of sending (1). Certainly this part\n> of the user experience should not depend on one's choice of build system.\n\n+1, both on that it should be the same across build systems, and that\nthe variant that we have in the msvc build system is the best one.\n\nAnd if we don't have the version structure in it, it will cause issues\nfor installers (I think) and software inventory processes (definitely)\nthat also use that.\n\nI don't recall a discussion about removing it, but it's not unlikely I\nmissed it if it did take place...\n\n\n> > > This preserves two quirks of the older build systems. First,\n> > > postgres.exe is icon-free.\n> >\n> > We could also just change that.\n>\n> I would be +1 for that (only if done for all build systems). Showing the\n> elephant in task manager feels better than showing the generic-exe icon.\n\nI think this decision goes back all the way to the ancient times, and\nthe argument was then \"user should not use the postgres.exe file when\nclicking around\" sort of. Back then, task manager didn't show the icon\nat all, regardless. It does now, so I'm +1 to add the icon (in all the\nbuild systems).\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 8 Jun 2023 13:10:00 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: win32ver data in meson-built postgres.exe"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-07 18:45:07 -0700, Noah Misch wrote:\n> On Wed, Jun 07, 2023 at 04:47:26PM -0700, Andres Freund wrote:\n> > On 2023-06-07 16:14:07 -0700, Noah Misch wrote:\n> > > A postgres.exe built with meson, ninja, and MSVC lacks the version metadata\n> > > that postgres.exe gets under non-meson build systems. Patch attached.\n> >\n> > I dimly recall that we discussed that and basically decided that it doesn't\n> > really make sense to attach this information to postgres.exe.\n>\n> I looked for a discussion behind that, but I didn't find it. A key\n> user-visible consequence is whether the task manager \"Name\" column shows (1)\n> \"PostgreSQL Server\" (version data present) vs. (2) \"postgres.exe\" (no version\n> data). While (2) is not terrible, (1) is more typical on Windows. I don't\n> see cause to migrate to (2) after N years of sending (1). Certainly this part\n> of the user experience should not depend on one's choice of build system.\n\nI think I misremembered some details... I guess I was remembering the icon for\ndlls, in\nhttps://postgr.es/m/20220829221314.pepagj3i5mj43niy%40awork3.anarazel.de\n\nTouching the rc stuff always makes me feel dirty enough that I want to swap it\nout of my brain, if not run some deep erase tool :)\n\n\n> > > This preserves two quirks of the older build systems. First,\n> > > postgres.exe is icon-free.\n> >\n> > We could also just change that.\n>\n> I would be +1 for that (only if done for all build systems). Showing the\n> elephant in task manager feels better than showing the generic-exe icon.\n\nLet's do that then.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 9 Jun 2023 10:45:27 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: win32ver data in meson-built postgres.exe"
},
{
"msg_contents": "On Thu, Jun 08, 2023 at 01:10:00PM +0200, Magnus Hagander wrote:\n> On Thu, Jun 8, 2023 at 3:45 AM Noah Misch <[email protected]> wrote:\n> > On Wed, Jun 07, 2023 at 04:47:26PM -0700, Andres Freund wrote:\n> > > On 2023-06-07 16:14:07 -0700, Noah Misch wrote:\n> > > > postgres.exe is icon-free.\n> > >\n> > > We could also just change that.\n> >\n> > I would be +1 for that (only if done for all build systems). Showing the\n> > elephant in task manager feels better than showing the generic-exe icon.\n> \n> I think this decision goes back all the way to the ancient times, and\n> the argument was then \"user should not use the postgres.exe file when\n> clicking around\" sort of. Back then, task manager didn't show the icon\n> at all, regardless. It does now, so I'm +1 to add the icon (in all the\n> build systems).\n\nThat sounds good, and it's the attached one-byte change. That also simplifies\nthe Meson fix; new version attached.",
"msg_date": "Fri, 9 Jun 2023 13:14:55 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: win32ver data in meson-built postgres.exe"
}
] |
[
{
"msg_contents": "Recently Markus Winand pointed out to me that the PG15 changes made in\n[1] to teach the query planner about monotonic window functions\nimproved the situation for PostgreSQL on his feature/optimization\ntimeline for PostgreSQL. These can be seen in [2].\n\nUnfortunately, if you look at the timeline in [2], we're not quite on\ngreen just yet per Markus's \"Not with partition by clause (see below)\"\ncaveat. This is because nodeWindowAgg.c's use_pass_through code must\nbe enabled when the WindowClause has a PARTITION BY clause.\n\nThe reason for this is that we can't just stop spitting out rows from\nthe WindowAgg when one partition is done as we still need to deal with\nrows from any subsequent partitions and we can only get to those by\ncontinuing to read rows until we find rows belonging to the next\npartition.\n\nThere is however a missed optimisation here when there is a PARTITION\nBY clause, but also some qual exists for the column(s) mentioned in\nthe partition by clause that makes it so only one partition can exist.\nA simple example of that is in the following:\n\nEXPLAIN\nSELECT *\nFROM\n (SELECT\n relkind,\n pg_relation_size(oid) size,\n rank() OVER (PARTITION BY relkind ORDER BY pg_relation_size(oid) DESC\n ) rank\n FROM pg_class)\nWHERE relkind = 'r' AND rank <= 10;\n\n(the subquery may be better imagined as a view)\n\nHere, because of the relkind='r' qual being pushed down into the\nsubquery, effectively that renders the PARTITION BY relkind clause\nredundant.\n\nWhat the attached patch does is process each WindowClause and removes\nany items from the PARTITION BY clause that are columns or expressions\nrelating to redundant PathKeys.\n\nEffectively, this allows the nodeWindowAgg.c code which stops\nprocessing WindowAgg rows when the run condition is met to work as the\nPARTITION BY clause is completely removed in the case of the above\nquery. Removing the redundant PARTITION BY items also has the added\nbenefit of not having to needlessly check if the next row belongs to\nthe same partition as the last row. For the above, that check is a\nwaste of time as all rows have relkind = 'r'\n\nI passed the patch along to Markus and he kindly confirmed that we're\nnow green for this particular optimisation.\n\nI'll add this patch to the July commitfest.\n\nDavid\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=9d9c02ccd\n[2] https://use-the-index-luke.com/sql/partial-results/window-functions",
"msg_date": "Thu, 8 Jun 2023 11:37:13 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Remove WindowClause PARTITION BY items belonging to redundant\n pathkeys"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 7:37 AM David Rowley <[email protected]> wrote:\n\n> What the attached patch does is process each WindowClause and removes\n> any items from the PARTITION BY clause that are columns or expressions\n> relating to redundant PathKeys.\n>\n> Effectively, this allows the nodeWindowAgg.c code which stops\n> processing WindowAgg rows when the run condition is met to work as the\n> PARTITION BY clause is completely removed in the case of the above\n> query. Removing the redundant PARTITION BY items also has the added\n> benefit of not having to needlessly check if the next row belongs to\n> the same partition as the last row. For the above, that check is a\n> waste of time as all rows have relkind = 'r'\n\n\nThis is a nice optimization. I reviewed it and here are my findings.\n\nIn create_windowagg_plan there is such comment that says\n\n* ... Note: in principle, it's possible\n* to drop some of the sort columns, if they were proved redundant by\n* pathkey logic. However, it doesn't seem worth going out of our way to\n* optimize such cases.\n\nSince this patch removes any clauses from the wc->partitionClause for\nredundant pathkeys, this comment seems outdated, at least for the sort\ncolumns in partitionClause.\n\nAlso I'm wondering if we can do the same optimization to\nwc->orderClause. I tested it with the query below and saw performance\ngains.\n\ncreate table t (a int, b int);\ninsert into t select 1,2 from generate_series(1,100000)i;\nanalyze t;\n\nexplain analyze\nselect * from\n (select a, b, rank() over (PARTITION BY a order by b) rank\n from t where b = 2)\nwhere a = 1 and rank <= 10;\n\nWith and without this optimization to wc->orderClause the execution time\nis 67.279 ms VS. 119.120 ms (both best of 3).\n\nI notice you comment in the patch that doing this is unsafe because it\nwould change the semantics of peer rows during execution. Would you\nplease elaborate on that?\n\nThanks\nRichard\n\nOn Thu, Jun 8, 2023 at 7:37 AM David Rowley <[email protected]> wrote:\nWhat the attached patch does is process each WindowClause and removes\nany items from the PARTITION BY clause that are columns or expressions\nrelating to redundant PathKeys.\n\nEffectively, this allows the nodeWindowAgg.c code which stops\nprocessing WindowAgg rows when the run condition is met to work as the\nPARTITION BY clause is completely removed in the case of the above\nquery. Removing the redundant PARTITION BY items also has the added\nbenefit of not having to needlessly check if the next row belongs to\nthe same partition as the last row. For the above, that check is a\nwaste of time as all rows have relkind = 'r'This is a nice optimization. I reviewed it and here are my findings.In create_windowagg_plan there is such comment that says* ... Note: in principle, it's possible* to drop some of the sort columns, if they were proved redundant by* pathkey logic. However, it doesn't seem worth going out of our way to* optimize such cases.Since this patch removes any clauses from the wc->partitionClause forredundant pathkeys, this comment seems outdated, at least for the sortcolumns in partitionClause.Also I'm wondering if we can do the same optimization towc->orderClause. 
I tested it with the query below and saw performancegains.create table t (a int, b int);insert into t select 1,2 from generate_series(1,100000)i;analyze t;explain analyzeselect * from (select a, b, rank() over (PARTITION BY a order by b) rank from t where b = 2)where a = 1 and rank <= 10;With and without this optimization to wc->orderClause the execution timeis 67.279 ms VS. 119.120 ms (both best of 3).I notice you comment in the patch that doing this is unsafe because itwould change the semantics of peer rows during execution. Would youplease elaborate on that?ThanksRichard",
"msg_date": "Thu, 8 Jun 2023 17:11:17 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove WindowClause PARTITION BY items belonging to redundant\n pathkeys"
},
{
"msg_contents": "Thank you for having a look at this.\n\nOn Thu, 8 Jun 2023 at 21:11, Richard Guo <[email protected]> wrote:\n>\n> On Thu, Jun 8, 2023 at 7:37 AM David Rowley <[email protected]> wrote:\n>>\n>> What the attached patch does is process each WindowClause and removes\n>> any items from the PARTITION BY clause that are columns or expressions\n>> relating to redundant PathKeys.\n\n> Also I'm wondering if we can do the same optimization to\n> wc->orderClause. I tested it with the query below and saw performance\n> gains.\n\nAfter looking again at nodeWindowAgg.c, I think it might be possible\nto do a bit more work and apply this to ORDER BY items too. Without\nan ORDER BY clause, all rows in the partition are peers of each other,\nand if the ORDER BY column is redundant due to belonging to a\nredundant pathkey, then those rows must also be peers too since the\nredundant pathkey must mean all rows have an equal value in the\nredundant column.\n\nHowever, there is a case where we must be much more careful. The\ncomment you highlighted in create_windowagg_plan() does mention this.\nIt reads \"we must *not* remove the ordering column for RANGE OFFSET\ncases\".\n\nThe following query can't work when the WindowClause has no ORDER BY column.\n\npostgres=# select relname,sum(pg_relation_size(oid)) over (range\nbetween 10 preceding and current row) from pg_Class;\nERROR: RANGE with offset PRECEDING/FOLLOWING requires exactly one\nORDER BY column\nLINE 1: select relname,sum(pg_relation_size(oid)) over (range between...\n\nIt might be possible to make adjustments in nodeWindowAgg.c to have\nthe equality checks come out as true when there is no ORDER BY.\nupdate_frameheadpos() is one location that would need to be adjusted.\nIt would need further study to ensure we don't accidentally break\nanything. I've not done that study, so won't be adjusting the patch\nfor now.\n\nI've attached an updated patch which updates the outdated comment\nwhich you highlighted. I ended up moving the mention of removing\nredundant columns into make_pathkeys_for_window() as it seemed a much\nmore relevant location to mention this optimisation.\n\nDavid",
"msg_date": "Fri, 9 Jun 2023 12:13:02 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove WindowClause PARTITION BY items belonging to redundant\n pathkeys"
},
{
"msg_contents": "On Fri, Jun 9, 2023 at 8:13 AM David Rowley <[email protected]> wrote:\n\n> After looking again at nodeWindowAgg.c, I think it might be possible\n> to do a bit more work and apply this to ORDER BY items too. Without\n> an ORDER BY clause, all rows in the partition are peers of each other,\n> and if the ORDER BY column is redundant due to belonging to a\n> redundant pathkey, then those rows must also be peers too since the\n> redundant pathkey must mean all rows have an equal value in the\n> redundant column.\n>\n> However, there is a case where we must be much more careful. The\n> comment you highlighted in create_windowagg_plan() does mention this.\n> It reads \"we must *not* remove the ordering column for RANGE OFFSET\n> cases\".\n\n\nI see. I tried to run the query below\n\nselect a, b, sum(a) over (order by b range between 10 preceding and current\nrow) from t where b = 2;\nserver closed the connection unexpectedly\n\nand if we've removed redundant items in wc->orderClause the query would\ntrigger the Assert in update_frameheadpos().\n\n /* We must have an ordering column */\n Assert(node->ordNumCols == 1);\n\n\n>\n> It might be possible to make adjustments in nodeWindowAgg.c to have\n> the equality checks come out as true when there is no ORDER BY.\n> update_frameheadpos() is one location that would need to be adjusted.\n> It would need further study to ensure we don't accidentally break\n> anything. I've not done that study, so won't be adjusting the patch\n> for now.\n\n\nI'm also not sure if doing that is safe in all cases. Hmm, do you think\nwe can instead check wc->frameOptions to see if it is the RANGE OFFSET\ncase in make_pathkeys_for_window(), and decide to not remove or remove\nredundant ORDER BY items according to whether it is or not RANGE OFFSET?\n\nThanks\nRichard\n\nOn Fri, Jun 9, 2023 at 8:13 AM David Rowley <[email protected]> wrote:\nAfter looking again at nodeWindowAgg.c, I think it might be possible\nto do a bit more work and apply this to ORDER BY items too. Without\nan ORDER BY clause, all rows in the partition are peers of each other,\nand if the ORDER BY column is redundant due to belonging to a\nredundant pathkey, then those rows must also be peers too since the\nredundant pathkey must mean all rows have an equal value in the\nredundant column.\n\nHowever, there is a case where we must be much more careful. The\ncomment you highlighted in create_windowagg_plan() does mention this.\nIt reads \"we must *not* remove the ordering column for RANGE OFFSET\ncases\".I see. I tried to run the query belowselect a, b, sum(a) over (order by b range between 10 preceding and current row) from t where b = 2;server closed the connection unexpectedlyand if we've removed redundant items in wc->orderClause the query wouldtrigger the Assert in update_frameheadpos(). /* We must have an ordering column */ Assert(node->ordNumCols == 1); \n\nIt might be possible to make adjustments in nodeWindowAgg.c to have\nthe equality checks come out as true when there is no ORDER BY.\nupdate_frameheadpos() is one location that would need to be adjusted.\nIt would need further study to ensure we don't accidentally break\nanything. I've not done that study, so won't be adjusting the patch\nfor now.I'm also not sure if doing that is safe in all cases. 
Hmm, do you thinkwe can instead check wc->frameOptions to see if it is the RANGE OFFSETcase in make_pathkeys_for_window(), and decide to not remove or removeredundant ORDER BY items according to whether it is or not RANGE OFFSET?ThanksRichard",
"msg_date": "Fri, 9 Jun 2023 16:57:02 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove WindowClause PARTITION BY items belonging to redundant\n pathkeys"
},
{
"msg_contents": "On Fri, 9 Jun 2023 at 20:57, Richard Guo <[email protected]> wrote:\n>\n> On Fri, Jun 9, 2023 at 8:13 AM David Rowley <[email protected]> wrote:\n>> It might be possible to make adjustments in nodeWindowAgg.c to have\n>> the equality checks come out as true when there is no ORDER BY.\n>> update_frameheadpos() is one location that would need to be adjusted.\n>> It would need further study to ensure we don't accidentally break\n>> anything. I've not done that study, so won't be adjusting the patch\n>> for now.\n>\n>\n> I'm also not sure if doing that is safe in all cases. Hmm, do you think\n> we can instead check wc->frameOptions to see if it is the RANGE OFFSET\n> case in make_pathkeys_for_window(), and decide to not remove or remove\n> redundant ORDER BY items according to whether it is or not RANGE OFFSET?\n\nI think ideally, we'd not have to add special cases to the planner to\ndisable the optimisation for certain cases. I'd much rather see\nadjustments in the executor to handle cases where we've removed ORDER\nBY columns (e.g adjust update_frameheadpos() to assume rows are equal\nwhen there are no order by columns.) That of course would require\nthat there are no cases where removing ORDER BY columns would change\nthe actual query results. I can't currently think of any reason why\nthe results would change, but I'm not overly familiar with the RANGE\noption, so I'd need to spend a bit longer looking at it than I have\ndone so far to feel confident in making the patch process ORDER BY\ncolumns too.\n\nI'm ok with just doing the PARTITION BY stuff as step one. The ORDER\nBY stuff is more complex and risky which seems like a good reason to\ntackle separately.\n\nDavid\n\n\n",
"msg_date": "Mon, 12 Jun 2023 16:06:13 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove WindowClause PARTITION BY items belonging to redundant\n pathkeys"
},
{
"msg_contents": "On Mon, Jun 12, 2023 at 12:06 PM David Rowley <[email protected]> wrote:\n\n> On Fri, 9 Jun 2023 at 20:57, Richard Guo <[email protected]> wrote:\n> > On Fri, Jun 9, 2023 at 8:13 AM David Rowley <[email protected]>\n> wrote:\n> >> It might be possible to make adjustments in nodeWindowAgg.c to have\n> >> the equality checks come out as true when there is no ORDER BY.\n> >> update_frameheadpos() is one location that would need to be adjusted.\n> >> It would need further study to ensure we don't accidentally break\n> >> anything. I've not done that study, so won't be adjusting the patch\n> >> for now.\n> >\n> > I'm also not sure if doing that is safe in all cases. Hmm, do you think\n> > we can instead check wc->frameOptions to see if it is the RANGE OFFSET\n> > case in make_pathkeys_for_window(), and decide to not remove or remove\n> > redundant ORDER BY items according to whether it is or not RANGE OFFSET?\n>\n> I think ideally, we'd not have to add special cases to the planner to\n> disable the optimisation for certain cases. I'd much rather see\n> adjustments in the executor to handle cases where we've removed ORDER\n> BY columns (e.g adjust update_frameheadpos() to assume rows are equal\n> when there are no order by columns.) That of course would require\n> that there are no cases where removing ORDER BY columns would change\n> the actual query results. I can't currently think of any reason why\n> the results would change, but I'm not overly familiar with the RANGE\n> option, so I'd need to spend a bit longer looking at it than I have\n> done so far to feel confident in making the patch process ORDER BY\n> columns too.\n>\n> I'm ok with just doing the PARTITION BY stuff as step one. The ORDER\n> BY stuff is more complex and risky which seems like a good reason to\n> tackle separately.\n\n\nI see your point. Agreed that the ORDER BY stuff might be better to be\ndone in a separate patch. So now the v2 patch looks good to me.\n\nThanks\nRichard\n\nOn Mon, Jun 12, 2023 at 12:06 PM David Rowley <[email protected]> wrote:On Fri, 9 Jun 2023 at 20:57, Richard Guo <[email protected]> wrote:\n> On Fri, Jun 9, 2023 at 8:13 AM David Rowley <[email protected]> wrote:\n>> It might be possible to make adjustments in nodeWindowAgg.c to have\n>> the equality checks come out as true when there is no ORDER BY.\n>> update_frameheadpos() is one location that would need to be adjusted.\n>> It would need further study to ensure we don't accidentally break\n>> anything. I've not done that study, so won't be adjusting the patch\n>> for now.\n>\n> I'm also not sure if doing that is safe in all cases. Hmm, do you think\n> we can instead check wc->frameOptions to see if it is the RANGE OFFSET\n> case in make_pathkeys_for_window(), and decide to not remove or remove\n> redundant ORDER BY items according to whether it is or not RANGE OFFSET?\n\nI think ideally, we'd not have to add special cases to the planner to\ndisable the optimisation for certain cases. I'd much rather see\nadjustments in the executor to handle cases where we've removed ORDER\nBY columns (e.g adjust update_frameheadpos() to assume rows are equal\nwhen there are no order by columns.) That of course would require\nthat there are no cases where removing ORDER BY columns would change\nthe actual query results. 
I can't currently think of any reason why\nthe results would change, but I'm not overly familiar with the RANGE\noption, so I'd need to spend a bit longer looking at it than I have\ndone so far to feel confident in making the patch process ORDER BY\ncolumns too.\n\nI'm ok with just doing the PARTITION BY stuff as step one. The ORDER\nBY stuff is more complex and risky which seems like a good reason to\ntackle separately.I see your point. Agreed that the ORDER BY stuff might be better to bedone in a separate patch. So now the v2 patch looks good to me.ThanksRichard",
"msg_date": "Mon, 12 Jun 2023 16:20:14 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove WindowClause PARTITION BY items belonging to redundant\n pathkeys"
},
{
"msg_contents": "On Mon, 12 Jun 2023 at 20:20, Richard Guo <[email protected]> wrote:\n> So now the v2 patch looks good to me.\n\nThank you for reviewing this. I've just pushed the patch.\n\nDavid\n\n\n",
"msg_date": "Mon, 3 Jul 2023 12:50:49 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove WindowClause PARTITION BY items belonging to redundant\n pathkeys"
}
] |
[
{
"msg_contents": "Hi,\n\nThis patch tries to add loongarch native crc32 check with crcc.* \ninstructions to postgresql.\n\nThe patch is tested on my Loongson 3A5000 machine with Loong Arch Linux \nand GCC 13.1.0 / clang 16.0.0 with\n\n- default ./configure\n- default meson setup\n\n\nSee:\n\n[1]: \nhttps://loongson.github.io/LoongArch-Documentation/LoongArch-Vol1-EN.html#crc-check-instructions\n[2]: \nhttps://gcc.gnu.org/onlinedocs/gcc/LoongArch-Base-Built-in-Functions.html\n[3]: \nhttps://github.com/llvm/llvm-project/blob/release/16.x/clang/include/clang/Basic/BuiltinsLoongArch.def#L36-L39",
"msg_date": "Thu, 8 Jun 2023 13:24:02 +0800",
"msg_from": "YANG Xudong <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 12:24 PM YANG Xudong <[email protected]> wrote:\n>\n> This patch tries to add loongarch native crc32 check with crcc.*\n> instructions to postgresql.\n>\n> The patch is tested on my Loongson 3A5000 machine with Loong Arch Linux\n> and GCC 13.1.0 / clang 16.0.0 with\n>\n> - default ./configure\n> - default meson setup\n\nI took a quick look at this, and it seems mostly in line with other\narchitectures we support for CRC. I have a couple questions and comments:\n\nconfigure.ac:\n\n+AC_SUBST(CFLAGS_CRC)\n\nThis seems to be an unnecessary copy-paste. I think we only need one, after\nall checks have run.\n\nmeson.build\n\n+ if cc.links(prog, name: '__builtin_loongarch_crcc_w_b_w,\n__builtin_loongarch_crcc_w_h_w, __builtin_loongarch_crcc_w_w_w, and\n__builtin_loongarch_crcc_w_d_w without -march=loongarch64',\n+ args: test_c_args)\n+ # Use LoongArch CRC instruction unconditionally\n+ cdata.set('USE_LOONGARCH_CRC32C', 1)\n+ have_optimized_crc = true\n+ elif cc.links(prog, name: '__builtin_loongarch_crcc_w_b_w,\n__builtin_loongarch_crcc_w_h_w, __builtin_loongarch_crcc_w_w_w, and\n__builtin_loongarch_crcc_w_d_w with -march=loongarch64',\n+ args: test_c_args + ['-march=loongarch64'])\n+ # Use LoongArch CRC instruction unconditionally\n\nFor x86 and Arm, if it fails to link without an -march flag, we allow for a\nruntime check. The flags \"-march=armv8-a+crc\" and \"-msse4.2\" are for\ninstructions not found on all platforms. The patch also checks both ways,\nand each one results in \"Use LoongArch CRC instruction unconditionally\".\nThe -march flag here is general, not specific. In other words, if this only\nruns inside \"+elif host_cpu == 'loongarch64'\", why do we need both with\n-march and without?\n\nAlso, I don't have a Loongarch machine for testing. Could you show that the\ninstructions are found in the binary, maybe using objdump and grep? Or a\nperformance test?\n\nIn the future, you may also consider running the buildfarm client on a\nmachine dedicated for testing. That will give us quick feedback if some\nfuture new code doesn't work on this platform. More information here:\n\nhttps://wiki.postgresql.org/wiki/PostgreSQL_Buildfarm_Howto\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Jun 8, 2023 at 12:24 PM YANG Xudong <[email protected]> wrote:>> This patch tries to add loongarch native crc32 check with crcc.*> instructions to postgresql.>> The patch is tested on my Loongson 3A5000 machine with Loong Arch Linux> and GCC 13.1.0 / clang 16.0.0 with>> - default ./configure> - default meson setupI took a quick look at this, and it seems mostly in line with other architectures we support for CRC. I have a couple questions and comments:configure.ac:+AC_SUBST(CFLAGS_CRC)This seems to be an unnecessary copy-paste. 
I think we only need one, after all checks have run.meson.build+ if cc.links(prog, name: '__builtin_loongarch_crcc_w_b_w, __builtin_loongarch_crcc_w_h_w, __builtin_loongarch_crcc_w_w_w, and __builtin_loongarch_crcc_w_d_w without -march=loongarch64',+ args: test_c_args)+ # Use LoongArch CRC instruction unconditionally+ cdata.set('USE_LOONGARCH_CRC32C', 1)+ have_optimized_crc = true+ elif cc.links(prog, name: '__builtin_loongarch_crcc_w_b_w, __builtin_loongarch_crcc_w_h_w, __builtin_loongarch_crcc_w_w_w, and __builtin_loongarch_crcc_w_d_w with -march=loongarch64',+ args: test_c_args + ['-march=loongarch64'])+ # Use LoongArch CRC instruction unconditionallyFor x86 and Arm, if it fails to link without an -march flag, we allow for a runtime check. The flags \"-march=armv8-a+crc\" and \"-msse4.2\" are for instructions not found on all platforms. The patch also checks both ways, and each one results in \"Use LoongArch CRC instruction unconditionally\". The -march flag here is general, not specific. In other words, if this only runs inside \"+elif host_cpu == 'loongarch64'\", why do we need both with -march and without? Also, I don't have a Loongarch machine for testing. Could you show that the instructions are found in the binary, maybe using objdump and grep? Or a performance test?In the future, you may also consider running the buildfarm client on a machine dedicated for testing. That will give us quick feedback if some future new code doesn't work on this platform. More information here:https://wiki.postgresql.org/wiki/PostgreSQL_Buildfarm_Howto--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 13 Jun 2023 17:26:23 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "Attached a new patch with fixes based on the comment below.\n\n\nOn 2023/6/13 18:26, John Naylor wrote:\n> \n> On Thu, Jun 8, 2023 at 12:24 PM YANG Xudong <[email protected] \n> <mailto:[email protected]>> wrote:\n> >\n> > This patch tries to add loongarch native crc32 check with crcc.*\n> > instructions to postgresql.\n> >\n> > The patch is tested on my Loongson 3A5000 machine with Loong Arch Linux\n> > and GCC 13.1.0 / clang 16.0.0 with\n> >\n> > - default ./configure\n> > - default meson setup\n> \n> I took a quick look at this, and it seems mostly in line with other \n> architectures we support for CRC. I have a couple questions and comments:\n> \n> configure.ac <http://configure.ac>:\n> \n> +AC_SUBST(CFLAGS_CRC)\n> > This seems to be an unnecessary copy-paste. I think we only need one,\n> after all checks have run.\n> \n\nRemoved the extra line.\n\n\n> meson.build\n> \n> + if cc.links(prog, name: '__builtin_loongarch_crcc_w_b_w, \n> __builtin_loongarch_crcc_w_h_w, __builtin_loongarch_crcc_w_w_w, and \n> __builtin_loongarch_crcc_w_d_w without -march=loongarch64',\n> + args: test_c_args)\n> + # Use LoongArch CRC instruction unconditionally\n> + cdata.set('USE_LOONGARCH_CRC32C', 1)\n> + have_optimized_crc = true\n> + elif cc.links(prog, name: '__builtin_loongarch_crcc_w_b_w, \n> __builtin_loongarch_crcc_w_h_w, __builtin_loongarch_crcc_w_w_w, and \n> __builtin_loongarch_crcc_w_d_w with -march=loongarch64',\n> + args: test_c_args + ['-march=loongarch64'])\n> + # Use LoongArch CRC instruction unconditionally\n> \n> For x86 and Arm, if it fails to link without an -march flag, we allow \n> for a runtime check. The flags \"-march=armv8-a+crc\" and \"-msse4.2\" are \n> for instructions not found on all platforms. The patch also checks both \n> ways, and each one results in \"Use LoongArch CRC instruction \n> unconditionally\". The -march flag here is general, not specific. In \n> other words, if this only runs inside \"+elif host_cpu == 'loongarch64'\", \n> why do we need both with -march and without?\n> \n\nRemoved the elif branch.\n\n\n> Also, I don't have a Loongarch machine for testing. Could you show that \n> the instructions are found in the binary, maybe using objdump and grep? \n> Or a performance test?\n> \n\nThe output of the objdump command `objdump -dS \n../postgres-build/tmp_install/usr/local/pgsql/bin/postgres | grep -B 30 \n-A 10 crcc` is attached.\n\nAlso the output of make check is attached.\n\n\nI run a simple test program to compare the performance of \npg_comp_crc32c_loongarch and pg_comp_crc32c_sb8 on my test machine. The \nresult is that pg_comp_crc32c_loongarch is over 2x faster than \npg_comp_crc32c_sb8.\n\n\n> In the future, you may also consider running the buildfarm client on a \n> machine dedicated for testing. That will give us quick feedback if some \n> future new code doesn't work on this platform. More information here:\n> \n> https://wiki.postgresql.org/wiki/PostgreSQL_Buildfarm_Howto \n> <https://wiki.postgresql.org/wiki/PostgreSQL_Buildfarm_Howto>\n> \n\nI will contact the loongson community \n(https://github.com/loongson-community) to see if they are able to \nprovide some machine for buildfarm or not.\n\n\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com <http://www.enterprisedb.com>\n\n--\nYANG Xudong",
"msg_date": "Wed, 14 Jun 2023 10:20:10 +0800",
"msg_from": "YANG Xudong <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 9:20 AM YANG Xudong <[email protected]> wrote:\n>\n> Attached a new patch with fixes based on the comment below.\n\nNote: It's helpful to pass \"-v\" to git format-patch, to have different\nversions.\n\n> > For x86 and Arm, if it fails to link without an -march flag, we allow\n> > for a runtime check. The flags \"-march=armv8-a+crc\" and \"-msse4.2\" are\n> > for instructions not found on all platforms. The patch also checks both\n> > ways, and each one results in \"Use LoongArch CRC instruction\n> > unconditionally\". The -march flag here is general, not specific. In\n> > other words, if this only runs inside \"+elif host_cpu == 'loongarch64'\",\n> > why do we need both with -march and without?\n> >\n>\n> Removed the elif branch.\n\nOkay, since we've confirmed that no arch flag is necessary, some other\nplaces can be simplified:\n\n--- a/src/port/Makefile\n+++ b/src/port/Makefile\n@@ -98,6 +98,11 @@ pg_crc32c_armv8.o: CFLAGS+=$(CFLAGS_CRC)\n pg_crc32c_armv8_shlib.o: CFLAGS+=$(CFLAGS_CRC)\n pg_crc32c_armv8_srv.o: CFLAGS+=$(CFLAGS_CRC)\n\n+# all versions of pg_crc32c_loongarch.o need CFLAGS_CRC\n+pg_crc32c_loongarch.o: CFLAGS+=$(CFLAGS_CRC)\n+pg_crc32c_loongarch_shlib.o: CFLAGS+=$(CFLAGS_CRC)\n+pg_crc32c_loongarch_srv.o: CFLAGS+=$(CFLAGS_CRC)\n\nThis was copy-and-pasted from platforms that use a runtime check, so should\nbe unnecessary.\n\n+# If the intrinsics are supported, sets pgac_loongarch_crc32c_intrinsics,\n+# and CFLAGS_CRC.\n\n+# Check if __builtin_loongarch_crcc_* intrinsics can be used\n+# with the default compiler flags.\n+# CFLAGS_CRC is set if the extra flag is required.\n\nSame here -- it seems we don't need to set CFLAGS_CRC at all. Can you\nconfirm?\n\n> > Also, I don't have a Loongarch machine for testing. Could you show that\n> > the instructions are found in the binary, maybe using objdump and grep?\n> > Or a performance test?\n> >\n>\n> The output of the objdump command `objdump -dS\n> ../postgres-build/tmp_install/usr/local/pgsql/bin/postgres | grep -B 30\n> -A 10 crcc` is attached.\n\nThanks for confirming.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Jun 14, 2023 at 9:20 AM YANG Xudong <[email protected]> wrote:>> Attached a new patch with fixes based on the comment below.Note: It's helpful to pass \"-v\" to git format-patch, to have different versions.> > For x86 and Arm, if it fails to link without an -march flag, we allow> > for a runtime check. The flags \"-march=armv8-a+crc\" and \"-msse4.2\" are> > for instructions not found on all platforms. The patch also checks both> > ways, and each one results in \"Use LoongArch CRC instruction> > unconditionally\". The -march flag here is general, not specific. 
In> > other words, if this only runs inside \"+elif host_cpu == 'loongarch64'\",> > why do we need both with -march and without?> >>> Removed the elif branch.Okay, since we've confirmed that no arch flag is necessary, some other places can be simplified:--- a/src/port/Makefile+++ b/src/port/Makefile@@ -98,6 +98,11 @@ pg_crc32c_armv8.o: CFLAGS+=$(CFLAGS_CRC) pg_crc32c_armv8_shlib.o: CFLAGS+=$(CFLAGS_CRC) pg_crc32c_armv8_srv.o: CFLAGS+=$(CFLAGS_CRC) +# all versions of pg_crc32c_loongarch.o need CFLAGS_CRC+pg_crc32c_loongarch.o: CFLAGS+=$(CFLAGS_CRC)+pg_crc32c_loongarch_shlib.o: CFLAGS+=$(CFLAGS_CRC)+pg_crc32c_loongarch_srv.o: CFLAGS+=$(CFLAGS_CRC)This was copy-and-pasted from platforms that use a runtime check, so should be unnecessary.+# If the intrinsics are supported, sets pgac_loongarch_crc32c_intrinsics,+# and CFLAGS_CRC.+# Check if __builtin_loongarch_crcc_* intrinsics can be used+# with the default compiler flags.+# CFLAGS_CRC is set if the extra flag is required.Same here -- it seems we don't need to set CFLAGS_CRC at all. Can you confirm?> > Also, I don't have a Loongarch machine for testing. Could you show that> > the instructions are found in the binary, maybe using objdump and grep?> > Or a performance test?> >>> The output of the objdump command `objdump -dS> ../postgres-build/tmp_install/usr/local/pgsql/bin/postgres | grep -B 30> -A 10 crcc` is attached.Thanks for confirming.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 15 Jun 2023 17:30:02 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "Updated the patch based on the comments.\n\nOn 2023/6/15 18:30, John Naylor wrote:\n> \n> On Wed, Jun 14, 2023 at 9:20 AM YANG Xudong <[email protected] \n> <mailto:[email protected]>> wrote:\n> >\n> > Attached a new patch with fixes based on the comment below.\n> \n> Note: It's helpful to pass \"-v\" to git format-patch, to have different \n> versions.\n> \n\nAdded v2\n\n> > > For x86 and Arm, if it fails to link without an -march flag, we allow\n> > > for a runtime check. The flags \"-march=armv8-a+crc\" and \"-msse4.2\" are\n> > > for instructions not found on all platforms. The patch also checks both\n> > > ways, and each one results in \"Use LoongArch CRC instruction\n> > > unconditionally\". The -march flag here is general, not specific. In\n> > > other words, if this only runs inside \"+elif host_cpu == \n> 'loongarch64'\",\n> > > why do we need both with -march and without?\n> > >\n> >\n> > Removed the elif branch.\n> \n> Okay, since we've confirmed that no arch flag is necessary, some other \n> places can be simplified:\n> \n> --- a/src/port/Makefile\n> +++ b/src/port/Makefile\n> @@ -98,6 +98,11 @@ pg_crc32c_armv8.o: CFLAGS+=$(CFLAGS_CRC)\n> pg_crc32c_armv8_shlib.o: CFLAGS+=$(CFLAGS_CRC)\n> pg_crc32c_armv8_srv.o: CFLAGS+=$(CFLAGS_CRC)\n> \n> +# all versions of pg_crc32c_loongarch.o need CFLAGS_CRC\n> +pg_crc32c_loongarch.o: CFLAGS+=$(CFLAGS_CRC)\n> +pg_crc32c_loongarch_shlib.o: CFLAGS+=$(CFLAGS_CRC)\n> +pg_crc32c_loongarch_srv.o: CFLAGS+=$(CFLAGS_CRC)\n> \n> This was copy-and-pasted from platforms that use a runtime check, so \n> should be unnecessary.\n> \n\nRemoved these lines.\n\n> +# If the intrinsics are supported, sets pgac_loongarch_crc32c_intrinsics,\n> +# and CFLAGS_CRC.\n> \n> +# Check if __builtin_loongarch_crcc_* intrinsics can be used\n> +# with the default compiler flags.\n> +# CFLAGS_CRC is set if the extra flag is required.\n> \n> Same here -- it seems we don't need to set CFLAGS_CRC at all. Can you \n> confirm?\n> \n\nWe don't need to set CFLAGS_CRC as commented. I have updated the \nconfigure script to make it align with the logic in meson build script.\n\n> > > Also, I don't have a Loongarch machine for testing. Could you show that\n> > > the instructions are found in the binary, maybe using objdump and grep?\n> > > Or a performance test?\n> > >\n> >\n> > The output of the objdump command `objdump -dS\n> > ../postgres-build/tmp_install/usr/local/pgsql/bin/postgres | grep -B 30\n> > -A 10 crcc` is attached.\n> \n> Thanks for confirming.\n> \n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com <http://www.enterprisedb.com>",
"msg_date": "Fri, 16 Jun 2023 09:28:07 +0800",
"msg_from": "YANG Xudong <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "Is there any other comment?\n\nIf the patch looks OK, I would like to update its status to ready for \ncommitter in the commitfest.\n\nThanks!\n\nOn 2023/6/16 09:28, YANG Xudong wrote:\n> Updated the patch based on the comments.\n> \n> On 2023/6/15 18:30, John Naylor wrote:\n>>\n>> On Wed, Jun 14, 2023 at 9:20 AM YANG Xudong <[email protected] \n>> <mailto:[email protected]>> wrote:\n>> >\n>> > Attached a new patch with fixes based on the comment below.\n>>\n>> Note: It's helpful to pass \"-v\" to git format-patch, to have different \n>> versions.\n>>\n> \n> Added v2\n> \n>> > > For x86 and Arm, if it fails to link without an -march flag, we \n>> allow\n>> > > for a runtime check. The flags \"-march=armv8-a+crc\" and \n>> \"-msse4.2\" are\n>> > > for instructions not found on all platforms. The patch also \n>> checks both\n>> > > ways, and each one results in \"Use LoongArch CRC instruction\n>> > > unconditionally\". The -march flag here is general, not specific. In\n>> > > other words, if this only runs inside \"+elif host_cpu == \n>> 'loongarch64'\",\n>> > > why do we need both with -march and without?\n>> > >\n>> >\n>> > Removed the elif branch.\n>>\n>> Okay, since we've confirmed that no arch flag is necessary, some other \n>> places can be simplified:\n>>\n>> --- a/src/port/Makefile\n>> +++ b/src/port/Makefile\n>> @@ -98,6 +98,11 @@ pg_crc32c_armv8.o: CFLAGS+=$(CFLAGS_CRC)\n>> pg_crc32c_armv8_shlib.o: CFLAGS+=$(CFLAGS_CRC)\n>> pg_crc32c_armv8_srv.o: CFLAGS+=$(CFLAGS_CRC)\n>>\n>> +# all versions of pg_crc32c_loongarch.o need CFLAGS_CRC\n>> +pg_crc32c_loongarch.o: CFLAGS+=$(CFLAGS_CRC)\n>> +pg_crc32c_loongarch_shlib.o: CFLAGS+=$(CFLAGS_CRC)\n>> +pg_crc32c_loongarch_srv.o: CFLAGS+=$(CFLAGS_CRC)\n>>\n>> This was copy-and-pasted from platforms that use a runtime check, so \n>> should be unnecessary.\n>>\n> \n> Removed these lines.\n> \n>> +# If the intrinsics are supported, sets \n>> pgac_loongarch_crc32c_intrinsics,\n>> +# and CFLAGS_CRC.\n>>\n>> +# Check if __builtin_loongarch_crcc_* intrinsics can be used\n>> +# with the default compiler flags.\n>> +# CFLAGS_CRC is set if the extra flag is required.\n>>\n>> Same here -- it seems we don't need to set CFLAGS_CRC at all. Can you \n>> confirm?\n>>\n> \n> We don't need to set CFLAGS_CRC as commented. I have updated the \n> configure script to make it align with the logic in meson build script.\n> \n>> > > Also, I don't have a Loongarch machine for testing. Could you \n>> show that\n>> > > the instructions are found in the binary, maybe using objdump and \n>> grep?\n>> > > Or a performance test?\n>> > >\n>> >\n>> > The output of the objdump command `objdump -dS\n>> > ../postgres-build/tmp_install/usr/local/pgsql/bin/postgres | grep \n>> -B 30\n>> > -A 10 crcc` is attached.\n>>\n>> Thanks for confirming.\n>>\n>> -- \n>> John Naylor\n>> EDB: http://www.enterprisedb.com <http://www.enterprisedb.com>\n\n\n",
"msg_date": "Wed, 5 Jul 2023 10:15:51 +0800",
"msg_from": "YANG Xudong <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 9:16 AM YANG Xudong <[email protected]> wrote:\n>\n> Is there any other comment?\n\nIt's only been a few weeks since the last patch, and this is not an urgent\nbugfix, so there is no reason to ping the thread. Feature freeze will\nlikely be in April of next year.\n\nAlso, please don't top-post (which means: quoting an entire message, with\nnew text at the top) -- it clutters our archives.\n\nBefore I look at this again: Are there any objections to another CRC\nimplementation for the reason of having no buildfarm member?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Jul 5, 2023 at 9:16 AM YANG Xudong <[email protected]> wrote:>> Is there any other comment?It's only been a few weeks since the last patch, and this is not an urgent bugfix, so there is no reason to ping the thread. Feature freeze will likely be in April of next year.Also, please don't top-post (which means: quoting an entire message, with new text at the top) -- it clutters our archives.Before I look at this again: Are there any objections to another CRC implementation for the reason of having no buildfarm member?--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 5 Jul 2023 14:11:02 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "Hi, i have a loongarch machine runing Loongnix-server (based on redhat/centos, it has gcc-8.3 on it), i am trying to test buildfarm on it, when i edit build-farm.conf it seems to need animal and secret to connect the buildfarm server.\nthen i go to https://buildfarm.postgresql.org/cgi-bin/register-form.pl, and registered the buildfarm a week ago, but didn't receive any response. so what else i need to do next.\n\n\n\n> -----Original Messages-----\n> From: \"YANG Xudong\" <[email protected]>\n> Send time:Wednesday, 07/05/2023 10:15:51\n> To: \"John Naylor\" <[email protected]>\n> Cc: [email protected], [email protected], [email protected]\n> Subject: Re: [PATCH] Add loongarch native checksum implementation.\n> \n> Is there any other comment?\n> \n> If the patch looks OK, I would like to update its status to ready for \n> committer in the commitfest.\n> \n> Thanks!\n> \n> On 2023/6/16 09:28, YANG Xudong wrote:\n> > Updated the patch based on the comments.\n> > \n> > On 2023/6/15 18:30, John Naylor wrote:\n> >>\n> >> On Wed, Jun 14, 2023 at 9:20 AM YANG Xudong <[email protected] \n> >> <mailto:[email protected]>> wrote:\n> >> >\n> >> > Attached a new patch with fixes based on the comment below.\n> >>\n> >> Note: It's helpful to pass \"-v\" to git format-patch, to have different \n> >> versions.\n> >>\n> > \n> > Added v2\n> > \n> >> > > For x86 and Arm, if it fails to link without an -march flag, we \n> >> allow\n> >> > > for a runtime check. The flags \"-march=armv8-a+crc\" and \n> >> \"-msse4.2\" are\n> >> > > for instructions not found on all platforms. The patch also \n> >> checks both\n> >> > > ways, and each one results in \"Use LoongArch CRC instruction\n> >> > > unconditionally\". The -march flag here is general, not specific. In\n> >> > > other words, if this only runs inside \"+elif host_cpu == \n> >> 'loongarch64'\",\n> >> > > why do we need both with -march and without?\n> >> > >\n> >> >\n> >> > Removed the elif branch.\n> >>\n> >> Okay, since we've confirmed that no arch flag is necessary, some other \n> >> places can be simplified:\n> >>\n> >> --- a/src/port/Makefile\n> >> +++ b/src/port/Makefile\n> >> @@ -98,6 +98,11 @@ pg_crc32c_armv8.o: CFLAGS+=$(CFLAGS_CRC)\n> >> pg_crc32c_armv8_shlib.o: CFLAGS+=$(CFLAGS_CRC)\n> >> pg_crc32c_armv8_srv.o: CFLAGS+=$(CFLAGS_CRC)\n> >>\n> >> +# all versions of pg_crc32c_loongarch.o need CFLAGS_CRC\n> >> +pg_crc32c_loongarch.o: CFLAGS+=$(CFLAGS_CRC)\n> >> +pg_crc32c_loongarch_shlib.o: CFLAGS+=$(CFLAGS_CRC)\n> >> +pg_crc32c_loongarch_srv.o: CFLAGS+=$(CFLAGS_CRC)\n> >>\n> >> This was copy-and-pasted from platforms that use a runtime check, so \n> >> should be unnecessary.\n> >>\n> > \n> > Removed these lines.\n> > \n> >> +# If the intrinsics are supported, sets \n> >> pgac_loongarch_crc32c_intrinsics,\n> >> +# and CFLAGS_CRC.\n> >>\n> >> +# Check if __builtin_loongarch_crcc_* intrinsics can be used\n> >> +# with the default compiler flags.\n> >> +# CFLAGS_CRC is set if the extra flag is required.\n> >>\n> >> Same here -- it seems we don't need to set CFLAGS_CRC at all. Can you \n> >> confirm?\n> >>\n> > \n> > We don't need to set CFLAGS_CRC as commented. I have updated the \n> > configure script to make it align with the logic in meson build script.\n> > \n> >> > > Also, I don't have a Loongarch machine for testing. 
Could you \n> >> show that\n> >> > > the instructions are found in the binary, maybe using objdump and \n> >> grep?\n> >> > > Or a performance test?\n> >> > >\n> >> >\n> >> > The output of the objdump command `objdump -dS\n> >> > ../postgres-build/tmp_install/usr/local/pgsql/bin/postgres | grep \n> >> -B 30\n> >> > -A 10 crcc` is attached.\n> >>\n> >> Thanks for confirming.\n> >>\n> >> -- \n> >> John Naylor\n> >> EDB: http://www.enterprisedb.com <http://www.enterprisedb.com>\n> \n\r\n\r\n本邮件及其附件含有龙芯中科的商业秘密信息,仅限于发送给上面地址中列出的个人或群组。禁止任何其他人以任何形式使用(包括但不限于全部或部分地泄露、复制或散发)本邮件及其附件中的信息。如果您错收本邮件,请您立即电话或邮件通知发件人并删除本邮件。 \r\nThis email and its attachments contain confidential information from Loongson Technology , which is intended only for the person or entity whose address is listed above. Any use of the information contained herein in any way (including, but not limited to, total or partial disclosure, reproduction or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive this email in error, please notify the sender by phone or email immediately and delete it. ",
"msg_date": "Thu, 6 Jul 2023 15:14:08 +0800 (GMT+08:00)",
"msg_from": "huchangqi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "> On 6 Jul 2023, at 09:14, huchangqi <[email protected]> wrote:\n> \n> Hi, i have a loongarch machine runing Loongnix-server (based on redhat/centos, it has gcc-8.3 on it), i am trying to test buildfarm on it, when i edit build-farm.conf it seems to need animal and secret to connect the buildfarm server.\n> then i go to https://buildfarm.postgresql.org/cgi-bin/register-form.pl, and registered the buildfarm a week ago, but didn't receive any response. so what else i need to do next.\n\nThanks for volunteering a buildfarm animal! The registration is probably just\npending due to it being summer and things slow down, but I've added Andrew\nDunstan who is the Buildfarm expert on CC:.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 11:48:38 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "\n\nOn 2023/7/6 15:14, huchangqi wrote:\n> Hi, i have a loongarch machine runing Loongnix-server (based on redhat/centos, it has gcc-8.3 on it), i am trying to test buildfarm on it, when i edit build-farm.conf it seems to need animal and secret to connect the buildfarm server.\n> then i go to https://buildfarm.postgresql.org/cgi-bin/register-form.pl, and registered the buildfarm a week ago, but didn't receive any response. so what else i need to do next.\n> \n> \n\nIs it possible to provide a build farm instance for new world ABI of \nloongarch also by loongson? It will be really appreciated.\n\nThanks!\n\n> \n>> -----Original Messages-----\n>> From: \"YANG Xudong\" <[email protected]>\n>> Send time:Wednesday, 07/05/2023 10:15:51\n>> To: \"John Naylor\" <[email protected]>\n>> Cc: [email protected], [email protected], [email protected]\n>> Subject: Re: [PATCH] Add loongarch native checksum implementation.\n>>\n>> Is there any other comment?\n>>\n>> If the patch looks OK, I would like to update its status to ready for\n>> committer in the commitfest.\n>>\n>> Thanks!\n>>\n>> On 2023/6/16 09:28, YANG Xudong wrote:\n>>> Updated the patch based on the comments.\n>>>\n>>> On 2023/6/15 18:30, John Naylor wrote:\n>>>>\n>>>> On Wed, Jun 14, 2023 at 9:20 AM YANG Xudong <[email protected]\n>>>> <mailto:[email protected]>> wrote:\n>>>> >\n>>>> > Attached a new patch with fixes based on the comment below.\n>>>>\n>>>> Note: It's helpful to pass \"-v\" to git format-patch, to have different\n>>>> versions.\n>>>>\n>>>\n>>> Added v2\n>>>\n>>>> > > For x86 and Arm, if it fails to link without an -march flag, we\n>>>> allow\n>>>> > > for a runtime check. The flags \"-march=armv8-a+crc\" and\n>>>> \"-msse4.2\" are\n>>>> > > for instructions not found on all platforms. The patch also\n>>>> checks both\n>>>> > > ways, and each one results in \"Use LoongArch CRC instruction\n>>>> > > unconditionally\". The -march flag here is general, not specific. In\n>>>> > > other words, if this only runs inside \"+elif host_cpu ==\n>>>> 'loongarch64'\",\n>>>> > > why do we need both with -march and without?\n>>>> > >\n>>>> >\n>>>> > Removed the elif branch.\n>>>>\n>>>> Okay, since we've confirmed that no arch flag is necessary, some other\n>>>> places can be simplified:\n>>>>\n>>>> --- a/src/port/Makefile\n>>>> +++ b/src/port/Makefile\n>>>> @@ -98,6 +98,11 @@ pg_crc32c_armv8.o: CFLAGS+=$(CFLAGS_CRC)\n>>>> pg_crc32c_armv8_shlib.o: CFLAGS+=$(CFLAGS_CRC)\n>>>> pg_crc32c_armv8_srv.o: CFLAGS+=$(CFLAGS_CRC)\n>>>>\n>>>> +# all versions of pg_crc32c_loongarch.o need CFLAGS_CRC\n>>>> +pg_crc32c_loongarch.o: CFLAGS+=$(CFLAGS_CRC)\n>>>> +pg_crc32c_loongarch_shlib.o: CFLAGS+=$(CFLAGS_CRC)\n>>>> +pg_crc32c_loongarch_srv.o: CFLAGS+=$(CFLAGS_CRC)\n>>>>\n>>>> This was copy-and-pasted from platforms that use a runtime check, so\n>>>> should be unnecessary.\n>>>>\n>>>\n>>> Removed these lines.\n>>>\n>>>> +# If the intrinsics are supported, sets\n>>>> pgac_loongarch_crc32c_intrinsics,\n>>>> +# and CFLAGS_CRC.\n>>>>\n>>>> +# Check if __builtin_loongarch_crcc_* intrinsics can be used\n>>>> +# with the default compiler flags.\n>>>> +# CFLAGS_CRC is set if the extra flag is required.\n>>>>\n>>>> Same here -- it seems we don't need to set CFLAGS_CRC at all. Can you\n>>>> confirm?\n>>>>\n>>>\n>>> We don't need to set CFLAGS_CRC as commented. I have updated the\n>>> configure script to make it align with the logic in meson build script.\n>>>\n>>>> > > Also, I don't have a Loongarch machine for testing. 
Could you\n>>>> show that\n>>>> > > the instructions are found in the binary, maybe using objdump and\n>>>> grep?\n>>>> > > Or a performance test?\n>>>> > >\n>>>> >\n>>>> > The output of the objdump command `objdump -dS\n>>>> > ../postgres-build/tmp_install/usr/local/pgsql/bin/postgres | grep\n>>>> -B 30\n>>>> > -A 10 crcc` is attached.\n>>>>\n>>>> Thanks for confirming.\n>>>>\n>>>> -- \n>>>> John Naylor\n>>>> EDB: http://www.enterprisedb.com <http://www.enterprisedb.com>\n>>\n> \n> \n> 本邮件及其附件含有龙芯中科的商业秘密信息,仅限于发送给上面地址中列出的个人或群组。禁止任何其他人以任何形式使用(包括但不限于全部或部分地泄露、复制或散发)本邮件及其附件中的信息。如果您错收本邮件,请您立即电话或邮件通知发件人并删除本邮件。\n> This email and its attachments contain confidential information from Loongson Technology , which is intended only for the person or entity whose address is listed above. Any use of the information contained herein in any way (including, but not limited to, total or partial disclosure, reproduction or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive this email in error, please notify the sender by phone or email immediately and delete it.\n\n\n",
"msg_date": "Thu, 6 Jul 2023 18:30:30 +0800",
"msg_from": "YANG Xudong <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "Many thanks to huchangqi. Now we have loongarch64 support for both old \nworld ABI and new world ABI on the buildfarm!\n\n-------- Forwarded Message --------\nSubject: Re: [PATCH] Add loongarch native checksum implementation.\nDate: Tue, 25 Jul 2023 15:51:43 +0800\nFrom: huchangqi <[email protected]>\nTo: YANG Xudong <[email protected]>\n\nBoth cisticola and nuthatch are on the buildfarm now。\n\ncisticola is \"old world ABI\".\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=cisticola&br=HEAD\n\nnuthatch is \"new world ABI\".\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=nuthatch&br=HEAD\n\n----------\nBest regards,\nhuchangqi\n\n\nOn 2023/7/5 15:11, John Naylor wrote:\n >\n > Before I look at this again: Are there any objections to another CRC\n > implementation for the reason of having no buildfarm member?\n\nIt is possible to try this patch on buildfarm now, I guess?\n\n\n",
"msg_date": "Wed, 26 Jul 2023 09:25:42 +0800",
"msg_from": "YANG Xudong <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "On Wed, Jul 05, 2023 at 02:11:02PM +0700, John Naylor wrote:\n> Also, please don't top-post (which means: quoting an entire message, with\n> new text at the top) -- it clutters our archives.\n> \n> Before I look at this again: Are there any objections to another CRC\n> implementation for the reason of having no buildfarm member?\n\nThe performance numbers presented upthread for the CRC computations\nare kind of nice in this environment, but honestly I have no idea how\nmuch this architecture is used. Perhaps that's only something in\nChina? I am not seeing much activity around that in Japan, for\ninstance, and that's really close.\n\nAnyway, based on today's state of the buildfarm, we have a buildfarm\nmember named cisticola that should be able to test this new CRC\nimplementation, so I see no problem in applying this stuff now if you\nthink it is in good shape.\n--\nMichael",
"msg_date": "Wed, 26 Jul 2023 12:16:28 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "On Wed, Jul 26, 2023 at 12:16:28PM +0900, Michael Paquier wrote:\n> On Wed, Jul 05, 2023 at 02:11:02PM +0700, John Naylor wrote:\n>> Before I look at this again: Are there any objections to another CRC\n>> implementation for the reason of having no buildfarm member?\n>\n> [ ... ] \n> \n> Anyway, based on today's state of the buildfarm, we have a buildfarm\n> member named cisticola that should be able to test this new CRC\n> implementation, so I see no problem in applying this stuff now if you\n> think it is in good shape.\n\nIMHO we should strive to maintain buildfarm coverage for all the\ninstrinsics used within Postgres, if for no other reason than to ensure\nfuture changes do not break those platforms.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 25 Jul 2023 21:37:07 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "On 2023/7/26 11:16, Michael Paquier wrote> The performance numbers \npresented upthread for the CRC computations\n> are kind of nice in this environment, but honestly I have no idea how\n> much this architecture is used. Perhaps that's only something in\n> China? I am not seeing much activity around that in Japan, for\n> instance, and that's really close.\n\nThe architecture is pretty new (to open source ecosystem). The support \nof it in Linux kernel and GCC were released last year.\n\nHere is a site about the status of major open source project support of \nit. (Chinese only)\n\nhttps://areweloongyet.com/\n\n\n",
"msg_date": "Wed, 26 Jul 2023 13:36:16 +0800",
"msg_from": "YANG Xudong <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "On Wed, Jul 26, 2023 at 8:25 AM YANG Xudong <[email protected]> wrote:\n>\n> Many thanks to huchangqi. Now we have loongarch64 support for both old\n> world ABI and new world ABI on the buildfarm!\n\nGlad to hear it!\n\nOn Wed, Jul 26, 2023 at 10:16 AM Michael Paquier <[email protected]>\nwrote:\n>\n> The performance numbers presented upthread for the CRC computations\n> are kind of nice in this environment, but honestly I have no idea how\n> much this architecture is used. Perhaps that's only something in\n> China? I am not seeing much activity around that in Japan, for\n> instance, and that's really close.\n\nThat was my impression as well. My thinking was, we can give the same\ntreatment that we gave Arm a number of years ago (which is now quite\nmainstream).\n\n> Anyway, based on today's state of the buildfarm, we have a buildfarm\n> member named cisticola that should be able to test this new CRC\n> implementation, so I see no problem in applying this stuff now if you\n> think it is in good shape.\n\nI believe there was just a comment that needed updating, so I'll do that\nand push within a few days.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Jul 26, 2023 at 8:25 AM YANG Xudong <[email protected]> wrote:>> Many thanks to huchangqi. Now we have loongarch64 support for both old> world ABI and new world ABI on the buildfarm!Glad to hear it!On Wed, Jul 26, 2023 at 10:16 AM Michael Paquier <[email protected]> wrote:>> The performance numbers presented upthread for the CRC computations> are kind of nice in this environment, but honestly I have no idea how> much this architecture is used. Perhaps that's only something in> China? I am not seeing much activity around that in Japan, for> instance, and that's really close.That was my impression as well. My thinking was, we can give the same treatment that we gave Arm a number of years ago (which is now quite mainstream).> Anyway, based on today's state of the buildfarm, we have a buildfarm> member named cisticola that should be able to test this new CRC> implementation, so I see no problem in applying this stuff now if you> think it is in good shape.I believe there was just a comment that needed updating, so I'll do that and push within a few days.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 26 Jul 2023 14:38:54 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "On Fri, Jun 16, 2023 at 8:28 AM YANG Xudong <[email protected]> wrote:\n> > +# If the intrinsics are supported, sets\npgac_loongarch_crc32c_intrinsics,\n> > +# and CFLAGS_CRC.\n> >\n> > +# Check if __builtin_loongarch_crcc_* intrinsics can be used\n> > +# with the default compiler flags.\n> > +# CFLAGS_CRC is set if the extra flag is required.\n> >\n> > Same here -- it seems we don't need to set CFLAGS_CRC at all. Can you\n> > confirm?\n> >\n>\n> We don't need to set CFLAGS_CRC as commented. I have updated the\n> configure script to make it align with the logic in meson build script.\n\n(Looking again at v2)\n\nThe compilation test is found in c-compiler.m4, which still has all logic\nfor CFLAGS_CRC, including saving and restoring the old CFLAGS. Can this\nalso be simplified?\n\nI diff'd pg_crc32c_loongarch.c with the current other files, and found it\nis structurally the same as the Arm implementation. That's logical if\nmemory alignment is important.\n\n /*\n- * ARMv8 doesn't require alignment, but aligned memory access is\n- * significantly faster. Process leading bytes so that the loop below\n- * starts with a pointer aligned to eight bytes.\n+ * Aligned memory access is significantly faster.\n+ * Process leading bytes so that the loop below starts with a pointer\naligned to eight bytes.\n\nCan you confirm the alignment requirement -- it's not clear what the\nintention is since \"doesn't require\" wasn't carried over. Is there any\ndocumentation (or even a report in some other context) about aligned vs\nunaligned memory access performance?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Fri, Jun 16, 2023 at 8:28 AM YANG Xudong <[email protected]> wrote:> > +# If the intrinsics are supported, sets pgac_loongarch_crc32c_intrinsics,> > +# and CFLAGS_CRC.> >> > +# Check if __builtin_loongarch_crcc_* intrinsics can be used> > +# with the default compiler flags.> > +# CFLAGS_CRC is set if the extra flag is required.> >> > Same here -- it seems we don't need to set CFLAGS_CRC at all. Can you> > confirm?> >>> We don't need to set CFLAGS_CRC as commented. I have updated the> configure script to make it align with the logic in meson build script.(Looking again at v2)The compilation test is found in c-compiler.m4, which still has all logic for CFLAGS_CRC, including saving and restoring the old CFLAGS. Can this also be simplified?I diff'd pg_crc32c_loongarch.c with the current other files, and found it is structurally the same as the Arm implementation. That's logical if memory alignment is important. \t/*-\t * ARMv8 doesn't require alignment, but aligned memory access is-\t * significantly faster. Process leading bytes so that the loop below-\t * starts with a pointer aligned to eight bytes.+\t * Aligned memory access is significantly faster.+\t * Process leading bytes so that the loop below starts with a pointer aligned to eight bytes.Can you confirm the alignment requirement -- it's not clear what the intention is since \"doesn't require\" wasn't carried over. Is there any documentation (or even a report in some other context) about aligned vs unaligned memory access performance?--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 7 Aug 2023 18:01:16 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "Thanks for the comment. I have updated the patch to v3. Please have a look.\n\n\nOn 2023/8/7 19:01, John Naylor wrote:\n> \n> On Fri, Jun 16, 2023 at 8:28 AM YANG Xudong <[email protected] \n> <mailto:[email protected]>> wrote:\n> > > +# If the intrinsics are supported, sets \n> pgac_loongarch_crc32c_intrinsics,\n> > > +# and CFLAGS_CRC.\n> > >\n> > > +# Check if __builtin_loongarch_crcc_* intrinsics can be used\n> > > +# with the default compiler flags.\n> > > +# CFLAGS_CRC is set if the extra flag is required.\n> > >\n> > > Same here -- it seems we don't need to set CFLAGS_CRC at all. Can you\n> > > confirm?\n> > >\n> >\n> > We don't need to set CFLAGS_CRC as commented. I have updated the\n> > configure script to make it align with the logic in meson build script.\n> \n> (Looking again at v2)\n> \n> The compilation test is found in c-compiler.m4, which still has all \n> logic for CFLAGS_CRC, including saving and restoring the old CFLAGS. Can \n> this also be simplified?\n\nFixed the function in c-compiler.m4 by removing the function argument \nand the logic of handling CFLAGS and CFLAGS_CRC.\n\n\n> \n> I diff'd pg_crc32c_loongarch.c with the current other files, and found \n> it is structurally the same as the Arm implementation. That's logical if \n> memory alignment is important.\n> \n> /*\n> - * ARMv8 doesn't require alignment, but aligned memory access is\n> - * significantly faster. Process leading bytes so that the loop below\n> - * starts with a pointer aligned to eight bytes.\n> + * Aligned memory access is significantly faster.\n> + * Process leading bytes so that the loop below starts with a pointer \n> aligned to eight bytes.\n> \n> Can you confirm the alignment requirement -- it's not clear what the \n> intention is since \"doesn't require\" wasn't carried over. Is there any \n> documentation (or even a report in some other context) about aligned vs \n> unaligned memory access performance?\n\nIt is in the official document that the alignment is not required.\n\nhttps://github.com/loongson/la-softdev-convention/blob/master/la-softdev-convention.adoc#74-unaligned-memory-access-support\n\n\nHowever, I found this patch in LKML that shows great performance gain \nwhen using aligned memory access similar to this patch.\n\nhttps://lore.kernel.org/lkml/[email protected]/\n\nSo I guess using aligned memory access is necessary and I have updated \nthe comment in the code.\n\n\n> \n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com <http://www.enterprisedb.com>",
"msg_date": "Tue, 8 Aug 2023 11:06:57 +0800",
"msg_from": "YANG Xudong <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "On Tue, Aug 8, 2023 at 10:07 AM YANG Xudong <[email protected]> wrote:\n\n> On 2023/8/7 19:01, John Naylor wrote:\n\n> > The compilation test is found in c-compiler.m4, which still has all\n> > logic for CFLAGS_CRC, including saving and restoring the old CFLAGS. Can\n> > this also be simplified?\n>\n> Fixed the function in c-compiler.m4 by removing the function argument\n> and the logic of handling CFLAGS and CFLAGS_CRC.\n\nLooks good to me. It seems that platforms capable of running Postgres only\nsupport 64 bit. If that ever changes, the compiler intrinsic test (with 8\nbyte CRC input) should still gate that well enough in autoconf, I believe,\nso in v4 I added a comment to clarify this. The Meson build checks hostcpu\nfirst for all platforms, and the patch is consistent with surrounding code.\nIn the attached 0002 addendum, I change a comment in configure.ac to\nclarify \"override\" is referring to the runtime check for x86 and Arm, and\nthat LoongArch doesn't need one.\n\n> > Can you confirm the alignment requirement -- it's not clear what the\n> > intention is since \"doesn't require\" wasn't carried over. Is there any\n> > documentation (or even a report in some other context) about aligned vs\n> > unaligned memory access performance?\n>\n> It is in the official document that the alignment is not required.\n>\n>\nhttps://github.com/loongson/la-softdev-convention/blob/master/la-softdev-convention.adoc#74-unaligned-memory-access-support\n>\n>\n> However, I found this patch in LKML that shows great performance gain\n> when using aligned memory access similar to this patch.\n>\n> https://lore.kernel.org/lkml/[email protected]/\n>\n> So I guess using aligned memory access is necessary and I have updated\n> the comment in the code.\n\nOkay, so it's not \"necessary\" in the sense that it's illegal, so I'm\nthinking we can just re-use the Arm comment language, as in 0002.\n\nv4 0001 is the same as v3, but with a draft commit message. I will squash\nand commit this week, unless there is additional feedback.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 8 Aug 2023 13:38:25 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "\nOn 2023/8/8 14:38, John Naylor wrote:\n> \n> It seems that platforms capable of running Postgres \n> only support 64 bit.\n\nI think so.\n\n\n> > So I guess using aligned memory access is necessary and I have updated\n> > the comment in the code.\n> \n> Okay, so it's not \"necessary\" in the sense that it's illegal, so I'm \n> thinking we can just re-use the Arm comment language, as in 0002.\n\nYes. I think it is similar to Arm.\n\n\n> v4 0001 is the same as v3, but with a draft commit message. I will \n> squash and commit this week, unless there is additional feedback.\n\nLooks good to me. Thanks for the additional patch.\n\n\n",
"msg_date": "Tue, 8 Aug 2023 15:22:34 +0800",
"msg_from": "YANG Xudong <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "On Tue, Aug 8, 2023 at 2:22 PM YANG Xudong <[email protected]> wrote:\n>\n> On 2023/8/8 14:38, John Naylor wrote:\n>\n> > v4 0001 is the same as v3, but with a draft commit message. I will\n> > squash and commit this week, unless there is additional feedback.\n>\n> Looks good to me. Thanks for the additional patch.\n\nI pushed this with another small comment change. Unfortunately, I didn't\nglance at the buildfarm beforehand -- it seems many members are failing an\nisolation check added by commit fa2e87494, including both loongarch64\nmembers. I'll check back periodically.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_status.pl\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Tue, Aug 8, 2023 at 2:22 PM YANG Xudong <[email protected]> wrote:>> On 2023/8/8 14:38, John Naylor wrote:>> > v4 0001 is the same as v3, but with a draft commit message. I will> > squash and commit this week, unless there is additional feedback.>> Looks good to me. Thanks for the additional patch.I pushed this with another small comment change. Unfortunately, I didn't glance at the buildfarm beforehand -- it seems many members are failing an isolation check added by commit fa2e87494, including both loongarch64 members. I'll check back periodically.https://buildfarm.postgresql.org/cgi-bin/show_status.pl--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 10 Aug 2023 12:04:48 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "On Thu, Aug 10, 2023 at 10:35 AM John Naylor\n<[email protected]> wrote:\n>\n> On Tue, Aug 8, 2023 at 2:22 PM YANG Xudong <[email protected]> wrote:\n> >\n> > On 2023/8/8 14:38, John Naylor wrote:\n> >\n> > > v4 0001 is the same as v3, but with a draft commit message. I will\n> > > squash and commit this week, unless there is additional feedback.\n> >\n> > Looks good to me. Thanks for the additional patch.\n>\n> I pushed this with another small comment change. Unfortunately, I didn't glance at the buildfarm beforehand -- it seems many members are failing an isolation check added by commit fa2e87494, including both loongarch64 members. I'll check back periodically.\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_status.pl\n>\n\nIn MSVC build, on doing: perl mkvcbuild.pl after this commit, I am\nfacing the below error:\nGenerating configuration headers...\nundefined symbol: USE_LOONGARCH_CRC32C at src/include/pg_config.h line\n718 at ../postgresql/src/tools/msvc/Mkvcbuild.pm line 872.\n\nAm I missing something or did the commit miss something?\n\n--\nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 10 Aug 2023 15:56:37 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "On Thu, Aug 10, 2023 at 03:56:37PM +0530, Amit Kapila wrote:\n> In MSVC build, on doing: perl mkvcbuild.pl after this commit, I am\n> facing the below error:\n> Generating configuration headers...\n> undefined symbol: USE_LOONGARCH_CRC32C at src/include/pg_config.h line\n> 718 at ../postgresql/src/tools/msvc/Mkvcbuild.pm line 872.\n> \n> Am I missing something or did the commit miss something?\n\nYes, the commit has missed the addition of USE_LOONGARCH_CRC32C in\nSolution.pm. If you want to be consistent with pg_config.h.in, you\ncould add it just after USE_LLVM, for instance.\n--\nMichael",
"msg_date": "Thu, 10 Aug 2023 19:54:41 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "On Thu, Aug 10, 2023 at 5:54 PM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Aug 10, 2023 at 03:56:37PM +0530, Amit Kapila wrote:\n> > In MSVC build, on doing: perl mkvcbuild.pl after this commit, I am\n> > facing the below error:\n> > Generating configuration headers...\n> > undefined symbol: USE_LOONGARCH_CRC32C at src/include/pg_config.h line\n> > 718 at ../postgresql/src/tools/msvc/Mkvcbuild.pm line 872.\n> >\n> > Am I missing something or did the commit miss something?\n>\n> Yes, the commit has missed the addition of USE_LOONGARCH_CRC32C in\n> Solution.pm. If you want to be consistent with pg_config.h.in, you\n> could add it just after USE_LLVM, for instance.\n\nOops, fixing now...\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Aug 10, 2023 at 5:54 PM Michael Paquier <[email protected]> wrote:>> On Thu, Aug 10, 2023 at 03:56:37PM +0530, Amit Kapila wrote:> > In MSVC build, on doing: perl mkvcbuild.pl after this commit, I am> > facing the below error:> > Generating configuration headers...> > undefined symbol: USE_LOONGARCH_CRC32C at src/include/pg_config.h line> > 718 at ../postgresql/src/tools/msvc/Mkvcbuild.pm line 872.> >> > Am I missing something or did the commit miss something?>> Yes, the commit has missed the addition of USE_LOONGARCH_CRC32C in> Solution.pm. If you want to be consistent with pg_config.h.in, you> could add it just after USE_LLVM, for instance.Oops, fixing now...--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 10 Aug 2023 18:37:24 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
},
{
"msg_contents": "On Thu, Aug 10, 2023 at 5:07 PM John Naylor\n<[email protected]> wrote:\n>\n> On Thu, Aug 10, 2023 at 5:54 PM Michael Paquier <[email protected]> wrote:\n> >\n> > On Thu, Aug 10, 2023 at 03:56:37PM +0530, Amit Kapila wrote:\n> > > In MSVC build, on doing: perl mkvcbuild.pl after this commit, I am\n> > > facing the below error:\n> > > Generating configuration headers...\n> > > undefined symbol: USE_LOONGARCH_CRC32C at src/include/pg_config.h line\n> > > 718 at ../postgresql/src/tools/msvc/Mkvcbuild.pm line 872.\n> > >\n> > > Am I missing something or did the commit miss something?\n> >\n> > Yes, the commit has missed the addition of USE_LOONGARCH_CRC32C in\n> > Solution.pm. If you want to be consistent with pg_config.h.in, you\n> > could add it just after USE_LLVM, for instance.\n>\n> Oops, fixing now...\n>\n\nIt is fixed now. Thanks!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 11 Aug 2023 08:13:28 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add loongarch native checksum implementation."
}
] |
[
{
"msg_contents": "Greetings, everyone!\n\nWhile working on an extension I've found an error in how length of \nencoded base64 string is calulated;\n\nThis error is present in 3 files across all supported versions:\n\n/src/common/base64.c, function pg_b64_enc_len;\n/src/backend/utils/adt/encode.c, function pg_base64_enc_len;\n/contrib/pgcrypto/pgp-armor.c, function pg_base64_enc_len (copied from \nencode.c).\n\nIn all three cases the length is calculated as follows:\n\n(srclen + 2) * 4 / 3; (plus linefeed in latter two cases)\n\nThere's also a comment /* 3 bytes will be converted to 4 */\n\nThis formula is wrong. Let's calculate encoded length for different \nstarting lengths:\n\nstarting length 2: (2 + 2) * 4 / 3 = 5,\nstarting length 3: (3 + 2) * 4 / 3 = 6,\nstarting length 4: (4 + 2) * 4 / 3 = 8,\nstarting length 6: (6 + 2) * 4 / 3 = 10,\nstarting length 10: (10 + 2) * 4 / 3 = 16,\n\nwhen it should be 4, 4, 8, 8, 16.\n\nSo the suggestion is to change the formula to a right one: (srclen + 2) \n/ 3 * 4;\n\nThe patch is attached.\n\nOleg Tselebrovskiy, Postgres Pro",
"msg_date": "Thu, 08 Jun 2023 14:53:28 +0700",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Error in calculating length of encoded base64 string"
},
{
"msg_contents": "[email protected] writes:\n> While working on an extension I've found an error in how length of \n> encoded base64 string is calulated;\n\nYeah, I think you're right. It's not of huge significance, because\nit just overestimates by 1 or 2 bytes, but we might as well get\nit right. Thanks for the report and patch!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Jun 2023 10:35:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Error in calculating length of encoded base64 string"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 7:35 AM Tom Lane <[email protected]> wrote:\n>\n> [email protected] writes:\n> > While working on an extension I've found an error in how length of\n> > encoded base64 string is calulated;\n>\n> Yeah, I think you're right. It's not of huge significance, because\n> it just overestimates by 1 or 2 bytes, but we might as well get\n> it right. Thanks for the report and patch!\n\n From your commit d98ed080bb\n\n> This bug is very ancient, dating to commit 79d78bb26 which\n> added encode.c. (The other instances were presumably copied\n> from there.) Still, it doesn't quite seem worth back-patching.\n\nIs it worth investing time in trying to unify these 3 occurrences of\nbase64 length (and possibly other relevant) code to one place? If yes,\nI can volunteer for it.\n\nThe common code facility under src/common/ did not exist back when\npgcrypto was added, but since it does now, it may be worth it make\nothers depend on implementation in src/common/ code.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Thu, 8 Jun 2023 23:10:03 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Error in calculating length of encoded base64 string"
},
{
"msg_contents": "Gurjeet Singh <[email protected]> writes:\n> On Thu, Jun 8, 2023 at 7:35 AM Tom Lane <[email protected]> wrote:\n>> This bug is very ancient, dating to commit 79d78bb26 which\n>> added encode.c. (The other instances were presumably copied\n>> from there.) Still, it doesn't quite seem worth back-patching.\n\n> Is it worth investing time in trying to unify these 3 occurrences of\n> base64 length (and possibly other relevant) code to one place? If yes,\n> I can volunteer for it.\n\nI wondered about that too. It seems really silly that we made\na copy in src/common and did not replace the others with calls\nto that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Jun 2023 02:13:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Error in calculating length of encoded base64 string"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> Gurjeet Singh <[email protected]> writes:\n>> On Thu, Jun 8, 2023 at 7:35 AM Tom Lane <[email protected]> wrote:\n>>> This bug is very ancient, dating to commit 79d78bb26 which\n>>> added encode.c. (The other instances were presumably copied\n>>> from there.) Still, it doesn't quite seem worth back-patching.\n>\n>> Is it worth investing time in trying to unify these 3 occurrences of\n>> base64 length (and possibly other relevant) code to one place? If yes,\n>> I can volunteer for it.\n>\n> I wondered about that too. It seems really silly that we made\n> a copy in src/common and did not replace the others with calls\n> to that.\n\nAlso, while we're at it, how about some unit tests that both encode and\ncalculate the encoded length of strings of various lengths and check\nthat they match?\n\n- ilmari\n\n\n",
"msg_date": "Fri, 09 Jun 2023 11:26:38 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Error in calculating length of encoded base64 string"
},
{
"msg_contents": "On 2023-Jun-09, Tom Lane wrote:\n\n> Gurjeet Singh <[email protected]> writes:\n> > On Thu, Jun 8, 2023 at 7:35 AM Tom Lane <[email protected]> wrote:\n> >> This bug is very ancient, dating to commit 79d78bb26 which\n> >> added encode.c. (The other instances were presumably copied\n> >> from there.) Still, it doesn't quite seem worth back-patching.\n> \n> > Is it worth investing time in trying to unify these 3 occurrences of\n> > base64 length (and possibly other relevant) code to one place? If yes,\n> > I can volunteer for it.\n> \n> I wondered about that too. It seems really silly that we made\n> a copy in src/common and did not replace the others with calls\n> to that.\n\nI looked into this. It turns out that there is a difference in newline\nhandling in the other routines compared to what was added for SCRAM,\nwhich doesn't have any (and complains if you supply them). Peter E\ndid suggest to unify them at the time:\nhttps://www.postgresql.org/message-id/947b9aff-8fdb-dbf5-a99c-0ffd4523a73f%402ndquadrant.com\n\nWe could add a boolean \"whitespace\" flag to both of\nsrc/common/base64.c's pg_b64_encode() and pg_b64_decode(); with that I\nthink it could serve the three places that need it.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Sat, 26 Aug 2023 19:43:31 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Error in calculating length of encoded base64 string"
}
] |
[
{
"msg_contents": "Hi, We are Query Tricks.\nWe are a project team created to provide better usability for PostgreSQL\nDBAs and users.\nand I'm Hyunhee Ryu, a member of the project team.\n\nThere is something I would like you to consider introducing in a new\nversion of the release.\nThis is related to \\d+ table_name and \\d+ index_name in psql, especially\nrelated to lookup lists in partition tables.\nWe conducted the test based on PostgreSQL 14, 15 version.\n\nThe existing partition table list is printed in this format.\n-- Current Partition Table List\npostgres=# \\d+ p_quarter_check\n Partitioned table\n\"public.p_quarter_check\"\n Column | Type | Collation | Nullable | Default | Storage\n | Compression | Stats target | Description\n--------+-----------------------+-----------+----------+---------+----------+-------------+--------------+-------------\n id | integer | | not null | | plain\n | | |\n dept | character varying(10) | | | | extended\n| | |\n name | character varying(20) | | | | extended\n| | |\n in_d | date | | not null | | plain\n | | |\n etc | text | | | | extended\n| | |\nPartition key: RANGE (in_d)\nIndexes:\n \"parent_idx01\" btree (id)\nPartitions: in_p_q1 FOR VALUES FROM ('2023-01-01') TO ('2023-04-01'),\nPARTITIONED,\n in_p_q2 FOR VALUES FROM ('2023-04-01') TO ('2023-07-01'),\nPARTITIONED,\n in_p_q3 FOR VALUES FROM ('2023-07-01') TO ('2023-10-01'),\nPARTITIONED,\n in_p_q4 FOR VALUES FROM ('2023-10-01') TO ('2024-01-01'),\nPARTITIONED\n\nIt doesn't matter in the normal partition structure, but I felt\nuncomfortable looking up the list when there were additional subpartitions.\nSo to improve this inconvenience, I wrote an SQL query to query the\npartition table and partition index in the format below when querying the\npartition table and partition index in psql.\n\n-- After Patch Partition Table List\npostgres=# \\d+ p_quarter_check\n Partitioned table\n\"public.p_quarter_check\"\n Column | Type | Collation | Nullable | Default | Storage\n | Compression | Stats target | Description\n--------+-----------------------+-----------+----------+---------+----------+-------------+--------------+-------------\n id | integer | | not null | | plain\n | | |\n dept | character varying(10) | | | | extended\n| | |\n name | character varying(20) | | | | extended\n| | |\n in_d | date | | not null | | plain\n | | |\n etc | text | | | | extended\n| | |\nPartition key: RANGE (in_d)\nIndexes:\n \"parent_idx01\" btree (id)\nPartitions: in_p_q1 FOR VALUES FROM ('2023-01-01') TO ('2023-04-01'),\nPARTITIONED,\n in_p_y202301 FOR VALUES FROM ('2023-01-01') TO\n('2023-02-01'),\n in_p_y202302 FOR VALUES FROM ('2023-02-01') TO\n('2023-03-01'),\n in_p_y202303 FOR VALUES FROM ('2023-03-01') TO\n('2023-04-01'),\n in_p_q2 FOR VALUES FROM ('2023-04-01') TO ('2023-07-01'),\nPARTITIONED,\n in_p_y202304 FOR VALUES FROM ('2023-04-01') TO\n('2023-05-01'),\n in_p_y202305 FOR VALUES FROM ('2023-05-01') TO\n('2023-06-01'),\n in_p_y202306 FOR VALUES FROM ('2023-06-01') TO\n('2023-07-01'),\n in_p_q3 FOR VALUES FROM ('2023-07-01') TO ('2023-10-01'),\nPARTITIONED,\n in_p_y202307 FOR VALUES FROM ('2023-07-01') TO\n('2023-08-01'),\n in_p_y202308 FOR VALUES FROM ('2023-08-01') TO\n('2023-09-01'),\n in_p_y202309 FOR VALUES FROM ('2023-09-01') TO\n('2023-10-01'),\n in_p_q4 FOR VALUES FROM ('2023-10-01') TO ('2024-01-01'),\nPARTITIONED,\n in_p_y202310 FOR VALUES FROM ('2023-10-01') TO\n('2023-11-01'),\n in_p_y202311 FOR VALUES FROM ('2023-11-01') TO\n('2023-12-01'),\n in_p_y202312 FOR VALUES FROM ('2023-12-01') 
TO\n('2024-01-01')\n\nPartition Index also wrote the SQL syntax so that you can look up the list\nwith an intuitive structure.\n--Current Partition Index\npostgres=# \\d+ parent_idx01\n Partitioned index \"public.parent_idx01\"\n Column | Type | Key? | Definition | Storage | Stats target\n--------+---------+------+------------+---------+--------------\n id | integer | yes | id | plain |\nbtree, for table \"public.p_quarter_check\"\nPartitions: in_p_q1_id_idx, PARTITIONED,\n in_p_q2_id_idx, PARTITIONED,\n in_p_q3_id_idx, PARTITIONED,\n in_p_q4_id_idx, PARTITIONED\nAccess method: btree\n\n-- After Patch Partition Index\npostgres=# \\d+ parent_idx01\n Partitioned index \"public.parent_idx01\"\n Column | Type | Key? | Definition | Storage | Stats target\n--------+---------+------+------------+---------+--------------\n id | integer | yes | id | plain |\nbtree, for table \"public.p_quarter_check\"\nPartitions: in_p_q1_id_idx, PARTITIONED,\n in_p_y202301_id_idx,\n in_p_y202302_id_idx,\n in_p_y202303_id_idx,\n in_p_q2_id_idx, PARTITIONED,\n in_p_y202304_id_idx,\n in_p_y202305_id_idx,\n in_p_y202306_id_idx,\n in_p_q3_id_idx, PARTITIONED,\n in_p_y202307_id_idx,\n in_p_y202308_id_idx,\n in_p_y202309_id_idx,\n in_p_q4_id_idx, PARTITIONED,\n in_p_y202310_id_idx,\n in_p_y202311_id_idx,\n in_p_y202312_id_idx\nAccess method: btree\n\nI attached the queries used to create the partition and the queries I wrote\nto look up the list to the mail.\nThis is the patch applied to line 3370 of the 'describe.c' source file.\nBased on this SQL syntax and patch file, I would like you to review the\nquery \\d+ Partition_table_name and \\d+ Partition_index_name so that the SQL\nis reflected.\n\nIf you are not asking for a review in this way, please let me know how to\nproceed.\nPlease give me a positive answer and I will wait for your feedback.\nHave a nice day.\n\n From Query Tricks / Hyunhee Ryu.",
"msg_date": "Thu, 8 Jun 2023 17:51:39 +0900",
"msg_from": "=?UTF-8?B?7L+866as7Yq466at7Iqk?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "[ psql - review request ] review request for \\d+ tablename, \\d+\n indexname indenting"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nHello\r\n\r\nThank you for the patch and the effort to enhance \\d+ 's output on partitioned tables that contain sub-partitions. However, the patch does not apply and I notice that this patch is generated as a differ file from 2 files, describe.c and describe_change.c. You should use git diff to generate a patch rather than maintaining 2 files yourself. Also I noticed that you include a \"create_object.sql\" file to illustrate the feature, which is not necessary. Instead, you should add them as a regression test cases in the existing regression test suite under \"src/test/regress\", so these will get run as tests to illustrate the feature. This patch changes the output of \\d+ and it could potentially break other test cases so you should fix them in the patch in addition to providing the feature\r\n\r\nNow, regarding the feature, I see that you intent to print the sub partitions' partitions in the output, which is okay in my opinion. However, a sub-partition can also contain another sub-partition, which contains another sub-partition and so on. So it is possible that sub-partitions can span very, very deep. Your example assumes only 1 level of sub-partitions. Are you going to print all of them out in \\d+? If so, it would definitely cluster the output so much that it starts to become annoying. Are you planning to set a limit on how many levels of sub-partitions to print or just let it print as many as it needs?\r\n\r\nthank you\r\n\r\nCary Huang\r\n-----------------------\r\nHighgo Software Canada\r\nwww.highgo.ca",
"msg_date": "Fri, 25 Aug 2023 21:00:43 +0000",
"msg_from": "Cary Huang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ psql - review request ] review request for \\d+ tablename, \\d+\n indexname indenting"
},
{
"msg_contents": "Thank you for letting me know more about the test method.\nAs you said, we applied the patch using git diff and created a test case on\nthe src/test/regress/sql.\nConsidering your question, we think it is enough to assume just one\nsubpartition level.\nBecause, Concidering the common partition configuration methods, we think\nit is rare case to configure subpartitions contains subpartitions.\nSo, we think it would be appropriate to mark up to level 1 of the\nsubpartition when using \\d+.\nIf there subpartitions contains subpartitions, the keyword 'CONTAINS\nSUBPARTITIONS' is added next to the partition name to indicate that the\nsubpartitions contains subpartitions exists.\nThese sources were tested on 14.5, 15.2 and 16 RC versions, respectively.\nIf you have any other opinions on this, please let us know. we will\nactively consider it.\n\nTeam Query Tricks\n---------------------------------------\nquerytricks2023.gmail.com\nQuery Tricks (github.com) <https://github.com/Query-Tricks>\n\n\n\n2023년 8월 26일 (토) 오전 6:01, Cary Huang <[email protected]>님이 작성:\n\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, failed\n> Implements feature: not tested\n> Spec compliant: not tested\n> Documentation: not tested\n>\n> Hello\n>\n> Thank you for the patch and the effort to enhance \\d+ 's output on\n> partitioned tables that contain sub-partitions. However, the patch does not\n> apply and I notice that this patch is generated as a differ file from 2\n> files, describe.c and describe_change.c. You should use git diff to\n> generate a patch rather than maintaining 2 files yourself. Also I noticed\n> that you include a \"create_object.sql\" file to illustrate the feature,\n> which is not necessary. Instead, you should add them as a regression test\n> cases in the existing regression test suite under \"src/test/regress\", so\n> these will get run as tests to illustrate the feature. This patch changes\n> the output of \\d+ and it could potentially break other test cases so you\n> should fix them in the patch in addition to providing the feature\n>\n> Now, regarding the feature, I see that you intent to print the sub\n> partitions' partitions in the output, which is okay in my opinion. However,\n> a sub-partition can also contain another sub-partition, which contains\n> another sub-partition and so on. So it is possible that sub-partitions can\n> span very, very deep. Your example assumes only 1 level of sub-partitions.\n> Are you going to print all of them out in \\d+? If so, it would definitely\n> cluster the output so much that it starts to become annoying. Are you\n> planning to set a limit on how many levels of sub-partitions to print or\n> just let it print as many as it needs?\n>\n> thank you\n>\n> Cary Huang\n> -----------------------\n> Highgo Software Canada\n> www.highgo.ca",
"msg_date": "Tue, 12 Sep 2023 16:27:18 +0900",
"msg_from": "=?UTF-8?B?7L+866as7Yq466at7Iqk?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [ psql - review request ] review request for \\d+ tablename, \\d+\n indexname indenting"
},
{
"msg_contents": "On 12.09.23 09:27, 쿼리트릭스 wrote:\n> Thank you for letting me know more about the test method.\n> As you said, we applied the patch using git diff and created a test case \n> on the src/test/regress/sql.\n\nBecause of the change of the psql output, a lot of existing test cases \nare now failing. You should run \"make check\" and fix up the failures. \nAlso, your new test file \"subpartition_indentation\" isn't actually run \nbecause it was not added to src/test/regress/parallel_schedule. I \nsuspect you probably don't want to add a new test file for this but \ninstead see if the existing tests cover this case.\n\n\n\n",
"msg_date": "Wed, 13 Sep 2023 15:48:50 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ psql - review request ] review request for \\d+ tablename, \\d+\n indexname indenting"
},
{
"msg_contents": "The error was corrected and a new diff file was created.\nThe diff file was created based on 16 RC1.\nWe confirmed that 5 places where errors occurred when performing make check\nwere changed to ok.\n\nTeam Query Tricks\n---------------------------------------\nquerytricks2023.gmail.com\nQuery Tricks(https://github.com/Query-Tricks)\n\n2023년 9월 13일 (수) 오후 10:48, Peter Eisentraut <[email protected]>님이 작성:\n\n> On 12.09.23 09:27, 쿼리트릭스 wrote:\n> > Thank you for letting me know more about the test method.\n> > As you said, we applied the patch using git diff and created a test case\n> > on the src/test/regress/sql.\n>\n> Because of the change of the psql output, a lot of existing test cases\n> are now failing. You should run \"make check\" and fix up the failures.\n> Also, your new test file \"subpartition_indentation\" isn't actually run\n> because it was not added to src/test/regress/parallel_schedule. I\n> suspect you probably don't want to add a new test file for this but\n> instead see if the existing tests cover this case.\n>\n>",
"msg_date": "Tue, 19 Sep 2023 09:19:34 +0900",
"msg_from": "=?UTF-8?B?7L+866as7Yq466at7Iqk?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [ psql - review request ] review request for \\d+ tablename, \\d+\n indexname indenting"
},
{
"msg_contents": "Hi,\n\nOn Mon, 6 Nov 2023 at 13:47, 쿼리트릭스 <[email protected]> wrote:\n>\n> The error was corrected and a new diff file was created.\n> The diff file was created based on 16 RC1.\n> We confirmed that 5 places where errors occurred when performing make check were changed to ok.\n>\n\nI went through Cfbot and still see that some tests are failing.\nlinks:\nhttps://cirrus-ci.com/task/6408253983162368\nhttps://cirrus-ci.com/task/5000879099609088\nhttps://cirrus-ci.com/task/6126779006451712\nhttps://cirrus-ci.com/task/5563829053030400\nhttps://cirrus-ci.com/task/6689728959873024\n\nFailure:\n[16:42:37.674] Summary of Failures:\n[16:42:37.674]\n[16:42:37.674] 5/270 postgresql:regress / regress/regress ERROR 28.88s\nexit status 1\n[16:42:37.674] 7/270 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade\nERROR 46.73s exit status 1\n[16:42:37.674] 56/270 postgresql:recovery /\nrecovery/027_stream_regress ERROR 38.51s exit status 1\n\nThanks\nShlok Kumar Kyal\n\n\n",
"msg_date": "Mon, 6 Nov 2023 13:53:09 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ psql - review request ] review request for \\d+ tablename, \\d+\n indexname indenting"
},
{
"msg_contents": "\tShlok Kyal wrote:\n\n> > The error was corrected and a new diff file was created.\n> > The diff file was created based on 16 RC1.\n> > We confirmed that 5 places where errors occurred when performing\n> > make check were changed to ok.\n\nReviewing the patch, I see these two problems in the current version\n(File: psql-slashDplus-partition-indentation.diff, Date: 2023-09-19 00:19:34) \n\n* There are changes in the regression tests that do not concern this\nfeature and should not be there.\n\nFor instance this hunk:\n\n--- a/src/test/regress/expected/foreign_data.out\n+++ b/src/test/regress/expected/foreign_data.out\n@@ -742,8 +742,6 @@ COMMENT ON COLUMN ft1.c1 IS 'ft1.c1';\n Check constraints:\n \"ft1_c2_check\" CHECK (c2 <> ''::text)\n \"ft1_c3_check\" CHECK (c3 >= '01-01-1994'::date AND c3 <=\n'01-31-1994'::date)\n-Not-null constraints:\n- \"ft1_c1_not_null\" NOT NULL \"c1\"\n Server: s0\n FDW options: (delimiter ',', quote '\"', \"be quoted\" 'value')\n\nIt seems to undo a test for a recent feature adding \"Not-null\nconstraints\" to \\d, which suggests that you've been running tests\nagainst and older version than the source tree you're diffing\nagainst. These should be the same version, and also the latest\none (git HEAD) or as close as possible to the latest when the\npatch is submitted.\n\n* The new query with \\d on partitioned tables does not work with\n Postgres servers 12 or 13:\n\n\npostgres=# CREATE TABLE measurement (\n city_id\t int not null,\n logdate\t date not null,\n peaktemp\t int,\n unitsales\t int\n) PARTITION BY RANGE (logdate);\n\npostgres=# \\d measurement \nERROR:\tsyntax error at or near \".\"\nLINE 2: ... 0 AS level,\tc.relkind,\t false AS i.inhdetach...\n\n\nSetting the CommitFest status to WoA.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Wed, 22 Nov 2023 17:24:43 +0100",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ psql - review request ] review request for \\d+ tablename, \\d+\n indexname indenting"
},
{
"msg_contents": "On Wed, 22 Nov 2023 at 21:54, Daniel Verite <[email protected]> wrote:\n>\n> Shlok Kyal wrote:\n>\n> > > The error was corrected and a new diff file was created.\n> > > The diff file was created based on 16 RC1.\n> > > We confirmed that 5 places where errors occurred when performing\n> > > make check were changed to ok.\n>\n> Reviewing the patch, I see these two problems in the current version\n> (File: psql-slashDplus-partition-indentation.diff, Date: 2023-09-19 00:19:34)\n>\n> * There are changes in the regression tests that do not concern this\n> feature and should not be there.\n>\n> For instance this hunk:\n>\n> --- a/src/test/regress/expected/foreign_data.out\n> +++ b/src/test/regress/expected/foreign_data.out\n> @@ -742,8 +742,6 @@ COMMENT ON COLUMN ft1.c1 IS 'ft1.c1';\n> Check constraints:\n> \"ft1_c2_check\" CHECK (c2 <> ''::text)\n> \"ft1_c3_check\" CHECK (c3 >= '01-01-1994'::date AND c3 <=\n> '01-31-1994'::date)\n> -Not-null constraints:\n> - \"ft1_c1_not_null\" NOT NULL \"c1\"\n> Server: s0\n> FDW options: (delimiter ',', quote '\"', \"be quoted\" 'value')\n>\n> It seems to undo a test for a recent feature adding \"Not-null\n> constraints\" to \\d, which suggests that you've been running tests\n> against and older version than the source tree you're diffing\n> against. These should be the same version, and also the latest\n> one (git HEAD) or as close as possible to the latest when the\n> patch is submitted.\n>\n> * The new query with \\d on partitioned tables does not work with\n> Postgres servers 12 or 13:\n>\n>\n> postgres=# CREATE TABLE measurement (\n> city_id int not null,\n> logdate date not null,\n> peaktemp int,\n> unitsales int\n> ) PARTITION BY RANGE (logdate);\n>\n> postgres=# \\d measurement\n> ERROR: syntax error at or near \".\"\n> LINE 2: ... 0 AS level, c.relkind, false AS i.inhdetach...\n>\n>\n> Setting the CommitFest status to WoA.\n\nI have changed the status of the CommitFest entry to \"Returned with\nFeedback\" as Shlok's and Daniel's suggestions are not handled. Feel\nfree to address them and add a new commitfest entry for the same.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 27 Jan 2024 11:59:24 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ psql - review request ] review request for \\d+ tablename, \\d+\n indexname indenting"
}
] |
[
{
"msg_contents": "I've noticed the planner is not yet smart enough to do an index scan\nwhen the left operand of a contains operator (<@) is a btree-indexed column\nand the right operand is a range or multirange type of the same type\nas the column.\n\nFor instance, given a users table with an id int primary key column,\nthese two queries are functionally equivalent, but only the second\none makes use of the btree index:\n\nSELECT COUNT(*) FROM users WHERE id <@ int4range(10,20);\nSELECT COUNT(*) FROM users WHERE id >= 10 AND id < 20;\n\nMultirange example:\n\nSELECT COUNT(*) FROM users WHERE id <@ int4multirange('{[10,20),[30,40)}');\nSELECT COUNT(*) FROM users WHERE id >= 10 AND id < 20 OR id >= 30 AND id < 40;\n\nI think support for this would open up for some interesting new use-cases,\nwhen range/multirange could be used to store aggregated intermediate IDs\nwhich would then be filtered on using a btree-indexed column.\n\n/Joel\n\n\n\nI've noticed the planner is not yet smart enough to do an index scanwhen the left operand of a contains operator (<@) is a btree-indexed columnand the right operand is a range or multirange type of the same typeas the column.For instance, given a users table with an id int primary key column,these two queries are functionally equivalent, but only the secondone makes use of the btree index:SELECT COUNT(*) FROM users WHERE id <@ int4range(10,20);SELECT COUNT(*) FROM users WHERE id >= 10 AND id < 20;Multirange example:SELECT COUNT(*) FROM users WHERE id <@ int4multirange('{[10,20),[30,40)}');SELECT COUNT(*) FROM users WHERE id >= 10 AND id < 20 OR id >= 30 AND id < 40;I think support for this would open up for some interesting new use-cases,when range/multirange could be used to store aggregated intermediate IDswhich would then be filtered on using a btree-indexed column./Joel",
"msg_date": "Thu, 08 Jun 2023 12:05:44 +0200",
"msg_from": "\"Joel Jacobson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[btree-indexed column] <@ [range | multirange]"
}
] |
[
{
"msg_contents": "Dear Postgres Hackers,\n\nI am writing to seek your guidance and utilization of Valgrind in\nPostgreSQL for detecting memory leaks in extension-related code. Recently,\nI have been exploring ways to improve the stability and performance of\nPostgreSQL extensions by addressing memory-related issues, specifically\nmemory leaks.\nI have come across Valgrind, a tool for detecting memory errors, leaks, and\nother memory-related problems in C/C++ programs. However, I am in need of\nsome guidance on how to effectively use Valgrind within the context of\nPostgreSQL and extensions.\n\nI request your assistance in providing insights on the following:\n\n1. Steps for utilizing Valgrind in PostgreSQL:\n - How do I install Valgrind and integrate it with PostgreSQL?\n - Are there any specific configurations or flags that need to be set for\noptimal usage with PostgreSQL?\n\n2. Techniques for detecting memory leaks in extension-related code:\n - What are the recommended approaches for exercising extension code with\nValgrind?\n - Are there any specific considerations or best practices to keep in\nmind when analyzing Valgrind's output for memory leaks in extension code?\n\nI would appreciate any resources, instructions, or insights you can provide\nregarding the above points.\n\nThanks and regards,\nPradeep\n\nDear Postgres Hackers,I am writing to seek your guidance and utilization of Valgrind in PostgreSQL for detecting memory leaks in extension-related code. Recently, I have been exploring ways to improve the stability and performance of PostgreSQL extensions by addressing memory-related issues, specifically memory leaks.I have come across Valgrind, a tool for detecting memory errors, leaks, and other memory-related problems in C/C++ programs. However, I am in need of some guidance on how to effectively use Valgrind within the context of PostgreSQL and extensions.I request your assistance in providing insights on the following:1. Steps for utilizing Valgrind in PostgreSQL: - How do I install Valgrind and integrate it with PostgreSQL? - Are there any specific configurations or flags that need to be set for optimal usage with PostgreSQL?2. Techniques for detecting memory leaks in extension-related code: - What are the recommended approaches for exercising extension code with Valgrind? - Are there any specific considerations or best practices to keep in mind when analyzing Valgrind's output for memory leaks in extension code?I would appreciate any resources, instructions, or insights you can provide regarding the above points. Thanks and regards,Pradeep",
"msg_date": "Thu, 8 Jun 2023 16:38:39 +0530",
"msg_from": "Pradeep Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Seeking Guidance on Using Valgrind in PostgreSQL for Detecting Memory\n Leaks in Extension Code"
},
{
"msg_contents": "On 08/06/2023 14:08, Pradeep Kumar wrote:\n> I am writing to seek your guidance and utilization of Valgrind in \n> PostgreSQL for detecting memory leaks in extension-related code. \n\nhttps://wiki.postgresql.org/wiki/Valgrind is a good place to start. The \nsame tricks for using Valgrind on PostgreSQL itself should work for \nextensions too.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 8 Jun 2023 18:17:10 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seeking Guidance on Using Valgrind in PostgreSQL for Detecting\n Memory Leaks in Extension Code"
}
] |
[
{
"msg_contents": "Hi,\n\nHere's a WIP patch allowing parallel CREATE INDEX for BRIN indexes. The\ninfrastructure (starting workers etc.) is \"inspired\" by the BTREE code\n(i.e. copied from that and massaged a bit to call brin stuff).\n\n _bt_begin_parallel -> _brin_begin_parallel\n _bt_end_parallel -> _brin_end_parallel\n _bt_parallel_estimate_shared -> _brin_parallel_estimate_shared\n _bt_leader_participate_as_worker -> _brin_leader_participate_as_worker\n _bt_parallel_scan_and_sort -> _brin_parallel_scan_and_build\n\nThis is mostly mechanical stuff - setting up the parallel workers,\nstarting the scan etc.\n\nThe tricky part is how to divide the work between workers and how we\ncombine the partial results. For BTREE we simply let each worker to read\na subset of the table (using a parallel scan), sort it and then do a\nmerge sort on the partial results.\n\nFor BRIN it's a bit different, because the indexes essentially splits\nthe table into smaller ranges and treat them independently. So the\neasiest way is to organize the table scan so that each range gets\nprocessed by exactly one worker. Each worker writes the index tuples\ninto a temporary file, and then when all workers are done we read and\nwrite them into the index.\n\nThe problem is a parallel scan assigns mostly random subset of the table\nto each worker - it's not guaranteed a BRIN page range to be processed\nby a single worker.\n\n\n0001 does that in a bit silly way - instead of doing single large scan,\neach worker does a sequence of TID range scans for each worker (see\n_brin_parallel_scan_and_build), and BrinShared has fields used to track\nwhich ranges were already assigned to workers. A bit cumbersome, but it\nworks pretty well.\n\n0002 replaces the TID range scan sequence with a single parallel scan,\nmodified to assign \"chunks\" in multiple of pagesPerRange.\n\n\nIn both cases _brin_end_parallel then reads the summaries from worker\nfiles, and adds them into the index. In 0001 this is fairly simple,\nalthough we could do one more improvement and sort the ranges by range\nstart to make the index nicer (and possibly a bit more efficient). This\nshould be simple, because the per-worker results are already sorted like\nthat (so a merge sort in _brin_end_parallel would be enough).\n\nFor 0002 it's a bit more complicated, because with a single parallel\nscan brinbuildCallbackParallel can't decide if a range is assigned to a\ndifferent worker or empty. And we want to generate summaries for empty\nranges in the index. We could either skip such range during index build,\nand then add empty summaries in _brin_end_parallel (if needed), or add\nthem and then merge them using \"union\".\n\n\nI just realized there's a third option to do this - we could just do\nregular parallel scan (with no particular regard to pagesPerRange), and\nthen do \"union\" when merging results from workers. It doesn't require\nthe sequence of TID scans, and the union would also handle the empty\nranges. The per-worker results might be much larger, though, because\neach worker might produce up to the \"full\" BRIN index.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 8 Jun 2023 14:55:09 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On Thu, 8 Jun 2023 at 14:55, Tomas Vondra <[email protected]> wrote:\n>\n> Hi,\n>\n> Here's a WIP patch allowing parallel CREATE INDEX for BRIN indexes. The\n> infrastructure (starting workers etc.) is \"inspired\" by the BTREE code\n> (i.e. copied from that and massaged a bit to call brin stuff).\n\nNice work.\n\n> In both cases _brin_end_parallel then reads the summaries from worker\n> files, and adds them into the index. In 0001 this is fairly simple,\n> although we could do one more improvement and sort the ranges by range\n> start to make the index nicer (and possibly a bit more efficient). This\n> should be simple, because the per-worker results are already sorted like\n> that (so a merge sort in _brin_end_parallel would be enough).\n\nI see that you manually built the passing and sorting of tuples\nbetween workers, but can't we use the parallel tuplesort\ninfrastructure for that? It already has similar features in place and\nimproves code commonality.\n\n> For 0002 it's a bit more complicated, because with a single parallel\n> scan brinbuildCallbackParallel can't decide if a range is assigned to a\n> different worker or empty. And we want to generate summaries for empty\n> ranges in the index. We could either skip such range during index build,\n> and then add empty summaries in _brin_end_parallel (if needed), or add\n> them and then merge them using \"union\".\n>\n>\n> I just realized there's a third option to do this - we could just do\n> regular parallel scan (with no particular regard to pagesPerRange), and\n> then do \"union\" when merging results from workers. It doesn't require\n> the sequence of TID scans, and the union would also handle the empty\n> ranges. The per-worker results might be much larger, though, because\n> each worker might produce up to the \"full\" BRIN index.\n\nWould it be too much effort to add a 'min_chunk_size' argument to\ntable_beginscan_parallel (or ParallelTableScanDesc) that defines the\nminimum granularity of block ranges to be assigned to each process? I\nthink that would be the most elegant solution that would require\nrelatively little effort: table_block_parallelscan_nextpage already\ndoes parallel management of multiple chunk sizes, and I think this\nmodification would fit quite well in that code.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Tue, 4 Jul 2023 23:53:41 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "\n\nOn 7/4/23 23:53, Matthias van de Meent wrote:\n> On Thu, 8 Jun 2023 at 14:55, Tomas Vondra <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> Here's a WIP patch allowing parallel CREATE INDEX for BRIN indexes. The\n>> infrastructure (starting workers etc.) is \"inspired\" by the BTREE code\n>> (i.e. copied from that and massaged a bit to call brin stuff).\n> \n> Nice work.\n> \n>> In both cases _brin_end_parallel then reads the summaries from worker\n>> files, and adds them into the index. In 0001 this is fairly simple,\n>> although we could do one more improvement and sort the ranges by range\n>> start to make the index nicer (and possibly a bit more efficient). This\n>> should be simple, because the per-worker results are already sorted like\n>> that (so a merge sort in _brin_end_parallel would be enough).\n> \n> I see that you manually built the passing and sorting of tuples\n> between workers, but can't we use the parallel tuplesort\n> infrastructure for that? It already has similar features in place and\n> improves code commonality.\n> \n\nMaybe. I wasn't that familiar with what parallel tuplesort can and can't\ndo, and the little I knew I managed to forget since I wrote this patch.\nWhich similar features do you have in mind?\n\nThe workers are producing the results in \"start_block\" order, so if they\npass that to the leader, it probably can do the usual merge sort.\n\n>> For 0002 it's a bit more complicated, because with a single parallel\n>> scan brinbuildCallbackParallel can't decide if a range is assigned to a\n>> different worker or empty. And we want to generate summaries for empty\n>> ranges in the index. We could either skip such range during index build,\n>> and then add empty summaries in _brin_end_parallel (if needed), or add\n>> them and then merge them using \"union\".\n>>\n>>\n>> I just realized there's a third option to do this - we could just do\n>> regular parallel scan (with no particular regard to pagesPerRange), and\n>> then do \"union\" when merging results from workers. It doesn't require\n>> the sequence of TID scans, and the union would also handle the empty\n>> ranges. The per-worker results might be much larger, though, because\n>> each worker might produce up to the \"full\" BRIN index.\n> \n> Would it be too much effort to add a 'min_chunk_size' argument to\n> table_beginscan_parallel (or ParallelTableScanDesc) that defines the\n> minimum granularity of block ranges to be assigned to each process? I\n> think that would be the most elegant solution that would require\n> relatively little effort: table_block_parallelscan_nextpage already\n> does parallel management of multiple chunk sizes, and I think this\n> modification would fit quite well in that code.\n> \n\nI'm confused. Isn't that pretty much exactly what 0002 does? I mean,\nthat passes pagesPerRange to table_parallelscan_initialize(), so that\neach pagesPerRange is assigned to a single worker.\n\nThe trouble I described above is that the scan returns tuples, and the\nconsumer has no idea what was the chunk size or how many other workers\nare there. Imagine you get a tuple from block 1, and then a tuple from\nblock 1000. Does that mean that the blocks in between are empty or that\nthey were processed by some other worker?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 5 Jul 2023 00:08:52 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On Wed, 5 Jul 2023 at 00:08, Tomas Vondra <[email protected]> wrote:\n>\n>\n>\n> On 7/4/23 23:53, Matthias van de Meent wrote:\n> > On Thu, 8 Jun 2023 at 14:55, Tomas Vondra <[email protected]> wrote:\n> >>\n> >> Hi,\n> >>\n> >> Here's a WIP patch allowing parallel CREATE INDEX for BRIN indexes. The\n> >> infrastructure (starting workers etc.) is \"inspired\" by the BTREE code\n> >> (i.e. copied from that and massaged a bit to call brin stuff).\n> >\n> > Nice work.\n> >\n> >> In both cases _brin_end_parallel then reads the summaries from worker\n> >> files, and adds them into the index. In 0001 this is fairly simple,\n> >> although we could do one more improvement and sort the ranges by range\n> >> start to make the index nicer (and possibly a bit more efficient). This\n> >> should be simple, because the per-worker results are already sorted like\n> >> that (so a merge sort in _brin_end_parallel would be enough).\n> >\n> > I see that you manually built the passing and sorting of tuples\n> > between workers, but can't we use the parallel tuplesort\n> > infrastructure for that? It already has similar features in place and\n> > improves code commonality.\n> >\n>\n> Maybe. I wasn't that familiar with what parallel tuplesort can and can't\n> do, and the little I knew I managed to forget since I wrote this patch.\n> Which similar features do you have in mind?\n>\n> The workers are producing the results in \"start_block\" order, so if they\n> pass that to the leader, it probably can do the usual merge sort.\n>\n> >> For 0002 it's a bit more complicated, because with a single parallel\n> >> scan brinbuildCallbackParallel can't decide if a range is assigned to a\n> >> different worker or empty. And we want to generate summaries for empty\n> >> ranges in the index. We could either skip such range during index build,\n> >> and then add empty summaries in _brin_end_parallel (if needed), or add\n> >> them and then merge them using \"union\".\n> >>\n> >>\n> >> I just realized there's a third option to do this - we could just do\n> >> regular parallel scan (with no particular regard to pagesPerRange), and\n> >> then do \"union\" when merging results from workers. It doesn't require\n> >> the sequence of TID scans, and the union would also handle the empty\n> >> ranges. The per-worker results might be much larger, though, because\n> >> each worker might produce up to the \"full\" BRIN index.\n> >\n> > Would it be too much effort to add a 'min_chunk_size' argument to\n> > table_beginscan_parallel (or ParallelTableScanDesc) that defines the\n> > minimum granularity of block ranges to be assigned to each process? I\n> > think that would be the most elegant solution that would require\n> > relatively little effort: table_block_parallelscan_nextpage already\n> > does parallel management of multiple chunk sizes, and I think this\n> > modification would fit quite well in that code.\n> >\n>\n> I'm confused. Isn't that pretty much exactly what 0002 does? I mean,\n> that passes pagesPerRange to table_parallelscan_initialize(), so that\n> each pagesPerRange is assigned to a single worker.\n\nHuh, I overlooked that one... Sorry for that.\n\n> The trouble I described above is that the scan returns tuples, and the\n> consumer has no idea what was the chunk size or how many other workers\n> are there. Imagine you get a tuple from block 1, and then a tuple from\n> block 1000. 
Does that mean that the blocks in between are empty or that\n> they were processed by some other worker?\n\nIf the unit of work for parallel table scans is the index's\npages_per_range, then I think we can just fill in expected-but-missing\nranges as 'empty' in the parallel leader during index loading, like\nthe first of the two solutions you proposed.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)\n\n\n",
"msg_date": "Wed, 5 Jul 2023 10:44:00 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
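[Editorial note, not part of the original thread: to make the "fill in expected-but-missing ranges as empty in the leader" idea above concrete, here is a small, self-contained sketch of just the gap-filling bookkeeping on plain block numbers. It is purely illustrative and not code from the patch; a real leader would insert actual empty BRIN summaries instead of printing.]

#include <stdio.h>
#include <stdint.h>

typedef uint32_t BlockNumber;

/*
 * Walk all expected range starts of the table and report which ones were
 * summarized by some worker (present in the sorted 'emitted' array) and
 * which ones the leader must fill in as empty.
 */
static void
fill_missing_ranges(const BlockNumber *emitted, int nemitted,
                    BlockNumber nblocks, BlockNumber pages_per_range)
{
    BlockNumber expected;
    int         i = 0;

    for (expected = 0; expected < nblocks; expected += pages_per_range)
    {
        if (i < nemitted && emitted[i] == expected)
        {
            printf("range at block %u: summary from a worker\n", expected);
            i++;
        }
        else
            printf("range at block %u: leader inserts an empty summary\n",
                   expected);
    }
}

int
main(void)
{
    /* range starts some worker actually produced, already sorted */
    BlockNumber emitted[] = {0, 8, 24};

    /* a 32-block table with pages_per_range = 8: the range at 16 is missing */
    fill_missing_ranges(emitted, 3, 32, 8);
    return 0;
}

[In the real build the worker-produced tuples come back ordered by their range start block, so the leader can presumably do this backfilling in a single pass while it inserts tuples into the index.]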
{
"msg_contents": "\n\nOn 7/5/23 10:44, Matthias van de Meent wrote:\n> On Wed, 5 Jul 2023 at 00:08, Tomas Vondra <[email protected]> wrote:\n>>\n>>\n>>\n>> On 7/4/23 23:53, Matthias van de Meent wrote:\n>>> On Thu, 8 Jun 2023 at 14:55, Tomas Vondra <[email protected]> wrote:\n>>>>\n>>>> Hi,\n>>>>\n>>>> Here's a WIP patch allowing parallel CREATE INDEX for BRIN indexes. The\n>>>> infrastructure (starting workers etc.) is \"inspired\" by the BTREE code\n>>>> (i.e. copied from that and massaged a bit to call brin stuff).\n>>>\n>>> Nice work.\n>>>\n>>>> In both cases _brin_end_parallel then reads the summaries from worker\n>>>> files, and adds them into the index. In 0001 this is fairly simple,\n>>>> although we could do one more improvement and sort the ranges by range\n>>>> start to make the index nicer (and possibly a bit more efficient). This\n>>>> should be simple, because the per-worker results are already sorted like\n>>>> that (so a merge sort in _brin_end_parallel would be enough).\n>>>\n>>> I see that you manually built the passing and sorting of tuples\n>>> between workers, but can't we use the parallel tuplesort\n>>> infrastructure for that? It already has similar features in place and\n>>> improves code commonality.\n>>>\n>>\n>> Maybe. I wasn't that familiar with what parallel tuplesort can and can't\n>> do, and the little I knew I managed to forget since I wrote this patch.\n>> Which similar features do you have in mind?\n>>\n>> The workers are producing the results in \"start_block\" order, so if they\n>> pass that to the leader, it probably can do the usual merge sort.\n>>\n>>>> For 0002 it's a bit more complicated, because with a single parallel\n>>>> scan brinbuildCallbackParallel can't decide if a range is assigned to a\n>>>> different worker or empty. And we want to generate summaries for empty\n>>>> ranges in the index. We could either skip such range during index build,\n>>>> and then add empty summaries in _brin_end_parallel (if needed), or add\n>>>> them and then merge them using \"union\".\n>>>>\n>>>>\n>>>> I just realized there's a third option to do this - we could just do\n>>>> regular parallel scan (with no particular regard to pagesPerRange), and\n>>>> then do \"union\" when merging results from workers. It doesn't require\n>>>> the sequence of TID scans, and the union would also handle the empty\n>>>> ranges. The per-worker results might be much larger, though, because\n>>>> each worker might produce up to the \"full\" BRIN index.\n>>>\n>>> Would it be too much effort to add a 'min_chunk_size' argument to\n>>> table_beginscan_parallel (or ParallelTableScanDesc) that defines the\n>>> minimum granularity of block ranges to be assigned to each process? I\n>>> think that would be the most elegant solution that would require\n>>> relatively little effort: table_block_parallelscan_nextpage already\n>>> does parallel management of multiple chunk sizes, and I think this\n>>> modification would fit quite well in that code.\n>>>\n>>\n>> I'm confused. Isn't that pretty much exactly what 0002 does? I mean,\n>> that passes pagesPerRange to table_parallelscan_initialize(), so that\n>> each pagesPerRange is assigned to a single worker.\n> \n> Huh, I overlooked that one... Sorry for that.\n> \n>> The trouble I described above is that the scan returns tuples, and the\n>> consumer has no idea what was the chunk size or how many other workers\n>> are there. Imagine you get a tuple from block 1, and then a tuple from\n>> block 1000. 
Does that mean that the blocks in between are empty or that\n>> they were processed by some other worker?\n> \n> If the unit of work for parallel table scans is the index's\n> pages_per_range, then I think we can just fill in expected-but-missing\n> ranges as 'empty' in the parallel leader during index loading, like\n> the first of the two solutions you proposed.\n> \n\nRight, I think that's the right solution.\n\nOr rather the only solution, because the other idea (generating the\nempty ranges in workers) relies on the workers knowing when to generate\nthat. But I don't think the workers have the necessary information.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 5 Jul 2023 13:11:06 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On Wed, 5 Jul 2023 at 00:08, Tomas Vondra <[email protected]> wrote:\n>\n>\n>\n> On 7/4/23 23:53, Matthias van de Meent wrote:\n> > On Thu, 8 Jun 2023 at 14:55, Tomas Vondra <[email protected]> wrote:\n> >>\n> >> Hi,\n> >>\n> >> Here's a WIP patch allowing parallel CREATE INDEX for BRIN indexes. The\n> >> infrastructure (starting workers etc.) is \"inspired\" by the BTREE code\n> >> (i.e. copied from that and massaged a bit to call brin stuff).\n> >\n> > Nice work.\n> >\n> >> In both cases _brin_end_parallel then reads the summaries from worker\n> >> files, and adds them into the index. In 0001 this is fairly simple,\n> >> although we could do one more improvement and sort the ranges by range\n> >> start to make the index nicer (and possibly a bit more efficient). This\n> >> should be simple, because the per-worker results are already sorted like\n> >> that (so a merge sort in _brin_end_parallel would be enough).\n> >\n> > I see that you manually built the passing and sorting of tuples\n> > between workers, but can't we use the parallel tuplesort\n> > infrastructure for that? It already has similar features in place and\n> > improves code commonality.\n> >\n>\n> Maybe. I wasn't that familiar with what parallel tuplesort can and can't\n> do, and the little I knew I managed to forget since I wrote this patch.\n> Which similar features do you have in mind?\n\nI was referring to the feature that is \"emitting a single sorted run\nof tuples at the leader backend based on data gathered in parallel\nworker backends\". It manages the sort state, on-disk runs etc. so that\nyou don't have to manage that yourself.\n\nAdding a new storage format for what is effectively a logical tape\n(logtape.{c,h}) and manually merging it seems like a lot of changes if\nthat functionality is readily available, standardized and optimized in\nsortsupport; and adds an additional place to manually go through for\ndisk-related changes like TDE.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)\n\n\n",
"msg_date": "Wed, 5 Jul 2023 16:33:30 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
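[Editorial note, not part of the original thread: to make the reference to the parallel tuplesort infrastructure a bit more concrete, the worker-side pattern in nbtsort.c looks roughly like the fragment below. This is a simplified sketch, not code from the patch; tuplesort_begin_index_brin() is a hypothetical name for the BRIN-specific begin function such a patch would need to add, and the Sharedsort is assumed to have been placed in the build's DSM segment by the leader.]

	Sharedsort	   *sharedsort;		/* assumed: set up in the DSM segment */
	SortCoordinate	coordinate;
	Tuplesortstate *sortstate;

	/* each participant (worker, or the leader acting as one) attaches here */
	coordinate = (SortCoordinate) palloc0(sizeof(SortCoordinateData));
	coordinate->isWorker = true;		/* the leader's merge pass uses false */
	coordinate->nParticipants = -1;		/* only the leader fills in the count */
	coordinate->sharedsort = sharedsort;

	/*
	 * Hypothetical BRIN-specific begin function the patch would have to add,
	 * analogous to tuplesort_begin_index_btree() in nbtsort.c.
	 */
	sortstate = tuplesort_begin_index_brin(maintenance_work_mem, coordinate,
										   TUPLESORT_NONE);

	/* ... the build callback feeds each range's BrinTuple into sortstate ... */

	tuplesort_performsort(sortstate);	/* writes this participant's sorted run */
	tuplesort_end(sortstate);

[The leader then opens its own coordinated sort state with isWorker = false and the real participant count, and reads back a single merged, sorted stream of tuples.]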
{
"msg_contents": "On 7/5/23 16:33, Matthias van de Meent wrote:\n> ...\n>\n>> Maybe. I wasn't that familiar with what parallel tuplesort can and can't\n>> do, and the little I knew I managed to forget since I wrote this patch.\n>> Which similar features do you have in mind?\n> \n> I was referring to the feature that is \"emitting a single sorted run\n> of tuples at the leader backend based on data gathered in parallel\n> worker backends\". It manages the sort state, on-disk runs etc. so that\n> you don't have to manage that yourself.\n> \n> Adding a new storage format for what is effectively a logical tape\n> (logtape.{c,h}) and manually merging it seems like a lot of changes if\n> that functionality is readily available, standardized and optimized in\n> sortsupport; and adds an additional place to manually go through for\n> disk-related changes like TDE.\n> \n\nHere's a new version of the patch, with three main changes:\n\n1) Adoption of the parallel scan approach, instead of the homegrown\nsolution with a sequence of TID scans. This is mostly what the 0002\npatch did, except for fixing a bug - parallel scan has a \"rampdown\"\nclose to the end, and this needs to consider the chunk size too.\n\n\n2) Switches to the parallel tuplesort, as proposed. This turned out to\nbe easier than I expected - most of the work was in adding methods to\ntuplesortvariants.c to allow reading/writing BrinTuple items. The main\nlimitation is that we need to pass around the length of the tuple\n(AFAICS it's not in the BrinTuple itself). I'm not entirely sure about\nthe memory management aspect of this, and maybe there's a more elegant\nsolution.\n\nOverall it seems to work - the brin.c code is heavily based on how\nnbtsearch.c does parallel builds for btree, so hopefully it's fine. At\nsome point I got a bit confused about which spool to create/use, but it\nseems to work.\n\n\n3) Handling of empty ranges - I ended up ignoring empty ranges in\nworkers (i.e. those are not written to the tuplesort), and instead the\nleader fills them in when reading data from the shared tuplesort.\n\n\nOne thing I was wondering about is whether it might be better to allow\nthe workers to process overlapping ranges, and then let the leader to\nmerge the summaries. That would mean we might not need the tableam.c\nchanges at all, but the leader would need to do more work (although the\nBRIN indexes tend to be fairly small). The main reason that got me\nthinking about this is that we have pretty much no tests for the union\nprocedures, because triggering that is really difficult. But for\nparallel index builds that'd be much more common.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 6 Jul 2023 16:13:52 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On Thu, 6 Jul 2023 at 16:13, Tomas Vondra <[email protected]> wrote:\n>\n> On 7/5/23 16:33, Matthias van de Meent wrote:\n> > ...\n> >\n> >> Maybe. I wasn't that familiar with what parallel tuplesort can and can't\n> >> do, and the little I knew I managed to forget since I wrote this patch.\n> >> Which similar features do you have in mind?\n> >\n> > I was referring to the feature that is \"emitting a single sorted run\n> > of tuples at the leader backend based on data gathered in parallel\n> > worker backends\". It manages the sort state, on-disk runs etc. so that\n> > you don't have to manage that yourself.\n> >\n> > Adding a new storage format for what is effectively a logical tape\n> > (logtape.{c,h}) and manually merging it seems like a lot of changes if\n> > that functionality is readily available, standardized and optimized in\n> > sortsupport; and adds an additional place to manually go through for\n> > disk-related changes like TDE.\n> >\n>\n> Here's a new version of the patch, with three main changes:\n\nThanks! I've done a review on the patch, and most looks good. Some\nplaces need cleanup and polish, some others more documentations, and\nthere are some other issues, but so far it's looking OK.\n\n> One thing I was wondering about is whether it might be better to allow\n> the workers to process overlapping ranges, and then let the leader to\n> merge the summaries. That would mean we might not need the tableam.c\n> changes at all, but the leader would need to do more work (although the\n> BRIN indexes tend to be fairly small). The main reason that got me\n> thinking about this is that we have pretty much no tests for the union\n> procedures, because triggering that is really difficult. But for\n> parallel index builds that'd be much more common.\n\nHmm, that's a good point. I don't mind either way, but it would add\noverhead in the leader to do all of that merging - especially when you\nconfigure pages_per_range > PARALLEL_SEQSCAN_MAX_CHUNK_SIZE as we'd\nneed to merge up to parallel_workers tuples. That could be a\nsignificant overhead.\n\n... thinks a bit.\n\nHmm, but with the current P_S_M_C_S of 8192 blocks that's quite\nunlikely to be a serious problem - the per-backend IO saved of such\nlarge ranges on a single backend has presumably much more impact than\nthe merging of n_parallel_tasks max-sized brin tuples. So, seems fine\nwith me.\n\nReview follows below.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)\n\n-----------\n\n> diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c\n\n> + BrinShared *brinshared;\n\nNeeds some indentation fixes.\n\n> + int bs_reltuples;\n> [...]\n> + state->bs_reltuples += reltuples;\n\nMy IDE warns me that reltuples is a double. Looking deeper into the\nvalue, it contains the number of live tuples in the table, so this\nconversion may not result in a meaningful value for tables with >=2^31\nlive tuples. Tables > 56GB could begin to get affected by this.\n\n> + int bs_worker_id;\n\nThis variable seems to be unused.\n\n> + BrinSpool *bs_spool;\n> + BrinSpool *bs_spool_out;\n\nAre both used? If so, could you add comments why we have two spools\nhere, instead of only one?\n\n> +/*\n> + * A version of the callback, used by parallel index builds. The main difference\n> + * is that instead of writing the BRIN tuples into the index, we write them into\n> + * a shared tuplestore file, and leave the insertion up to the leader (which may\n\n+ ... 
shared tuplesort, and ...\n\n> brinbuildCallbackParallel(...)\n> + while (thisblock > state->bs_currRangeStart + state->bs_pagesPerRange - 1)\n\nshouldn't this be an 'if' now?\n\n> + while (thisblock > state->bs_currRangeStart + state->bs_pagesPerRange - 1)\n> + state->bs_currRangeStart += state->bs_pagesPerRange;\n\nIs there a reason why you went with iterative addition instead of a\nsingle divide-and-multiply like the following?:\n\n+ state->bs_currRangeStart += state->bs_pagesPerRange *\n((state->bs_currRangeStart - thisblock) / state->bs_pagesPerRange);\n\n> diff --git a/src/backend/access/table/tableam.c b/src/backend/access/table/tableam.c\n> [...]\n> -table_block_parallelscan_initialize(Relation rel, ParallelTableScanDesc pscan)\n> +table_block_parallelscan_initialize(Relation rel, ParallelTableScanDesc pscan, BlockNumber chunk_factor)\n> [...]\n> - /* compare phs_syncscan initialization to similar logic in initscan */\n> + bpscan->phs_chunk_factor = chunk_factor;\n> + /* compare phs_syncscan initialization to similar logic in initscan\n> + *\n> + * Disable sync scans if the chunk factor is set (valid block number).\n> + */\n\nI think this needs some pgindent or other style work, both on comment\nstyle and line lengths\n\n> diff --git a/src/backend/utils/sort/tuplesort.c b/src/backend/utils/sort/tuplesort.c\n> [...]\n> + Assert(false); (x3)\n\nI think these can be cleaned up, right?\n\n> diff --git a/src/backend/utils/sort/tuplesortvariants.c b/src/backend/utils/sort/tuplesortvariants.c\n> [...]\n> + * Computing BrinTuple size with only the tuple is difficult, so we want to track\n> + * the length for r referenced by SortTuple. That's what BrinSortTuple is meant\n> + * to do - it's essentially a BrinTuple prefixed by length. We only write the\n> + * BrinTuple to the logtapes, though.\n\nWhy don't we write the full BrinSortTuple to disk? Doesn't that make more sense?\n\n> + tuplesort_puttuple_common(state, &stup,\n> + base->sortKeys &&\n> + base->sortKeys->abbrev_converter &&\n> + !stup.isnull1);\n\nCan't this last argument just be inlined, based on knowledge that we\ndon't use sortKeys in brin?\n\n> +comparetup_index_brin(const SortTuple *a, const SortTuple *b,\n> + Tuplesortstate *state)\n> +{\n> + BrinTuple *tuple1;\n> [...]\n> + tuple1 = &((BrinSortTuple *) a)->tuple;\n> [...]\n\nI'm fairly sure that this cast (and it's neighbour) is incorrect and\nshould be the following instead:\n\n+ tuple1 = &((BrinSortTuple *) (a->tuple))->tuple;\n\nAdditionally, I think the following would be a better approach here,\nas we wouldn't need to do pointer-chasing:\n\n+ static int\n+ comparetup_index_brin(const SortTuple *a, const SortTuple *b,\n+ Tuplesortstate *state)\n+ {\n+ Assert(TuplesortstateGetPublic(state)->haveDatum1);\n+\n+ if (DatumGetUInt32(a->datum1) > DatumGetUInt32(b->datum1))\n+ return 1;\n+ if (DatumGetUInt32(a->datum1) < DatumGetUInt32(b->datum1))\n+ return -1;\n+ /* silence compilers */\n+ return 0;\n+ }\n\n---\n\nThanks for working on this!\n\n\n",
"msg_date": "Tue, 11 Jul 2023 23:11:02 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
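[Editorial note, not part of the original thread: a self-contained version of the divide-and-multiply suggestion above is sketched below, purely as an illustration rather than code from the patch. One detail worth double-checking in the quoted one-liner: the subtraction presumably needs to be (thisblock - bs_currRangeStart), since thisblock is past the current range start and BlockNumber arithmetic is unsigned; that is the order used here.]

#include <assert.h>
#include <stdint.h>

typedef uint32_t BlockNumber;

/*
 * Advance the current range start so that its range covers 'thisblock',
 * in one arithmetic step instead of a loop.
 */
static BlockNumber
advance_range_start(BlockNumber currRangeStart, BlockNumber pagesPerRange,
                    BlockNumber thisblock)
{
    if (thisblock >= currRangeStart + pagesPerRange)
        currRangeStart += pagesPerRange *
            ((thisblock - currRangeStart) / pagesPerRange);
    return currRangeStart;
}

int
main(void)
{
    assert(advance_range_start(0, 4, 3) == 0);      /* still in the first range */
    assert(advance_range_start(0, 4, 13) == 12);    /* jumps straight to 12..15 */
    assert(advance_range_start(8, 4, 9) == 8);      /* already covered */
    return 0;
}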
{
"msg_contents": "\n\nOn 7/11/23 23:11, Matthias van de Meent wrote:\n> On Thu, 6 Jul 2023 at 16:13, Tomas Vondra <[email protected]> wrote:\n>>\n>> On 7/5/23 16:33, Matthias van de Meent wrote:\n>>> ...\n>>>\n>>>> Maybe. I wasn't that familiar with what parallel tuplesort can and can't\n>>>> do, and the little I knew I managed to forget since I wrote this patch.\n>>>> Which similar features do you have in mind?\n>>>\n>>> I was referring to the feature that is \"emitting a single sorted run\n>>> of tuples at the leader backend based on data gathered in parallel\n>>> worker backends\". It manages the sort state, on-disk runs etc. so that\n>>> you don't have to manage that yourself.\n>>>\n>>> Adding a new storage format for what is effectively a logical tape\n>>> (logtape.{c,h}) and manually merging it seems like a lot of changes if\n>>> that functionality is readily available, standardized and optimized in\n>>> sortsupport; and adds an additional place to manually go through for\n>>> disk-related changes like TDE.\n>>>\n>>\n>> Here's a new version of the patch, with three main changes:\n> \n> Thanks! I've done a review on the patch, and most looks good. Some\n> places need cleanup and polish, some others more documentations, and\n> there are some other issues, but so far it's looking OK.\n> \n>> One thing I was wondering about is whether it might be better to allow\n>> the workers to process overlapping ranges, and then let the leader to\n>> merge the summaries. That would mean we might not need the tableam.c\n>> changes at all, but the leader would need to do more work (although the\n>> BRIN indexes tend to be fairly small). The main reason that got me\n>> thinking about this is that we have pretty much no tests for the union\n>> procedures, because triggering that is really difficult. But for\n>> parallel index builds that'd be much more common.\n> \n> Hmm, that's a good point. I don't mind either way, but it would add\n> overhead in the leader to do all of that merging - especially when you\n> configure pages_per_range > PARALLEL_SEQSCAN_MAX_CHUNK_SIZE as we'd\n> need to merge up to parallel_workers tuples. That could be a\n> significant overhead.\n> \n> ... thinks a bit.\n> \n> Hmm, but with the current P_S_M_C_S of 8192 blocks that's quite\n> unlikely to be a serious problem - the per-backend IO saved of such\n> large ranges on a single backend has presumably much more impact than\n> the merging of n_parallel_tasks max-sized brin tuples. So, seems fine\n> with me.\n> \n\nAs for PARALLEL_SEQSCAN_MAX_CHUNK_SIZE, the last patch actually\nconsiders the chunk_factor (i.e. pages_per_range) *after* doing\n\n pbscanwork->phsw_chunk_size = Min(pbscanwork->phsw_chunk_size,\n PARALLEL_SEQSCAN_MAX_CHUNK_SIZE);\n\nso even with (pages_per_range > PARALLEL_SEQSCAN_MAX_CHUNK_SIZE) it\nwould not need to merge anything.\n\nNow, that might have been a bad idea and PARALLEL_SEQSCAN_MAX_CHUNK_SIZE\nshould be considered. In which case this *has* to do the union, even if\nonly for the rare corner case.\n\nBut I don't think that's a major issue - it's pretty sure summarizing\nthe tuples is way more expensive than merging the summaries. 
Which is\nwhat matters for Amdahl's law ...\n\n\n> Review follows below.\n> \n> Kind regards,\n> \n> Matthias van de Meent\n> Neon (https://neon.tech/)\n> \n> -----------\n> \n>> diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c\n> \n>> + BrinShared *brinshared;\n> \n> Needs some indentation fixes.\n> \n>> + int bs_reltuples;\n>> [...]\n>> + state->bs_reltuples += reltuples;\n> \n> My IDE warns me that reltuples is a double. Looking deeper into the\n> value, it contains the number of live tuples in the table, so this\n> conversion may not result in a meaningful value for tables with >=2^31\n> live tuples. Tables > 56GB could begin to get affected by this.\n> \n>> + int bs_worker_id;\n> \n> This variable seems to be unused.\n> \n>> + BrinSpool *bs_spool;\n>> + BrinSpool *bs_spool_out;\n> \n> Are both used? If so, could you add comments why we have two spools\n> here, instead of only one?\n> \n\nOK, I admit I'm not sure both are actually necessary. I was struggling\ngetting it working with just one spool, because when the leader\nparticipates as a worker, it needs to both summarize some of the chunks\n(and put the tuples somewhere). And then it also needs to consume the\nfinal output.\n\nMaybe it's just a case of cargo cult programming - I was mostly copying\nstuff from the btree index build, trying to make it work, and then with\ntwo spools it started working.\n\n>> +/*\n>> + * A version of the callback, used by parallel index builds. The main difference\n>> + * is that instead of writing the BRIN tuples into the index, we write them into\n>> + * a shared tuplestore file, and leave the insertion up to the leader (which may\n> \n> + ... shared tuplesort, and ...\n> \n>> brinbuildCallbackParallel(...)\n>> + while (thisblock > state->bs_currRangeStart + state->bs_pagesPerRange - 1)\n> \n> shouldn't this be an 'if' now?\n> \n\nHmmm, probably ... that way we'd skip the empty ranges.\n\n>> + while (thisblock > state->bs_currRangeStart + state->bs_pagesPerRange - 1)\n>> + state->bs_currRangeStart += state->bs_pagesPerRange;\n> \n> Is there a reason why you went with iterative addition instead of a\n> single divide-and-multiply like the following?:\n> \n> + state->bs_currRangeStart += state->bs_pagesPerRange *\n> ((state->bs_currRangeStart - thisblock) / state->bs_pagesPerRange);\n> \n\nProbably laziness ... You're right the divide-multiply seems simpler.\n\n>> diff --git a/src/backend/access/table/tableam.c b/src/backend/access/table/tableam.c\n>> [...]\n>> -table_block_parallelscan_initialize(Relation rel, ParallelTableScanDesc pscan)\n>> +table_block_parallelscan_initialize(Relation rel, ParallelTableScanDesc pscan, BlockNumber chunk_factor)\n>> [...]\n>> - /* compare phs_syncscan initialization to similar logic in initscan */\n>> + bpscan->phs_chunk_factor = chunk_factor;\n>> + /* compare phs_syncscan initialization to similar logic in initscan\n>> + *\n>> + * Disable sync scans if the chunk factor is set (valid block number).\n>> + */\n> \n> I think this needs some pgindent or other style work, both on comment\n> style and line lengths\n> \n\nRight.\n\n>> diff --git a/src/backend/utils/sort/tuplesort.c b/src/backend/utils/sort/tuplesort.c\n>> [...]\n>> + Assert(false); (x3)\n> \n> I think these can be cleaned up, right?\n> \n\nDuh! Absolutely, this shouldn't have been in the patch at all. 
I only\nadded those to quickly identify places that got the tuplesort into\nunexpected state (much easier with a coredump and a backtrace).\n\n>> diff --git a/src/backend/utils/sort/tuplesortvariants.c b/src/backend/utils/sort/tuplesortvariants.c\n>> [...]\n>> + * Computing BrinTuple size with only the tuple is difficult, so we want to track\n>> + * the length for r referenced by SortTuple. That's what BrinSortTuple is meant\n>> + * to do - it's essentially a BrinTuple prefixed by length. We only write the\n>> + * BrinTuple to the logtapes, though.\n> \n> Why don't we write the full BrinSortTuple to disk? Doesn't that make more sense?\n> \n\nNot sure I understand. We ultimately do, because we write\n\n (length + BrinTuple)\n\nand BrinSortTuple is exactly that. But if we write BrinSortTuple, we\nwould end up writing length for that too, no?\n\nOr maybe I just don't understand how would that make things simpler.\n\n>> + tuplesort_puttuple_common(state, &stup,\n>> + base->sortKeys &&\n>> + base->sortKeys->abbrev_converter &&\n>> + !stup.isnull1);\n> \n> Can't this last argument just be inlined, based on knowledge that we\n> don't use sortKeys in brin?\n> \n\nWhat does \"inlined\" mean for an argument? But yeah, I guess it might be\njust set to false. And we should probably have an assert that there are\nno sortKeys.\n\n>> +comparetup_index_brin(const SortTuple *a, const SortTuple *b,\n>> + Tuplesortstate *state)\n>> +{\n>> + BrinTuple *tuple1;\n>> [...]\n>> + tuple1 = &((BrinSortTuple *) a)->tuple;\n>> [...]\n> \n> I'm fairly sure that this cast (and it's neighbour) is incorrect and\n> should be the following instead:\n> \n> + tuple1 = &((BrinSortTuple *) (a->tuple))->tuple;\n> \n> Additionally, I think the following would be a better approach here,\n> as we wouldn't need to do pointer-chasing:\n> \n\nUh, right. This only works because 'tuple' happens to be the first field\nin SortTuple.\n\n> + static int\n> + comparetup_index_brin(const SortTuple *a, const SortTuple *b,\n> + Tuplesortstate *state)\n> + {\n> + Assert(TuplesortstateGetPublic(state)->haveDatum1);\n> +\n> + if (DatumGetUInt32(a->datum1) > DatumGetUInt32(b->datum1))\n> + return 1;\n> + if (DatumGetUInt32(a->datum1) < DatumGetUInt32(b->datum1))\n> + return -1;\n> + /* silence compilers */\n> + return 0;\n> + }\n> \n\nGood idea! I forgot we're guaranteed to have datum1.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 14 Jul 2023 15:57:03 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On Fri, 14 Jul 2023 at 15:57, Tomas Vondra\n<[email protected]> wrote:\n>\n> On 7/11/23 23:11, Matthias van de Meent wrote:\n>> On Thu, 6 Jul 2023 at 16:13, Tomas Vondra <[email protected]> wrote:\n>>>\n>>> One thing I was wondering about is whether it might be better to allow\n>>> the workers to process overlapping ranges, and then let the leader to\n>>> merge the summaries. That would mean we might not need the tableam.c\n>>> changes at all, but the leader would need to do more work (although the\n>>> BRIN indexes tend to be fairly small). The main reason that got me\n>>> thinking about this is that we have pretty much no tests for the union\n>>> procedures, because triggering that is really difficult. But for\n>>> parallel index builds that'd be much more common.\n>>\n>> Hmm, that's a good point. I don't mind either way, but it would add\n>> overhead in the leader to do all of that merging - especially when you\n>> configure pages_per_range > PARALLEL_SEQSCAN_MAX_CHUNK_SIZE as we'd\n>> need to merge up to parallel_workers tuples. That could be a\n>> significant overhead.\n>>\n>> ... thinks a bit.\n>>\n>> Hmm, but with the current P_S_M_C_S of 8192 blocks that's quite\n>> unlikely to be a serious problem - the per-backend IO saved of such\n>> large ranges on a single backend has presumably much more impact than\n>> the merging of n_parallel_tasks max-sized brin tuples. So, seems fine\n>> with me.\n>>\n>\n> As for PARALLEL_SEQSCAN_MAX_CHUNK_SIZE, the last patch actually\n> considers the chunk_factor (i.e. pages_per_range) *after* doing\n>\n> pbscanwork->phsw_chunk_size = Min(pbscanwork->phsw_chunk_size,\n> PARALLEL_SEQSCAN_MAX_CHUNK_SIZE);\n>\n> so even with (pages_per_range > PARALLEL_SEQSCAN_MAX_CHUNK_SIZE) it\n> would not need to merge anything.\n>\n> Now, that might have been a bad idea and PARALLEL_SEQSCAN_MAX_CHUNK_SIZE\n> should be considered. In which case this *has* to do the union, even if\n> only for the rare corner case.\n>\n> But I don't think that's a major issue - it's pretty sure summarizing\n> the tuples is way more expensive than merging the summaries. Which is\n> what matters for Amdahl's law ...\n\nAgreed.\n\n>>> + BrinSpool *bs_spool;\n>>> + BrinSpool *bs_spool_out;\n>>\n>> Are both used? If so, could you add comments why we have two spools\n>> here, instead of only one?\n>>\n>\n> OK, I admit I'm not sure both are actually necessary. I was struggling\n> getting it working with just one spool, because when the leader\n> participates as a worker, it needs to both summarize some of the chunks\n> (and put the tuples somewhere). And then it also needs to consume the\n> final output.\n>\n> Maybe it's just a case of cargo cult programming - I was mostly copying\n> stuff from the btree index build, trying to make it work, and then with\n> two spools it started working.\n\nTwo spools seem to be necessary in a participating leader, but both\nspools have non-overlapping lifetimes. In the btree code actually two\npairs of spools are actually used (in unique indexes): you can see the\npairs being allocated in both _bt_leader_participate_as_worker (called\nfrom _bt_begin_parallel, from _bt_spools_heapscan) and in\n_bt_spools_heapscan.\n\n> >> diff --git a/src/backend/utils/sort/tuplesortvariants.c b/src/backend/utils/sort/tuplesortvariants.c\n> >> [...]\n> >> + * Computing BrinTuple size with only the tuple is difficult, so we want to track\n> >> + * the length for r referenced by SortTuple. 
That's what BrinSortTuple is meant\n> >> + * to do - it's essentially a BrinTuple prefixed by length. We only write the\n> >> + * BrinTuple to the logtapes, though.\n> >\n> > Why don't we write the full BrinSortTuple to disk? Doesn't that make more sense?\n> >\n>\n> Not sure I understand. We ultimately do, because we write\n>\n> (length + BrinTuple)\n>\n> and BrinSortTuple is exactly that. But if we write BrinSortTuple, we\n> would end up writing length for that too, no?\n>\n> Or maybe I just don't understand how would that make things simpler.\n\nI don't quite understand the intricacies of the tape storage format\nquite yet (specifically, I'm continuously getting confused by the len\n-= sizeof(int)), so you might well be correct.\n\nMy comment was written based on just the comment's contents, which\nclaims \"we can't easily recompute the length, so we store it with the\ntuple in memory. However, we don't store the length when we write it\nto the tape\", which seems self-contradictory.\n\n> >> + tuplesort_puttuple_common(state, &stup,\n> >> + base->sortKeys &&\n> >> + base->sortKeys->abbrev_converter &&\n> >> + !stup.isnull1);\n> >\n> > Can't this last argument just be inlined, based on knowledge that we\n> > don't use sortKeys in brin?\n> >\n>\n> What does \"inlined\" mean for an argument? But yeah, I guess it might be\n> just set to false. And we should probably have an assert that there are\n> no sortKeys.\n\n\"inlined\", \"precomputed\", \"const-ified\"? I'm not super good at\nvocabulary. But, indeed, thanks.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 14 Jul 2023 17:01:12 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
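[Editorial note, not part of the original thread: for readers trying to follow the storage-format discussion, this is roughly what the length-prefixed wrapper looks like, assuming PostgreSQL backend headers for Size, offsetof and BrinTuple. The authoritative definition is the one in the patch's tuplesortvariants.c and may differ in detail.]

/*
 * In-memory wrapper for a BrinTuple in the tuplesort: the length is kept
 * next to the tuple because it cannot be recomputed cheaply from the tuple
 * alone.  Only the BrinTuple itself is written to the tape; the tape format
 * stores its own length word, which is presumably where the
 * "len -= sizeof(int)" bookkeeping on the read path comes from.
 */
typedef struct BrinSortTuple
{
	Size		tuplen;			/* length of the BrinTuple that follows */
	BrinTuple	tuple;			/* variable-length tuple data */
} BrinSortTuple;

/* memory needed to hold a BrinTuple of length 'len' wrapped this way */
#define BRINSORTTUPLE_SIZE(len)		(offsetof(BrinSortTuple, tuple) + (len))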
{
"msg_contents": "Hi,\n\nhere's an updated patch, addressing the review comments, and reworking\nhow the work is divided between the workers & leader etc.\n\n0001 is just v2, rebased to current master\n\n0002 and 0003 address most of the issues, in particular it\n\n - removes the unnecessary spool\n - fixes bs_reltuples type to double\n - a couple comments are reworded to be clearer\n - changes loop/condition in brinbuildCallbackParallel\n - removes asserts added for debugging\n - fixes cast in comparetup_index_brin\n - 0003 then simplifies comparetup_index_brin\n - I haven't inlined the tuplesort_puttuple_common parameter\n (didn't seem worth it)\n\n0004 Reworks how the work is divided between workers and combined by the\nleader. It undoes the tableam.c changes that attempted to divide the\nrelation into chunks matching the BRIN ranges, and instead merges the\nresults in the leader (using the BRIN \"union\" function).\n\nI haven't done any indentation fixes yet.\n\nI did fairly extensive testing, using pageinspect to compare indexes\nbuilt with/without parallelism. More testing is needed, but it seems to\nwork fine (with other opclasses and so on).\n\nIn general I'm quite happy with the current state, and I believe it's\nfairly close to be committable.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 8 Nov 2023 12:03:42 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On Wed, 8 Nov 2023 at 12:03, Tomas Vondra <[email protected]> wrote:\n>\n> Hi,\n>\n> here's an updated patch, addressing the review comments, and reworking\n> how the work is divided between the workers & leader etc.\n\nThanks!\n\n> In general I'm quite happy with the current state, and I believe it's\n> fairly close to be committable.\n\nAre you planning on committing the patches separately, or squashed? I\nwon't have much time this week for reviewing the patch, and it seems\nlike these patches are incremental, so some guidance on what you want\nto be reviewed would be appreciated.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Sun, 12 Nov 2023 10:38:52 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "\n\nOn 11/12/23 10:38, Matthias van de Meent wrote:\n> On Wed, 8 Nov 2023 at 12:03, Tomas Vondra <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> here's an updated patch, addressing the review comments, and reworking\n>> how the work is divided between the workers & leader etc.\n> \n> Thanks!\n> \n>> In general I'm quite happy with the current state, and I believe it's\n>> fairly close to be committable.\n> \n> Are you planning on committing the patches separately, or squashed? I\n> won't have much time this week for reviewing the patch, and it seems\n> like these patches are incremental, so some guidance on what you want\n> to be reviewed would be appreciated.\n> \n\nDefinitely squashed. I only kept them separate to make it more obvious\nwhat the changes are.\n\nIf you need more time for a review, I can certainly wait. No rush.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 12 Nov 2023 11:01:39 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On Wed, 8 Nov 2023 at 12:03, Tomas Vondra <[email protected]> wrote:\n>\n> Hi,\n>\n> here's an updated patch, addressing the review comments, and reworking\n> how the work is divided between the workers & leader etc.\n>\n> 0001 is just v2, rebased to current master\n>\n> 0002 and 0003 address most of the issues, in particular it\n>\n> - removes the unnecessary spool\n> - fixes bs_reltuples type to double\n> - a couple comments are reworded to be clearer\n> - changes loop/condition in brinbuildCallbackParallel\n> - removes asserts added for debugging\n> - fixes cast in comparetup_index_brin\n> - 0003 then simplifies comparetup_index_brin\n> - I haven't inlined the tuplesort_puttuple_common parameter\n> (didn't seem worth it)\n\nOK, thanks\n\n> 0004 Reworks how the work is divided between workers and combined by the\n> leader. It undoes the tableam.c changes that attempted to divide the\n> relation into chunks matching the BRIN ranges, and instead merges the\n> results in the leader (using the BRIN \"union\" function).\n\nThat's OK.\n\n> I haven't done any indentation fixes yet.\n>\n> I did fairly extensive testing, using pageinspect to compare indexes\n> built with/without parallelism. More testing is needed, but it seems to\n> work fine (with other opclasses and so on).\n\nAfter code-only review, here are some comments:\n\n> +++ b/src/backend/access/brin/brin.c\n> [...]\n> +/* Magic numbers for parallel state sharing */\n> +#define PARALLEL_KEY_BRIN_SHARED UINT64CONST(0xA000000000000001)\n> +#define PARALLEL_KEY_TUPLESORT UINT64CONST(0xA000000000000002)\n\nThese shm keys use the same constants also in use in\naccess/nbtree/nbtsort.c. While this shouldn't be an issue in normal\noperations, I'd prefer if we didn't actively introduce conflicting\nidentifiers when we still have significant amounts of unused values\nremaining.\n\n> +#define PARALLEL_KEY_QUERY_TEXT UINT64CONST(0xA000000000000003)\n\nThis is the fourth definition of a PARALLEL%_KEY_QUERY_TEXT, the\nothers being in access/nbtree/nbtsort.c (value 0xA000000000000004, one\nmore than brin's), backend/executor/execParallel.c\n(0xE000000000000008), and PARALLEL_VACUUM_KEY_QUERY_TEXT (0x3) (though\nI've not checked that their uses are exactly the same, I'd expect at\nleast btree to match mostly, if not fully, 1:1).\nI think we could probably benefit from a less ad-hoc sharing of query\ntexts. I don't think that needs to happen specifically in this patch,\nbut I think it's something to keep in mind in future efforts.\n\n> +_brin_end_parallel(BrinLeader *brinleader, BrinBuildState *state)\n> [...]\n> + BrinSpool *spool = state->bs_spool;\n> [...]\n> + if (!state)\n> + return;\n\nI think the assignment to spool should be moved to below this\ncondition, as _brin_begin_parallel calls this with state=NULL when it\ncan't launch parallel workers, which will cause issues here.\n\n> + state->bs_numtuples = brinshared->indtuples;\n\nMy IDE complains about bs_numtuples being an integer. This is a\npre-existing issue, but still valid: we can hit an overflow on tables\nwith pages_per_range=1 and relsize >= 2^31 pages. 
Extremely unlikely,\nbut problematic nonetheless.\n\n> + * FIXME This probably needs some memory management fixes - we're reading\n> + * tuples from the tuplesort, we're allocating an emty tuple, and so on.\n> + * Probably better to release this memory.\n\nThis should probably be resolved.\n\nI also noticed that this is likely to execute `union_tuples` many\ntimes when pages_per_range is coprime with the parallel table scan's\nblock stride (or when we for other reasons have many tuples returned\nfor each range); and this `union_tuples` internally allocates and\ndeletes its own memory context for its deserialization of the 'b'\ntuple. I think we should just pass a scratch context instead, so that\nwe don't have the overhead of continously creating then deleting the\nsame memory context.\n\n> +++ b/src/backend/catalog/index.c\n> [...]\n> - indexRelation->rd_rel->relam == BTREE_AM_OID)\n> + (indexRelation->rd_rel->relam == BTREE_AM_OID ||\n> + indexRelation->rd_rel->relam == BRIN_AM_OID))\n\nI think this needs some more effort. I imagine a new\nIndexAmRoutine->amcanbuildparallel is more appropriate than this\nhard-coded list - external indexes may want to utilize the parallel\nindex creation planner facility, too.\n\n\nSome notes:\nAs a project PostgreSQL seems to be trying to move away from\nhardcoding heap into everything in favour of the more AM-agnostic\n'table'. I suggest replacing all mentions of \"heap\" in the arguments\nwith \"table\", to reduce the work future maintainers need to do to fix\nthis. Even when this AM is mostly targetted towards the heap AM, other\nAMs (such as those I've heard of that were developed internally at\nEDB) use the same block-addressing that heap does, and should thus be\ncompatible with BRIN. Thus, \"heap\" is not a useful name here.\n\nThere are 2 new mentions of \"tuplestore\" in the patch, while the\nstructure used is tuplesort: one on form_and_spill_tuple, and one on\nbrinbuildCallbackParallel. Please update those comments.\n\nThat's it for code review. I'll do some performance comparisons and\ntesting soon, too.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 20 Nov 2023 20:48:39 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
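[Editorial note, not part of the original thread: a minimal sketch of the amcanbuildparallel suggestion above, assuming it is modelled on the existing IndexAmRoutine capability flags. The field name follows the review comment; the exact shape in the final patch may differ.]

In the AM handler, e.g. brinhandler() in brin.c:

	amroutine->amcanbuildparallel = true;

And in the planner, conceptually replacing the hard-coded list of AM OIDs:

	if (!indexRelation->rd_indam->amcanbuildparallel)
		parallel_workers = 0;	/* this AM only supports serial index builds */

[That keeps the decision with the access method itself, so an out-of-core AM can opt in without patching index.c.]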
{
"msg_contents": "On 11/20/23 20:48, Matthias van de Meent wrote:\n> On Wed, 8 Nov 2023 at 12:03, Tomas Vondra <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> here's an updated patch, addressing the review comments, and reworking\n>> how the work is divided between the workers & leader etc.\n>>\n>> 0001 is just v2, rebased to current master\n>>\n>> 0002 and 0003 address most of the issues, in particular it\n>>\n>> - removes the unnecessary spool\n>> - fixes bs_reltuples type to double\n>> - a couple comments are reworded to be clearer\n>> - changes loop/condition in brinbuildCallbackParallel\n>> - removes asserts added for debugging\n>> - fixes cast in comparetup_index_brin\n>> - 0003 then simplifies comparetup_index_brin\n>> - I haven't inlined the tuplesort_puttuple_common parameter\n>> (didn't seem worth it)\n> \n> OK, thanks\n> \n>> 0004 Reworks how the work is divided between workers and combined by the\n>> leader. It undoes the tableam.c changes that attempted to divide the\n>> relation into chunks matching the BRIN ranges, and instead merges the\n>> results in the leader (using the BRIN \"union\" function).\n> \n> That's OK.\n> \n>> I haven't done any indentation fixes yet.\n>>\n>> I did fairly extensive testing, using pageinspect to compare indexes\n>> built with/without parallelism. More testing is needed, but it seems to\n>> work fine (with other opclasses and so on).\n> \n> After code-only review, here are some comments:\n> \n>> +++ b/src/backend/access/brin/brin.c\n>> [...]\n>> +/* Magic numbers for parallel state sharing */\n>> +#define PARALLEL_KEY_BRIN_SHARED UINT64CONST(0xA000000000000001)\n>> +#define PARALLEL_KEY_TUPLESORT UINT64CONST(0xA000000000000002)\n> \n> These shm keys use the same constants also in use in\n> access/nbtree/nbtsort.c. While this shouldn't be an issue in normal\n> operations, I'd prefer if we didn't actively introduce conflicting\n> identifiers when we still have significant amounts of unused values\n> remaining.\n> \n\nHmmm. Is there some rule of thumb how to pick these key values? I see\nnbtsort.c uses 0xA prefix, execParallel.c uses 0xE, while parallel.c\nended up using 0xFFFFFFFFFFFF as prefix. I've user 0xB, simply because\nBRIN also starts with B.\n\n>> +#define PARALLEL_KEY_QUERY_TEXT UINT64CONST(0xA000000000000003)\n> \n> This is the fourth definition of a PARALLEL%_KEY_QUERY_TEXT, the\n> others being in access/nbtree/nbtsort.c (value 0xA000000000000004, one\n> more than brin's), backend/executor/execParallel.c\n> (0xE000000000000008), and PARALLEL_VACUUM_KEY_QUERY_TEXT (0x3) (though\n> I've not checked that their uses are exactly the same, I'd expect at\n> least btree to match mostly, if not fully, 1:1).\n> I think we could probably benefit from a less ad-hoc sharing of query\n> texts. I don't think that needs to happen specifically in this patch,\n> but I think it's something to keep in mind in future efforts.\n> \n\nI'm afraid I don't quite get what you mean by \"ad hoc sharing of query\ntexts\". Are you saying we shouldn't propagate the query text to the\nparallel workers? Why? Or what's the proper solution?\n\n>> +_brin_end_parallel(BrinLeader *brinleader, BrinBuildState *state)\n>> [...]\n>> + BrinSpool *spool = state->bs_spool;\n>> [...]\n>> + if (!state)\n>> + return;\n> \n> I think the assignment to spool should be moved to below this\n> condition, as _brin_begin_parallel calls this with state=NULL when it\n> can't launch parallel workers, which will cause issues here.\n> \n\nGood catch! 
I wonder if we have tests that might trigger this, say by\nsetting max_parallel_maintenance_workers > 0 while no workers allowed.\n\n>> + state->bs_numtuples = brinshared->indtuples;\n> \n> My IDE complains about bs_numtuples being an integer. This is a\n> pre-existing issue, but still valid: we can hit an overflow on tables\n> with pages_per_range=1 and relsize >= 2^31 pages. Extremely unlikely,\n> but problematic nonetheless.\n> \n\nTrue. I think I've been hesitant to make this a double because it seems\na bit weird to do +1 with a double, and at some point (d == d+1). But\nthis seems safe, we're guaranteed to be far away from that threshold.\n\n>> + * FIXME This probably needs some memory management fixes - we're reading\n>> + * tuples from the tuplesort, we're allocating an emty tuple, and so on.\n>> + * Probably better to release this memory.\n> \n> This should probably be resolved.\n> \n\nAFAICS that comment is actually inaccurate/stale, sorry about that. The\ncode actually allocates (and resets) a single memtuple, and also\nemptyTuple. So I think that's OK, I've removed the comment.\n\n> I also noticed that this is likely to execute `union_tuples` many\n> times when pages_per_range is coprime with the parallel table scan's\n> block stride (or when we for other reasons have many tuples returned\n> for each range); and this `union_tuples` internally allocates and\n> deletes its own memory context for its deserialization of the 'b'\n> tuple. I think we should just pass a scratch context instead, so that\n> we don't have the overhead of continously creating then deleting the\n> same memory context\n\nPerhaps. Looking at the code, isn't it a bit strange how union_tuples\nuses the context? It creates the context, calls brin_deform_tuple in\nthat context, but then the rest of the function (including datumCopy and\nsimilar stuff) happens in the caller's context ...\n\nHowever, I don't think the number of union_tuples calls is likely to be\nvery high, especially for large tables. Because we split the table into\n2048 chunks, and then cap the chunk size by 8192. For large tables\n(where this matters) we're likely close to 8192.\n\n> \n>> +++ b/src/backend/catalog/index.c\n>> [...]\n>> - indexRelation->rd_rel->relam == BTREE_AM_OID)\n>> + (indexRelation->rd_rel->relam == BTREE_AM_OID ||\n>> + indexRelation->rd_rel->relam == BRIN_AM_OID))\n> \n> I think this needs some more effort. I imagine a new\n> IndexAmRoutine->amcanbuildparallel is more appropriate than this\n> hard-coded list - external indexes may want to utilize the parallel\n> index creation planner facility, too.\n> \n\nGood idea. I added the IndexAmRoutine flag and used it here.\n\n> \n> Some notes:\n> As a project PostgreSQL seems to be trying to move away from\n> hardcoding heap into everything in favour of the more AM-agnostic\n> 'table'. I suggest replacing all mentions of \"heap\" in the arguments\n> with \"table\", to reduce the work future maintainers need to do to fix\n> this. Even when this AM is mostly targetted towards the heap AM, other\n> AMs (such as those I've heard of that were developed internally at\n> EDB) use the same block-addressing that heap does, and should thus be\n> compatible with BRIN. Thus, \"heap\" is not a useful name here.\n> \n\nI'm not against doing that, but I'd prefer to do that in a separate\npatch. 
There's a bunch of preexisting heap references, so and I don't\nwant to introduce inconsistency (patch using table, old code heap) nor\ndo I want to tweak unrelated code.\n\n> There are 2 new mentions of \"tuplestore\" in the patch, while the\n> structure used is tuplesort: one on form_and_spill_tuple, and one on\n> brinbuildCallbackParallel. Please update those comments.\n> \n> That's it for code review. I'll do some performance comparisons and\n> testing soon, too.\n> \n\nThanks! Attached is a patch squashing the previous version into a single\nv3 commit, with fixes for your review in a separate commit.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 22 Nov 2023 20:16:55 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
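[Editorial note, not part of the original thread: with the 0xB prefix settled on above, the BRIN-specific keys would end up looking something like the following; the values shown here only illustrate the non-conflicting prefix and are not necessarily the ones committed.]

/* Magic numbers for parallel state sharing (brin.c) */
#define PARALLEL_KEY_BRIN_SHARED		UINT64CONST(0xB000000000000001)
#define PARALLEL_KEY_TUPLESORT			UINT64CONST(0xB000000000000002)
#define PARALLEL_KEY_QUERY_TEXT			UINT64CONST(0xB000000000000003)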
{
"msg_contents": "Hi,\n\nOn Wed, 22 Nov 2023 at 20:16, Tomas Vondra\n<[email protected]> wrote:\n>\n> On 11/20/23 20:48, Matthias van de Meent wrote:\n>> On Wed, 8 Nov 2023 at 12:03, Tomas Vondra <[email protected]> wrote:\n>>>\n>>> Hi,\n>>>\n>>> here's an updated patch, addressing the review comments, and reworking\n>>> how the work is divided between the workers & leader etc.\n>>>\n>>\n>> After code-only review, here are some comments:\n>>\n>>> +++ b/src/backend/access/brin/brin.c\n>>> [...]\n>>> +/* Magic numbers for parallel state sharing */\n>>> +#define PARALLEL_KEY_BRIN_SHARED UINT64CONST(0xA000000000000001)\n>>> +#define PARALLEL_KEY_TUPLESORT UINT64CONST(0xA000000000000002)\n>>\n>> These shm keys use the same constants also in use in\n>> access/nbtree/nbtsort.c. While this shouldn't be an issue in normal\n>> operations, I'd prefer if we didn't actively introduce conflicting\n>> identifiers when we still have significant amounts of unused values\n>> remaining.\n>>\n>\n> Hmmm. Is there some rule of thumb how to pick these key values?\n\nNone that I know of.\nThere is a warning in various places that define these constants that\nthey take care to not conflict with plan node's node_id: parallel plan\nexecution uses plain plan node IDs as keys, and as node_id is\nint-sized, any other key value that's created manually of value < 2^32\nshould be sure that it can't be executed in a parallel backend.\nBut apart from that one case, I can't find a convention, no.\n\n>>> +#define PARALLEL_KEY_QUERY_TEXT UINT64CONST(0xA000000000000003)\n>>\n>> This is the fourth definition of a PARALLEL%_KEY_QUERY_TEXT, the\n>> others being in access/nbtree/nbtsort.c (value 0xA000000000000004, one\n>> more than brin's), backend/executor/execParallel.c\n>> (0xE000000000000008), and PARALLEL_VACUUM_KEY_QUERY_TEXT (0x3) (though\n>> I've not checked that their uses are exactly the same, I'd expect at\n>> least btree to match mostly, if not fully, 1:1).\n>> I think we could probably benefit from a less ad-hoc sharing of query\n>> texts. I don't think that needs to happen specifically in this patch,\n>> but I think it's something to keep in mind in future efforts.\n>>\n>\n> I'm afraid I don't quite get what you mean by \"ad hoc sharing of query\n> texts\". Are you saying we shouldn't propagate the query text to the\n> parallel workers? Why? Or what's the proper solution?\n\nWhat I mean is that we have several different keys that all look like\nthey contain the debug query string, and always for the same debugging\npurposes. For debugging, I think it'd be useful to use one well-known\nkey, rather than N well-known keys in each of the N parallel\nsubsystems.\n\nBut as mentioned, it doesn't need to happen in this patch, as that'd\nincrease scope beyond brin/index ams.\n\n>>> + state->bs_numtuples = brinshared->indtuples;\n>>\n>> My IDE complains about bs_numtuples being an integer. This is a\n>> pre-existing issue, but still valid: we can hit an overflow on tables\n>> with pages_per_range=1 and relsize >= 2^31 pages. Extremely unlikely,\n>> but problematic nonetheless.\n>>\n>\n> True. I think I've been hesitant to make this a double because it seems\n> a bit weird to do +1 with a double, and at some point (d == d+1). 
But\n> this seems safe, we're guaranteed to be far away from that threshold.\n\nYes, ignoring practical constraints like page space, we \"only\" have\nbitspace for 2^48 tuples in each (non-partitioned) relation, so\ndouble's 56 significant bits should be more than enough to count\ntuples.\n\n>> I also noticed that this is likely to execute `union_tuples` many\n>> times when pages_per_range is coprime with the parallel table scan's\n>> block stride (or when we for other reasons have many tuples returned\n>> for each range); and this `union_tuples` internally allocates and\n>> deletes its own memory context for its deserialization of the 'b'\n>> tuple. I think we should just pass a scratch context instead, so that\n>> we don't have the overhead of continously creating then deleting the\n>> same memory context\n>\n> Perhaps. Looking at the code, isn't it a bit strange how union_tuples\n> uses the context? It creates the context, calls brin_deform_tuple in\n> that context, but then the rest of the function (including datumCopy and\n> similar stuff) happens in the caller's context ...\n\nThe union operator may leak (lots of) memory, so I think it makes\nsense to keep a context around that can be reset after we've extracted\nthe merge result.\n\n> However, I don't think the number of union_tuples calls is likely to be\n> very high, especially for large tables. Because we split the table into\n> 2048 chunks, and then cap the chunk size by 8192. For large tables\n> (where this matters) we're likely close to 8192.\n\nI agree that the merging part of the index creation is the last part,\nand usually has no high impact on the total performance of the reindex\noperation, but in memory-constrained environments releasing and then\nrequesting the same chunk of memory over and over again just isn't\ngreat.\nAlso note that parallel scan chunk sizes decrease when we're about to\nhit the end of the table, and that a different AM may have different\nideas about scanning a table in parallel; it could very well decide to\nuse striped assignments exclusively, as opposed to on-demand chunk\nallocations; both increasing the chance that brin's page ranges are\nprocessed by more than one backend.\n\n>> As a project PostgreSQL seems to be trying to move away from\n>> hardcoding heap into everything in favour of the more AM-agnostic\n>> 'table'. I suggest replacing all mentions of \"heap\" in the arguments\n>> with \"table\", to reduce the work future maintainers need to do to fix\n>> this.\n>\n> I'm not against doing that, but I'd prefer to do that in a separate\n> patch. There's a bunch of preexisting heap references, so and I don't\n> want to introduce inconsistency (patch using table, old code heap) nor\n> do I want to tweak unrelated code.\n\nSure.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 23 Nov 2023 13:33:44 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On 11/23/23 13:33, Matthias van de Meent wrote:\n> Hi,\n> \n> On Wed, 22 Nov 2023 at 20:16, Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 11/20/23 20:48, Matthias van de Meent wrote:\n>>> On Wed, 8 Nov 2023 at 12:03, Tomas Vondra <[email protected]> wrote:\n>>>>\n>>>> Hi,\n>>>>\n>>>> here's an updated patch, addressing the review comments, and reworking\n>>>> how the work is divided between the workers & leader etc.\n>>>>\n>>>\n>>> After code-only review, here are some comments:\n>>>\n>>>> +++ b/src/backend/access/brin/brin.c\n>>>> [...]\n>>>> +/* Magic numbers for parallel state sharing */\n>>>> +#define PARALLEL_KEY_BRIN_SHARED UINT64CONST(0xA000000000000001)\n>>>> +#define PARALLEL_KEY_TUPLESORT UINT64CONST(0xA000000000000002)\n>>>\n>>> These shm keys use the same constants also in use in\n>>> access/nbtree/nbtsort.c. While this shouldn't be an issue in normal\n>>> operations, I'd prefer if we didn't actively introduce conflicting\n>>> identifiers when we still have significant amounts of unused values\n>>> remaining.\n>>>\n>>\n>> Hmmm. Is there some rule of thumb how to pick these key values?\n> \n> None that I know of.\n> There is a warning in various places that define these constants that\n> they take care to not conflict with plan node's node_id: parallel plan\n> execution uses plain plan node IDs as keys, and as node_id is\n> int-sized, any other key value that's created manually of value < 2^32\n> should be sure that it can't be executed in a parallel backend.\n> But apart from that one case, I can't find a convention, no.\n> \n\nOK, in that case 0xB is fine.\n\n>>>> +#define PARALLEL_KEY_QUERY_TEXT UINT64CONST(0xA000000000000003)\n>>>\n>>> This is the fourth definition of a PARALLEL%_KEY_QUERY_TEXT, the\n>>> others being in access/nbtree/nbtsort.c (value 0xA000000000000004, one\n>>> more than brin's), backend/executor/execParallel.c\n>>> (0xE000000000000008), and PARALLEL_VACUUM_KEY_QUERY_TEXT (0x3) (though\n>>> I've not checked that their uses are exactly the same, I'd expect at\n>>> least btree to match mostly, if not fully, 1:1).\n>>> I think we could probably benefit from a less ad-hoc sharing of query\n>>> texts. I don't think that needs to happen specifically in this patch,\n>>> but I think it's something to keep in mind in future efforts.\n>>>\n>>\n>> I'm afraid I don't quite get what you mean by \"ad hoc sharing of query\n>> texts\". Are you saying we shouldn't propagate the query text to the\n>> parallel workers? Why? Or what's the proper solution?\n> \n> What I mean is that we have several different keys that all look like\n> they contain the debug query string, and always for the same debugging\n> purposes. For debugging, I think it'd be useful to use one well-known\n> key, rather than N well-known keys in each of the N parallel\n> subsystems.\n> \n> But as mentioned, it doesn't need to happen in this patch, as that'd\n> increase scope beyond brin/index ams.\n> \n\nAgreed.\n\n>>> I also noticed that this is likely to execute `union_tuples` many\n>>> times when pages_per_range is coprime with the parallel table scan's\n>>> block stride (or when we for other reasons have many tuples returned\n>>> for each range); and this `union_tuples` internally allocates and\n>>> deletes its own memory context for its deserialization of the 'b'\n>>> tuple. I think we should just pass a scratch context instead, so that\n>>> we don't have the overhead of continously creating then deleting the\n>>> same memory context\n>>\n>> Perhaps. 
Looking at the code, isn't it a bit strange how union_tuples\n>> uses the context? It creates the context, calls brin_deform_tuple in\n>> that context, but then the rest of the function (including datumCopy and\n>> similar stuff) happens in the caller's context ...\n> \n> The union operator may leak (lots of) memory, so I think it makes\n> sense to keep a context around that can be reset after we've extracted\n> the merge result.\n> \n\nBut does the current code actually achieve that? It does create a \"brin\nunion\" context, but then it only does this:\n\n /* Use our own memory context to avoid retail pfree */\n cxt = AllocSetContextCreate(CurrentMemoryContext,\n \"brin union\",\n ALLOCSET_DEFAULT_SIZES);\n oldcxt = MemoryContextSwitchTo(cxt);\n db = brin_deform_tuple(bdesc, b, NULL);\n MemoryContextSwitchTo(oldcxt);\n\nSurely that does not limit the amount of memory used by the actual union\nfunctions in any way?\n\n>> However, I don't think the number of union_tuples calls is likely to be\n>> very high, especially for large tables. Because we split the table into\n>> 2048 chunks, and then cap the chunk size by 8192. For large tables\n>> (where this matters) we're likely close to 8192.\n> \n> I agree that the merging part of the index creation is the last part,\n> and usually has no high impact on the total performance of the reindex\n> operation, but in memory-constrained environments releasing and then\n> requesting the same chunk of memory over and over again just isn't\n> great.\n\nOK, I'll take a look at the scratch context you suggested.\n\nMy point however was we won't actually do that very often, because on\nlarge tables the BRIN ranges are likely smaller than the parallel scan\nchunk size, so few overlaps. OTOH if the table is small, or if the BRIN\nranges are large, there'll be few of them.\n\n> Also note that parallel scan chunk sizes decrease when we're about to\n> hit the end of the table, and that a different AM may have different\n> ideas about scanning a table in parallel; it could very well decide to\n> use striped assignments exclusively, as opposed to on-demand chunk\n> allocations; both increasing the chance that brin's page ranges are\n> processed by more than one backend.\n> \n\nYeah, but the ramp-up and ramp-down should have negligible impact, IMO.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 23 Nov 2023 14:35:08 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On Thu, 23 Nov 2023 at 14:35, Tomas Vondra\n<[email protected]> wrote:\n> On 11/23/23 13:33, Matthias van de Meent wrote:\n>> The union operator may leak (lots of) memory, so I think it makes\n>> sense to keep a context around that can be reset after we've extracted\n>> the merge result.\n>>\n>\n> But does the current code actually achieve that? It does create a \"brin\n> union\" context, but then it only does this:\n>\n> /* Use our own memory context to avoid retail pfree */\n> cxt = AllocSetContextCreate(CurrentMemoryContext,\n> \"brin union\",\n> ALLOCSET_DEFAULT_SIZES);\n> oldcxt = MemoryContextSwitchTo(cxt);\n> db = brin_deform_tuple(bdesc, b, NULL);\n> MemoryContextSwitchTo(oldcxt);\n>\n> Surely that does not limit the amount of memory used by the actual union\n> functions in any way?\n\nOh, yes, of course. For some reason I thought that covered the calls\nto the union operator function too, but it indeed only covers\ndeserialization. I do think it is still worthwhile to not do the\ncreate/delete cycle, but won't hold the patch back for that.\n\n>>> However, I don't think the number of union_tuples calls is likely to be\n>>> very high, especially for large tables. Because we split the table into\n>>> 2048 chunks, and then cap the chunk size by 8192. For large tables\n>>> (where this matters) we're likely close to 8192.\n>>\n>> I agree that the merging part of the index creation is the last part,\n>> and usually has no high impact on the total performance of the reindex\n>> operation, but in memory-constrained environments releasing and then\n>> requesting the same chunk of memory over and over again just isn't\n>> great.\n>\n> OK, I'll take a look at the scratch context you suggested.\n>\n> My point however was we won't actually do that very often, because on\n> large tables the BRIN ranges are likely smaller than the parallel scan\n> chunk size, so few overlaps. OTOH if the table is small, or if the BRIN\n> ranges are large, there'll be few of them.\n\nThat's true, so maybe I'm concerned about something that amounts to\nonly marginal gains.\n\nI noticed that the v4 patch doesn't yet update the documentation in\nindexam.sgml with am->amcanbuildparallel.\nOnce that is included and reviewed I think this will be ready, unless\nyou want to address any of my comments upthread (that I marked with\n'not in this patch') in this patch.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 28 Nov 2023 16:39:33 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On 11/28/23 16:39, Matthias van de Meent wrote:\n> On Thu, 23 Nov 2023 at 14:35, Tomas Vondra\n> <[email protected]> wrote:\n>> On 11/23/23 13:33, Matthias van de Meent wrote:\n>>> The union operator may leak (lots of) memory, so I think it makes\n>>> sense to keep a context around that can be reset after we've extracted\n>>> the merge result.\n>>>\n>>\n>> But does the current code actually achieve that? It does create a \"brin\n>> union\" context, but then it only does this:\n>>\n>> /* Use our own memory context to avoid retail pfree */\n>> cxt = AllocSetContextCreate(CurrentMemoryContext,\n>> \"brin union\",\n>> ALLOCSET_DEFAULT_SIZES);\n>> oldcxt = MemoryContextSwitchTo(cxt);\n>> db = brin_deform_tuple(bdesc, b, NULL);\n>> MemoryContextSwitchTo(oldcxt);\n>>\n>> Surely that does not limit the amount of memory used by the actual union\n>> functions in any way?\n> \n> Oh, yes, of course. For some reason I thought that covered the calls\n> to the union operator function too, but it indeed only covers\n> deserialization. I do think it is still worthwhile to not do the\n> create/delete cycle, but won't hold the patch back for that.\n> \n\nI think the union_tuples() changes are better left for a separate patch.\n\n>>>> However, I don't think the number of union_tuples calls is likely to be\n>>>> very high, especially for large tables. Because we split the table into\n>>>> 2048 chunks, and then cap the chunk size by 8192. For large tables\n>>>> (where this matters) we're likely close to 8192.\n>>>\n>>> I agree that the merging part of the index creation is the last part,\n>>> and usually has no high impact on the total performance of the reindex\n>>> operation, but in memory-constrained environments releasing and then\n>>> requesting the same chunk of memory over and over again just isn't\n>>> great.\n>>\n>> OK, I'll take a look at the scratch context you suggested.\n>>\n>> My point however was we won't actually do that very often, because on\n>> large tables the BRIN ranges are likely smaller than the parallel scan\n>> chunk size, so few overlaps. OTOH if the table is small, or if the BRIN\n>> ranges are large, there'll be few of them.\n> \n> That's true, so maybe I'm concerned about something that amounts to\n> only marginal gains.\n> \n\nHowever, after thinking about this a bit more, I think we actually do\nneed to do something about the memory management when merging tuples.\nAFAIK the general assumption was that union_tuple() only runs for a\nsingle range, and then the whole context gets freed. But the way the\nmerging was implemented, it all runs in a single context. And while a\nsingle union_tuple() may not need a lot memory, in total it may be\nannoying. I just added a palloc(1MB) into union_tuples and ended up with\n~160MB allocated in the PortalContext on just 2GB table. In practice the\nmemory will grow more slowly, but not great :-/\n\nThe attached 0003 patch adds a memory context that's reset after\nproducing a merged BRIN tuple for each page range.\n\n> I noticed that the v4 patch doesn't yet update the documentation in\n> indexam.sgml with am->amcanbuildparallel.\n\nShould be fixed by 0002. I decided to add a simple note to ambuild(),\nnot sure if something more is needed.\n\n> Once that is included and reviewed I think this will be ready, unless\n> you want to address any of my comments upthread (that I marked with\n> 'not in this patch') in this patch.\n> \n\nThanks. I believe the attached version addresses it. 
There's also 0004\nwith some indentation tweaks per pgindent.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 28 Nov 2023 18:59:15 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On Tue, 28 Nov 2023 at 18:59, Tomas Vondra\n<[email protected]> wrote:\n>\n> On 11/28/23 16:39, Matthias van de Meent wrote:\n> > On Thu, 23 Nov 2023 at 14:35, Tomas Vondra\n> > <[email protected]> wrote:\n> >> On 11/23/23 13:33, Matthias van de Meent wrote:\n> >>> The union operator may leak (lots of) memory, so I think it makes\n> >>> sense to keep a context around that can be reset after we've extracted\n> >>> the merge result.\n> >>>\n> >>\n> >> But does the current code actually achieve that? It does create a \"brin\n> >> union\" context, but then it only does this:\n> >>\n> >> /* Use our own memory context to avoid retail pfree */\n> >> cxt = AllocSetContextCreate(CurrentMemoryContext,\n> >> \"brin union\",\n> >> ALLOCSET_DEFAULT_SIZES);\n> >> oldcxt = MemoryContextSwitchTo(cxt);\n> >> db = brin_deform_tuple(bdesc, b, NULL);\n> >> MemoryContextSwitchTo(oldcxt);\n> >>\n> >> Surely that does not limit the amount of memory used by the actual union\n> >> functions in any way?\n> >\n> > Oh, yes, of course. For some reason I thought that covered the calls\n> > to the union operator function too, but it indeed only covers\n> > deserialization. I do think it is still worthwhile to not do the\n> > create/delete cycle, but won't hold the patch back for that.\n> >\n>\n> I think the union_tuples() changes are better left for a separate patch.\n>\n> >>>> However, I don't think the number of union_tuples calls is likely to be\n> >>>> very high, especially for large tables. Because we split the table into\n> >>>> 2048 chunks, and then cap the chunk size by 8192. For large tables\n> >>>> (where this matters) we're likely close to 8192.\n> >>>\n> >>> I agree that the merging part of the index creation is the last part,\n> >>> and usually has no high impact on the total performance of the reindex\n> >>> operation, but in memory-constrained environments releasing and then\n> >>> requesting the same chunk of memory over and over again just isn't\n> >>> great.\n> >>\n> >> OK, I'll take a look at the scratch context you suggested.\n> >>\n> >> My point however was we won't actually do that very often, because on\n> >> large tables the BRIN ranges are likely smaller than the parallel scan\n> >> chunk size, so few overlaps. OTOH if the table is small, or if the BRIN\n> >> ranges are large, there'll be few of them.\n> >\n> > That's true, so maybe I'm concerned about something that amounts to\n> > only marginal gains.\n> >\n>\n> However, after thinking about this a bit more, I think we actually do\n> need to do something about the memory management when merging tuples.\n> AFAIK the general assumption was that union_tuple() only runs for a\n> single range, and then the whole context gets freed.\n\nCorrect, but it is also is (or should be) assumed that union_tuple\nwill be called several times in the same context to fix repeat\nconcurrent updates. Presumably, that only happens rarely, but it's\nsomething that should be kept in mind regardless.\n\n> But the way the\n> merging was implemented, it all runs in a single context. And while a\n> single union_tuple() may not need a lot memory, in total it may be\n> annoying. I just added a palloc(1MB) into union_tuples and ended up with\n> ~160MB allocated in the PortalContext on just 2GB table. 
In practice the\n> memory will grow more slowly, but not great :-/\n>\n> The attached 0003 patch adds a memory context that's reset after\n> producing a merged BRIN tuple for each page range.\n\nLooks good.\n\nThis also made me think a bit more about how we're working with the\ntuples. With your latest patch, we always deserialize and re-serialize\nthe sorted brin tuples, just in case the next tuple will also be a\nBRIN tuple of the same page range. Could we save some of that\ndeserialization time by optimistically expecting that we're not going\nto need to merge the tuple and only store a local copy of it locally?\nSee attached 0002; this saves some cycles in common cases.\n\nThe v20231128 version of the patchset (as squashed, attached v5-0001)\nlooks good to me.\n\nKind regards,\n\nMatthias van de Meent\nNeon (http://neon.tech)",
"msg_date": "Wed, 29 Nov 2023 15:42:37 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On 11/29/23 15:42, Matthias van de Meent wrote:\n> On Tue, 28 Nov 2023 at 18:59, Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 11/28/23 16:39, Matthias van de Meent wrote:\n>>> On Thu, 23 Nov 2023 at 14:35, Tomas Vondra\n>>> <[email protected]> wrote:\n>>>> On 11/23/23 13:33, Matthias van de Meent wrote:\n>>>>> The union operator may leak (lots of) memory, so I think it makes\n>>>>> sense to keep a context around that can be reset after we've extracted\n>>>>> the merge result.\n>>>>>\n>>>>\n>>>> But does the current code actually achieve that? It does create a \"brin\n>>>> union\" context, but then it only does this:\n>>>>\n>>>> /* Use our own memory context to avoid retail pfree */\n>>>> cxt = AllocSetContextCreate(CurrentMemoryContext,\n>>>> \"brin union\",\n>>>> ALLOCSET_DEFAULT_SIZES);\n>>>> oldcxt = MemoryContextSwitchTo(cxt);\n>>>> db = brin_deform_tuple(bdesc, b, NULL);\n>>>> MemoryContextSwitchTo(oldcxt);\n>>>>\n>>>> Surely that does not limit the amount of memory used by the actual union\n>>>> functions in any way?\n>>>\n>>> Oh, yes, of course. For some reason I thought that covered the calls\n>>> to the union operator function too, but it indeed only covers\n>>> deserialization. I do think it is still worthwhile to not do the\n>>> create/delete cycle, but won't hold the patch back for that.\n>>>\n>>\n>> I think the union_tuples() changes are better left for a separate patch.\n>>\n>>>>>> However, I don't think the number of union_tuples calls is likely to be\n>>>>>> very high, especially for large tables. Because we split the table into\n>>>>>> 2048 chunks, and then cap the chunk size by 8192. For large tables\n>>>>>> (where this matters) we're likely close to 8192.\n>>>>>\n>>>>> I agree that the merging part of the index creation is the last part,\n>>>>> and usually has no high impact on the total performance of the reindex\n>>>>> operation, but in memory-constrained environments releasing and then\n>>>>> requesting the same chunk of memory over and over again just isn't\n>>>>> great.\n>>>>\n>>>> OK, I'll take a look at the scratch context you suggested.\n>>>>\n>>>> My point however was we won't actually do that very often, because on\n>>>> large tables the BRIN ranges are likely smaller than the parallel scan\n>>>> chunk size, so few overlaps. OTOH if the table is small, or if the BRIN\n>>>> ranges are large, there'll be few of them.\n>>>\n>>> That's true, so maybe I'm concerned about something that amounts to\n>>> only marginal gains.\n>>>\n>>\n>> However, after thinking about this a bit more, I think we actually do\n>> need to do something about the memory management when merging tuples.\n>> AFAIK the general assumption was that union_tuple() only runs for a\n>> single range, and then the whole context gets freed.\n> \n> Correct, but it is also is (or should be) assumed that union_tuple\n> will be called several times in the same context to fix repeat\n> concurrent updates. Presumably, that only happens rarely, but it's\n> something that should be kept in mind regardless.\n> \n\nIn theory, yes. But union_tuples() is used only in summarize_range(),\nand that only processes a single page range.\n\n>> But the way the\n>> merging was implemented, it all runs in a single context. And while a\n>> single union_tuple() may not need a lot memory, in total it may be\n>> annoying. I just added a palloc(1MB) into union_tuples and ended up with\n>> ~160MB allocated in the PortalContext on just 2GB table. 
In practice the\n>> memory will grow more slowly, but not great :-/\n>>\n>> The attached 0003 patch adds a memory context that's reset after\n>> producing a merged BRIN tuple for each page range.\n> \n> Looks good.\n> \n> This also made me think a bit more about how we're working with the\n> tuples. With your latest patch, we always deserialize and re-serialize\n> the sorted brin tuples, just in case the next tuple will also be a\n> BRIN tuple of the same page range. Could we save some of that\n> deserialization time by optimistically expecting that we're not going\n> to need to merge the tuple and only store a local copy of it locally?\n> See attached 0002; this saves some cycles in common cases.\n> \n\nGood idea!\n\n> The v20231128 version of the patchset (as squashed, attached v5-0001)\n> looks good to me.\n> \n\nCool. I'll put this through a bit more stress testing, and then I'll get\nit pushed.\n\nThanks for the reviews!\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 Nov 2023 15:52:48 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On 11/29/23 15:52, Tomas Vondra wrote:\n>> ...\n>>\n>> This also made me think a bit more about how we're working with the\n>> tuples. With your latest patch, we always deserialize and re-serialize\n>> the sorted brin tuples, just in case the next tuple will also be a\n>> BRIN tuple of the same page range. Could we save some of that\n>> deserialization time by optimistically expecting that we're not going\n>> to need to merge the tuple and only store a local copy of it locally?\n>> See attached 0002; this saves some cycles in common cases.\n>>\n> \n> Good idea!\n> \n\nFWIW there's a bug, in this part of the optimization:\n\n------------------\n+ if (memtuple == NULL)\n+ memtuple = brin_deform_tuple(state->bs_bdesc, btup,\n+ memtup_holder);\n+\n union_tuples(state->bs_bdesc, memtuple, btup);\n continue;\n------------------\n\nThe deforming should use prevbtup, otherwise union_tuples() jut combines\ntwo copies of the same tuple.\n\nWhich however brings me to the bigger issue with this - my stress test\nfound this issue pretty quickly, but then I spent quite a bit of time\ntrying to find what went wrong. I find this reworked code pretty hard to\nunderstand, and not necessarily because of how it's written. The problem\nis it the same loop tries to juggle multiple pieces of information with\ndifferent lifespans, and so on. I find it really hard to reason about\nhow it behaves ...\n\nI did try to measure how much it actually saves, but none of the tests I\ndid actually found measurable improvement. So I'm tempted to just not\ninclude this part, and accept that we may deserialize some of the tuples\nunnecessarily.\n\nDid you actually observe measurable improvements in some cases?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 Nov 2023 18:55:31 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On Wed, 29 Nov 2023 at 18:55, Tomas Vondra\n<[email protected]> wrote:\n>\n> On 11/29/23 15:52, Tomas Vondra wrote:\n> >> ...\n> >>\n> >> This also made me think a bit more about how we're working with the\n> >> tuples. With your latest patch, we always deserialize and re-serialize\n> >> the sorted brin tuples, just in case the next tuple will also be a\n> >> BRIN tuple of the same page range. Could we save some of that\n> >> deserialization time by optimistically expecting that we're not going\n> >> to need to merge the tuple and only store a local copy of it locally?\n> >> See attached 0002; this saves some cycles in common cases.\n> >>\n> >\n> > Good idea!\n> >\n>\n> FWIW there's a bug, in this part of the optimization:\n>\n> ------------------\n> + if (memtuple == NULL)\n> + memtuple = brin_deform_tuple(state->bs_bdesc, btup,\n> + memtup_holder);\n> +\n> union_tuples(state->bs_bdesc, memtuple, btup);\n> continue;\n> ------------------\n>\n> The deforming should use prevbtup, otherwise union_tuples() jut combines\n> two copies of the same tuple.\n\nGood point. There were some more issues as well, fixes are attached.\n\n> Which however brings me to the bigger issue with this - my stress test\n> found this issue pretty quickly, but then I spent quite a bit of time\n> trying to find what went wrong. I find this reworked code pretty hard to\n> understand, and not necessarily because of how it's written. The problem\n> is it the same loop tries to juggle multiple pieces of information with\n> different lifespans, and so on. I find it really hard to reason about\n> how it behaves ...\n\nYeah, it'd be nice if we had a peek option for sortsupport, that'd\nimprove context handling.\n\n> I did try to measure how much it actually saves, but none of the tests I\n> did actually found measurable improvement. So I'm tempted to just not\n> include this part, and accept that we may deserialize some of the tuples\n> unnecessarily.\n>\n> Did you actually observe measurable improvements in some cases?\n\nThe improvements would mostly stem from brin indexes with multiple\n(potentially compressed) by-ref types, as they go through more complex\nand expensive code to deserialize, requiring separate palloc() and\nmemcpy() calls each.\nFor single-column and by-value types the improvements are expected to\nbe negligible, because there is no meaningful difference between\ncopying a single by-ref value and copying its container; the\nadditional work done for each tuple is marginal for those.\n\nFor an 8-column BRIN index ((sha256((id)::text::bytea)::text),\n(sha256((id+1)::text::bytea)::text),\n(sha256((id+2)::text::bytea)::text), ...) instrumented with 0003 I\nmeasured a difference of 10x less time spent in the main loop of\n_brin_end_parallel, from ~30ms to 3ms when dealing with 55k 1-block\nranges. It's not a lot, but worth at least something, I guess?\n\nThe attached patch fixes the issue that you called out .\nIt also further updates _brin_end_parallel: the final 'write empty\ntuples' loop is never hit and is thus removed, because if there were\nany tuples in the spool we'd have filled the empty ranges at the end\nof the main loop, and if there were no tuples in the spool then the\nmemtuple would still be at its original initialized value of 0 thus\nresulting in a constant false condition. I also updated some comments.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)",
"msg_date": "Wed, 29 Nov 2023 21:30:32 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On 11/29/23 21:30, Matthias van de Meent wrote:\n> On Wed, 29 Nov 2023 at 18:55, Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 11/29/23 15:52, Tomas Vondra wrote:\n>>>> ...\n>>>>\n>>>> This also made me think a bit more about how we're working with the\n>>>> tuples. With your latest patch, we always deserialize and re-serialize\n>>>> the sorted brin tuples, just in case the next tuple will also be a\n>>>> BRIN tuple of the same page range. Could we save some of that\n>>>> deserialization time by optimistically expecting that we're not going\n>>>> to need to merge the tuple and only store a local copy of it locally?\n>>>> See attached 0002; this saves some cycles in common cases.\n>>>>\n>>>\n>>> Good idea!\n>>>\n>>\n>> FWIW there's a bug, in this part of the optimization:\n>>\n>> ------------------\n>> + if (memtuple == NULL)\n>> + memtuple = brin_deform_tuple(state->bs_bdesc, btup,\n>> + memtup_holder);\n>> +\n>> union_tuples(state->bs_bdesc, memtuple, btup);\n>> continue;\n>> ------------------\n>>\n>> The deforming should use prevbtup, otherwise union_tuples() jut combines\n>> two copies of the same tuple.\n> \n> Good point. There were some more issues as well, fixes are attached.\n> \n>> Which however brings me to the bigger issue with this - my stress test\n>> found this issue pretty quickly, but then I spent quite a bit of time\n>> trying to find what went wrong. I find this reworked code pretty hard to\n>> understand, and not necessarily because of how it's written. The problem\n>> is it the same loop tries to juggle multiple pieces of information with\n>> different lifespans, and so on. I find it really hard to reason about\n>> how it behaves ...\n> \n> Yeah, it'd be nice if we had a peek option for sortsupport, that'd\n> improve context handling.\n> \n>> I did try to measure how much it actually saves, but none of the tests I\n>> did actually found measurable improvement. So I'm tempted to just not\n>> include this part, and accept that we may deserialize some of the tuples\n>> unnecessarily.\n>>\n>> Did you actually observe measurable improvements in some cases?\n> \n> The improvements would mostly stem from brin indexes with multiple\n> (potentially compressed) by-ref types, as they go through more complex\n> and expensive code to deserialize, requiring separate palloc() and\n> memcpy() calls each.\n> For single-column and by-value types the improvements are expected to\n> be negligible, because there is no meaningful difference between\n> copying a single by-ref value and copying its container; the\n> additional work done for each tuple is marginal for those.\n> \n> For an 8-column BRIN index ((sha256((id)::text::bytea)::text),\n> (sha256((id+1)::text::bytea)::text),\n> (sha256((id+2)::text::bytea)::text), ...) instrumented with 0003 I\n> measured a difference of 10x less time spent in the main loop of\n> _brin_end_parallel, from ~30ms to 3ms when dealing with 55k 1-block\n> ranges. It's not a lot, but worth at least something, I guess?\n> \n\nIt is something, but I can't really convince myself it's worth the extra\ncode complexity. 
It's a somewhat extreme example, and the parallelism\ncertainly saves much more than this.\n\n> The attached patch fixes the issue that you called out .\n> It also further updates _brin_end_parallel: the final 'write empty\n> tuples' loop is never hit and is thus removed, because if there were\n> any tuples in the spool we'd have filled the empty ranges at the end\n> of the main loop, and if there were no tuples in the spool then the\n> memtuple would still be at its original initialized value of 0 thus\n> resulting in a constant false condition. I also updated some comments.\n> \n\nAh, right. I'll take a look tomorrow, but I guess I didn't realize we\ninsert the empty ranges in the main loop, because we're already looking\nat the *next* summary.\n\nBut I think the idea was to insert empty ranges if there's a chunk of\nempty ranges at the end of the table, after the last tuple the index\nbuild reads. But I'm not sure that can actually happen ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 Nov 2023 21:56:18 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On Wed, 29 Nov 2023 at 21:56, Tomas Vondra\n<[email protected]> wrote:\n>\n> On 11/29/23 21:30, Matthias van de Meent wrote:\n>> On Wed, 29 Nov 2023 at 18:55, Tomas Vondra\n>> <[email protected]> wrote:\n>>> I did try to measure how much it actually saves, but none of the tests I\n>>> did actually found measurable improvement. So I'm tempted to just not\n>>> include this part, and accept that we may deserialize some of the tuples\n>>> unnecessarily.\n>>>\n>>> Did you actually observe measurable improvements in some cases?\n>>\n>> The improvements would mostly stem from brin indexes with multiple\n>> (potentially compressed) by-ref types, as they go through more complex\n>> and expensive code to deserialize, requiring separate palloc() and\n>> memcpy() calls each.\n>> For single-column and by-value types the improvements are expected to\n>> be negligible, because there is no meaningful difference between\n>> copying a single by-ref value and copying its container; the\n>> additional work done for each tuple is marginal for those.\n>>\n>> For an 8-column BRIN index ((sha256((id)::text::bytea)::text),\n>> (sha256((id+1)::text::bytea)::text),\n>> (sha256((id+2)::text::bytea)::text), ...) instrumented with 0003 I\n>> measured a difference of 10x less time spent in the main loop of\n>> _brin_end_parallel, from ~30ms to 3ms when dealing with 55k 1-block\n>> ranges. It's not a lot, but worth at least something, I guess?\n>>\n>\n> It is something, but I can't really convince myself it's worth the extra\n> code complexity. It's a somewhat extreme example, and the parallelism\n> certainly saves much more than this.\n\nTrue. For this, I usually keep in mind that the docs on multi-column\nindexes still indicate to use 1 N-column brin index over N 1-column\nbrin indexes (assuming the same storage parameters), so multi-column\nBRIN indexes should not be considered to be uncommon:\n\n\"The only reason to have multiple BRIN indexes instead of one\nmulticolumn BRIN index on a single table is to have a different\npages_per_range storage parameter.\"\n\nNote that most of the time in my example index is spent in creating\nthe actual tuples due to the use of hashing for data generation; for\nindex or plain to-text formatting the improvement is much more\npronounced: If I use an 8-column index (id::text, id, ...), index\ncreation takes ~500ms with 4+ workers. Of this, deforming takes some\n20ms, though when skipping the deforming step (i.e.,with my patch) it\ntakes ~3.5ms. That's a 3% shaved off the build time when the index\nshape is beneficial.\n\n> > The attached patch fixes the issue that you called out .\n> > It also further updates _brin_end_parallel: the final 'write empty\n> > tuples' loop is never hit and is thus removed, because if there were\n> > any tuples in the spool we'd have filled the empty ranges at the end\n> > of the main loop, and if there were no tuples in the spool then the\n> > memtuple would still be at its original initialized value of 0 thus\n> > resulting in a constant false condition. I also updated some comments.\n> >\n>\n> Ah, right. I'll take a look tomorrow, but I guess I didn't realize we\n> insert the empty ranges in the main loop, because we're already looking\n> at the *next* summary.\n\nYes, merging adds some significant complexity here. I don't think we\ncan easily get around that though...\n\n> But I think the idea was to insert empty ranges if there's a chunk of\n> empty ranges at the end of the table, after the last tuple the index\n> build reads. 
But I'm not sure that can actually happen ...\n\nThis would be trivial to construct with partial indexes; e.g. WHERE\n(my_pk IS NULL) would consist of exclusively empty ranges.\nI don't see a lot of value in partial BRIN indexes, but I may be\noverlooking something.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 29 Nov 2023 23:59:33 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On 11/29/23 23:59, Matthias van de Meent wrote:\n> On Wed, 29 Nov 2023 at 21:56, Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 11/29/23 21:30, Matthias van de Meent wrote:\n>>> On Wed, 29 Nov 2023 at 18:55, Tomas Vondra\n>>> <[email protected]> wrote:\n>>>> I did try to measure how much it actually saves, but none of the tests I\n>>>> did actually found measurable improvement. So I'm tempted to just not\n>>>> include this part, and accept that we may deserialize some of the tuples\n>>>> unnecessarily.\n>>>>\n>>>> Did you actually observe measurable improvements in some cases?\n>>>\n>>> The improvements would mostly stem from brin indexes with multiple\n>>> (potentially compressed) by-ref types, as they go through more complex\n>>> and expensive code to deserialize, requiring separate palloc() and\n>>> memcpy() calls each.\n>>> For single-column and by-value types the improvements are expected to\n>>> be negligible, because there is no meaningful difference between\n>>> copying a single by-ref value and copying its container; the\n>>> additional work done for each tuple is marginal for those.\n>>>\n>>> For an 8-column BRIN index ((sha256((id)::text::bytea)::text),\n>>> (sha256((id+1)::text::bytea)::text),\n>>> (sha256((id+2)::text::bytea)::text), ...) instrumented with 0003 I\n>>> measured a difference of 10x less time spent in the main loop of\n>>> _brin_end_parallel, from ~30ms to 3ms when dealing with 55k 1-block\n>>> ranges. It's not a lot, but worth at least something, I guess?\n>>>\n>>\n>> It is something, but I can't really convince myself it's worth the extra\n>> code complexity. It's a somewhat extreme example, and the parallelism\n>> certainly saves much more than this.\n> \n> True. For this, I usually keep in mind that the docs on multi-column\n> indexes still indicate to use 1 N-column brin index over N 1-column\n> brin indexes (assuming the same storage parameters), so multi-column\n> BRIN indexes should not be considered to be uncommon:\n> \n> \"The only reason to have multiple BRIN indexes instead of one\n> multicolumn BRIN index on a single table is to have a different\n> pages_per_range storage parameter.\"\n> \n> Note that most of the time in my example index is spent in creating\n> the actual tuples due to the use of hashing for data generation; for\n> index or plain to-text formatting the improvement is much more\n> pronounced: If I use an 8-column index (id::text, id, ...), index\n> creation takes ~500ms with 4+ workers. Of this, deforming takes some\n> 20ms, though when skipping the deforming step (i.e.,with my patch) it\n> takes ~3.5ms. That's a 3% shaved off the build time when the index\n> shape is beneficial.\n> \n\nThat's all true, and while 3.5% is not something to ignore, my POV is\nthat the parallelism speeds this up from ~2000ms to ~500ms. Yes, it\nwould be great to shave off the extra 1% (relative to the original\nduration). 
But I don't have a great idea how to do code that in a way\nthat is readable, and I don't want to stall the patch indefinitely\nbecause of a comparatively small improvement.\n\nTherefore I propose we get the simpler code committed and leave this as\na future improvement.\n\n>>> The attached patch fixes the issue that you called out .\n>>> It also further updates _brin_end_parallel: the final 'write empty\n>>> tuples' loop is never hit and is thus removed, because if there were\n>>> any tuples in the spool we'd have filled the empty ranges at the end\n>>> of the main loop, and if there were no tuples in the spool then the\n>>> memtuple would still be at its original initialized value of 0 thus\n>>> resulting in a constant false condition. I also updated some comments.\n>>>\n>>\n>> Ah, right. I'll take a look tomorrow, but I guess I didn't realize we\n>> insert the empty ranges in the main loop, because we're already looking\n>> at the *next* summary.\n> \n> Yes, merging adds some significant complexity here. I don't think we\n> can easily get around that though...\n> \n>> But I think the idea was to insert empty ranges if there's a chunk of\n>> empty ranges at the end of the table, after the last tuple the index\n>> build reads. But I'm not sure that can actually happen ...\n> \n> This would be trivial to construct with partial indexes; e.g. WHERE\n> (my_pk IS NULL) would consist of exclusively empty ranges.\n> I don't see a lot of value in partial BRIN indexes, but I may be\n> overlooking something.\n> \n\nOh, I haven't even thought about partial BRIN indexes! I'm sure for\nthose it's even more important to actually fill-in the empty ranges,\notherwise we end up scanning the whole supposedly filtered-out part of\nthe table. I'll do some testing with that.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 30 Nov 2023 01:10:48 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On Thu, 30 Nov 2023 at 01:10, Tomas Vondra\n<[email protected]> wrote:\n>\n> On 11/29/23 23:59, Matthias van de Meent wrote:\n>> On Wed, 29 Nov 2023 at 21:56, Tomas Vondra\n>> <[email protected]> wrote:\n>>>\n>>> On 11/29/23 21:30, Matthias van de Meent wrote:\n>>>> On Wed, 29 Nov 2023 at 18:55, Tomas Vondra\n>>>> <[email protected]> wrote:\n>>>>> I did try to measure how much it actually saves, but none of the tests I\n>>>>> did actually found measurable improvement. So I'm tempted to just not\n>>>>> include this part, and accept that we may deserialize some of the tuples\n>>>>> unnecessarily.\n>>>>>\n>>>>> Did you actually observe measurable improvements in some cases?\n>>>>\n>>>> The improvements would mostly stem from brin indexes with multiple\n>>>> (potentially compressed) by-ref types, as they go through more complex\n>>>> and expensive code to deserialize, requiring separate palloc() and\n>>>> memcpy() calls each.\n>>>> For single-column and by-value types the improvements are expected to\n>>>> be negligible, because there is no meaningful difference between\n>>>> copying a single by-ref value and copying its container; the\n>>>> additional work done for each tuple is marginal for those.\n>>>>\n>>>> For an 8-column BRIN index ((sha256((id)::text::bytea)::text),\n>>>> (sha256((id+1)::text::bytea)::text),\n>>>> (sha256((id+2)::text::bytea)::text), ...) instrumented with 0003 I\n>>>> measured a difference of 10x less time spent in the main loop of\n>>>> _brin_end_parallel, from ~30ms to 3ms when dealing with 55k 1-block\n>>>> ranges. It's not a lot, but worth at least something, I guess?\n>>>>\n>>>\n>>> It is something, but I can't really convince myself it's worth the extra\n>>> code complexity. It's a somewhat extreme example, and the parallelism\n>>> certainly saves much more than this.\n>>\n>> True. For this, I usually keep in mind that the docs on multi-column\n>> indexes still indicate to use 1 N-column brin index over N 1-column\n>> brin indexes (assuming the same storage parameters), so multi-column\n>> BRIN indexes should not be considered to be uncommon:\n>>\n>> \"The only reason to have multiple BRIN indexes instead of one\n>> multicolumn BRIN index on a single table is to have a different\n>> pages_per_range storage parameter.\"\n>>\n>> Note that most of the time in my example index is spent in creating\n>> the actual tuples due to the use of hashing for data generation; for\n>> index or plain to-text formatting the improvement is much more\n>> pronounced: If I use an 8-column index (id::text, id, ...), index\n>> creation takes ~500ms with 4+ workers. Of this, deforming takes some\n>> 20ms, though when skipping the deforming step (i.e.,with my patch) it\n>> takes ~3.5ms. That's a 3% shaved off the build time when the index\n>> shape is beneficial.\n>>\n>\n> That's all true, and while 3.5% is not something to ignore, my POV is\n> that the parallelism speeds this up from ~2000ms to ~500ms. Yes, it\n> would be great to shave off the extra 1% (relative to the original\n> duration). 
But I don't have a great idea how to do code that in a way\n> that is readable, and I don't want to stall the patch indefinitely\n> because of a comparatively small improvement.\n>\n> Therefore I propose we get the simpler code committed and leave this as\n> a future improvement.\n\nThat's fine with me, it is one reason why I kept it as a separate patch file.\n\n>>>> The attached patch fixes the issue that you called out .\n>>>> It also further updates _brin_end_parallel: the final 'write empty\n>>>> tuples' loop is never hit and is thus removed, because if there were\n>>>> any tuples in the spool we'd have filled the empty ranges at the end\n>>>> of the main loop, and if there were no tuples in the spool then the\n>>>> memtuple would still be at its original initialized value of 0 thus\n>>>> resulting in a constant false condition. I also updated some comments.\n>>>>\n>>>\n>>> Ah, right. I'll take a look tomorrow, but I guess I didn't realize we\n>>> insert the empty ranges in the main loop, because we're already looking\n>>> at the *next* summary.\n>>\n>> Yes, merging adds some significant complexity here. I don't think we\n>> can easily get around that though...\n>>\n>>> But I think the idea was to insert empty ranges if there's a chunk of\n>>> empty ranges at the end of the table, after the last tuple the index\n>>> build reads. But I'm not sure that can actually happen ...\n>>\n>> This would be trivial to construct with partial indexes; e.g. WHERE\n>> (my_pk IS NULL) would consist of exclusively empty ranges.\n>> I don't see a lot of value in partial BRIN indexes, but I may be\n>> overlooking something.\n>>\n>\n> Oh, I haven't even thought about partial BRIN indexes! I'm sure for\n> those it's even more important to actually fill-in the empty ranges,\n> otherwise we end up scanning the whole supposedly filtered-out part of\n> the table. I'll do some testing with that.\n\nI just ran some more tests in less favorable environments, and it\nlooks like I hit a bug:\n\n% SET max_parallel_workers = 0;\n% CREATE INDEX ... USING brin (...);\nERROR: cannot update tuples during a parallel operation\n\nFix attached in 0002.\nIn 0003 I add the mentioned backfilling of empty ranges at the end of\nthe table. I added it for both normal and parallel index builds, as\nnormal builds apparently also didn't yet have this yet.\n\nKind regards,\n\nMatthias van de Meent",
"msg_date": "Thu, 30 Nov 2023 18:47:39 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On 11/30/23 18:47, Matthias van de Meent wrote:\n> ...\n>\n> I just ran some more tests in less favorable environments, and it\n> looks like I hit a bug:\n> \n> % SET max_parallel_workers = 0;\n> % CREATE INDEX ... USING brin (...);\n> ERROR: cannot update tuples during a parallel operation\n> \n> Fix attached in 0002.\n\nYeah, that's a bug, thanks for the fix. Yeah Just jumping to a \"cleanup\"\nlabel seems a bit cleaner (if that can be said about using goto), so I\ntweaked the patch to do that instead.\n\n> In 0003 I add the mentioned backfilling of empty ranges at the end of\n> the table. I added it for both normal and parallel index builds, as\n> normal builds apparently also didn't yet have this yet.\n> \n\nRight. I was thinking about doing that to, but you beat me to it. I\ndon't want to bury this in the main patch adding parallel builds, it's\nnot really related to parallel CREATE INDEX. And it'd be weird to have\nthis for parallel builds first, so I rebased it as 0001.\n\nAs for the backfilling, I think we need to simplify the code a bit. We\nhave three places doing essentially the same thing (one for serial\nbuilds, two for parallel builds). That's unnecessarily verbose, and\nmakes it harder to understand the code. But more importantly, the three\nplaces are not doing exactly the same - some increment the current range\nbefore, some do it at the end of the loop, etc. I got confused by this\nmultiple times.\n\nSo 0004 simplifies this - the backfilling is done by a function called\nfrom all the places. The main complexity is in ensuring all three places\nhave the same concept of how to specify the range (of ranges) to fill.\n\nNote: The serial might have two places too, but the main loop in\nbrinbuildCallback() does it range by range. It's a bit less efficient as\nit can't use the pre-built empty tuple easily, but that's fine IMO.\n\n\nskipping the last page range?\n-----------------------------\n\nI noticed you explicitly skipped backfilling empty tuple for the last\npage range. Can you explain? I suspect the idea was that the user\nactivity would trigger building the tuple once that page range is\nfilled, but we don't really know if the table receives any changes. It\nmight easily be just a static table, in which case the last range would\nremain unsummarized. If this is the right thing to do, the serial build\nshould do that too probably ...\n\nBut I don't think that's the correct thing to do - I think CREATE INDEX\nis expected to always build a complete index, so my version always\nbuilds an index for all table pages.\n\n\nBlockNumber overflows\n---------------------\n\nThe one thing that I'm not quite sure is correct is whether this handles\noverflows/underflows correctly. I mean, imagine you have a huge table\nthat's almost 0xFFFFFFFF blocks, pages_per_range is prime, and the last\nrange ends less than pages_per_range from 0xFFFFFFFF. Then this\n\n blkno += pages_per_range;\n\ncan overflow, and might start inserting index tuples again (so we'd end\nup with a duplicate).\n\nI do think the current patch does this correctly, but AFAICS this is a\npre-existing issue ...\n\nAnyway, while working on this / stress-testing it, I realized there's a\nbug in how we allocate the emptyTuple. It's allocated lazily, but if can\neasily happen in the per-range context we introduced last week. It needs\nto be allocated in the context covering the whole index build.\n\nI think the best way to do that is per 0006, i.e. 
allocate it in the\nBrinBuildState, along with the appropriate memory context.\n\nObviously, all of this (0002-0006) should be squashed into a single\ncommit, I only keep it separate to make it clearer what changed.\n\n\nstress-testing script\n---------------------\n\nI'm also attaching the bash script I use to stress test this - it's just\na loop that creates somewhat random table (different number of rows,\ndistinct values, ...), maybe deletes some of it, creates an index\n(possibly partial), and then does various checks on it (checks number of\nranges, queries the table, etc.). It's somewhat primitive but it turned\nout to be very capable in triggering bugs in BlockNumber arithmetic,\nemptyTuple allocations, etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 3 Dec 2023 17:46:03 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On Sun, 3 Dec 2023 at 17:46, Tomas Vondra <[email protected]> wrote:\n> On 11/30/23 18:47, Matthias van de Meent wrote:\n> > ...\n> >\n> > I just ran some more tests in less favorable environments, and it\n> > looks like I hit a bug:\n> >\n> > % SET max_parallel_workers = 0;\n> > % CREATE INDEX ... USING brin (...);\n> > ERROR: cannot update tuples during a parallel operation\n> >\n> > Fix attached in 0002.\n>\n> Yeah, that's a bug, thanks for the fix. Yeah Just jumping to a \"cleanup\"\n> label seems a bit cleaner (if that can be said about using goto), so I\n> tweaked the patch to do that instead.\n\nGood point, I agree that's cleaner.\n\n> > In 0003 I add the mentioned backfilling of empty ranges at the end of\n> > the table. I added it for both normal and parallel index builds, as\n> > normal builds apparently also didn't yet have this yet.\n> >\n>\n> Right. I was thinking about doing that to, but you beat me to it. I\n> don't want to bury this in the main patch adding parallel builds, it's\n> not really related to parallel CREATE INDEX. And it'd be weird to have\n> this for parallel builds first, so I rebased it as 0001.\n\nOK.\n\n> As for the backfilling, I think we need to simplify the code a bit.\n>\n> So 0004 simplifies this - the backfilling is done by a function called\n> from all the places. The main complexity is in ensuring all three places\n> have the same concept of how to specify the range (of ranges) to fill.\n\nGood points, +1. However, the simplification in 0005 breaks that with\nan underflow:\n\n> @@ -1669,6 +1672,19 @@ initialize_brin_buildstate(Relation idxRel, BrinRevmap *revmap,\n> state->bs_worker_id = 0;\n> state->bs_spool = NULL;\n>\n> + /*\n> + * Calculate the start of the last page range. Page numbers are 0-based,\n> + * so to get the index of the last page we need to subtract one. Then the\n> + * integer division gives us the proper 0-based range index.\n> + */\n> + state->bs_maxRangeStart = ((tablePages - 1) / pagesPerRange) * pagesPerRange;\n\nWhen the table is empty, this will try to fill all potential ranges up\nto InvalidBlockNo's range, which is obviously invalid. It also breaks\nthe regression tests, as showin in CFBot.\n\n> skipping the last page range?\n> -----------------------------\n>\n> I noticed you explicitly skipped backfilling empty tuple for the last\n> page range. Can you explain? I suspect the idea was that the user\n> activity would trigger building the tuple once that page range is\n> filled, but we don't really know if the table receives any changes. It\n> might easily be just a static table, in which case the last range would\n> remain unsummarized. If this is the right thing to do, the serial build\n> should do that too probably ...\n>\n> But I don't think that's the correct thing to do - I think CREATE INDEX\n> is expected to always build a complete index, so my version always\n> builds an index for all table pages.\n\nHmm. My idea here is to create an index that is closer to what you get\nwhen you hit the insertion path with aminsert. This isn't 1:1 how the\nindex builds ranges during (re)index when there is data for that\nrange, but I thought it to be a close enough analog. Either way, I\ndon't mind it adding an empty range for the last range if that's\nconsidered useful.\n\n> BlockNumber overflows\n> ---------------------\n>\n> The one thing that I'm not quite sure is correct is whether this handles\n> overflows/underflows correctly. 
I mean, imagine you have a huge table\n> that's almost 0xFFFFFFFF blocks, pages_per_range is prime, and the last\n> range ends less than pages_per_range from 0xFFFFFFFF. Then this\n>\n> blkno += pages_per_range;\n>\n> can overflow, and might start inserting index tuples again (so we'd end\n> up with a duplicate).\n>\n> I do think the current patch does this correctly, but AFAICS this is a\n> pre-existing issue ...\n\nYes, I know I've flagged this at least once before. IIRC, the response\nback then was that it's a very unlikely issue, as you'd have to extend\nthe relation to at least the first block of the last range, which\nwould currently be InvalidBlockNo - 131072 + 1, or just shy of 32TB of\ndata at 8kB BLCKSZ. That's not exactly a common use case, and BRIN\nrange ID wraparound is likely the least of your worries at that point.\n\n> Anyway, while working on this / stress-testing it, I realized there's a\n> bug in how we allocate the emptyTuple. It's allocated lazily, but if can\n> easily happen in the per-range context we introduced last week. It needs\n> to be allocated in the context covering the whole index build.\n\nYeah, I hadn't tested with (very) sparse datasets yet.\n\n> I think the best way to do that is per 0006, i.e. allocate it in the\n> BrinBuildState, along with the appropriate memory context.\n\nThat fix looks fine to me.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 4 Dec 2023 16:00:35 +0100",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "\n\nOn 12/4/23 16:00, Matthias van de Meent wrote:\n> On Sun, 3 Dec 2023 at 17:46, Tomas Vondra <[email protected]> wrote:\n>> On 11/30/23 18:47, Matthias van de Meent wrote:\n>>> ...\n>>>\n>>> I just ran some more tests in less favorable environments, and it\n>>> looks like I hit a bug:\n>>>\n>>> % SET max_parallel_workers = 0;\n>>> % CREATE INDEX ... USING brin (...);\n>>> ERROR: cannot update tuples during a parallel operation\n>>>\n>>> Fix attached in 0002.\n>>\n>> Yeah, that's a bug, thanks for the fix. Yeah Just jumping to a \"cleanup\"\n>> label seems a bit cleaner (if that can be said about using goto), so I\n>> tweaked the patch to do that instead.\n> \n> Good point, I agree that's cleaner.\n> \n>>> In 0003 I add the mentioned backfilling of empty ranges at the end of\n>>> the table. I added it for both normal and parallel index builds, as\n>>> normal builds apparently also didn't yet have this yet.\n>>>\n>>\n>> Right. I was thinking about doing that to, but you beat me to it. I\n>> don't want to bury this in the main patch adding parallel builds, it's\n>> not really related to parallel CREATE INDEX. And it'd be weird to have\n>> this for parallel builds first, so I rebased it as 0001.\n> \n> OK.\n> \n>> As for the backfilling, I think we need to simplify the code a bit.\n>>\n>> So 0004 simplifies this - the backfilling is done by a function called\n>> from all the places. The main complexity is in ensuring all three places\n>> have the same concept of how to specify the range (of ranges) to fill.\n> \n> Good points, +1. However, the simplification in 0005 breaks that with\n> an underflow:\n> \n>> @@ -1669,6 +1672,19 @@ initialize_brin_buildstate(Relation idxRel, BrinRevmap *revmap,\n>> state->bs_worker_id = 0;\n>> state->bs_spool = NULL;\n>>\n>> + /*\n>> + * Calculate the start of the last page range. Page numbers are 0-based,\n>> + * so to get the index of the last page we need to subtract one. Then the\n>> + * integer division gives us the proper 0-based range index.\n>> + */\n>> + state->bs_maxRangeStart = ((tablePages - 1) / pagesPerRange) * pagesPerRange;\n> \n> When the table is empty, this will try to fill all potential ranges up\n> to InvalidBlockNo's range, which is obviously invalid. It also breaks\n> the regression tests, as showin in CFBot.\n> \n\nWhoooops! You're right, ofc. If it's empty, we should use 0 instead.\nThat's what we do now anyway, BRIN will have the first range even for\nempty tables.\n\n>> skipping the last page range?\n>> -----------------------------\n>>\n>> I noticed you explicitly skipped backfilling empty tuple for the last\n>> page range. Can you explain? I suspect the idea was that the user\n>> activity would trigger building the tuple once that page range is\n>> filled, but we don't really know if the table receives any changes. It\n>> might easily be just a static table, in which case the last range would\n>> remain unsummarized. If this is the right thing to do, the serial build\n>> should do that too probably ...\n>>\n>> But I don't think that's the correct thing to do - I think CREATE INDEX\n>> is expected to always build a complete index, so my version always\n>> builds an index for all table pages.\n> \n> Hmm. My idea here is to create an index that is closer to what you get\n> when you hit the insertion path with aminsert. This isn't 1:1 how the\n> index builds ranges during (re)index when there is data for that\n> range, but I thought it to be a close enough analog. 
Either way, I\n> don't mind it adding an empty range for the last range if that's\n> considered useful.\n> \n\nI understand, but I'm not sure if keeping this consistency with aminsert\nhas any material benefit. It's not like we do that now, I think (for\nempty tables we already build the first index range).\n\n>> BlockNumber overflows\n>> ---------------------\n>>\n>> The one thing that I'm not quite sure is correct is whether this handles\n>> overflows/underflows correctly. I mean, imagine you have a huge table\n>> that's almost 0xFFFFFFFF blocks, pages_per_range is prime, and the last\n>> range ends less than pages_per_range from 0xFFFFFFFF. Then this\n>>\n>> blkno += pages_per_range;\n>>\n>> can overflow, and might start inserting index tuples again (so we'd end\n>> up with a duplicate).\n>>\n>> I do think the current patch does this correctly, but AFAICS this is a\n>> pre-existing issue ...\n> \n> Yes, I know I've flagged this at least once before. IIRC, the response\n> back then was that it's a very unlikely issue, as you'd have to extend\n> the relation to at least the first block of the last range, which\n> would currently be InvalidBlockNo - 131072 + 1, or just shy of 32TB of\n> data at 8kB BLCKSZ. That's not exactly a common use case, and BRIN\n> range ID wraparound is likely the least of your worries at that point.\n> \n\nProbably true, but it seems somewhat careless and untidy ...\n\n>> Anyway, while working on this / stress-testing it, I realized there's a\n>> bug in how we allocate the emptyTuple. It's allocated lazily, but if can\n>> easily happen in the per-range context we introduced last week. It needs\n>> to be allocated in the context covering the whole index build.\n> \n> Yeah, I hadn't tested with (very) sparse datasets yet.\n> \n\nI haven't actually checked what the failing cases look like, but I don't\nthink it needs to be particularly sparse. AFAIK it's just that the\nscript deletes a chunk of the data somewhere in the table and/or it also\ncreates a partial index.\n\n>> I think the best way to do that is per 0006, i.e. allocate it in the\n>> BrinBuildState, along with the appropriate memory context.\n> \n> That fix looks fine to me.\n> \n\nThanks!\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 4 Dec 2023 16:33:09 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
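A minimal sketch of the guarded calculation implied by the exchange above, reusing the variable and field names from the quoted hunk (an illustration of the "use 0 for an empty table" fix, not necessarily the exact committed wording):

    /*
     * Calculate the start of the last page range. Page numbers are 0-based,
     * so the last page is (tablePages - 1) -- but an empty table has no
     * pages at all, and the subtraction would wrap around BlockNumber.
     * Fall back to range 0 in that case, which is what serial builds
     * already produce for empty tables.
     */
    if (tablePages == 0)
        state->bs_maxRangeStart = 0;
    else
        state->bs_maxRangeStart = ((tablePages - 1) / pagesPerRange) * pagesPerRange;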
{
"msg_contents": "Hi,\n\nI've pushed the first two parts (backfill of empty ranges for serial\nbuilds, allowing parallelism) after a bit more cleanup, adding a simple\npageinspect test to 0001, improving comments and some minor adjustments.\n\nI ended up removing the protections against BlockNumber overflows, and\nmoved them into a separate WIP patch. I still think we should probably\nreconsider the position that we don't need to worry about issues so\nclose to the 32TB boundary, but it seemed rather weird to fix only the\nnew bits and leave the existing issues in place.\n\nI'm attaching that as a WIP patch, but I don't know if/when I'll get\nback to this.\n\n\nThanks for the reviews/reworks/ideas!\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 8 Dec 2023 18:28:09 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "Hi,\n\nWhile cleaning up some unnecessary bits of the code and slightly\ninaccurate comments, I ran into a failure when the parallel scan (used\nby the parallel build) happened to be a synchronized scan. When the scan\ndid not start on page 0, the parallel callback failed to correctly\nhandle tuples after wrapping around to the start of the table.\n\nAFAICS the extensive testing I did during development did not detect\nthis because strictly speaking the index was \"correct\" (as in not\nreturning incorrect results in queries), just less efficient (missing\nsome ranges, and some ranges being \"wider\" than necessary). Or perhaps\nthe tests happened to not trigger synchronized scans.\n\nShould be fixed by 1ccab5038eaf261f. It took me ages to realize what the\nproblem is, and I initially suspected there's some missing coordination\nbetween the workers/leader, or something.\n\nSo I started comparing the code to btree, which is where it originated,\nand I realized there's indeed one difference - the BRIN code only does\nhalf the work with the workersdonecv variable. The workers do correctly\nupdate the count and notify the leader, but the leader never waits for\nthe count to be 0. That is, there's nothing like _bt_parallel_heapscan.\n\nI wonder whether this actually is a problem, considering the differences\nbetween the flow in BRIN and BTREE. In particular, the \"leader\" does the\nwork in _brin_end_parallel() after WaitForParallelWorkersToFinish(). So\nit's not like there might be a worker still processing data, I think.\n\nBut now that I think about it, maybe it's not such a great idea to do\nthis kind of work in _brin_end_parallel(). Maybe it should do just stuff\nrelated to termination of workers etc. and the merging of results should\nhappen elsewhere - earlier in brinbuild()? Then it'd make sense to have\nsomething like _bt_parallel_heapscan ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 30 Dec 2023 23:42:24 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
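For comparison, the btree-style wait referred to above (_bt_parallel_heapscan) boils down to roughly the following; the BrinShared fields used here (mutex, nparticipantsdone, workersdonecv) and the wait event are assumptions modeled on the btree code, not a quote of the actual BRIN patch:

    /*
     * Wait until all participants have finished their portion of the
     * parallel scan, the way _bt_parallel_heapscan does for btree builds.
     */
    ConditionVariablePrepareToSleep(&brinshared->workersdonecv);
    for (;;)
    {
        bool        alldone;

        SpinLockAcquire(&brinshared->mutex);
        alldone = (brinshared->nparticipantsdone == nparticipants);
        SpinLockRelease(&brinshared->mutex);

        if (alldone)
            break;

        ConditionVariableSleep(&brinshared->workersdonecv,
                               WAIT_EVENT_PARALLEL_CREATE_INDEX_SCAN);
    }
    ConditionVariableCancelSleep();

With the merging done after WaitForParallelWorkersToFinish() this wait may be redundant, which is exactly the question raised in the message above.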
{
"msg_contents": "Hi,\n\nWhile preparing a differential code coverage report between 16 and HEAD, one\nthing that stands out is the parallel brin build code. Neither on\ncoverage.postgresql.org nor locally is that code reached during our tests.\n\nhttps://coverage.postgresql.org/src/backend/access/brin/brin.c.gcov.html#2333\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 13 Apr 2024 01:36:35 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On 4/13/24 10:36, Andres Freund wrote:\n> Hi,\n> \n> While preparing a differential code coverage report between 16 and HEAD, one\n> thing that stands out is the parallel brin build code. Neither on\n> coverage.postgresql.org nor locally is that code reached during our tests.\n>\n\nThanks for pointing this out, it's definitely something that I need to\nimprove (admittedly, should have been part of the patch). I'll also look\ninto eliminating the difference between BTREE and BRIN parallel builds,\nmentioned in my last message in this thread.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 13 Apr 2024 11:19:21 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On 4/13/24 11:19, Tomas Vondra wrote:\n> On 4/13/24 10:36, Andres Freund wrote:\n>> Hi,\n>>\n>> While preparing a differential code coverage report between 16 and HEAD, one\n>> thing that stands out is the parallel brin build code. Neither on\n>> coverage.postgresql.org nor locally is that code reached during our tests.\n>>\n> \n> Thanks for pointing this out, it's definitely something that I need to\n> improve (admittedly, should have been part of the patch). I'll also look\n> into eliminating the difference between BTREE and BRIN parallel builds,\n> mentioned in my last message in this thread.\n> \n\nHere's a couple patches adding a test for the parallel CREATE INDEX with\nBRIN. The actual test is 0003/0004 - I added the test to pageinspect,\nbecause that allows cross-checking the index to one built without\nparallelism, which I think is better than just doing CREATE INDEX\nwithout properly testing it produces correct results.\n\nIt's not entirely trivial because for some opclasses (e.g. minmax-multi)\nthe results depend on the order in which values are added, and order in\nwhich summaries from different workers are merged.\n\nFunnily enough, while adding the test, I ran into two pre-existing bugs.\nOne is that brin_bloom_union forgot to update the number of bits set in\nthe bitmap, another one is that 6bcda4a721 changes PG_DETOAST_DATUM to\nthe _PACKED version, which however does the wrong thing. Both of which\nare mostly harmless - it only affects the output function, which is\nunused outside pageinspect. No impact on query correctness etc.\n\nThe test needs a bit more work to make sure it works on 32-bit machines\netc. which I think may affect available space on a page, which in turn\nmight affect the minmax-multi summaries. But I'll take care this early\nnext week.\n\n\nFunnily\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 13 Apr 2024 23:04:33 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "I've pushed this, including backpatching the two fixes. I've reduced the\namount of data needed by the test, and made sure it works on 32-bits too\n(I was a bit worried it might be sensitive to that, but that seems not\nto be the case).\n\nThere's still the question of maybe removing the differences between the\nBTREE and BRIN code for parallel builds, I mentioned in [1]. That's more\nof a cosmetic issue, but I'll add it as an open item for myself.\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/3733d042-71e1-6ae6-5fac-00c12db62db6%40enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 14 Apr 2024 19:09:26 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "Hello Tomas,\n\n14.04.2024 20:09, Tomas Vondra wrote:\n> I've pushed this, including backpatching the two fixes. I've reduced the\n> amount of data needed by the test, and made sure it works on 32-bits too\n> (I was a bit worried it might be sensitive to that, but that seems not\n> to be the case).\n\nI've discovered that that test addition brings some instability to the test.\nWith the following pageinspect/Makefile modification:\n-REGRESS = page btree brin gin gist hash checksum oldextversions\n+REGRESS = page btree brin $(shell printf 'brin %.0s' `seq 99`) gin gist hash checksum oldextversions\n\necho \"autovacuum_naptime = 1\" > /tmp/temp.config\nTEMP_CONFIG=/tmp/temp.config make -s check -C contrib/pageinspect\nfails for me as below:\n...\nok 17 - brin 127 ms\nnot ok 18 - brin 140 ms\nok 19 - brin 125 ms\n...\n# 4 of 107 tests failed.\n\nThe following change:\n-CREATE TABLE brin_parallel_test (a int, b text, c bigint) WITH (fillfactor=40);\n+CREATE TEMP TABLE brin_parallel_test (a int, b text, c bigint) WITH (fillfactor=40);\n(similar to e2933a6e1) makes the test pass reliably for me.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Mon, 15 Apr 2024 11:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "\n\nOn 4/15/24 08:00, Alexander LAW wrote:\n> Hello Tomas,\n> \n> 14.04.2024 20:09, Tomas Vondra wrote:\n>> I've pushed this, including backpatching the two fixes. I've reduced the\n>> amount of data needed by the test, and made sure it works on 32-bits too\n>> (I was a bit worried it might be sensitive to that, but that seems not\n>> to be the case).\n> \n> I've discovered that that test addition brings some instability to the\n> test.\n> With the following pageinspect/Makefile modification:\n> -REGRESS = page btree brin gin gist hash checksum oldextversions\n> +REGRESS = page btree brin $(shell printf 'brin %.0s' `seq 99`) gin gist\n> hash checksum oldextversions\n> \n> echo \"autovacuum_naptime = 1\" > /tmp/temp.config\n> TEMP_CONFIG=/tmp/temp.config make -s check -C contrib/pageinspect\n> fails for me as below:\n> ...\n> ok 17 - brin 127 ms\n> not ok 18 - brin 140 ms\n> ok 19 - brin 125 ms\n> ...\n> # 4 of 107 tests failed.\n> \n> The following change\n> -CREATE TABLE brin_parallel_test (a int, b text, c bigint) WITH\n> (fillfactor=40);\n> +CREATE TEMP TABLE brin_parallel_test (a int, b text, c bigint) WITH\n> (fillfactor=40);\n> (similar to e2933a6e1) makes the test pass reliably for me.\n> \n\nThanks! This reproduces the issue for me.\n\nI believe this happens because the test does \"DELETE + VACUUM\" to\ngenerate \"gaps\" in the table, to get empty ranges in the BRIN. I guess\nwhat's happening is that something (autovacuum or likely something else)\nblocks the explicit VACUUM from cleaning some of the pages with deleted\ntuples, but then the cleanup happens shortly after between building the\nthe serial/parallel indexes. That would explain the differences reported\nby the regression test.\n\nWhen I thought about this while writing the test, my reasoning was that\neven if the explicit vacuum occasionally fails to clean something, it\nshould affect all the indexes equally. Which is why I wrote the test to\ncompare the results using EXCEPT, not checking the exact output.\n\nI'm not a huge fan of temporary tables in regression tests, because it\ndisappears at the end, making it impossible to inspect the data after a\nfailure. But the only other option I could think of is disabling\nautovacuum on the table, but that does not seem to prevent the failures.\n\nI'll try a bit more to make this work without the temp table.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 15 Apr 2024 10:18:46 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
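The EXCEPT-based cross-check described above can be sketched roughly like this; relation names and settings are illustrative, and the comparison is only expected to be stable for opclasses whose summaries do not depend on the order in which values are merged:

    -- build the same index twice, once serially, once in parallel
    SET max_parallel_maintenance_workers = 0;
    CREATE INDEX brin_serial_idx ON brin_parallel_test USING brin (a) WITH (pages_per_range = 7);
    SET max_parallel_maintenance_workers = 4;
    SET min_parallel_table_scan_size = 0;
    CREATE INDEX brin_parallel_idx ON brin_parallel_test USING brin (a) WITH (pages_per_range = 7);

    -- the first regular page (after the metapage and revmap) should contain
    -- exactly the same summaries in both indexes, i.e. an empty result here
    SELECT * FROM brin_page_items(get_raw_page('brin_parallel_idx', 2), 'brin_parallel_idx')
    EXCEPT
    SELECT * FROM brin_page_items(get_raw_page('brin_serial_idx', 2), 'brin_serial_idx');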
{
"msg_contents": "On 4/15/24 10:18, Tomas Vondra wrote:\n> ...\n>\n> I'll try a bit more to make this work without the temp table.\n> \n\nConsidering the earlier discussion in e2933a6e1, I think making the\ntable TEMP is the best fix, so I'll do that. Thanks for remembering that\nchange, Alexander!\n\nAttached is the cleanup I thought about doing earlier in this patch [1]\nto make the code more like btree. The diff might make it seem like a big\nchange, but it really just moves the merge code into a separate function\nand makes it use using the conditional variable. I still believe the old\ncode is correct, but this seems like an improvement so plan to push this\nsoon and resolve the open item.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 15 Apr 2024 20:35:20 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On 4/15/24 20:35, Tomas Vondra wrote:\n> On 4/15/24 10:18, Tomas Vondra wrote:\n>> ...\n>>\n>> I'll try a bit more to make this work without the temp table.\n>>\n> \n> Considering the earlier discussion in e2933a6e1, I think making the\n> table TEMP is the best fix, so I'll do that. Thanks for remembering that\n> change, Alexander!\n> \n\nD'oh! I pushed this fix to stabilize the test earlier today, but I just\nrealized it unfortunately makes the test useless. The idea of the test\nwas to build BRIN indexes with/without parallelism, and check that the\nindexes are exactly the same.\n\nThe instability comes from deletes, which I added to get \"empty\" ranges\nin the table, which may not be cleaned up in time for the CREATE INDEX\ncommands, depending on what else is happening. A TEMPORARY table does\nnot have this issue (as observed in e2933a6e1), but there's the minor\nproblem that plan_create_index_workers() does this:\n\n /*\n * Determine if it's safe to proceed.\n *\n * Currently, parallel workers can't access the leader's temporary\n * tables. Furthermore, any index predicate or index expressions must\n * be parallel safe.\n */\n if (heap->rd_rel->relpersistence == RELPERSISTENCE_TEMP ||\n !is_parallel_safe(root, (Node *) RelationGetIndexExpressions(index)) ||\n !is_parallel_safe(root, (Node *) RelationGetIndexPredicate(index)))\n {\n parallel_workers = 0;\n goto done;\n }\n\nThat is, no parallel index builds on temporary tables. Which means the\ntest is not actually testing anything :-( Much more stable, but not very\nuseful for finding issues.\n\nI think the best way to stabilize the test is to just not delete the\nrows. That means we won't have any \"empty\" ranges (runs of pages without\nany tuples).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 16 Apr 2024 22:33:21 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On 4/16/24 22:33, Tomas Vondra wrote:\n> On 4/15/24 20:35, Tomas Vondra wrote:\n>> On 4/15/24 10:18, Tomas Vondra wrote:\n>\n> ...\n> \n> That is, no parallel index builds on temporary tables. Which means the\n> test is not actually testing anything :-( Much more stable, but not very\n> useful for finding issues.\n> \n> I think the best way to stabilize the test is to just not delete the\n> rows. That means we won't have any \"empty\" ranges (runs of pages without\n> any tuples).\n> \n\nI just pushed a revert and a patch to stabilize the test in a different\nway - Matthias mentioned to me off-list that DELETE is not the only way\nto generate empty ranges in a BRIN index, because partial indexes have\nthe same effect. After playing with that a bit, that seems to work fine\n(works with parallel builds, not affected by cleanup), so done that way.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 17 Apr 2024 16:28:08 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
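To illustrate the partial-index trick mentioned above: page ranges whose rows all fail the index predicate get no entries of their own, so with the empty-range backfilling discussed earlier in the thread they end up as empty ranges without any DELETE/VACUUM. A simplified sketch (not the exact statements from the committed test):

    CREATE TABLE brin_parallel_test (a int) WITH (fillfactor = 25);
    INSERT INTO brin_parallel_test SELECT i FROM generate_series(1, 20000) s(i);

    -- only the first and last chunks of the table match the predicate, so
    -- the page ranges in the middle are stored as empty ranges in the index
    CREATE INDEX ON brin_parallel_test USING brin (a)
      WITH (pages_per_range = 3)
      WHERE a < 1000 OR a > 19000;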
{
"msg_contents": "On 4/15/24 20:35, Tomas Vondra wrote:\n> ...\n>\n> Attached is the cleanup I thought about doing earlier in this patch [1]\n> to make the code more like btree. The diff might make it seem like a big\n> change, but it really just moves the merge code into a separate function\n> and makes it use using the conditional variable. I still believe the old\n> code is correct, but this seems like an improvement so plan to push this\n> soon and resolve the open item.\n>\n\nI've now pushed this cleanup patch, after rewording the commit message a\nlittle bit, etc. I believe this resolves the open item tracking this, so\nI've moved it to the \"resolved\" part.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 17 Apr 2024 18:37:07 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On 13.04.24 23:04, Tomas Vondra wrote:\n>>> While preparing a differential code coverage report between 16 and HEAD, one\n>>> thing that stands out is the parallel brin build code. Neither on\n>>> coverage.postgresql.org nor locally is that code reached during our tests.\n>>>\n>>\n>> Thanks for pointing this out, it's definitely something that I need to\n>> improve (admittedly, should have been part of the patch). I'll also look\n>> into eliminating the difference between BTREE and BRIN parallel builds,\n>> mentioned in my last message in this thread.\n>>\n> \n> Here's a couple patches adding a test for the parallel CREATE INDEX with\n> BRIN. The actual test is 0003/0004 - I added the test to pageinspect,\n> because that allows cross-checking the index to one built without\n> parallelism, which I think is better than just doing CREATE INDEX\n> without properly testing it produces correct results.\n\nThese pageinspect tests added a new use of the md5() function. We got\nrid of those in the tests for PG17. You could write the test case with\nsomething like\n\n SELECT (CASE WHEN (mod(i,231) = 0) OR (i BETWEEN 3500 AND 4000) THEN NULL ELSE i END),\n- (CASE WHEN (mod(i,233) = 0) OR (i BETWEEN 3750 AND 4250) THEN NULL ELSE md5(i::text) END),\n+ (CASE WHEN (mod(i,233) = 0) OR (i BETWEEN 3750 AND 4250) THEN NULL ELSE encode(sha256(i::text::bytea), 'hex') END),\n (CASE WHEN (mod(i,233) = 0) OR (i BETWEEN 3850 AND 4500) THEN NULL ELSE (i/100) + mod(i,8) END)\n\nBut this changes the test output slightly and I'm not sure if this gives\nyou the data distribution that you need for you test. Could your check\nthis please?\n\n\n\n",
"msg_date": "Thu, 15 Aug 2024 15:48:19 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On 8/15/24 15:48, Peter Eisentraut wrote:\n> On 13.04.24 23:04, Tomas Vondra wrote:\n>>>> While preparing a differential code coverage report between 16 and\n>>>> HEAD, one\n>>>> thing that stands out is the parallel brin build code. Neither on\n>>>> coverage.postgresql.org nor locally is that code reached during our\n>>>> tests.\n>>>>\n>>>\n>>> Thanks for pointing this out, it's definitely something that I need to\n>>> improve (admittedly, should have been part of the patch). I'll also look\n>>> into eliminating the difference between BTREE and BRIN parallel builds,\n>>> mentioned in my last message in this thread.\n>>>\n>>\n>> Here's a couple patches adding a test for the parallel CREATE INDEX with\n>> BRIN. The actual test is 0003/0004 - I added the test to pageinspect,\n>> because that allows cross-checking the index to one built without\n>> parallelism, which I think is better than just doing CREATE INDEX\n>> without properly testing it produces correct results.\n> \n> These pageinspect tests added a new use of the md5() function. We got\n> rid of those in the tests for PG17. You could write the test case with\n> something like\n> \n> SELECT (CASE WHEN (mod(i,231) = 0) OR (i BETWEEN 3500 AND 4000) THEN\n> NULL ELSE i END),\n> - (CASE WHEN (mod(i,233) = 0) OR (i BETWEEN 3750 AND 4250) THEN\n> NULL ELSE md5(i::text) END),\n> + (CASE WHEN (mod(i,233) = 0) OR (i BETWEEN 3750 AND 4250) THEN\n> NULL ELSE encode(sha256(i::text::bytea), 'hex') END),\n> (CASE WHEN (mod(i,233) = 0) OR (i BETWEEN 3850 AND 4500) THEN\n> NULL ELSE (i/100) + mod(i,8) END)\n> \n> But this changes the test output slightly and I'm not sure if this gives\n> you the data distribution that you need for you test. Could your check\n> this please?\n> \n\nI think this is fine. The output only changes because sha256 produces\nlonger values than md5, so that the summaries are longer the index gets\na page longer. AFAIK that has no impact on the test.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Fri, 16 Aug 2024 11:22:45 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
},
{
"msg_contents": "On 16.08.24 11:22, Tomas Vondra wrote:\n>> These pageinspect tests added a new use of the md5() function. We got\n>> rid of those in the tests for PG17. You could write the test case with\n>> something like\n>>\n>> SELECT (CASE WHEN (mod(i,231) = 0) OR (i BETWEEN 3500 AND 4000) THEN\n>> NULL ELSE i END),\n>> - (CASE WHEN (mod(i,233) = 0) OR (i BETWEEN 3750 AND 4250) THEN\n>> NULL ELSE md5(i::text) END),\n>> + (CASE WHEN (mod(i,233) = 0) OR (i BETWEEN 3750 AND 4250) THEN\n>> NULL ELSE encode(sha256(i::text::bytea), 'hex') END),\n>> (CASE WHEN (mod(i,233) = 0) OR (i BETWEEN 3850 AND 4500) THEN\n>> NULL ELSE (i/100) + mod(i,8) END)\n>>\n>> But this changes the test output slightly and I'm not sure if this gives\n>> you the data distribution that you need for you test. Could your check\n>> this please?\n>>\n> \n> I think this is fine. The output only changes because sha256 produces\n> longer values than md5, so that the summaries are longer the index gets\n> a page longer. AFAIK that has no impact on the test.\n\nOk, I have committed that. Thanks for checking.\n\n\n\n",
"msg_date": "Fri, 16 Aug 2024 17:32:17 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel CREATE INDEX for BRIN indexes"
}
] |
[
{
"msg_contents": "Hi,\n\nI am not a native English speaker, but shouldn't there be a \"to\" before \"detect\"?\n\nThese two additions make it possible detect a concurrent page split\n\nRegards\nDaniel\n\n",
"msg_date": "Thu, 8 Jun 2023 14:11:18 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Typo in src/backend/access/nbtree/README?"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 7:11 AM Daniel Westermann (DWE)\n<[email protected]> wrote:\n>\n> ... shouldn't there be a \"to\" before \"detect\"?\n>\n> These two additions make it possible detect a concurrent page split\n\nAgreed. Attached is a small patch that fixes this.\n\nThanks for the report!\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Thu, 8 Jun 2023 19:36:41 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Typo in src/backend/access/nbtree/README?"
},
{
"msg_contents": "On Fri, Jun 9, 2023 at 10:37 AM Gurjeet Singh <[email protected]> wrote:\n\n> On Thu, Jun 8, 2023 at 7:11 AM Daniel Westermann (DWE)\n> <[email protected]> wrote:\n> >\n> > ... shouldn't there be a \"to\" before \"detect\"?\n> >\n> > These two additions make it possible detect a concurrent page split\n>\n> Agreed. Attached is a small patch that fixes this.\n\n\n+1. A little nitpick: the new line seems overly long compared to\nadjacent lines, should we wrap it?\n\nThanks\nRichard\n\nOn Fri, Jun 9, 2023 at 10:37 AM Gurjeet Singh <[email protected]> wrote:On Thu, Jun 8, 2023 at 7:11 AM Daniel Westermann (DWE)\n<[email protected]> wrote:\n>\n> ... shouldn't there be a \"to\" before \"detect\"?\n>\n> These two additions make it possible detect a concurrent page split\n\nAgreed. Attached is a small patch that fixes this.+1. A little nitpick: the new line seems overly long compared toadjacent lines, should we wrap it?ThanksRichard",
"msg_date": "Fri, 9 Jun 2023 11:29:02 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Typo in src/backend/access/nbtree/README?"
},
{
"msg_contents": "On Fri, Jun 09, 2023 at 11:29:02AM +0800, Richard Guo wrote:\n> On Fri, Jun 9, 2023 at 10:37 AM Gurjeet Singh <[email protected]> wrote:\n> \n>> On Thu, Jun 8, 2023 at 7:11 AM Daniel Westermann (DWE)\n>> <[email protected]> wrote:\n>> >\n>> > ... shouldn't there be a \"to\" before \"detect\"?\n>> >\n>> > These two additions make it possible detect a concurrent page split\n>>\n>> Agreed. Attached is a small patch that fixes this.\n> \n> \n> +1. A little nitpick: the new line seems overly long compared to\n> adjacent lines, should we wrap it?\n\nCommitted, thanks.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 8 Jun 2023 21:24:22 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Typo in src/backend/access/nbtree/README?"
}
] |
[
{
"msg_contents": "Hi,\n\nAt pgcon unconference I presented a PoC patch adding prefetching for\nindexes, along with some benchmark results demonstrating the (pretty\nsignificant) benefits etc. The feedback was quite positive, so let me\nshare the current patch more widely.\n\n\nMotivation\n----------\n\nImagine we have a huge table (much larger than RAM), with an index, and\nthat we're doing a regular index scan (e.g. using a btree index). We\nfirst walk the index to the leaf page, read the item pointers from the\nleaf page and then start issuing fetches from the heap.\n\nThe index access is usually pretty cheap, because non-leaf pages are\nvery likely cached, so we may do perhaps I/O for the leaf. But the\nfetches from heap are likely very expensive - unless the page is\nclustered, we'll do a random I/O for each item pointer. Easily ~200 or\nmore I/O requests per leaf page. The problem is index scans do these\nrequests synchronously at the moment - we get the next TID, fetch the\nheap page, process the tuple, continue to the next TID etc.\n\nThat is slow and can't really leverage the bandwidth of modern storage,\nwhich require longer queues. This patch aims to improve this by async\nprefetching.\n\nWe already do prefetching for bitmap index scans, where the bitmap heap\nscan prefetches future pages based on effective_io_concurrency. I'm not\nsure why exactly was prefetching implemented only for bitmap scans, but\nI suspect the reasoning was that it only helps when there's many\nmatching tuples, and that's what bitmap index scans are for. So it was\nnot worth the implementation effort.\n\nBut there's three shortcomings in logic:\n\n1) It's not clear the thresholds for prefetching being beneficial and\nswitching to bitmap index scans are the same value. And as I'll\ndemonstrate later, the prefetching threshold is indeed much lower\n(perhaps a couple dozen matching tuples) on large tables.\n\n2) Our estimates / planning are not perfect, so we may easily pick an\nindex scan instead of a bitmap scan. It'd be nice to limit the damage a\nbit by still prefetching.\n\n3) There are queries that can't do a bitmap scan (at all, or because\nit's hopelessly inefficient). Consider queries that require ordering, or\nqueries by distance with GiST/SP-GiST index.\n\n\nImplementation\n--------------\n\nWhen I started looking at this, I only really thought about btree. If\nyou look at BTScanPosData, which is what the index scans use to\nrepresent the current leaf page, you'll notice it has \"items\", which is\nthe array of item pointers (TIDs) that we'll fetch from the heap. Which\nis exactly the thing we need.\n\nThe easiest thing would be to just do prefetching from the btree code.\nBut then I realized there's no particular reason why other index types\n(except for GIN, which only allows bitmap scans) couldn't do prefetching\ntoo. We could have a copy in each AM, of course, but that seems sloppy\nand also violation of layering. After all, bitmap heap scans do prefetch\nfrom the executor, so AM seems way too low level.\n\nSo I ended up moving most of the prefetching logic up into indexam.c,\nsee the index_prefetch() function. 
It can't be entirely separate,\nbecause each AM represents the current state in a different way (e.g.\nSpGistScanOpaque and BTScanOpaque are very different).\n\nSo what I did is introducing a IndexPrefetch struct, which is part of\nIndexScanDesc, maintaining all the info about prefetching for that\nparticular scan - current/maximum distance, progress, etc.\n\nIt also contains two AM-specific callbacks (get_range and get_block)\nwhich say valid range of indexes (into the internal array), and block\nnumber for a given index.\n\nThis mostly does the trick, although index_prefetch() is still called\nfrom the amgettuple() functions. That seems wrong, we should call it\nfrom indexam.c right aftter calling amgettuple.\n\n\nProblems / Open questions\n-------------------------\n\nThere's a couple issues I ran into, I'll try to list them in the order\nof importance (most serious ones first).\n\n1) pairing-heap in GiST / SP-GiST\n\nFor most AMs, the index state is pretty trivial - matching items from a\nsingle leaf page. Prefetching that is pretty trivial, even if the\ncurrent API is a bit cumbersome.\n\nDistance queries on GiST and SP-GiST are a problem, though, because\nthose do not just read the pointers into a simple array, as the distance\nordering requires passing stuff through a pairing-heap :-(\n\nI don't know how to best deal with that, especially not in the simple\nAPI. I don't think we can \"scan forward\" stuff from the pairing heap, so\nthe only idea I have is actually having two pairing-heaps. Or maybe\nusing the pairing heap for prefetching, but stashing the prefetched\npointers into an array and then returning stuff from it.\n\nIn the patch I simply prefetch items before we add them to the pairing\nheap, which is good enough for demonstrating the benefits.\n\n\n2) prefetching from executor\n\nAnother question is whether the prefetching shouldn't actually happen\neven higher - in the executor. That's what Andres suggested during the\nunconference, and it kinda makes sense. That's where we do prefetching\nfor bitmap heap scans, so why should this happen lower, right?\n\nI'm also not entirely sure the way this interfaces with the AM (through\nthe get_range / get_block callbaces) is very elegant. It did the trick,\nbut it seems a bit cumbersome. I wonder if someone has a better/nicer\nidea how to do this ...\n\n\n3) prefetch distance\n\nI think we can do various smart things about the prefetch distance.\n\nThe current code does about the same thing bitmap scans do - it starts\nwith distance 0 (no prefetching), and then simply ramps the distance up\nuntil the maximum value from get_tablespace_io_concurrency(). Which is\neither effective_io_concurrency, or per-tablespace value.\n\nI think we could be a bit smarter, and also consider e.g. the estimated\nnumber of matching rows (but we shouldn't be too strict, because it's\njust an estimate). We could also track some statistics for each scan and\nuse that during a rescans (think index scan in a nested loop).\n\nBut the patch doesn't do any of that now.\n\n\n4) per-leaf prefetching\n\nThe code is restricted only prefetches items from one leaf page. If the\nindex scan needs to scan multiple (many) leaf pages, we have to process\nthe first leaf page first before reading / prefetching the next one.\n\nI think this is acceptable limitation, certainly for v0. 
Prefetching\nacross multiple leaf pages seems way more complex (particularly for the\ncases using pairing heap), so let's leave this for the future.\n\n\n5) index-only scans\n\nI'm not sure what to do about index-only scans. On the one hand, the\npoint of IOS is not to read stuff from the heap at all, so why prefetch\nit. OTOH if there are many allvisible=false pages, we still have to\naccess that. And if that happens, this leads to the bizarre situation\nthat IOS is slower than regular index scan. But to address this, we'd\nhave to consider the visibility during prefetching.\n\n\nBenchmarks\n----------\n\n1) OLTP\n\nFor OLTP, this tested different queries with various index types, on\ndata sets constructed to have certain number of matching rows, forcing\ndifferent types of query plans (bitmap, index, seqscan).\n\nThe data sets have ~34GB, which is much more than available RAM (8GB).\n\nFor example for BTREE, we have a query like this:\n\n SELECT * FROM btree_test WHERE a = $v\n\nwith data matching 1, 10, 100, ..., 100000 rows for each $v. The results\nlook like this:\n\n rows bitmapscan master patched seqscan\n 1 19.8 20.4 18.8 31875.5\n 10 24.4 23.8 23.2 30642.4\n 100 27.7 40.0 26.3 31871.3\n 1000 45.8 178.0 45.4 30754.1\n 10000 171.8 1514.9 174.5 30743.3\n 100000 1799.0 15993.3 1777.4 30937.3\n\nThis says that the query takes ~31s with a seqscan, 1.8s with a bitmap\nscan and 16s index scan (on master). With the prefetching patch, it\ntakes about ~1.8s, i.e. about the same as the bitmap scan.\n\nI don't know where exactly would the plan switch from index scan to\nbitmap scan, but the table has ~100M rows, so all of this is tiny. I'd\nbet most of the cases would do plain index scan.\n\n\nFor a query with ordering:\n\n SELECT * FROM btree_test WHERE a >= $v ORDER BY a LIMIT $n\n\nthe results look a bit different:\n\n rows bitmapscan master patched seqscan\n 1 52703.9 19.5 19.5 31145.6\n 10 51208.1 22.7 24.7 30983.5\n 100 49038.6 39.0 26.3 32085.3\n 1000 53760.4 193.9 48.4 31479.4\n 10000 56898.4 1600.7 187.5 32064.5\n 100000 50975.2 15978.7 1848.9 31587.1\n\nThis is a good illustration of a query where bitmapscan is terrible\n(much worse than seqscan, in fact), and the patch is a massive\nimprovement over master (about an order of magnitude).\n\nOf course, if you only scan a couple rows, the benefits are much more\nmodest (say 40% for 100 rows, which is still significant).\n\nThe results for other index types (HASH, GiST, SP-GiST) follow roughly\nthe same pattern. See the attached PDF for more charts, and [1] for\ncomplete results.\n\n\nBenchmark / TPC-H\n-----------------\n\nI ran the 22 queries on 100GB data set, with parallel query either\ndisabled or enabled. And I measured timing (and speedup) for each query.\nThe speedup results look like this (see the attached PDF for details):\n\n query serial parallel\n 1 101% 99%\n 2 119% 100%\n 3 100% 99%\n 4 101% 100%\n 5 101% 100%\n 6 12% 99%\n 7 100% 100%\n 8 52% 67%\n 10 102% 101%\n 11 100% 72%\n 12 101% 100%\n 13 100% 101%\n 14 13% 100%\n 15 101% 100%\n 16 99% 99%\n 17 95% 101%\n 18 101% 106%\n 19 30% 40%\n 20 99% 100%\n 21 101% 100%\n 22 101% 107%\n\nThe percentage is (timing patched / master, so <100% means faster, >100%\nmeans slower).\n\nThe different queries are affected depending on the query plan - many\nqueries are close to 100%, which means \"no difference\". 
For the serial\ncase, there are about 4 queries that improved a lot (6, 8, 14, 19),\nwhile for the parallel case the benefits are somewhat less significant.\n\nMy explanation is that either (a) parallel case used a different plan\nwith fewer index scans or (b) the parallel query does more concurrent\nI/O simply by using parallel workers. Or maybe both.\n\nThere are a couple regressions too, I believe those are due to doing too\nmuch prefetching in some cases, and some of the heuristics mentioned\nearlier should eliminate most of this, I think.\n\n\nregards\n\n\n[1] https://github.com/tvondra/index-prefetch-tests\n[2] https://github.com/tvondra/postgres/tree/dev/index-prefetch\n\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 8 Jun 2023 17:40:12 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "index prefetching"
},
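To make the prefetching step described above a bit more concrete, here is a rough sketch of the ramp-up logic; the IndexPrefetch fields and the get_block callback follow the description in the message, but the exact names and bookkeeping are assumptions rather than the actual patch:

    /*
     * Prefetch heap pages for TIDs ahead of the current position, ramping
     * the distance up from 0 towards the tablespace's io_concurrency, the
     * same way bitmap heap scans grow their prefetch target.
     */
    static void
    index_prefetch(IndexScanDesc scan, IndexPrefetch *prefetch)
    {
        int     maxtarget;

        maxtarget = get_tablespace_io_concurrency(scan->heapRelation->rd_rel->reltablespace);

        /* grow the distance gradually, capped at the tablespace setting */
        if (prefetch->currentDistance < maxtarget)
            prefetch->currentDistance = Min(2 * prefetch->currentDistance + 1, maxtarget);

        while (prefetch->prefetchIndex < prefetch->currentIndex + prefetch->currentDistance &&
               prefetch->prefetchIndex <= prefetch->lastIndex)
        {
            BlockNumber blkno = prefetch->get_block(scan, prefetch->prefetchIndex);

            PrefetchBuffer(scan->heapRelation, MAIN_FORKNUM, blkno);
            prefetch->prefetchIndex++;
        }
    }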
{
"msg_contents": "On Thu, Jun 8, 2023 at 8:40 AM Tomas Vondra\n<[email protected]> wrote:\n> We already do prefetching for bitmap index scans, where the bitmap heap\n> scan prefetches future pages based on effective_io_concurrency. I'm not\n> sure why exactly was prefetching implemented only for bitmap scans, but\n> I suspect the reasoning was that it only helps when there's many\n> matching tuples, and that's what bitmap index scans are for. So it was\n> not worth the implementation effort.\n\nI have an educated guess as to why prefetching was limited to bitmap\nindex scans this whole time: it might have been due to issues with\nScalarArrayOpExpr quals.\n\nCommit 9e8da0f757 taught nbtree to deal with ScalarArrayOpExpr quals\n\"natively\". This meant that \"indexedcol op ANY(ARRAY[...])\" conditions\nwere supported by both index scans and index-only scans -- not just\nbitmap scans, which could handle ScalarArrayOpExpr quals even without\nnbtree directly understanding them. The commit was in late 2011,\nshortly after the introduction of index-only scans -- which seems to\nhave been the real motivation. And so it seems to me that support for\nScalarArrayOpExpr was built with bitmap scans and index-only scans in\nmind. Plain index scan ScalarArrayOpExpr quals do work, but support\nfor them seems kinda perfunctory to me (maybe you can think of a\nspecific counter-example where plain index scans really benefit from\nScalarArrayOpExpr, but that doesn't seem particularly relevant to the\noriginal motivation).\n\nScalarArrayOpExpr for plain index scans don't really make that much\nsense right now because there is no heap prefetching in the index scan\ncase, which is almost certainly going to be the major bottleneck\nthere. At the same time, adding useful prefetching for\nScalarArrayOpExpr execution more or less requires that you first\nimprove how nbtree executes ScalarArrayOpExpr quals in general. Bear\nin mind that ScalarArrayOpExpr execution (whether for bitmap index\nscans or index scans) is related to skip scan/MDAM techniques -- so\nthere are tricky dependencies that need to be considered together.\n\nRight now, nbtree ScalarArrayOpExpr execution must call _bt_first() to\ndescend the B-Tree for each array constant -- even though in principle\nwe could avoid all that work in cases that happen to have locality. In\nother words we'll often descend the tree multiple times and land on\nexactly the same leaf page again and again, without ever noticing that\nwe could have gotten away with only descending the tree once (it'd\nalso be possible to start the next \"descent\" one level up, not at the\nroot, intelligently reusing some of the work from an initial descent\n-- but you don't need anything so fancy to greatly improve matters\nhere).\n\nThis lack of smarts around how many times we call _bt_first() to\ndescend the index is merely a silly annoyance when it happens in\nbtgetbitmap(). We do at least sort and deduplicate the array up-front\n(inside _bt_sort_array_elements()), so there will be significant\nlocality of access each time we needlessly descend the tree.\nImportantly, there is no prefetching \"pipeline\" to mess up in the\nbitmap index scan case -- since that all happens later on. Not so for\nthe superficially similar (though actually rather different) plain\nindex scan case -- at least not once you add prefetching. If you're\nuselessly processing the same leaf page multiple times, then there is\nno way that heap prefetching can notice that it should be batching\nthings up. 
The context that would allow prefetching to work well isn't\nreally available right now. So the plain index scan case is kinda at a\ngratuitous disadvantage (with prefetching) relative to the bitmap\nindex scan case.\n\nQueries with (say) quals with many constants appearing in an \"IN()\"\nare both common and particularly likely to benefit from prefetching.\nI'm not suggesting that you need to address this to get to a\ncommittable patch. But you should definitely think about it now. I'm\nstrongly considering working on this problem for 17 anyway, so we may\nend up collaborating on these aspects of prefetching. Smarter\nScalarArrayOpExpr execution for index scans is likely to be quite\ncompelling if it enables heap prefetching.\n\n> But there's three shortcomings in logic:\n>\n> 1) It's not clear the thresholds for prefetching being beneficial and\n> switching to bitmap index scans are the same value. And as I'll\n> demonstrate later, the prefetching threshold is indeed much lower\n> (perhaps a couple dozen matching tuples) on large tables.\n\nAs I mentioned during the pgCon unconference session, I really like\nyour framing of the problem; it makes a lot of sense to directly\ncompare an index scan's execution against a very similar bitmap index\nscan execution -- there is an imaginary continuum between index scan\nand bitmap index scan. If the details of when and how we scan the\nindex are rather similar in each case, then there is really no reason\nwhy the performance shouldn't be fairly similar. I suspect that it\nwill be useful to ask the same question for various specific cases,\nthat you might not have thought about just yet. Things like\nScalarArrayOpExpr queries, where bitmap index scans might look like\nthey have a natural advantage due to an inherent need for random heap\naccess in the plain index scan case.\n\nIt's important to carefully distinguish between cases where plain\nindex scans really are at an inherent disadvantage relative to bitmap\nindex scans (because there really is no getting around the need to\naccess the same heap page many times with an index scan) versus cases\nthat merely *appear* that way. Implementation restrictions that only\nreally affect the plain index scan case (e.g., the lack of a\nreasonably sized prefetch buffer, or the ScalarArrayOpExpr thing)\nshould be accounted for when assessing the viability of index scan +\nprefetch over bitmap index scan + prefetch. This is very subtle, but\nimportant.\n\nThat's what I was mostly trying to get at when I talked about testing\nstrategy at the unconference session (this may have been unclear at\nthe time). It could be done in a way that helps you to think about the\nproblem from first principles. It could be really useful as a way of\navoiding confusing cases where plain index scan + prefetch does badly\ndue to implementation restrictions, versus cases where it's\n*inherently* the wrong strategy. And a testing strategy that starts\nwith very basic ideas about what I/O is truly necessary might help you\nto notice and fix regressions. The difference will never be perfectly\ncrisp, of course (isn't bitmap index scan basically just index scan\nwith a really huge prefetch buffer anyway?), but it still seems like a\nuseful direction to go in.\n\n> Implementation\n> --------------\n>\n> When I started looking at this, I only really thought about btree. 
If\n> you look at BTScanPosData, which is what the index scans use to\n> represent the current leaf page, you'll notice it has \"items\", which is\n> the array of item pointers (TIDs) that we'll fetch from the heap. Which\n> is exactly the thing we need.\n\n> So I ended up moving most of the prefetching logic up into indexam.c,\n> see the index_prefetch() function. It can't be entirely separate,\n> because each AM represents the current state in a different way (e.g.\n> SpGistScanOpaque and BTScanOpaque are very different).\n\nMaybe you were right to do that, but I'm not entirely sure.\n\nBear in mind that the ScalarArrayOpExpr case already looks like a\nsingle index scan whose qual involves an array to the executor, even\nthough nbtree more or less implements it as multiple index scans with\nplain constant quals (one per unique-ified array element). Index scans\nwhose results can be \"OR'd together\". Is that a modularity violation?\nAnd if so, why? As I've pointed out earlier in this email, we don't do\nvery much with that context right now -- but clearly we should.\n\nIn other words, maybe you're right to suspect that doing this in AMs\nlike nbtree is a modularity violation. OTOH, maybe it'll turn out that\nthat's exactly the right place to do it, because that's the only way\nto make the full context available in one place. I myself struggled\nwith this when I reviewed the skip scan patch. I was sure that Tom\nwouldn't like the way that the skip-scan patch doubles-down on adding\nmore intelligence/planning around how to execute queries with\nskippable leading columns. But, it turned out that he saw the merit in\nit, and basically accepted that general approach. Maybe this will turn\nout to be a little like that situation, where (counter to intuition)\nwhat you really need to do is add a new \"layering violation\".\nSometimes that's the only thing that'll allow the information to flow\nto the right place. It's tricky.\n\n> 4) per-leaf prefetching\n>\n> The code is restricted only prefetches items from one leaf page. If the\n> index scan needs to scan multiple (many) leaf pages, we have to process\n> the first leaf page first before reading / prefetching the next one.\n>\n> I think this is acceptable limitation, certainly for v0. Prefetching\n> across multiple leaf pages seems way more complex (particularly for the\n> cases using pairing heap), so let's leave this for the future.\n\nI tend to agree that this sort of thing doesn't need to happen in the\nfirst committed version. But FWIW nbtree could be taught to scan\nmultiple index pages and act as if it had just processed them as one\nsingle index page -- up to a point. 
This is at least possible with\nplain index scans that use MVCC snapshots (though not index-only\nscans), since we already drop the pin on the leaf page there anyway.\nAFAICT stops us from teaching nbtree to \"lie\" to the executor and tell\nit that we processed 1 leaf page, even though it was actually 5 leaf pages\n(maybe there would also have to be restrictions for the markpos stuff).\n\n> the results look a bit different:\n>\n> rows bitmapscan master patched seqscan\n> 1 52703.9 19.5 19.5 31145.6\n> 10 51208.1 22.7 24.7 30983.5\n> 100 49038.6 39.0 26.3 32085.3\n> 1000 53760.4 193.9 48.4 31479.4\n> 10000 56898.4 1600.7 187.5 32064.5\n> 100000 50975.2 15978.7 1848.9 31587.1\n>\n> This is a good illustration of a query where bitmapscan is terrible\n> (much worse than seqscan, in fact), and the patch is a massive\n> improvement over master (about an order of magnitude).\n>\n> Of course, if you only scan a couple rows, the benefits are much more\n> modest (say 40% for 100 rows, which is still significant).\n\nNice! And, it'll be nice to be able to use the kill_prior_tuple\noptimization in many more cases (possible by teaching the optimizer to\nfavor index scans over bitmap index scans more often).\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 8 Jun 2023 11:56:28 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
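As a concrete example of the ScalarArrayOpExpr case discussed above (reusing the btree_test table from the benchmarks earlier in the thread), nbtree currently performs one _bt_first() descent per deduplicated array element for quals like these, even when most of the elements land on the same leaf page:

    SELECT * FROM btree_test
     WHERE a IN (1000, 1001, 1002, 1003, 1005, 1008, 1013);

    -- the IN() list is transformed into the same ScalarArrayOpExpr qual as:
    SELECT * FROM btree_test
     WHERE a = ANY (ARRAY[1000, 1001, 1002, 1003, 1005, 1008, 1013]);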
{
"msg_contents": "On 6/8/23 20:56, Peter Geoghegan wrote:\n> On Thu, Jun 8, 2023 at 8:40 AM Tomas Vondra\n> <[email protected]> wrote:\n>> We already do prefetching for bitmap index scans, where the bitmap heap\n>> scan prefetches future pages based on effective_io_concurrency. I'm not\n>> sure why exactly was prefetching implemented only for bitmap scans, but\n>> I suspect the reasoning was that it only helps when there's many\n>> matching tuples, and that's what bitmap index scans are for. So it was\n>> not worth the implementation effort.\n> \n> I have an educated guess as to why prefetching was limited to bitmap\n> index scans this whole time: it might have been due to issues with\n> ScalarArrayOpExpr quals.\n> \n> Commit 9e8da0f757 taught nbtree to deal with ScalarArrayOpExpr quals\n> \"natively\". This meant that \"indexedcol op ANY(ARRAY[...])\" conditions\n> were supported by both index scans and index-only scans -- not just\n> bitmap scans, which could handle ScalarArrayOpExpr quals even without\n> nbtree directly understanding them. The commit was in late 2011,\n> shortly after the introduction of index-only scans -- which seems to\n> have been the real motivation. And so it seems to me that support for\n> ScalarArrayOpExpr was built with bitmap scans and index-only scans in\n> mind. Plain index scan ScalarArrayOpExpr quals do work, but support\n> for them seems kinda perfunctory to me (maybe you can think of a\n> specific counter-example where plain index scans really benefit from\n> ScalarArrayOpExpr, but that doesn't seem particularly relevant to the\n> original motivation).\n>\nI don't think SAOP is the reason. I did a bit of digging in the list\narchives, and found thread [1], which says:\n\n Regardless of what mechanism is used and who is responsible for\n doing it someone is going to have to figure out which blocks are\n specifically interesting to prefetch. Bitmap index scans happen\n to be the easiest since we've already built up a list of blocks\n we plan to read. Somehow that information has to be pushed to the\n storage manager to be acted upon.\n\n Normal index scans are an even more interesting case but I'm not\n sure how hard it would be to get that information. It may only be\n convenient to get the blocks from the last leaf page we looked at,\n for example.\n\nSo this suggests we simply started prefetching for the case where the\ninformation was readily available, and it'd be harder to do for index\nscans so that's it.\n\nThere's a couple more ~2008 threads mentioning prefetching, bitmap scans\nand even regular index scans (like [2]). None of them even mentions SAOP\nstuff at all.\n\n[1]\nhttps://www.postgresql.org/message-id/871wa17vxb.fsf%40oxford.xeocode.com\n\n[2]\nhttps://www.postgresql.org/message-id/87wsnnz046.fsf%40oxford.xeocode.com\n\n> ScalarArrayOpExpr for plain index scans don't really make that much\n> sense right now because there is no heap prefetching in the index scan\n> case, which is almost certainly going to be the major bottleneck\n> there. At the same time, adding useful prefetching for\n> ScalarArrayOpExpr execution more or less requires that you first\n> improve how nbtree executes ScalarArrayOpExpr quals in general. 
Bear\n> in mind that ScalarArrayOpExpr execution (whether for bitmap index\n> scans or index scans) is related to skip scan/MDAM techniques -- so\n> there are tricky dependencies that need to be considered together.\n> \n> Right now, nbtree ScalarArrayOpExpr execution must call _bt_first() to\n> descend the B-Tree for each array constant -- even though in principle\n> we could avoid all that work in cases that happen to have locality. In\n> other words we'll often descend the tree multiple times and land on\n> exactly the same leaf page again and again, without ever noticing that\n> we could have gotten away with only descending the tree once (it'd\n> also be possible to start the next \"descent\" one level up, not at the\n> root, intelligently reusing some of the work from an initial descent\n> -- but you don't need anything so fancy to greatly improve matters\n> here).\n> \n> This lack of smarts around how many times we call _bt_first() to\n> descend the index is merely a silly annoyance when it happens in\n> btgetbitmap(). We do at least sort and deduplicate the array up-front\n> (inside _bt_sort_array_elements()), so there will be significant\n> locality of access each time we needlessly descend the tree.\n> Importantly, there is no prefetching \"pipeline\" to mess up in the\n> bitmap index scan case -- since that all happens later on. Not so for\n> the superficially similar (though actually rather different) plain\n> index scan case -- at least not once you add prefetching. If you're\n> uselessly processing the same leaf page multiple times, then there is\n> no way that heap prefetching can notice that it should be batching\n> things up. The context that would allow prefetching to work well isn't\n> really available right now. So the plain index scan case is kinda at a\n> gratuitous disadvantage (with prefetching) relative to the bitmap\n> index scan case.\n> \n> Queries with (say) quals with many constants appearing in an \"IN()\"\n> are both common and particularly likely to benefit from prefetching.\n> I'm not suggesting that you need to address this to get to a\n> committable patch. But you should definitely think about it now. I'm\n> strongly considering working on this problem for 17 anyway, so we may\n> end up collaborating on these aspects of prefetching. Smarter\n> ScalarArrayOpExpr execution for index scans is likely to be quite\n> compelling if it enables heap prefetching.\n> \nEven if SAOP (probably) wasn't the reason, I think you're right it may\nbe an issue for prefetching, causing regressions. It didn't occur to me\nbefore, because I'm not that familiar with the btree code and/or how it\ndeals with SAOP (and didn't really intend to study it too deeply).\n\nSo if you're planning to work on this for PG17, collaborating on it\nwould be great.\n\nFor now I plan to just ignore SAOP, or maybe just disabling prefetching\nfor SAOP index scans if it proves to be prone to regressions. That's not\ngreat, but at least it won't make matters worse.\n\n>> But there's three shortcomings in logic:\n>>\n>> 1) It's not clear the thresholds for prefetching being beneficial and\n>> switching to bitmap index scans are the same value. 
And as I'll\n>> demonstrate later, the prefetching threshold is indeed much lower\n>> (perhaps a couple dozen matching tuples) on large tables.\n> \n> As I mentioned during the pgCon unconference session, I really like\n> your framing of the problem; it makes a lot of sense to directly\n> compare an index scan's execution against a very similar bitmap index\n> scan execution -- there is an imaginary continuum between index scan\n> and bitmap index scan. If the details of when and how we scan the\n> index are rather similar in each case, then there is really no reason\n> why the performance shouldn't be fairly similar. I suspect that it\n> will be useful to ask the same question for various specific cases,\n> that you might not have thought about just yet. Things like\n> ScalarArrayOpExpr queries, where bitmap index scans might look like\n> they have a natural advantage due to an inherent need for random heap\n> access in the plain index scan case.\n> \n\nYeah, although all the tests were done with a random table generated\nlike this:\n\n insert into btree_test select $d * random(), md5(i::text)\n from generate_series(1, $ROWS) s(i)\n\nSo it's damn random anyway. Although maybe it's random even for the\nbitmap case, so maybe if the SAOP had some sort of locality, that'd be\nan advantage for the bitmap scan. But how would such table look like?\n\nI guess something like this might be a \"nice\" bad case:\n\n insert into btree_test mod(i,100000), md5(i::text)\n from generate_series(1, $ROWS) s(i)\n\n select * from btree_test where a in (999, 1000, 1001, 1002)\n\nThe values are likely colocated on the same heap page, the bitmap scan\nis going to do a single prefetch. With index scan we'll prefetch them\nrepeatedly. I'll give it a try.\n\n\n> It's important to carefully distinguish between cases where plain\n> index scans really are at an inherent disadvantage relative to bitmap\n> index scans (because there really is no getting around the need to\n> access the same heap page many times with an index scan) versus cases\n> that merely *appear* that way. Implementation restrictions that only\n> really affect the plain index scan case (e.g., the lack of a\n> reasonably sized prefetch buffer, or the ScalarArrayOpExpr thing)\n> should be accounted for when assessing the viability of index scan +\n> prefetch over bitmap index scan + prefetch. This is very subtle, but\n> important.\n> \n\nI do agree, but what do you mean by \"assessing\"? Wasn't the agreement at\nthe unconference session was we'd not tweak costing? So ultimately, this\ndoes not really affect which scan type we pick. We'll keep doing the\nsame planning decisions as today, no?\n\nIf we pick index scan and enable prefetching, causing a regression (e.g.\nfor the SAOP with locality), that'd be bad. But how is that related to\nviability of index scans over bitmap index scans?\n\n\n> That's what I was mostly trying to get at when I talked about testing\n> strategy at the unconference session (this may have been unclear at\n> the time). It could be done in a way that helps you to think about the\n> problem from first principles. It could be really useful as a way of\n> avoiding confusing cases where plain index scan + prefetch does badly\n> due to implementation restrictions, versus cases where it's\n> *inherently* the wrong strategy. And a testing strategy that starts\n> with very basic ideas about what I/O is truly necessary might help you\n> to notice and fix regressions. 
The difference will never be perfectly\n> crisp, of course (isn't bitmap index scan basically just index scan\n> with a really huge prefetch buffer anyway?), but it still seems like a\n> useful direction to go in.\n> \n\nI'm all for building a more comprehensive set of test cases - the stuff\npresented at pgcon was good for demonstration, but it certainly is not\nenough for testing. The SAOP queries are a great addition, I also plan\nto run those queries on different (less random) data sets, etc. We'll\nprobably discover more interesting cases as the patch improves.\n\n\n>> Implementation\n>> --------------\n>>\n>> When I started looking at this, I only really thought about btree. If\n>> you look at BTScanPosData, which is what the index scans use to\n>> represent the current leaf page, you'll notice it has \"items\", which is\n>> the array of item pointers (TIDs) that we'll fetch from the heap. Which\n>> is exactly the thing we need.\n> \n>> So I ended up moving most of the prefetching logic up into indexam.c,\n>> see the index_prefetch() function. It can't be entirely separate,\n>> because each AM represents the current state in a different way (e.g.\n>> SpGistScanOpaque and BTScanOpaque are very different).\n> \n> Maybe you were right to do that, but I'm not entirely sure.\n> \n> Bear in mind that the ScalarArrayOpExpr case already looks like a\n> single index scan whose qual involves an array to the executor, even\n> though nbtree more or less implements it as multiple index scans with\n> plain constant quals (one per unique-ified array element). Index scans\n> whose results can be \"OR'd together\". Is that a modularity violation?\n> And if so, why? As I've pointed out earlier in this email, we don't do\n> very much with that context right now -- but clearly we should.\n> \n> In other words, maybe you're right to suspect that doing this in AMs\n> like nbtree is a modularity violation. OTOH, maybe it'll turn out that\n> that's exactly the right place to do it, because that's the only way\n> to make the full context available in one place. I myself struggled\n> with this when I reviewed the skip scan patch. I was sure that Tom\n> wouldn't like the way that the skip-scan patch doubles-down on adding\n> more intelligence/planning around how to execute queries with\n> skippable leading columns. But, it turned out that he saw the merit in\n> it, and basically accepted that general approach. Maybe this will turn\n> out to be a little like that situation, where (counter to intuition)\n> what you really need to do is add a new \"layering violation\".\n> Sometimes that's the only thing that'll allow the information to flow\n> to the right place. It's tricky.\n> \n\nThere are two aspects why I think AM is not the right place:\n\n- accessing table from index code seems backwards\n\n- we already do prefetching from the executor (nodeBitmapHeapscan.c)\n\nIt feels kinda wrong in hindsight.\n\n>> 4) per-leaf prefetching\n>>\n>> The code is restricted only prefetches items from one leaf page. If the\n>> index scan needs to scan multiple (many) leaf pages, we have to process\n>> the first leaf page first before reading / prefetching the next one.\n>>\n>> I think this is acceptable limitation, certainly for v0. Prefetching\n>> across multiple leaf pages seems way more complex (particularly for the\n>> cases using pairing heap), so let's leave this for the future.\n> \n> I tend to agree that this sort of thing doesn't need to happen in the\n> first committed version. 
But FWIW nbtree could be taught to scan\n> multiple index pages and act as if it had just processed them as one\n> single index page -- up to a point. This is at least possible with\n> plain index scans that use MVCC snapshots (though not index-only\n> scans), since we already drop the pin on the leaf page there anyway.\n> AFAICT stops us from teaching nbtree to \"lie\" to the executor and tell\n> it that we processed 1 leaf page, even though it was actually 5 leaf pages\n> (maybe there would also have to be restrictions for the markpos stuff).\n> \n\nYeah, I'm not saying it's impossible, and imagined we might teach nbtree\nto do that. But it seems like work for future someone.\n\n>> the results look a bit different:\n>>\n>> rows bitmapscan master patched seqscan\n>> 1 52703.9 19.5 19.5 31145.6\n>> 10 51208.1 22.7 24.7 30983.5\n>> 100 49038.6 39.0 26.3 32085.3\n>> 1000 53760.4 193.9 48.4 31479.4\n>> 10000 56898.4 1600.7 187.5 32064.5\n>> 100000 50975.2 15978.7 1848.9 31587.1\n>>\n>> This is a good illustration of a query where bitmapscan is terrible\n>> (much worse than seqscan, in fact), and the patch is a massive\n>> improvement over master (about an order of magnitude).\n>>\n>> Of course, if you only scan a couple rows, the benefits are much more\n>> modest (say 40% for 100 rows, which is still significant).\n> \n> Nice! And, it'll be nice to be able to use the kill_prior_tuple\n> optimization in many more cases (possible by teaching the optimizer to\n> favor index scans over bitmap index scans more often).\n> \n\nRight, I forgot to mention that benefit. Although, that'd only happen if\nwe actually choose index scans in more places, which I guess would\nrequire tweaking the costing model ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 9 Jun 2023 00:17:36 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 3:17 PM Tomas Vondra\n<[email protected]> wrote:\n> Normal index scans are an even more interesting case but I'm not\n> sure how hard it would be to get that information. It may only be\n> convenient to get the blocks from the last leaf page we looked at,\n> for example.\n>\n> So this suggests we simply started prefetching for the case where the\n> information was readily available, and it'd be harder to do for index\n> scans so that's it.\n\nWhat the exact historical timeline is may not be that important. My\nemphasis on ScalarArrayOpExpr is partly due to it being a particularly\ncompelling case for both parallel index scan and prefetching, in\ngeneral. There are many queries that have huge in() lists that\nnaturally benefit a great deal from prefetching. Plus they're common.\n\n> Even if SAOP (probably) wasn't the reason, I think you're right it may\n> be an issue for prefetching, causing regressions. It didn't occur to me\n> before, because I'm not that familiar with the btree code and/or how it\n> deals with SAOP (and didn't really intend to study it too deeply).\n\nI'm pretty sure that you understand this already, but just in case:\nScalarArrayOpExpr doesn't even \"get the blocks from the last leaf\npage\" in many important cases. Not really -- not in the sense that\nyou'd hope and expect. We're senselessly processing the same index\nleaf page multiple times and treating it as a different, independent\nleaf page. That makes heap prefetching of the kind you're working on\nutterly hopeless, since it effectively throws away lots of useful\ncontext. Obviously that's the fault of nbtree ScalarArrayOpExpr\nhandling, not the fault of your patch.\n\n> So if you're planning to work on this for PG17, collaborating on it\n> would be great.\n>\n> For now I plan to just ignore SAOP, or maybe just disabling prefetching\n> for SAOP index scans if it proves to be prone to regressions. That's not\n> great, but at least it won't make matters worse.\n\nMakes sense, but I hope that it won't come to that.\n\nIMV it's actually quite reasonable that you didn't expect to have to\nthink about ScalarArrayOpExpr at all -- it would make a lot of sense\nif that was already true. But the fact is that it works in a way\nthat's pretty silly and naive right now, which will impact\nprefetching. I wasn't really thinking about regressions, though. I was\nactually more concerned about missing opportunities to get the most\nout of prefetching. ScalarArrayOpExpr really matters here.\n\n> I guess something like this might be a \"nice\" bad case:\n>\n> insert into btree_test mod(i,100000), md5(i::text)\n> from generate_series(1, $ROWS) s(i)\n>\n> select * from btree_test where a in (999, 1000, 1001, 1002)\n>\n> The values are likely colocated on the same heap page, the bitmap scan\n> is going to do a single prefetch. With index scan we'll prefetch them\n> repeatedly. I'll give it a try.\n\nThis is the sort of thing that I was thinking of. What are the\nconditions under which bitmap index scan starts to make sense? Why is\nthe break-even point whatever it is in each case, roughly? And, is it\nactually because of laws-of-physics level trade-off? Might it not be\ndue to implementation-level issues that are much less fundamental? In\nother words, might it actually be that we're just doing something\nstoopid in the case of plain index scans? 
Something that is just\npapered-over by bitmap index scans right now?\n\nI see that your patch has logic that avoids repeated prefetching of\nthe same block -- plus you have comments that wonder about going\nfurther by adding a \"small lru array\" in your new index_prefetch()\nfunction. I asked you about this during the unconference presentation.\nBut I think that my understanding of the situation was slightly\ndifferent to yours. That's relevant here.\n\nI wonder if you should go further than this, by actually sorting the\nitems that you need to fetch as part of processing a given leaf page\n(I said this at the unconference, you may recall). Why should we\n*ever* pin/access the same heap page more than once per leaf page\nprocessed per index scan? Nothing stops us from returning the tuples\nto the executor in the original logical/index-wise order, despite\nhaving actually accessed each leaf page's pointed-to heap pages\nslightly out of order (with the aim of avoiding extra pin/unpin\ntraffic that isn't truly necessary). We can sort the heap TIDs in\nscratch memory, then do our actual prefetching + heap access, and then\nrestore the original order before returning anything.\n\nThis is conceptually a \"mini bitmap index scan\", though one that takes\nplace \"inside\" a plain index scan, as it processes one particular leaf\npage. That's the kind of design that \"plain index scan vs bitmap index\nscan as a continuum\" leads me to (a little like the continuum between\nnested loop joins, block nested loop joins, and merge joins). I bet it\nwould be practical to do things this way, and help a lot with some\nkinds of queries. It might even be simpler than avoiding excessive\nprefetching using an LRU cache thing.\n\nI'm talking about problems that exist today, without your patch.\n\nI'll show a concrete example of the kind of index/index scan that\nmight be affected.\n\nAttached is an extract of the server log when the regression tests ran\nagainst a server patched to show custom instrumentation. The log\noutput shows exactly what's going on with one particular nbtree\nopportunistic deletion (my point has nothing to do with deletion, but\nit happens to be convenient to make my point in this fashion). This\nspecific example involves deletion of tuples from the system catalog\nindex \"pg_type_typname_nsp_index\". There is nothing very atypical\nabout it; it just shows a certain kind of heap fragmentation that's\nprobably very common.\n\nImagine a plain index scan involving a query along the lines of\n\"select * from pg_type where typname like 'part%' \", or similar. This\nquery runs an instant before the example LD_DEAD-bit-driven\nopportunistic deletion (a \"simple deletion\" in nbtree parlance) took\nplace. You'll be able to piece together from the log output that there\nwould only be about 4 heap blocks involved with such a query. Ideally,\nour hypothetical index scan would pin each buffer/heap page exactly\nonce, for a total of 4 PinBuffer()/UnpinBuffer() calls. After all,\nwe're talking about a fairly selective query here, that only needs to\nscan precisely one leaf page (I verified this part too) -- so why\nwouldn't we expect \"index scan parity\"?\n\nWhile there is significant clustering on this example leaf page/key\nspace, heap TID is not *perfectly* correlated with the\nlogical/keyspace order of the index -- which can have outsized\nconsequences. 
Notice that some heap blocks are non-contiguous\nrelative to logical/keyspace/index scan/index page offset number order.\n\nWe'll end up pinning each of the 4 or so heap pages more than once\n(sometimes several times each), when in principle we could have pinned\neach heap page exactly once. In other words, there is way too much of\na difference between the case where the tuples we scan are *almost*\nperfectly clustered (which is what you see in my example) and the case\nwhere they're exactly perfectly clustered. In other other words, there\nis way too much of a difference between plain index scan, and bitmap\nindex scan.\n\n(What I'm saying here is only true because this is a composite index\nand our query uses \"like\", returning rows matches a prefix -- if our\nindex was on the column \"typname\" alone and we used a simple equality\ncondition in our query then the Postgres 12 nbtree work would be\nenough to avoid the extra PinBuffer()/UnpinBuffer() calls. I suspect\nthat there are still relatively many important cases where we perform\nextra PinBuffer()/UnpinBuffer() calls during plain index scans that\nonly touch one leaf page anyway.)\n\nObviously we should expect bitmap index scans to have a natural\nadvantage over plain index scans whenever there is little or no\ncorrelation -- that's clear. But that's not what we see here -- we're\nway too sensitive to minor imperfections in clustering that are\nnaturally present on some kinds of leaf pages. The potential\ndifference in pin/unpin traffic (relative to the bitmap index scan\ncase) seems pathological to me. Ideally, we wouldn't have these kinds\nof differences at all. It's going to disrupt usage_count on the\nbuffers.\n\n> > It's important to carefully distinguish between cases where plain\n> > index scans really are at an inherent disadvantage relative to bitmap\n> > index scans (because there really is no getting around the need to\n> > access the same heap page many times with an index scan) versus cases\n> > that merely *appear* that way. Implementation restrictions that only\n> > really affect the plain index scan case (e.g., the lack of a\n> > reasonably sized prefetch buffer, or the ScalarArrayOpExpr thing)\n> > should be accounted for when assessing the viability of index scan +\n> > prefetch over bitmap index scan + prefetch. This is very subtle, but\n> > important.\n> >\n>\n> I do agree, but what do you mean by \"assessing\"?\n\nI mean performance validation. There ought to be a theoretical model\nthat describes the relationship between index scan and bitmap index\nscan, that has actual predictive power in the real world, across a\nvariety of different cases. Something that isn't sensitive to the\ncurrent phase of the moon (e.g., heap fragmentation along the lines of\nmy pg_type_typname_nsp_index log output). I particularly want to avoid\nnasty discontinuities that really make no sense.\n\n> Wasn't the agreement at\n> the unconference session was we'd not tweak costing? So ultimately, this\n> does not really affect which scan type we pick. We'll keep doing the\n> same planning decisions as today, no?\n\nI'm not really talking about tweaking the costing. What I'm saying is\nthat we really should expect index scans to behave similarly to bitmap\nindex scans at runtime, for queries that really don't have much to\ngain from using a bitmap heap scan (queries that may or may not also\nbenefit from prefetching). 
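(Concretely, the kind of comparison I have in mind is nothing fancier than forcing each plan shape and looking at the buffer counts. A rough sketch -- I'm using the regression test table tenk2 purely for illustration, and it may also take enable_seqscan = off to get the bitmap plan:\n\n set enable_bitmapscan = off; -- force the plain index scan\n explain (analyze, buffers) select * from tenk2 where hundred = 42;\n reset enable_bitmapscan;\n set enable_indexscan = off; -- force the bitmap variant\n explain (analyze, buffers) select * from tenk2 where hundred = 42;\n reset enable_indexscan;\n\nWhenever the bitmap heap scan wasn't going to batch up much anyway, I'd expect the two \"Buffers:\" lines to end up very close to each other.)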
There are several reasons why this makes\nsense to me.\n\nOne reason is that it makes tweaking the actual costing easier later\non. Also, your point about plan robustness was a good one. If we make\nthe wrong choice about index scan vs bitmap index scan, and the\nconsequences aren't so bad, that's a very useful enhancement in\nitself.\n\nThe most important reason of all may just be to build confidence in\nthe design. I'm interested in understanding when and how prefetching\nstops helping.\n\n> I'm all for building a more comprehensive set of test cases - the stuff\n> presented at pgcon was good for demonstration, but it certainly is not\n> enough for testing. The SAOP queries are a great addition, I also plan\n> to run those queries on different (less random) data sets, etc. We'll\n> probably discover more interesting cases as the patch improves.\n\nDefinitely.\n\n> There are two aspects why I think AM is not the right place:\n>\n> - accessing table from index code seems backwards\n>\n> - we already do prefetching from the executor (nodeBitmapHeapscan.c)\n>\n> It feels kinda wrong in hindsight.\n\nI'm willing to accept that we should do it the way you've done it in\nthe patch provisionally. It's complicated enough that it feels like I\nshould reserve the right to change my mind.\n\n> >> I think this is acceptable limitation, certainly for v0. Prefetching\n> >> across multiple leaf pages seems way more complex (particularly for the\n> >> cases using pairing heap), so let's leave this for the future.\n\n> Yeah, I'm not saying it's impossible, and imagined we might teach nbtree\n> to do that. But it seems like work for future someone.\n\nRight. You probably noticed that this is another case where we'd be\nmaking index scans behave more like bitmap index scans (perhaps even\nincluding the downsides for kill_prior_tuple that accompany not\nprocessing each leaf page inline). There is probably a point where\nthat ceases to be sensible, but I don't know what that point is.\nThey're way more similar than we seem to imagine.\n\n\n--\nPeter Geoghegan",
"msg_date": "Thu, 8 Jun 2023 16:38:13 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-08 17:40:12 +0200, Tomas Vondra wrote:\n> At pgcon unconference I presented a PoC patch adding prefetching for\n> indexes, along with some benchmark results demonstrating the (pretty\n> significant) benefits etc. The feedback was quite positive, so let me\n> share the current patch more widely.\n\nI'm really excited about this work.\n\n\n> 1) pairing-heap in GiST / SP-GiST\n> \n> For most AMs, the index state is pretty trivial - matching items from a\n> single leaf page. Prefetching that is pretty trivial, even if the\n> current API is a bit cumbersome.\n> \n> Distance queries on GiST and SP-GiST are a problem, though, because\n> those do not just read the pointers into a simple array, as the distance\n> ordering requires passing stuff through a pairing-heap :-(\n> \n> I don't know how to best deal with that, especially not in the simple\n> API. I don't think we can \"scan forward\" stuff from the pairing heap, so\n> the only idea I have is actually having two pairing-heaps. Or maybe\n> using the pairing heap for prefetching, but stashing the prefetched\n> pointers into an array and then returning stuff from it.\n> \n> In the patch I simply prefetch items before we add them to the pairing\n> heap, which is good enough for demonstrating the benefits.\n\nI think it'd be perfectly fair to just not tackle distance queries for now.\n\n\n> 2) prefetching from executor\n> \n> Another question is whether the prefetching shouldn't actually happen\n> even higher - in the executor. That's what Andres suggested during the\n> unconference, and it kinda makes sense. That's where we do prefetching\n> for bitmap heap scans, so why should this happen lower, right?\n\nYea. I think it also provides potential for further optimizations in the\nfuture to do it at that layer.\n\nOne thing I have been wondering around this is whether we should not have\nsplit the code for IOS and plain indexscans...\n\n\n> 4) per-leaf prefetching\n> \n> The code is restricted only prefetches items from one leaf page. If the\n> index scan needs to scan multiple (many) leaf pages, we have to process\n> the first leaf page first before reading / prefetching the next one.\n> \n> I think this is acceptable limitation, certainly for v0. Prefetching\n> across multiple leaf pages seems way more complex (particularly for the\n> cases using pairing heap), so let's leave this for the future.\n\nHm. I think that really depends on the shape of the API we end up with. If we\nmove the responsibility more twoards to the executor, I think it very well\ncould end up being just as simple to prefetch across index pages.\n\n\n> 5) index-only scans\n> \n> I'm not sure what to do about index-only scans. On the one hand, the\n> point of IOS is not to read stuff from the heap at all, so why prefetch\n> it. OTOH if there are many allvisible=false pages, we still have to\n> access that. And if that happens, this leads to the bizarre situation\n> that IOS is slower than regular index scan. But to address this, we'd\n> have to consider the visibility during prefetching.\n\nThat should be easy to do, right?\n\n\n\n> Benchmark / TPC-H\n> -----------------\n> \n> I ran the 22 queries on 100GB data set, with parallel query either\n> disabled or enabled. 
And I measured timing (and speedup) for each query.\n> The speedup results look like this (see the attached PDF for details):\n> \n> query serial parallel\n> 1 101% 99%\n> 2 119% 100%\n> 3 100% 99%\n> 4 101% 100%\n> 5 101% 100%\n> 6 12% 99%\n> 7 100% 100%\n> 8 52% 67%\n> 10 102% 101%\n> 11 100% 72%\n> 12 101% 100%\n> 13 100% 101%\n> 14 13% 100%\n> 15 101% 100%\n> 16 99% 99%\n> 17 95% 101%\n> 18 101% 106%\n> 19 30% 40%\n> 20 99% 100%\n> 21 101% 100%\n> 22 101% 107%\n> \n> The percentage is (timing patched / master, so <100% means faster, >100%\n> means slower).\n> \n> The different queries are affected depending on the query plan - many\n> queries are close to 100%, which means \"no difference\". For the serial\n> case, there are about 4 queries that improved a lot (6, 8, 14, 19),\n> while for the parallel case the benefits are somewhat less significant.\n> \n> My explanation is that either (a) parallel case used a different plan\n> with fewer index scans or (b) the parallel query does more concurrent\n> I/O simply by using parallel workers. Or maybe both.\n> \n> There are a couple regressions too, I believe those are due to doing too\n> much prefetching in some cases, and some of the heuristics mentioned\n> earlier should eliminate most of this, I think.\n\nI'm a bit confused by some of these numbers. How can OS-level prefetching lead\nto massive prefetching in the alread cached case, e.g. in tpch q06 and q08?\nUnless I missed what \"xeon / cached (speedup)\" indicates?\n\nI think it'd be good to run a performance comparison of the unpatched vs\npatched cases, with prefetching disabled for both. It's possible that\nsomething in the patch caused unintended changes (say spilling during a\nhashagg, due to larger struct sizes).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 8 Jun 2023 17:06:00 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 4:38 PM Peter Geoghegan <[email protected]> wrote:\n> This is conceptually a \"mini bitmap index scan\", though one that takes\n> place \"inside\" a plain index scan, as it processes one particular leaf\n> page. That's the kind of design that \"plain index scan vs bitmap index\n> scan as a continuum\" leads me to (a little like the continuum between\n> nested loop joins, block nested loop joins, and merge joins). I bet it\n> would be practical to do things this way, and help a lot with some\n> kinds of queries. It might even be simpler than avoiding excessive\n> prefetching using an LRU cache thing.\n\nI'll now give a simpler (though less realistic) example of a case\nwhere \"mini bitmap index scan\" would be expected to help index scans\nin general, and prefetching during index scans in particular.\nSomething very simple:\n\ncreate table bitmap_parity_test(randkey int4, filler text);\ncreate index on bitmap_parity_test (randkey);\ninsert into bitmap_parity_test select (random()*1000),\nrepeat('filler',10) from generate_series(1,250) i;\n\nThis gives me a table with 4 pages, and an index with 2 pages.\n\nThe following query selects about half of the rows from the table:\n\nselect * from bitmap_parity_test where randkey < 500;\n\nIf I force the query to use a bitmap index scan, I see that the total\nnumber of buffers hit is exactly as expected (according to\nEXPLAIN(ANALYZE,BUFFERS), that is): there are 5 buffers/pages hit. We\nneed to access every single heap page once, and we need to access the\nonly leaf page in the index once.\n\nI'm sure that you know where I'm going with this already. I'll force\nthe same query to use a plain index scan, and get a very different\nresult. Now EXPLAIN(ANALYZE,BUFFERS) shows that there are a total of\n89 buffers hit -- 88 of which must just be the same 5 heap pages,\nagain and again. That's just silly. It's probably not all that much\nslower, but it's not helping things. And it's likely that this effect\ninterferes with the prefetching in your patch.\n\nObviously you can come up with a variant of this test case where\nbitmap index scan does way fewer buffer accesses in a way that really\nmakes sense -- that's not in question. This is a fairly selective\nindex scan, since it only touches one index page -- and yet we still\nsee this difference.\n\n(Anybody pedantic enough to want to dispute whether or not this index\nscan counts as \"selective\" should run \"insert into bitmap_parity_test\nselect i, repeat('actshually',10) from generate_series(2000,1e5) i\"\nbefore running the \"randkey < 500\" query, which will make the index\nmuch larger without changing any of the details of how the query pins\npages -- non-pedants should just skip that step.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 8 Jun 2023 17:40:15 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On 6/9/23 02:06, Andres Freund wrote:\n> Hi,\n> \n> On 2023-06-08 17:40:12 +0200, Tomas Vondra wrote:\n>> At pgcon unconference I presented a PoC patch adding prefetching for\n>> indexes, along with some benchmark results demonstrating the (pretty\n>> significant) benefits etc. The feedback was quite positive, so let me\n>> share the current patch more widely.\n> \n> I'm really excited about this work.\n> \n> \n>> 1) pairing-heap in GiST / SP-GiST\n>>\n>> For most AMs, the index state is pretty trivial - matching items from a\n>> single leaf page. Prefetching that is pretty trivial, even if the\n>> current API is a bit cumbersome.\n>>\n>> Distance queries on GiST and SP-GiST are a problem, though, because\n>> those do not just read the pointers into a simple array, as the distance\n>> ordering requires passing stuff through a pairing-heap :-(\n>>\n>> I don't know how to best deal with that, especially not in the simple\n>> API. I don't think we can \"scan forward\" stuff from the pairing heap, so\n>> the only idea I have is actually having two pairing-heaps. Or maybe\n>> using the pairing heap for prefetching, but stashing the prefetched\n>> pointers into an array and then returning stuff from it.\n>>\n>> In the patch I simply prefetch items before we add them to the pairing\n>> heap, which is good enough for demonstrating the benefits.\n> \n> I think it'd be perfectly fair to just not tackle distance queries for now.\n> \n\nMy concern is that if we cut this from v0 entirely, we'll end up with an\nAPI that'll not be suitable for adding distance queries later.\n\n> \n>> 2) prefetching from executor\n>>\n>> Another question is whether the prefetching shouldn't actually happen\n>> even higher - in the executor. That's what Andres suggested during the\n>> unconference, and it kinda makes sense. That's where we do prefetching\n>> for bitmap heap scans, so why should this happen lower, right?\n> \n> Yea. I think it also provides potential for further optimizations in the\n> future to do it at that layer.\n> \n> One thing I have been wondering around this is whether we should not have\n> split the code for IOS and plain indexscans...\n> \n\nWhich code? We already have nodeIndexscan.c and nodeIndexonlyscan.c? Or\ndid you mean something else?\n\n> \n>> 4) per-leaf prefetching\n>>\n>> The code is restricted only prefetches items from one leaf page. If the\n>> index scan needs to scan multiple (many) leaf pages, we have to process\n>> the first leaf page first before reading / prefetching the next one.\n>>\n>> I think this is acceptable limitation, certainly for v0. Prefetching\n>> across multiple leaf pages seems way more complex (particularly for the\n>> cases using pairing heap), so let's leave this for the future.\n> \n> Hm. I think that really depends on the shape of the API we end up with. If we\n> move the responsibility more twoards to the executor, I think it very well\n> could end up being just as simple to prefetch across index pages.\n> \n\nMaybe. I'm open to that idea if you have idea how to shape the API to\nmake this possible (although perhaps not in v0).\n\n> \n>> 5) index-only scans\n>>\n>> I'm not sure what to do about index-only scans. On the one hand, the\n>> point of IOS is not to read stuff from the heap at all, so why prefetch\n>> it. OTOH if there are many allvisible=false pages, we still have to\n>> access that. And if that happens, this leads to the bizarre situation\n>> that IOS is slower than regular index scan. 
But to address this, we'd\n>> have to consider the visibility during prefetching.\n> \n> That should be easy to do, right?\n> \n\nIt doesn't seem particularly complicated (famous last words), and we\nneed to do the VM checks anyway so it seems like it wouldn't add a lot\nof overhead either\n\n> \n> \n>> Benchmark / TPC-H\n>> -----------------\n>>\n>> I ran the 22 queries on 100GB data set, with parallel query either\n>> disabled or enabled. And I measured timing (and speedup) for each query.\n>> The speedup results look like this (see the attached PDF for details):\n>>\n>> query serial parallel\n>> 1 101% 99%\n>> 2 119% 100%\n>> 3 100% 99%\n>> 4 101% 100%\n>> 5 101% 100%\n>> 6 12% 99%\n>> 7 100% 100%\n>> 8 52% 67%\n>> 10 102% 101%\n>> 11 100% 72%\n>> 12 101% 100%\n>> 13 100% 101%\n>> 14 13% 100%\n>> 15 101% 100%\n>> 16 99% 99%\n>> 17 95% 101%\n>> 18 101% 106%\n>> 19 30% 40%\n>> 20 99% 100%\n>> 21 101% 100%\n>> 22 101% 107%\n>>\n>> The percentage is (timing patched / master, so <100% means faster, >100%\n>> means slower).\n>>\n>> The different queries are affected depending on the query plan - many\n>> queries are close to 100%, which means \"no difference\". For the serial\n>> case, there are about 4 queries that improved a lot (6, 8, 14, 19),\n>> while for the parallel case the benefits are somewhat less significant.\n>>\n>> My explanation is that either (a) parallel case used a different plan\n>> with fewer index scans or (b) the parallel query does more concurrent\n>> I/O simply by using parallel workers. Or maybe both.\n>>\n>> There are a couple regressions too, I believe those are due to doing too\n>> much prefetching in some cases, and some of the heuristics mentioned\n>> earlier should eliminate most of this, I think.\n> \n> I'm a bit confused by some of these numbers. How can OS-level prefetching lead\n> to massive prefetching in the alread cached case, e.g. in tpch q06 and q08?\n> Unless I missed what \"xeon / cached (speedup)\" indicates?\n> \n\nI forgot to explain what \"cached\" means in the TPC-H case. It means\nsecond execution of the query, so you can imagine it like this:\n\nfor q in `seq 1 22`; do\n\n 1. drop caches and restart postgres\n\n 2. run query $q -> uncached\n\n 3. run query $q -> cached\n\ndone\n\nSo the second execution has a chance of having data in memory - but\nmaybe not all, because this is a 100GB data set (so ~200GB after\nloading), but the machine only has 64GB of RAM.\n\nI think a likely explanation is some of the data wasn't actually in\nmemory, so prefetching still did something.\n\n> I think it'd be good to run a performance comparison of the unpatched vs\n> patched cases, with prefetching disabled for both. It's possible that\n> something in the patch caused unintended changes (say spilling during a\n> hashagg, due to larger struct sizes).\n> \n\nThat's certainly a good idea. I'll do that in the next round of tests. I\nalso plan to do a test on data set that fits into RAM, to test \"properly\ncached\" case.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 9 Jun 2023 12:18:11 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "\n\nOn 6/9/23 01:38, Peter Geoghegan wrote:\n> On Thu, Jun 8, 2023 at 3:17 PM Tomas Vondra\n> <[email protected]> wrote:\n>> Normal index scans are an even more interesting case but I'm not\n>> sure how hard it would be to get that information. It may only be\n>> convenient to get the blocks from the last leaf page we looked at,\n>> for example.\n>>\n>> So this suggests we simply started prefetching for the case where the\n>> information was readily available, and it'd be harder to do for index\n>> scans so that's it.\n> \n> What the exact historical timeline is may not be that important. My\n> emphasis on ScalarArrayOpExpr is partly due to it being a particularly\n> compelling case for both parallel index scan and prefetching, in\n> general. There are many queries that have huge in() lists that\n> naturally benefit a great deal from prefetching. Plus they're common.\n> \n\nDid you mean parallel index scan or bitmap index scan?\n\nBut yeah, I get the point that SAOP queries are an interesting example\nof queries to explore. I'll add some to the next round of tests.\n\n>> Even if SAOP (probably) wasn't the reason, I think you're right it may\n>> be an issue for prefetching, causing regressions. It didn't occur to me\n>> before, because I'm not that familiar with the btree code and/or how it\n>> deals with SAOP (and didn't really intend to study it too deeply).\n> \n> I'm pretty sure that you understand this already, but just in case:\n> ScalarArrayOpExpr doesn't even \"get the blocks from the last leaf\n> page\" in many important cases. Not really -- not in the sense that\n> you'd hope and expect. We're senselessly processing the same index\n> leaf page multiple times and treating it as a different, independent\n> leaf page. That makes heap prefetching of the kind you're working on\n> utterly hopeless, since it effectively throws away lots of useful\n> context. Obviously that's the fault of nbtree ScalarArrayOpExpr\n> handling, not the fault of your patch.\n> \n\nI think I understand, although maybe my mental model is wrong. I agree\nit seems inefficient, but I'm not sure why would it make prefetching\nhopeless. Sure, it puts index scans at a disadvantage (compared to\nbitmap scans), but it we pick index scan it should still be an\nimprovement, right?\n\nI guess I need to do some testing on a range of data sets / queries, and\nsee how it works in practice.\n\n>> So if you're planning to work on this for PG17, collaborating on it\n>> would be great.\n>>\n>> For now I plan to just ignore SAOP, or maybe just disabling prefetching\n>> for SAOP index scans if it proves to be prone to regressions. That's not\n>> great, but at least it won't make matters worse.\n> \n> Makes sense, but I hope that it won't come to that.\n> \n> IMV it's actually quite reasonable that you didn't expect to have to\n> think about ScalarArrayOpExpr at all -- it would make a lot of sense\n> if that was already true. But the fact is that it works in a way\n> that's pretty silly and naive right now, which will impact\n> prefetching. I wasn't really thinking about regressions, though. I was\n> actually more concerned about missing opportunities to get the most\n> out of prefetching. 
ScalarArrayOpExpr really matters here.\n> \n\nOK\n\n>> I guess something like this might be a \"nice\" bad case:\n>>\n>> insert into btree_test mod(i,100000), md5(i::text)\n>> from generate_series(1, $ROWS) s(i)\n>>\n>> select * from btree_test where a in (999, 1000, 1001, 1002)\n>>\n>> The values are likely colocated on the same heap page, the bitmap scan\n>> is going to do a single prefetch. With index scan we'll prefetch them\n>> repeatedly. I'll give it a try.\n> \n> This is the sort of thing that I was thinking of. What are the\n> conditions under which bitmap index scan starts to make sense? Why is\n> the break-even point whatever it is in each case, roughly? And, is it\n> actually because of laws-of-physics level trade-off? Might it not be\n> due to implementation-level issues that are much less fundamental? In\n> other words, might it actually be that we're just doing something\n> stoopid in the case of plain index scans? Something that is just\n> papered-over by bitmap index scans right now?\n> \n\nYeah, that's partially why I do this kind of testing on a wide range of\nsynthetic data sets - to find cases that behave in unexpected way (say,\nseem like they should improve but don't).\n\n> I see that your patch has logic that avoids repeated prefetching of\n> the same block -- plus you have comments that wonder about going\n> further by adding a \"small lru array\" in your new index_prefetch()\n> function. I asked you about this during the unconference presentation.\n> But I think that my understanding of the situation was slightly\n> different to yours. That's relevant here.\n> \n> I wonder if you should go further than this, by actually sorting the\n> items that you need to fetch as part of processing a given leaf page\n> (I said this at the unconference, you may recall). Why should we\n> *ever* pin/access the same heap page more than once per leaf page\n> processed per index scan? Nothing stops us from returning the tuples\n> to the executor in the original logical/index-wise order, despite\n> having actually accessed each leaf page's pointed-to heap pages\n> slightly out of order (with the aim of avoiding extra pin/unpin\n> traffic that isn't truly necessary). We can sort the heap TIDs in\n> scratch memory, then do our actual prefetching + heap access, and then\n> restore the original order before returning anything.\n> \n\nI think that's possible, and I thought about that a bit (not just for\nbtree, but especially for the distance queries on GiST). But I don't\nhave a good idea if this would be 1% or 50% improvement, and I was\nconcerned it might easily lead to regressions if we don't actually need\nall the tuples.\n\nI mean, imagine we have TIDs\n\n [T1, T2, T3, T4, T5, T6]\n\nMaybe T1, T5, T6 are from the same page, so per your proposal we might\nreorder and prefetch them in this order:\n\n [T1, T5, T6, T2, T3, T4]\n\nBut maybe we only need [T1, T2] because of a LIMIT, and the extra work\nwe did on processing T5, T6 is wasted.\n\n> This is conceptually a \"mini bitmap index scan\", though one that takes\n> place \"inside\" a plain index scan, as it processes one particular leaf\n> page. That's the kind of design that \"plain index scan vs bitmap index\n> scan as a continuum\" leads me to (a little like the continuum between\n> nested loop joins, block nested loop joins, and merge joins). I bet it\n> would be practical to do things this way, and help a lot with some\n> kinds of queries. 
It might even be simpler than avoiding excessive\n> prefetching using an LRU cache thing.\n> \n> I'm talking about problems that exist today, without your patch.\n> \n> I'll show a concrete example of the kind of index/index scan that\n> might be affected.\n> \n> Attached is an extract of the server log when the regression tests ran\n> against a server patched to show custom instrumentation. The log\n> output shows exactly what's going on with one particular nbtree\n> opportunistic deletion (my point has nothing to do with deletion, but\n> it happens to be convenient to make my point in this fashion). This\n> specific example involves deletion of tuples from the system catalog\n> index \"pg_type_typname_nsp_index\". There is nothing very atypical\n> about it; it just shows a certain kind of heap fragmentation that's\n> probably very common.\n> \n> Imagine a plain index scan involving a query along the lines of\n> \"select * from pg_type where typname like 'part%' \", or similar. This\n> query runs an instant before the example LD_DEAD-bit-driven\n> opportunistic deletion (a \"simple deletion\" in nbtree parlance) took\n> place. You'll be able to piece together from the log output that there\n> would only be about 4 heap blocks involved with such a query. Ideally,\n> our hypothetical index scan would pin each buffer/heap page exactly\n> once, for a total of 4 PinBuffer()/UnpinBuffer() calls. After all,\n> we're talking about a fairly selective query here, that only needs to\n> scan precisely one leaf page (I verified this part too) -- so why\n> wouldn't we expect \"index scan parity\"?\n> \n> While there is significant clustering on this example leaf page/key\n> space, heap TID is not *perfectly* correlated with the\n> logical/keyspace order of the index -- which can have outsized\n> consequences. Notice that some heap blocks are non-contiguous\n> relative to logical/keyspace/index scan/index page offset number order.\n> \n> We'll end up pinning each of the 4 or so heap pages more than once\n> (sometimes several times each), when in principle we could have pinned\n> each heap page exactly once. In other words, there is way too much of\n> a difference between the case where the tuples we scan are *almost*\n> perfectly clustered (which is what you see in my example) and the case\n> where they're exactly perfectly clustered. In other other words, there\n> is way too much of a difference between plain index scan, and bitmap\n> index scan.\n> \n> (What I'm saying here is only true because this is a composite index\n> and our query uses \"like\", returning rows matches a prefix -- if our\n> index was on the column \"typname\" alone and we used a simple equality\n> condition in our query then the Postgres 12 nbtree work would be\n> enough to avoid the extra PinBuffer()/UnpinBuffer() calls. I suspect\n> that there are still relatively many important cases where we perform\n> extra PinBuffer()/UnpinBuffer() calls during plain index scans that\n> only touch one leaf page anyway.)\n> \n> Obviously we should expect bitmap index scans to have a natural\n> advantage over plain index scans whenever there is little or no\n> correlation -- that's clear. But that's not what we see here -- we're\n> way too sensitive to minor imperfections in clustering that are\n> naturally present on some kinds of leaf pages. The potential\n> difference in pin/unpin traffic (relative to the bitmap index scan\n> case) seems pathological to me. Ideally, we wouldn't have these kinds\n> of differences at all. 
It's going to disrupt usage_count on the\n> buffers.\n> \n\nI'm not sure I understand all the nuance here, but the thing I take away\nis to add tests with different levels of correlation, and probably also\nsome multi-column indexes.\n\n>>> It's important to carefully distinguish between cases where plain\n>>> index scans really are at an inherent disadvantage relative to bitmap\n>>> index scans (because there really is no getting around the need to\n>>> access the same heap page many times with an index scan) versus cases\n>>> that merely *appear* that way. Implementation restrictions that only\n>>> really affect the plain index scan case (e.g., the lack of a\n>>> reasonably sized prefetch buffer, or the ScalarArrayOpExpr thing)\n>>> should be accounted for when assessing the viability of index scan +\n>>> prefetch over bitmap index scan + prefetch. This is very subtle, but\n>>> important.\n>>>\n>>\n>> I do agree, but what do you mean by \"assessing\"?\n> \n> I mean performance validation. There ought to be a theoretical model\n> that describes the relationship between index scan and bitmap index\n> scan, that has actual predictive power in the real world, across a\n> variety of different cases. Something that isn't sensitive to the\n> current phase of the moon (e.g., heap fragmentation along the lines of\n> my pg_type_typname_nsp_index log output). I particularly want to avoid\n> nasty discontinuities that really make no sense.\n> \n>> Wasn't the agreement at\n>> the unconference session was we'd not tweak costing? So ultimately, this\n>> does not really affect which scan type we pick. We'll keep doing the\n>> same planning decisions as today, no?\n> \n> I'm not really talking about tweaking the costing. What I'm saying is\n> that we really should expect index scans to behave similarly to bitmap\n> index scans at runtime, for queries that really don't have much to\n> gain from using a bitmap heap scan (queries that may or may not also\n> benefit from prefetching). There are several reasons why this makes\n> sense to me.\n> \n> One reason is that it makes tweaking the actual costing easier later\n> on. Also, your point about plan robustness was a good one. If we make\n> the wrong choice about index scan vs bitmap index scan, and the\n> consequences aren't so bad, that's a very useful enhancement in\n> itself.\n> \n> The most important reason of all may just be to build confidence in\n> the design. I'm interested in understanding when and how prefetching\n> stops helping.\n> \n\nAgreed.\n\n>> I'm all for building a more comprehensive set of test cases - the stuff\n>> presented at pgcon was good for demonstration, but it certainly is not\n>> enough for testing. The SAOP queries are a great addition, I also plan\n>> to run those queries on different (less random) data sets, etc. We'll\n>> probably discover more interesting cases as the patch improves.\n> \n> Definitely.\n> \n>> There are two aspects why I think AM is not the right place:\n>>\n>> - accessing table from index code seems backwards\n>>\n>> - we already do prefetching from the executor (nodeBitmapHeapscan.c)\n>>\n>> It feels kinda wrong in hindsight.\n> \n> I'm willing to accept that we should do it the way you've done it in\n> the patch provisionally. It's complicated enough that it feels like I\n> should reserve the right to change my mind.\n> \n>>>> I think this is acceptable limitation, certainly for v0. 
Prefetching\n>>>> across multiple leaf pages seems way more complex (particularly for the\n>>>> cases using pairing heap), so let's leave this for the future.\n> \n>> Yeah, I'm not saying it's impossible, and imagined we might teach nbtree\n>> to do that. But it seems like work for future someone.\n> \n> Right. You probably noticed that this is another case where we'd be\n> making index scans behave more like bitmap index scans (perhaps even\n> including the downsides for kill_prior_tuple that accompany not\n> processing each leaf page inline). There is probably a point where\n> that ceases to be sensible, but I don't know what that point is.\n> They're way more similar than we seem to imagine.\n> \n\nOK. Thanks for all the comments.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 9 Jun 2023 12:44:46 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Fri, Jun 9, 2023 at 3:45 AM Tomas Vondra\n<[email protected]> wrote:\n> > What the exact historical timeline is may not be that important. My\n> > emphasis on ScalarArrayOpExpr is partly due to it being a particularly\n> > compelling case for both parallel index scan and prefetching, in\n> > general. There are many queries that have huge in() lists that\n> > naturally benefit a great deal from prefetching. Plus they're common.\n> >\n>\n> Did you mean parallel index scan or bitmap index scan?\n\nI meant parallel index scan (also parallel bitmap index scan). Note\nthat nbtree parallel index scans have special ScalarArrayOpExpr\nhandling code.\n\nScalarArrayOpExpr is kind of special -- it is simultaneously one big\nindex scan (to the executor), and lots of small index scans (to\nnbtree). Unlike the queries that you've looked at so far, which really\nonly have one plausible behavior at execution time, there are many\nways that ScalarArrayOpExpr index scans can be executed at runtime --\nsome much faster than others. The nbtree implementation can in\nprinciple reorder how it processes ranges from the key space (i.e.\neach range of array elements) with significant flexibility.\n\n> I think I understand, although maybe my mental model is wrong. I agree\n> it seems inefficient, but I'm not sure why would it make prefetching\n> hopeless. Sure, it puts index scans at a disadvantage (compared to\n> bitmap scans), but it we pick index scan it should still be an\n> improvement, right?\n\nHopeless might have been too strong of a word. More like it'd fall far\nshort of what is possible to do with a ScalarArrayOpExpr with a given\nhigh end server.\n\nThe quality of the implementation (including prefetching) could make a\nhuge difference to how well we make use of the available hardware\nresources. A really high quality implementation of ScalarArrayOpExpr +\nprefetching can keep the system busy with useful work, which is less\ntrue with other types of queries, which have inherently less\npredictable I/O (and often have less I/O overall). What could be more\namenable to predicting I/O patterns than a query with a large IN()\nlist, with many constants that can be processed in whatever order\nmakes sense at runtime?\n\nWhat I'd like to do with ScalarArrayOpExpr is to teach nbtree to\ncoalesce together those \"small index scans\" into \"medium index scans\"\ndynamically, where that makes sense. That's the main part that's\nmissing right now. Dynamic behavior matters a lot with\nScalarArrayOpExpr stuff -- that's where the challenge lies, but also\nwhere the opportunities are. Prefetching builds on all that.\n\n> I guess I need to do some testing on a range of data sets / queries, and\n> see how it works in practice.\n\nIf I can figure out a way of getting ScalarArrayOpExpr to visit each\nleaf page exactly once, that might be enough to make things work\nreally well most of the time. Maybe it won't even be necessary to\ncoordinate very much, in the end. Unsure.\n\nI've already done a lot of work that tries to minimize the chances of\nregular (non-ScalarArrayOpExpr) queries accessing more than a single\nleaf page, which will help your strategy of just prefetching items\nfrom a single leaf page at a time -- that will get you pretty far\nalready. Consider the example of the tenk2_hundred index from the\nbt_page_items documentation. 
You'll notice that the high key for the\npage shown in the docs (and every other page in the same index) nicely\nmakes the leaf page boundaries \"aligned\" with natural keyspace\nboundaries, due to suffix truncation. That helps index scans to access\nno more than a single leaf page when accessing any one distinct\n\"hundred\" value.\n\nWe are careful to do the right thing with the \"boundary cases\" when we\ndescend the tree, too. This _bt_search behavior builds on the way that\nsuffix truncation influences the on-disk structure of indexes. Queries\nsuch as \"select * from tenk2 where hundred = ?\" will each return 100\nrows spread across almost as many heap pages. That's a fairly large\nnumber of rows/heap pages, but we still only need to access one leaf\npage for every possible constant value (every \"hundred\" value that\nmight be specified as the ? in my point query example). It doesn't\nmatter if it's the leftmost or rightmost item on a leaf page -- we\nalways descend to exactly the correct leaf page directly, and we\nalways terminate the scan without having to move to the right sibling\npage (we check the high key before going to the right page in some\ncases, per the optimization added by commit 29b64d1d).\n\nThe same kind of behavior is also seen with the TPC-C line items\nprimary key index, which is a composite index. We want to access the\nitems from a whole order in one go, from one leaf page -- and we\nreliably do the right thing there too (though with some caveats about\nCREATE INDEX). We should never have to access more than one leaf page\nto read a single order's line items. This matters because it's quite\nnatural to want to access whole orders with that particular\ntable/workload (it's also unnatural to only access one single item\nfrom any given order).\n\nObviously there are many queries that need to access two or more leaf\npages, because that's just what needs to happen. My point is that we\n*should* only do that when it's truly necessary on modern Postgres\nversions, since the boundaries between pages are \"aligned\" with the\n\"natural boundaries\" from the keyspace/application. Maybe your testing\nshould verify that this effect is actually present, though. It would\nbe a shame if we sometimes messed up prefetching that could have\nworked well due to some issue with how page splits divide up items.\n\nCREATE INDEX is much less smart about suffix truncation -- it isn't\ncapable of the same kind of tricks as nbtsplitloc.c, even though it\ncould be taught to do roughly the same thing. Hopefully this won't be\nan issue for your work. The tenk2 case still works as expected with\nCREATE INDEX/REINDEX, due to help from deduplication. Indexes like the\nTPC-C line items PK will leave the index with some \"orders\" (or\nwhatever the natural grouping of things is) that span more than a\nsingle leaf page, which is undesirable, and might hinder your\nprefetching work. I wouldn't mind fixing that if it turned out to hurt\nyour leaf-page-at-a-time prefetching patch. Something to consider.\n\nWe can fit at most 17 TPC-C orders on each order line PK leaf page.\nCould be as few as 15. If we do the wrong thing with prefetching for 2\nout of every 15 orders then that's a real problem, but is still subtle enough\nto easily miss with conventional benchmarking. 
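(A crude way to catch that sort of thing -- just a sketch, and I'm assuming the usual TPC-C order_line column names here -- is to script a check around the buffer counts reported for individual orders:\n\n explain (analyze, buffers)\n select * from order_line where ol_w_id = 1 and ol_d_id = 1 and ol_o_id = 42;\n\nAn order whose items got split across two leaf pages shows up as one extra shared hit relative to its neighbors.)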
I've had a lot of success\nwith paying close attention to all the little boundary cases, which is why\nI'm kind of zealous about it now.\n\n> > I wonder if you should go further than this, by actually sorting the\n> > items that you need to fetch as part of processing a given leaf page\n> > (I said this at the unconference, you may recall). Why should we\n> > *ever* pin/access the same heap page more than once per leaf page\n> > processed per index scan? Nothing stops us from returning the tuples\n> > to the executor in the original logical/index-wise order, despite\n> > having actually accessed each leaf page's pointed-to heap pages\n> > slightly out of order (with the aim of avoiding extra pin/unpin\n> > traffic that isn't truly necessary). We can sort the heap TIDs in\n> > scratch memory, then do our actual prefetching + heap access, and then\n> > restore the original order before returning anything.\n> >\n>\n> I think that's possible, and I thought about that a bit (not just for\n> btree, but especially for the distance queries on GiST). But I don't\n> have a good idea if this would be 1% or 50% improvement, and I was\n> concerned it might easily lead to regressions if we don't actually need\n> all the tuples.\n\nI get that it could be invasive. I have the sense that just pinning\nthe same heap page more than once in very close succession is just the\nwrong thing to do, with or without prefetching.\n\n> I mean, imagine we have TIDs\n>\n> [T1, T2, T3, T4, T5, T6]\n>\n> Maybe T1, T5, T6 are from the same page, so per your proposal we might\n> reorder and prefetch them in this order:\n>\n> [T1, T5, T6, T2, T3, T4]\n>\n> But maybe we only need [T1, T2] because of a LIMIT, and the extra work\n> we did on processing T5, T6 is wasted.\n\nYeah, that's possible. But isn't that par for the course? Any\noptimization that involves speculation (including all prefetching)\ncomes with similar risks. They can be managed.\n\nI don't think that we'd literally order by TID...we wouldn't change\nthe order that each heap page was *initially* pinned. We'd just\nreorder the tuples minimally using an approach that is sufficient to\navoid repeated pinning of heap pages during processing of any one leaf\npage's heap TIDs. ISTM that the risk of wasting work is limited to\nwasting cycles on processing extra tuples from a heap page that we\ndefinitely had to process at least one tuple from already. That\ndoesn't seem particularly risky, as speculative optimizations go. The\ndownside is bounded and well understood, while the upside could be\nsignificant.\n\nI really don't have that much confidence in any of this just yet. I'm\nnot trying to make this project more difficult. I just can't help but\nnotice that the order that index scans end up pinning heap pages\nalready has significant problems, and is sensitive to things like\nsmall amounts of heap fragmentation -- maybe that's not a great basis\nfor prefetching. I *really* hate any kind of sharp discontinuity,\nwhere a minor change in an input (e.g., from minor amounts of heap\nfragmentation) has outsized impact on an output (e.g., buffers\npinned). Interactions like that tend to be really pernicious -- they\nlead to bad performance that goes unnoticed and unfixed because the\nproblem effectively camouflages itself. It may even be easier to make\nthe conservative (perhaps paranoid) assumption that weird nasty\ninteractions will cause harm somewhere down the line...why take a\nchance?\n\nI might end up prototyping this myself. 
I may have to put my money\nwhere my mouth is. :-)\n\n--\nPeter Geoghegan\n\n\n",
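The reorder-then-restore idea floated in the message above can be sketched in a few lines of standalone C. This is purely illustrative, not PostgreSQL code: the Tid and SortedTid structs are invented stand-ins for heap TIDs. The point is that one leaf page's TIDs are sorted by heap block for the access pass, so each block is pinned only once, while the results are still handed back in the original index order.

#include <stdio.h>
#include <stdlib.h>

typedef struct { unsigned block; unsigned offset; } Tid;
typedef struct { Tid tid; int orig_pos; } SortedTid;

static int
cmp_block(const void *a, const void *b)
{
    const SortedTid *x = a;
    const SortedTid *y = b;

    if (x->tid.block != y->tid.block)
        return (x->tid.block < y->tid.block) ? -1 : 1;
    return x->orig_pos - y->orig_pos;       /* keep it stable within a block */
}

int
main(void)
{
    Tid         tids[] = {{10, 1}, {42, 3}, {10, 7}, {7, 2}, {10, 9}};
    int         n = sizeof(tids) / sizeof(tids[0]);
    SortedTid   scratch[5];
    unsigned    fetched[5];
    int         i;

    for (i = 0; i < n; i++)
    {
        scratch[i].tid = tids[i];
        scratch[i].orig_pos = i;
    }
    qsort(scratch, n, sizeof(SortedTid), cmp_block);

    for (i = 0; i < n; i++)
    {
        /* each distinct heap block is visited (pinned) exactly once */
        if (i == 0 || scratch[i].tid.block != scratch[i - 1].tid.block)
            printf("pin block %u once\n", scratch[i].tid.block);
        /* pretend to fetch the tuple, file it under its original position */
        fetched[scratch[i].orig_pos] = scratch[i].tid.offset;
    }

    /* return everything in the original (index) order, as the scan expects */
    for (i = 0; i < n; i++)
        printf("return tuple %d (offset %u)\n", i, fetched[i]);

    return 0;
}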
"msg_date": "Fri, 9 Jun 2023 11:23:56 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 11:40 AM Tomas Vondra <[email protected]>\nwrote:\n\n> We already do prefetching for bitmap index scans, where the bitmap heap\n> scan prefetches future pages based on effective_io_concurrency. I'm not\n> sure why exactly was prefetching implemented only for bitmap scans\n\n\nAt the point Greg Stark was hacking on this, the underlying OS async I/O\nfeatures were tricky to fit into PG's I/O model, and both of us did much\nreview work just to find working common ground that PG could plug into.\nLinux POSIX advisories were completely different from Solaris's async\nmodel, the other OS used for validation that the feature worked, with the\nhope being that designing against two APIs would be better than just\nfocusing on Linux. Since that foundation was all so brittle and limited,\nscope was limited to just the heap scan, since it seemed to have the best\nreturn on time invested given the parts of async I/O that did and didn't\nscale as expected.\n\nAs I remember it, the idea was to get the basic feature out the door and\ngather feedback about things like whether the effective_io_concurrency knob\nworked as expected before moving on to other prefetching. Then that got\nlost in filesystem upheaval land, with so much drama around Solaris/ZFS and\nOracle's btrfs work. I think it's just that no one ever got back to it.\n\nI have all the workloads that I use for testing automated into\npgbench-tools now, and this change would be easy to fit into testing on\nthem as I'm very heavy on block I/O tests. To get PG to reach full read\nspeed on newer storage I've had to do some strange tests, like doing index\nrange scans that touch 25+ pages. Here's that one as a pgbench script:\n\n\\set range 67 * (:multiplier + 1)\n\\set limit 100000 * :scale\n\\set limit :limit - :range\n\\set aid random(1, :limit)\nSELECT aid,abalance FROM pgbench_accounts WHERE aid >= :aid ORDER BY aid\nLIMIT :range;\n\nAnd then you use '-Dmultiplier=10' or such to crank it up. Database 4X\nRAM, multiplier=25 with 16 clients is my starting point on it when I want\nto saturate storage. Anything that lets me bring those numbers down would\nbe valuable.\n\n--\nGreg Smith [email protected]\nDirector of Open Source Strategy",
"msg_date": "Fri, 9 Jun 2023 17:19:47 -0400",
"msg_from": "Gregory Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-09 12:18:11 +0200, Tomas Vondra wrote:\n> > \n> >> 2) prefetching from executor\n> >>\n> >> Another question is whether the prefetching shouldn't actually happen\n> >> even higher - in the executor. That's what Andres suggested during the\n> >> unconference, and it kinda makes sense. That's where we do prefetching\n> >> for bitmap heap scans, so why should this happen lower, right?\n> > \n> > Yea. I think it also provides potential for further optimizations in the\n> > future to do it at that layer.\n> > \n> > One thing I have been wondering around this is whether we should not have\n> > split the code for IOS and plain indexscans...\n> > \n> \n> Which code? We already have nodeIndexscan.c and nodeIndexonlyscan.c? Or\n> did you mean something else?\n\nYes, I meant that.\n\n> >> 4) per-leaf prefetching\n> >>\n> >> The code is restricted only prefetches items from one leaf page. If the\n> >> index scan needs to scan multiple (many) leaf pages, we have to process\n> >> the first leaf page first before reading / prefetching the next one.\n> >>\n> >> I think this is acceptable limitation, certainly for v0. Prefetching\n> >> across multiple leaf pages seems way more complex (particularly for the\n> >> cases using pairing heap), so let's leave this for the future.\n> > \n> > Hm. I think that really depends on the shape of the API we end up with. If we\n> > move the responsibility more twoards to the executor, I think it very well\n> > could end up being just as simple to prefetch across index pages.\n> > \n> \n> Maybe. I'm open to that idea if you have idea how to shape the API to\n> make this possible (although perhaps not in v0).\n\nI'll try to have a look.\n\n\n> > I'm a bit confused by some of these numbers. How can OS-level prefetching lead\n> > to massive prefetching in the alread cached case, e.g. in tpch q06 and q08?\n> > Unless I missed what \"xeon / cached (speedup)\" indicates?\n> > \n> \n> I forgot to explain what \"cached\" means in the TPC-H case. It means\n> second execution of the query, so you can imagine it like this:\n> \n> for q in `seq 1 22`; do\n> \n> 1. drop caches and restart postgres\n\nAre you doing it in that order? If so, the pagecache can end up being seeded\nby postgres writing out dirty buffers.\n\n\n> 2. run query $q -> uncached\n> \n> 3. run query $q -> cached\n> \n> done\n> \n> So the second execution has a chance of having data in memory - but\n> maybe not all, because this is a 100GB data set (so ~200GB after\n> loading), but the machine only has 64GB of RAM.\n> \n> I think a likely explanation is some of the data wasn't actually in\n> memory, so prefetching still did something.\n\nAh, ok.\n\n\n> > I think it'd be good to run a performance comparison of the unpatched vs\n> > patched cases, with prefetching disabled for both. It's possible that\n> > something in the patch caused unintended changes (say spilling during a\n> > hashagg, due to larger struct sizes).\n> > \n> \n> That's certainly a good idea. I'll do that in the next round of tests. I\n> also plan to do a test on data set that fits into RAM, to test \"properly\n> cached\" case.\n\nCool. It'd be good to measure both the case of all data already being in s_b\n(to see the overhead of the buffer mapping lookups) and the case where the\ndata is in the kernel pagecache (to see the overhead of pointless\nposix_fadvise calls).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 10 Jun 2023 13:34:56 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "\n\nOn 6/10/23 22:34, Andres Freund wrote:\n> Hi,\n> \n> On 2023-06-09 12:18:11 +0200, Tomas Vondra wrote:\n>>>\n>>>> 2) prefetching from executor\n>>>>\n>>>> Another question is whether the prefetching shouldn't actually happen\n>>>> even higher - in the executor. That's what Andres suggested during the\n>>>> unconference, and it kinda makes sense. That's where we do prefetching\n>>>> for bitmap heap scans, so why should this happen lower, right?\n>>>\n>>> Yea. I think it also provides potential for further optimizations in the\n>>> future to do it at that layer.\n>>>\n>>> One thing I have been wondering around this is whether we should not have\n>>> split the code for IOS and plain indexscans...\n>>>\n>>\n>> Which code? We already have nodeIndexscan.c and nodeIndexonlyscan.c? Or\n>> did you mean something else?\n> \n> Yes, I meant that.\n> \n\nAh, you meant that maybe we shouldn't have done that. Sorry, I\nmisunderstood.\n\n>>>> 4) per-leaf prefetching\n>>>>\n>>>> The code is restricted only prefetches items from one leaf page. If the\n>>>> index scan needs to scan multiple (many) leaf pages, we have to process\n>>>> the first leaf page first before reading / prefetching the next one.\n>>>>\n>>>> I think this is acceptable limitation, certainly for v0. Prefetching\n>>>> across multiple leaf pages seems way more complex (particularly for the\n>>>> cases using pairing heap), so let's leave this for the future.\n>>>\n>>> Hm. I think that really depends on the shape of the API we end up with. If we\n>>> move the responsibility more twoards to the executor, I think it very well\n>>> could end up being just as simple to prefetch across index pages.\n>>>\n>>\n>> Maybe. I'm open to that idea if you have idea how to shape the API to\n>> make this possible (although perhaps not in v0).\n> \n> I'll try to have a look.\n> \n> \n>>> I'm a bit confused by some of these numbers. How can OS-level prefetching lead\n>>> to massive prefetching in the alread cached case, e.g. in tpch q06 and q08?\n>>> Unless I missed what \"xeon / cached (speedup)\" indicates?\n>>>\n>>\n>> I forgot to explain what \"cached\" means in the TPC-H case. It means\n>> second execution of the query, so you can imagine it like this:\n>>\n>> for q in `seq 1 22`; do\n>>\n>> 1. drop caches and restart postgres\n> \n> Are you doing it in that order? If so, the pagecache can end up being seeded\n> by postgres writing out dirty buffers.\n> \n\nActually no, I do it the other way around - first restart, then drop. It\nshouldn't matter much, though, because after building the data set (and\nvacuum + checkpoint), the data is not modified - all the queries run on\nthe same data set. So there shouldn't be any dirty buffers.\n\n> \n>> 2. run query $q -> uncached\n>>\n>> 3. run query $q -> cached\n>>\n>> done\n>>\n>> So the second execution has a chance of having data in memory - but\n>> maybe not all, because this is a 100GB data set (so ~200GB after\n>> loading), but the machine only has 64GB of RAM.\n>>\n>> I think a likely explanation is some of the data wasn't actually in\n>> memory, so prefetching still did something.\n> \n> Ah, ok.\n> \n> \n>>> I think it'd be good to run a performance comparison of the unpatched vs\n>>> patched cases, with prefetching disabled for both. It's possible that\n>>> something in the patch caused unintended changes (say spilling during a\n>>> hashagg, due to larger struct sizes).\n>>>\n>>\n>> That's certainly a good idea. I'll do that in the next round of tests. 
I\n>> also plan to do a test on data set that fits into RAM, to test \"properly\n>> cached\" case.\n> \n> Cool. It'd be good to measure both the case of all data already being in s_b\n> (to see the overhead of the buffer mapping lookups) and the case where the\n> data is in the kernel pagecache (to see the overhead of pointless\n> posix_fadvise calls).\n> \n\nOK, I'll make sure the next round of tests includes a sufficiently small\ndata set too. I should have some numbers sometime early next week.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 10 Jun 2023 23:10:59 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Thu, 2023-06-08 at 17:40 +0200, Tomas Vondra wrote:\n> Hi,\n> \n> At pgcon unconference I presented a PoC patch adding prefetching for\n> indexes, along with some benchmark results demonstrating the (pretty\n> significant) benefits etc. The feedback was quite positive, so let me\n> share the current patch more widely.\n> \n\nI added entry to \nhttps://wiki.postgresql.org/wiki/PgCon_2023_Developer_Unconference\nbased on notes I took during that session.\nHope it helps.\n\n-- \nTomasz Rybak, Debian Developer <[email protected]>\nGPG: A565 CE64 F866 A258 4DDC F9C7 ECB7 3E37 E887 AA8C\n\n\n",
"msg_date": "Mon, 12 Jun 2023 23:27:04 +0200",
"msg_from": "Tomasz Rybak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 9:10 PM Tomas Vondra\n<[email protected]> wrote:\n\n> We already do prefetching for bitmap index scans, where the bitmap heap\n> scan prefetches future pages based on effective_io_concurrency. I'm not\n> sure why exactly was prefetching implemented only for bitmap scans, but\n> I suspect the reasoning was that it only helps when there's many\n> matching tuples, and that's what bitmap index scans are for. So it was\n> not worth the implementation effort.\n\nOne of the reasons, IMHO, is that in the bitmap scan the TIDs are already\nsorted in heap block order before the heap fetch starts. So it is\nquite obvious that once we prefetch a heap block, most of the\nsubsequent TIDs will fall on that block, i.e. each prefetch will\nsatisfy many immediate requests. OTOH, in the index scan the I/O\nrequests are very random, so we might have to prefetch many blocks even\nto satisfy the TIDs falling on one index page. I\nagree that prefetching with an index scan will definitely help in\nreducing the random I/O, but my guess is that prefetching with a\nbitmap scan simply appears more natural, and that would\nhave been one of the reasons for implementing this only for a bitmap\nscan.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 13 Jun 2023 09:56:46 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "Hi,\n\nI have results from the new extended round of prefetch tests. I've\npushed everything to\n\n https://github.com/tvondra/index-prefetch-tests-2\n\nThere are scripts I used to run this (run-*.sh), raw results and various\nkinds of processed summaries (pdf, ods, ...) that I'll mention later.\n\n\nAs before, this tests a number of query types:\n\n- point queries with btree and hash (equality)\n- ORDER BY queries with btree (inequality + order by)\n- SAOP queries with btree (column IN (values))\n\nIt's probably futile to go through details of all the tests - it's\neasier to go through the (hopefully fairly readable) shell scripts.\n\nBut in principle, it runs some simple queries while varying both the data\nset and workload:\n\n- data set may be random, sequential or cyclic (with different length)\n\n- the number of matches per value differs (i.e. equality condition may\n  match 1, 10, 100, ..., 100k rows)\n\n- forces a particular scan type (indexscan, bitmapscan, seqscan)\n\n- each query is executed twice - first run (right after restarting DB\n  and dropping caches) is uncached, second run should have data cached\n\n- the query is executed 5x with different parameters (so 10x in total)\n\n\nThis is tested with three basic data sizes - fits into shared buffers,\nfits into RAM and exceeds RAM. The sizes are roughly 350MB, 3.5GB and\n20GB (i5) / 40GB (xeon).\n\nNote: xeon has 64GB RAM, so technically the largest scale fits into RAM.\nBut it should not matter, thanks to drop-caches and restart.\n\nI also attempted to pin the backend to a particular core, in an effort to\neliminate scheduling-related noise. It's mostly what taskset does, but I\ndid that from an extension (https://github.com/tvondra/taskset) which\nallows me to do that as part of the SQL script.\n\n\nFor the results, I'll talk about the v1 patch (as submitted here) first.\nI'll use the PDF results in the \"pdf\" directory which generally show a\npivot table by different test parameters, comparing the results by\ndifferent parameters (prefetching on/off, master/patched).\n\nFeel free to do your own analysis from the raw CSV data, ofc.\n\n\nFor example, this:\n\nhttps://github.com/tvondra/index-prefetch-tests-2/blob/master/pdf/patch-v1-point-queries-builds.pdf\n\nshows how the prefetching affects timing for point queries with\ndifferent numbers of matches (1 to 100k). The numbers are timings for\nmaster and patched build. The last group is (patched/master), so the\nlower the number the better - 50% means patch makes the query 2x faster.\nThere's also a heatmap, with green=good, red=bad, which makes it easier\nto spot cases that got slower/faster.\n\nThe really interesting stuff starts on page 7 (in this PDF), because the\nfirst couple pages are \"cached\" (so it's more about measuring overhead\nwhen prefetching has no benefit).\n\nRight on page 7 you can see a couple cases with a mix of slower/faster\ncases, roughly in the +/- 30% range. However, this is unrelated to\nthe patch because those are results for bitmapheapscan.\n\nFor indexscans (page 8), the results are invariably improved - the more\nmatches the better (up to ~10x faster for 100k matches).\n\nThose were results for the \"cyclic\" data set. 
For random data set (pages\n9-11) the results are pretty similar, but for \"sequential\" data (11-13)\nthe prefetching is actually harmful - there are red clusters, with up to\n500% slowdowns.\n\nI'm not going to explain the summary for SAOP queries\n(https://github.com/tvondra/index-prefetch-tests-2/blob/master/pdf/patch-v1-saop-queries-builds.pdf),\nthe story is roughly the same, except that there are more tested query\ncombinations (because we also vary the pattern in the IN() list - number\nof values etc.).\n\n\nSo, the conclusion from this is - generally very good results for random\nand cyclic data sets, but pretty bad results for sequential. But even\nfor the random/cyclic cases there are combinations (especially with many\nmatches) where prefetching doesn't help or even hurts.\n\nThe only way to deal with this is (I think) a cheap way to identify and\nskip inefficient prefetches, essentially by doing two things:\n\na) remembering more recently prefetched blocks (say, 1000+) and not\n   prefetching them over and over\n\nb) ability to identify a sequential pattern, when readahead seems to do\n   a pretty good job already (although I heard some disagreement)\n\nI've been thinking about how to do this - doing (a) seems pretty hard,\nbecause on the one hand we want to remember a fair number of blocks and\nwe want the check \"did we prefetch X\" to be very cheap. So a hash table\nseems nice. OTOH we want to expire \"old\" blocks and only keep the most\nrecent ones, and a hash table doesn't really support that.\n\nPerhaps there is a great data structure for this, not sure. But after\nthinking about this I realized we don't need perfect accuracy - it's\nfine to have false positives/negatives - it's fine to forget we already\nprefetched block X and prefetch it again. It's not\na matter of correctness, just a matter of efficiency - after all, we\ncan't know if it's still in memory, we only know if we prefetched it\nfairly recently.\n\nThis led me to a \"hash table of LRU caches\" thing. Imagine a tiny LRU\ncache that's small enough to be searched linearly (say, 8 blocks). And\nwe have many of them (e.g. 128), so that in total we can remember 1024\nblock numbers. Now, every block number is mapped to a single LRU by\nhashing, as if we had a hash table\n\n  index = hash(blockno) % 128\n\nand we only use that one LRU to track this block. It's tiny so we can\nsearch it linearly.\n\nTo expire prefetched blocks, there's a counter incremented every time we\nprefetch a block, and we store it in the LRU with the block number. When\nchecking the LRU we ignore old entries (with counter more than 1000\nvalues back), and we also evict/replace the oldest entry if needed.\n\nThis seems to work pretty well for the first requirement, but it doesn't\nallow identifying the sequential pattern cheaply. To do that, I added a\ntiny queue with a couple entries that can be checked to see if the last\ncouple entries are sequential.\n\nAnd this is what the attached 0002+0003 patches do. 
There are PDFs with\nresults for this build, prefixed with \"patch-v3\", and the results are\npretty good - the regressions are largely gone.\n\nIt's even clearer in the PDFs comparing the impact of the two patches:\n\n\nhttps://github.com/tvondra/index-prefetch-tests-2/blob/master/pdf/comparison-point.pdf\n\n\nhttps://github.com/tvondra/index-prefetch-tests-2/blob/master/pdf/comparison-saop.pdf\n\nWhich simply shows the \"speedup heatmap\" for the two patches, and the\n\"v3\" heatmap has far fewer red regression clusters.\n\nNote: The comparison-point.pdf summary has another group of columns\nillustrating whether this scan type would actually be used, with \"green\"\nmeaning \"yes\". This provides additional context, because e.g. for the\n\"noisy bitmapscans\" it's all white, i.e. without setting the GUCs the\noptimizer would pick something else (hence it's a non-issue).\n\n\nLet me know if the results are not clear enough (I tried to cover the\nimportant stuff, but I'm sure there's a lot of details I didn't cover),\nor if you think some other summary would be better.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
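To make the "hash table of tiny LRUs" idea described in the message above concrete, here is a minimal standalone C sketch. The sizes (128 partitions of 8 entries, expiry after 1000 prefetch requests) follow the description in the message; the type and function names are invented for the example and this is not the patch code.

/*
 * Standalone sketch of the "hash table of tiny LRUs" for remembering
 * recently prefetched blocks. A real implementation would hash the block
 * number before taking the modulo.
 */
#include <stdbool.h>
#include <stdint.h>

#define PREFETCH_LRU_COUNT  128
#define PREFETCH_LRU_SIZE   8
#define PREFETCH_EXPIRY     1000

typedef struct
{
    uint32_t    block;          /* prefetched block number */
    uint64_t    request;        /* value of the counter when prefetched */
} CacheEntry;

static CacheEntry cache[PREFETCH_LRU_COUNT][PREFETCH_LRU_SIZE];
static uint64_t prefetch_counter = 0;

/*
 * Returns true if the block was prefetched recently enough to skip it,
 * false if the caller should prefetch it (in which case we remember it).
 */
static bool
recently_prefetched(uint32_t block)
{
    CacheEntry *lru = cache[block % PREFETCH_LRU_COUNT];
    int         oldest = 0;
    int         i;

    prefetch_counter++;

    for (i = 0; i < PREFETCH_LRU_SIZE; i++)
    {
        /* hit, and not expired yet: no need to prefetch again */
        if (lru[i].request != 0 &&
            lru[i].block == block &&
            prefetch_counter - lru[i].request <= PREFETCH_EXPIRY)
        {
            lru[i].request = prefetch_counter;
            return true;
        }
        if (lru[i].request < lru[oldest].request)
            oldest = i;
    }

    /* miss: evict/replace the oldest slot and remember this block */
    lru[oldest].block = block;
    lru[oldest].request = prefetch_counter;
    return false;
}

As the message argues, occasional false hits or misses in such a cache are harmless: the worst outcome is an extra or a skipped posix_fadvise() call, never a wrong query result.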
"msg_date": "Mon, 19 Jun 2023 21:27:46 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "Hi,\n\nattached is a v4 of the patch, with a fairly major shift in the approach.\n\nUntil now the patch very much relied on the AM to provide information\nabout which blocks to prefetch next (based on the current leaf index page).\nThis seemed like a natural approach when I started working on the PoC,\nbut over time I ran into various drawbacks:\n\n* a lot of the logic is at the AM level\n\n* can't prefetch across the index page boundary (have to wait until the\n  next index leaf page is read by the indexscan)\n\n* doesn't work for distance searches (gist/spgist)\n\nAfter thinking about this, I decided to ditch this whole idea of\nexchanging prefetch information through an API, and do the prefetching\nalmost entirely in the indexam code.\n\nThe new patch maintains a queue of TIDs (read from index_getnext_tid),\nwith up to effective_io_concurrency entries - calling getnext_slot()\nadds a TID at the queue tail, issues a prefetch for the block, and then\nreturns a TID from the queue head.\n\nMaintaining the queue is up to index_getnext_slot() - it can't be done\nin index_getnext_tid(), because then it'd affect IOS (and prefetching the\nheap would mostly defeat the whole point of IOS). And we can't do that\nabove index_getnext_slot() because that already fetched the heap page.\n\nI still think prefetching for IOS is doable (and desirable), in mostly\nthe same way - except that we'd need to maintain the queue from some\nother place, as IOS doesn't do index_getnext_slot().\n\nFWIW there's also the \"index-only filters without IOS\" patch [1] which\nswitches even regular index scans to index_getnext_tid(), so maybe\nrelying on index_getnext_slot() is a lost cause anyway.\n\nAnyway, this has the nice consequence that it makes AM code entirely\noblivious of prefetching - there's no need for any API, we just get TIDs as\nbefore, and the prefetching magic happens after that. Thus it also works\nfor searches ordered by distance (gist/spgist). The patch got much\nsmaller (about 40kB, down from 80kB), which is nice.\n\nI ran the benchmarks [2] with this v4 patch, and the results for the\n\"point\" queries are almost exactly the same as for v3. The SAOP part is\nstill running - I'll add those results in a day or two, but I expect a\nsimilar outcome as for point queries.\n\n\nregards\n\n\n[1] https://commitfest.postgresql.org/43/4352/\n\n[2] https://github.com/tvondra/index-prefetch-tests-2/\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 30 Jun 2023 13:38:06 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
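The TID queue behaviour described in the v4 message above can be pictured with a small standalone C sketch. Everything here is illustrative: the Tid struct, the fixed QUEUE_SIZE standing in for effective_io_concurrency, and the stub functions standing in for index_getnext_tid() and PrefetchBuffer() are not the actual patch code.

#include <stdbool.h>
#include <stdio.h>

#define QUEUE_SIZE 32                 /* stands in for effective_io_concurrency */

typedef struct { unsigned block; unsigned offset; } Tid;

typedef struct
{
    Tid     items[QUEUE_SIZE];
    int     head;                     /* next TID to hand back to the caller */
    int     tail;                     /* next free slot */
    int     nitems;
} TidQueue;

/* stand-in for index_getnext_tid(): returns TIDs from a fixed demo array */
static Tid demo_tids[] = {{10, 1}, {10, 2}, {37, 5}, {4, 8}};
static int demo_pos = 0;

static bool
fetch_next_tid(Tid *tid)
{
    if (demo_pos >= (int) (sizeof(demo_tids) / sizeof(demo_tids[0])))
        return false;
    *tid = demo_tids[demo_pos++];
    return true;
}

/* stand-in for PrefetchBuffer()/posix_fadvise() on the heap block */
static void
prefetch_heap_block(unsigned block)
{
    printf("prefetch heap block %u\n", block);
}

static bool
queue_getnext(TidQueue *q, Tid *out)
{
    /* top up the queue, prefetching each newly added TID's heap block */
    while (q->nitems < QUEUE_SIZE)
    {
        Tid     tid;

        if (!fetch_next_tid(&tid))
            break;
        prefetch_heap_block(tid.block);
        q->items[q->tail] = tid;
        q->tail = (q->tail + 1) % QUEUE_SIZE;
        q->nitems++;
    }

    if (q->nitems == 0)
        return false;                 /* index exhausted */

    *out = q->items[q->head];
    q->head = (q->head + 1) % QUEUE_SIZE;
    q->nitems--;
    return true;
}

int
main(void)
{
    TidQueue q = {0};
    Tid      tid;

    while (queue_getnext(&q, &tid))
        printf("return (%u,%u)\n", tid.block, tid.offset);
    return 0;
}

The point of the shape is that prefetch requests run ahead of the TID actually handed back to the executor, so the heap reads have a chance to complete before the scan gets to them.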
{
"msg_contents": "Here's a v5 of the patch, rebased to current master and fixing a couple\ncompiler warnings reported by cfbot (%lu vs. UINT64_FORMAT in some debug\nmessages). No other changes compared to v4.\n\ncfbot also reported a failure on windows in pg_dump [1], but it seems\npretty strange:\n\n[11:42:48.708] ------------------------------------- 8<\n-------------------------------------\n[11:42:48.708] stderr:\n[11:42:48.708] # Failed test 'connecting to an invalid database: matches'\n\nThe patch does nothing related to pg_dump, and the test works perfectly\nfine for me (I don't have a windows machine, but 32-bit and 64-bit linux\nboth work fine for me).\n\n\nregards\n\n\n[1] https://cirrus-ci.com/task/6398095366291456\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 14 Jul 2023 22:31:57 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "Hi,\n\nAttached is a v6 of the patch, which rebases v5 (just some minor\nbitrot), and also does a couple changes which I kept in separate patches\nto make it obvious what changed.\n\n\n0001-v5-20231016.patch\n----------------------\n\nRebase to current master.\n\n\n0002-comments-and-minor-cleanup-20231012.patch\n----------------------------------------------\n\nVarious comment improvements (remove obsolete ones clarify a bunch of\nother comments, etc.). I tried to explain the reasoning why some places\ndisable prefetching (e.g. in catalogs, replication, ...), explain how\nthe caching / LRU works etc.\n\n\n0003-remove-prefetch_reset-20231016.patch\n-----------------------------------------\n\nI decided to remove the separate prefetch_reset parameter, so that all\nthe index_beginscan() methods only take a parameter specifying the\nmaximum prefetch target. The reset was added early when the prefetch\nhappened much lower in the AM code, at the index page level, and the\nreset was when moving to the next index page. But now after the prefetch\nmoved to the executor, this doesn't make much sense - the resets happen\non rescans, and it seems right to just reset to 0 (just like for bitmap\nheap scans).\n\n\n0004-PoC-prefetch-for-IOS-20231016.patch\n----------------------------------------\n\nThis is a PoC adding the prefetch to index-only scans too. At first that\nmay seem rather strange, considering eliminating the heap fetches is the\nwhole point of IOS. But if the pages are not marked as all-visible (say,\nthe most recent part of the table), we may still have to fetch them. In\nwhich case it'd be easy to see cases that IOS is slower than a regular\nindex scan (with prefetching).\n\nThe code is quite rough. It adds a separate index_getnext_tid_prefetch()\nfunction, adding prefetching on top of index_getnext_tid(). I'm not sure\nit's the right pattern, but it's pretty much what index_getnext_slot()\ndoes too, except that it also does the fetch + store to the slot.\n\nNote: There's a second patch adding index-only filters, which requires\nthe regular index scans from index_getnext_slot() to _tid() too.\n\nThe prefetching then happens only after checking the visibility map (if\nrequested). This part definitely needs improvements - for example\nthere's no attempt to reuse the VM buffer, which I guess might be expensive.\n\n\nindex-prefetch.pdf\n------------------\n\nAttached is also a PDF with results of the same benchmark I did before,\ncomparing master vs. patched with various data patterns and scan types.\nIt's not 100% comparable to earlier results as I only ran it on a\nlaptop, and it's a bit noisier too. The overall behavior and conclusions\nare however the same.\n\nI was specifically interested in the IOS behavior, so I added two more\ncases to test - indexonlyscan and indexonlyscan-clean. The first is the\nworst-case scenario, with no pages marked as all-visible in VM (the test\nsimply deletes the VM), while indexonlyscan-clean is the good-case (no\nheap fetches needed).\n\nThe results mostly match the expected behavior, particularly for the\nuncached runs (when the data is expected to not be in memory):\n\n* indexonlyscan (i.e. bad case) - About the same results as\n \"indexscans\", with the same speedups etc. Which is a good thing\n (i.e. IOS is not unexpectedly slower than regular indexscans).\n\n* indexonlyscan-clean (i.e. good case) - Seems to have mostly the same\n performance as without the prefetching, except for the low-cardinality\n runs with many rows per key. 
I haven't checked what's causing this,\n but I'd bet it's the extra buffer lookups/management I mentioned.\n\n\nI noticed there's another prefetching-related patch [1] from Thomas\nMunro. I haven't looked at it yet, so hard to say how much it interferes\nwith this patch. But the idea looks interesting.\n\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CA+hUKGJkOiOCa+mag4BF+zHo7qo=o9CFheB8=g6uT5TUm2gkvA@mail.gmail.com\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
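A rough sketch of the per-TID decision the index-only scan PoC above makes: prefetch the heap block only when the visibility map says the page is not all-visible, i.e. only when IOS will have to fetch the heap tuple anyway. The helper names and the signature below are invented stand-ins rather than the real index_getnext_tid_prefetch() from the patch (which also runs ahead of the scan position via the TID queue), and the real code would use the scan descriptor, VM_ALL_VISIBLE() and a reused VM buffer.

#include <stdbool.h>

typedef struct { unsigned block; unsigned offset; } Tid;

extern bool index_next_tid(Tid *tid);               /* index_getnext_tid()-ish */
extern bool page_is_all_visible(unsigned block);    /* visibility map check */
extern void prefetch_heap_block(unsigned block);    /* PrefetchBuffer()-ish */

bool
ios_getnext_tid_with_prefetch(Tid *tid, bool *all_visible)
{
    if (!index_next_tid(tid))
        return false;

    *all_visible = page_is_all_visible(tid->block);
    if (!*all_visible)
        prefetch_heap_block(tid->block);    /* a heap fetch is coming */

    return true;                            /* caller reuses *all_visible */
}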
"msg_date": "Mon, 16 Oct 2023 17:34:44 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "Hi,\n\nHere's a new WIP version of the patch set adding prefetching to indexes,\nexploring a couple alternative approaches. After the patch 2023/10/16\nversion, I happened to have an off-list discussion with Andres, and he\nsuggested to try a couple things, and there's a couple more things I\ntried on my own too.\n\nAttached is the patch series starting with the 2023/10/16 patch, and\nthen trying different things in separate patches (discussed later). As\nusual, there's also a bunch of benchmark results - due to size I'm\nunable to attach all of them here (the PDFs are pretty large), but you\ncan find them at (with all the scripts etc.):\n\n https://github.com/tvondra/index-prefetch-tests/tree/master/2023-11-23\n\nI'll attach only a couple small PNG with highlighted speedup/regression\npatterns, but it's unreadable and more of a pointer to the PDF.\n\n\nA quick overview of the patches\n-------------------------------\n\nv20231124-0001-prefetch-2023-10-16.patch\n\n - same as the October 16 patch, with only minor comment tweaks\n\nv20231124-0002-rely-on-PrefetchBuffer-instead-of-custom-c.patch\n\n - removes custom cache of recently prefetched blocks, replaces it\n simply by calling PrefetchBuffer (which check shared buffers)\n\nv20231124-0003-check-page-cache-using-preadv2.patch\n\n - adds a check using preadv2(RWF_NOWAIT) to check if the whole\n page is in page cache\n\nv20231124-0004-reintroduce-the-LRU-cache-of-recent-blocks.patch\n\n - adds back a small LRU cache to identify sequential patterns\n (based on benchmarks of 0002/0003 patches)\n\nv20231124-0005-hold-the-vm-buffer-for-IOS-prefetching.patch\nv20231124-0006-poc-reuse-vm-information.patch\n\n - optimizes the visibilitymap handling when prefetching for IOS\n (to deal with overhead in the all-visible cases) by\n\nv20231124-0007-20231016-reworked.patch\n\n - returns back to the 20231016 patch, but this time with the VM\n optimizations in patches 0005/0006 (in retrospect I might have\n simply moved 0005+0006 right after 0001, but the patch evolved\n differently - shouldn't matter here)\n\nNow, let's talk about the patches one by one ...\n\n\nPrefetchBuffer + preadv2 (0002+0003)\n------------------------------------\n\nAfter I posted the patch in October, I happened to have an off-list\ndiscussion with Andres, and he suggested to try ditching the local cache\nof recently prefetched blocks, and instead:\n\n1) call PrefetchBuffer (which checks if the page is in shared buffers,\nand skips the prefetch if it's already there)\n\n2) if the page is not in shared buffers, use preadv2(RWF_NOWAIT) to\ncheck if it's in the kernel page cache\n\nDoing (1) is trivial - PrefetchBuffer() already does the shared buffer\ncheck, so 0002 simply removes the custom cache code.\n\nDoing (2) needs a bit more code to actually call preadv2() - 0003 adds\nFileCached() to fd.c, smgrcached() to smgr.c, and then calls it from\nPrefetchBuffer() right before smgrprefetch(). There's a couple loose\nends (e.g. configure should check if preadv2 is supported), but in\nprinciple I think this is generally correct.\n\nUnfortunately, these changes led to a bunch of clear regressions :-(\n\nTake a look at the attached point-4-regressions-small.png, which is page\n5 from the full results PDF [1][2]. As before, I plotted this as a huge\npivot table with various parameters (test, dataset, prefetch, ...) on\nthe left, and (build, nmatches) on the top. 
So each column shows timings\nfor a particular patch and query returning nmatches rows.\n\nAfter the pivot table (on the right) is a heatmap, comparing timings for\neach build to master (the first couple of columns). As usual, the\nnumbers are \"timing compared to master\" so e.g. 50% means the query\ncompleted in 1/2 the time compared to master. Color coding is simple\ntoo, green means \"good\" (speedup), red means \"bad\" (regression). The\nhigher the saturation, the bigger the difference.\n\nI find this visualization handy as it quickly highlights differences\nbetween the various patches. Just look for changes in red/green areas.\n\nIn the points-5-regressions-small.png image, you can see three areas of\nclear regressions, either compared to the master or the 20231016 patch.\nAll of this is for \"uncached\" runs, i.e. after the instance got restarted\nand the page cache was dropped too.\n\nThe first regression is for bitmapscan. The first two builds show no\ndifference compared to master - which makes sense, because the 20231016\npatch does not touch any code used by bitmapscan, and the 0003 patch\nsimply uses PrefetchBuffer as is. But then 0004 adds preadv2 to it, and\nthe performance immediately sinks, with timings being ~5-6x higher for\nqueries matching 1k-100k rows.\n\nThe patches 0005/0006 can't possibly improve this, because the\nvisibilitymap is entirely unrelated to bitmapscans, and so is the small LRU to\ndetect sequential patterns.\n\nThe indexscan regression #1 shows a similar pattern, but in the opposite\ndirection - indexscan cases massively improved with the 20231016 patch\n(and even after just using PrefetchBuffer), but revert back to master with\n0003 (adding preadv2). Ditching the preadv2 restores the gains (the last\nbuild results are nicely green again).\n\nThe indexscan regression #2 is interesting too, and it illustrates the\nimportance of detecting sequential access patterns. It shows that as\nsoon as we call PrefetchBuffer() directly, the timings increase to maybe\n2-5x compared to master. That's pretty terrible. Once the small LRU\ncache used to detect sequential patterns is added back, the performance\nrecovers and the regression disappears. Clearly, this detection matters.\n\nUnfortunately, the LRU can't do anything for the two other regressions,\nbecause those are on random/cyclic patterns, so the LRU won't work\n(certainly not for the random case).\n\npreadv2 issues?\n---------------\n\nI'm not entirely sure if I'm using preadv2 somehow wrong, but it doesn't\nseem to perform terribly well in this use case. I decided to do some\nmicrobenchmarks, measuring how long it takes to do preadv2 when the\npages are [not] in cache etc. The C files are at [3].\n\npreadv2-test simply reads the file twice, first with NOWAIT and then without\nit. With clean page cache, the results look like this:\n\n  file: ./tmp.img size: 1073741824 (131072) block 8192 check 8192\n  preadv2 NOWAIT time 78472 us calls 131072 hits 0 misses 131072\n  preadv2 WAIT time 9849082 us calls 131072 hits 131072 misses 0\n\nand then, if you run it again with the file still being in page cache:\n\n  file: ./tmp.img size: 1073741824 (131072) block 8192 check 8192\n  preadv2 NOWAIT time 258880 us calls 131072 hits 131072 misses 0\n  preadv2 WAIT time 213196 us calls 131072 hits 131072 misses 0\n\nThis is pretty terrible, IMO. It says that if the page is not in cache,\nthe preadv2 calls take ~80ms. 
Which is very cheap, compared to the total\nread time (so if we can speed that up by prefetching, it's worth it).\nBut if the file is already in cache, it takes ~260ms, and actually\nexceeds the time needed to just do preadv2() without the NOWAIT flag.\n\nAFAICS the problem is preadv2() doesn't just check if the data is\navailable, it also copies the data and all that. But even if we only ask\nfor the first byte, it's still way more expensive than with empty cache:\n\n  file: ./tmp.img size: 1073741824 (131072) block 8192 check 1\n  preadv2 NOWAIT time 119751 us calls 131072 hits 131072 misses 0\n  preadv2 WAIT time 208136 us calls 131072 hits 131072 misses 0\n\nThere's also a fadvise-test microbenchmark that just does fadvise all\nthe time, and even that is way cheaper than using preadv2(NOWAIT) in\nboth cases:\n\n  no cache:\n\n  file: ./tmp.img size: 1073741824 (131072) block 8192\n  fadvise time 631686 us calls 131072 hits 0 misses 0\n  preadv2 time 207483 us calls 131072 hits 131072 misses 0\n\n  cache:\n\n  file: ./tmp.img size: 1073741824 (131072) block 8192\n  fadvise time 79874 us calls 131072 hits 0 misses 0\n  preadv2 time 239141 us calls 131072 hits 131072 misses 0\n\nSo that's 300ms vs. 500ms in the cached case (the difference in the\nno-cache case is even more significant).\n\nIt's entirely possible I'm doing something wrong, or maybe I just think\nabout this the wrong way, but I can't quite imagine this being useful\nfor this use case - at least not for reasonably good local storage. Maybe\nit could help for slow/remote storage, or something?\n\nFor now, I think the right approach is to go back to the cache of\nrecently prefetched blocks. What I liked about the preadv2 approach is that it\nknows exactly what is currently in page cache, while the local cache is\njust an approximate cache of recently prefetched blocks. And it also\nknows about stuff prefetched by other backends, while the local cache is\nprivate to the particular backend (or even to the particular scan node).\n\nBut the local cache seems to perform much better, so there's that.\n\n\nLRU cache of recent blocks (0004)\n---------------------------------\n\nThe importance of this optimization is clearly visible in the regression\nimage mentioned earlier - the \"indexscan regression #2\" shows that the\nsequential pattern regresses with 0002+0003 patches, but once the small\nLRU cache is introduced back and used to skip prefetching for sequential\npatterns, the regression disappears. Ofc, this is part of the original\n20231016 patch, so going back to that version naturally includes this.\n\n\nvisibility map optimizations (0005/0006)\n----------------------------------------\n\nEarlier benchmark results showed a somewhat annoying regression for\nindex-only scans that don't need prefetching (i.e. with all pages\nall-visible). There was quite a bit of inefficiency because both the\nprefetcher and IOS code accessed the visibilitymap independently, and\nthe prefetcher did that in a rather inefficient way. These patches make\nthe prefetcher more efficient by reusing the buffer, and also share the\nvisibility info between the prefetcher and the IOS code.\n\nI'm sure this needs more work / cleanup, but the regression is mostly\ngone, as illustrated by the attached point-0-ios-improvement-small.png.\n\n\nlayering questions\n------------------\n\nAside from the preadv2() question, the main open question remains to be\nthe \"layering\", i.e. 
which code should be responsible for prefetching.\nAt the moment all the magic happens in indexam.c, in index_getnext_*\nfunctions, so that all callers benefit from prefetching.\n\nBut as mentioned earlier in this thread, indexam.c seems to be the wrong\nlayer, and I think I agree. The problem is - the prefetching needs to\nhappen in index_getnext_* so that all index_getnext_* callers benefit\nfrom it. We could do that in the executor for index_getnext_tid(), but\nthat's a bit weird - it'd work for index-only scans, but the primary\ntarget is regular index scans, which calls index_getnext_slot().\n\nHowever, it seems it'd be good if the prefetcher and the executor code\ncould exchange/share information more easily. Take for example the\nvisibilitymap stuff in IOS in patches 0005/0006). I made it work, but it\nsure looks inconvenient, partially due to the split between executor and\nindexam code.\n\nThe only idea I have is to have the prefetcher code somewhere in the\nexecutor, but then pass it to index_getnext_* functions, either as a new\nparameter (with NULL => no prefetching), or maybe as a field of scandesc\n(but that seems wrong, to point from the desc to something that's\nessentially a part of the executor state).\n\nThere's also the thing that the prefetcher is part of IndexScanDesc, but\nit really should be in the IndexScanState. That's weird, but mostly down\nto my general laziness.\n\n\nregards\n\n\n[1]\nhttps://github.com/tvondra/index-prefetch-tests/blob/master/2023-11-23/pdf/point.pdf\n\n[2]\nhttps://github.com/tvondra/index-prefetch-tests/blob/master/2023-11-23/png/point-4.png\n\n[3]\nhttps://github.com/tvondra/index-prefetch-tests/tree/master/2023-11-23/preadv-tests\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
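For reference, the page-cache probe the 0003 experiment is built around boils down to a preadv2() call with RWF_NOWAIT (Linux-specific, glibc with _GNU_SOURCE). The sketch below is not the FileCached()/smgrcached() code from the patch, just the bare system-call usage behind it; as the numbers in the message above show, the thread ultimately moved away from this because the probe itself is too expensive in the cached case.

/*
 * Standalone sketch: check whether a block is in the kernel page cache by
 * reading one byte with RWF_NOWAIT. Success means the data was already
 * cached; EAGAIN means the read would have had to touch the disk.
 */
#define _GNU_SOURCE
#include <errno.h>
#include <stdbool.h>
#include <sys/types.h>
#include <sys/uio.h>

static bool
block_in_page_cache(int fd, off_t offset)
{
    char            buf;
    struct iovec    iov = { .iov_base = &buf, .iov_len = 1 };
    ssize_t         rc = preadv2(fd, &iov, 1, offset, RWF_NOWAIT);

    if (rc > 0)
        return true;                    /* served straight from page cache */
    if (rc == -1 && errno == EAGAIN)
        return false;                   /* would block, so not cached */

    return false;                       /* EOF or error: treat as not cached */
}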
"msg_date": "Fri, 24 Nov 2023 17:25:52 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "Hi,\n\nHere's a simplified version of the patch series, with two important\nchanges from the last version shared on 2023/11/24.\n\nFirstly, it abandons the idea to use preadv2() to check page cache. This\ninitially seemed like a great way to check if prefetching is needed, but\nin practice it seems so expensive it's not really beneficial (especially\nin the \"cached\" case, which is where it matters most).\n\nNote: There's one more reason to not want rely on preadv2() that I\nforgot to mention - it's a Linux-specific thing. I wouldn't mind using\nit to improve already acceptable behavior, but it doesn't seem like a\ngreat idea if performance without would be poor.\n\nSecondly, this reworks multiple aspects of the \"layering\".\n\nUntil now, the prefetching info was stored in IndexScanDesc and\ninitialized in indexam.c in the various \"beginscan\" functions. That was\nobviously wrong - IndexScanDesc is just a description of what the scan\nshould do, not a place where execution state (which the prefetch queue\nis) should be stored. IndexScanState (and IndexOnlyScanState) is a more\nappropriate place, so I moved it there.\n\nThis also means the various \"beginscan\" functions don't need any changes\n(i.e. not even get prefetch_max), which is nice. Because the prefetch\nstate is created/initialized elsewhere.\n\nBut there's a layering problem that I don't know how to solve - I don't\nsee how we could make indexam.c entirely oblivious to the prefetching,\nand move it entirely to the executor. Because how else would you know\nwhat to prefetch?\n\nWith index_getnext_tid() I can imagine fetching XIDs ahead, stashing\nthem into a queue, and prefetching based on that. That's kinda what the\npatch does, except that it does it from inside index_getnext_tid(). But\nthat does not work for index_getnext_slot(), because that already reads\nthe heap tuples.\n\nWe could say prefetching only works for index_getnext_tid(), but that\nseems a bit weird because that's what regular index scans do. (There's a\npatch to evaluate filters on index, which switches index scans to\nindex_getnext_tid(), so that'd make prefetching work too, but I'd ignore\nthat here. There are other index_getnext_slot() callers, and I don't\nthink we should accept does not work for those places seems wrong (e.g.\nexecIndexing/execReplication would benefit from prefetching, I think).\n\nThe patch just adds a \"prefetcher\" argument to index_getnext_*(), and\nthe prefetching still happens there. I guess we could move most of the\nprefether typedefs/code somewhere, but I don't quite see how it could be\ndone in executor entirely.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 9 Dec 2023 19:08:20 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Sat, Dec 9, 2023 at 1:08 PM Tomas Vondra\n<[email protected]> wrote:\n> But there's a layering problem that I don't know how to solve - I don't\n> see how we could make indexam.c entirely oblivious to the prefetching,\n> and move it entirely to the executor. Because how else would you know\n> what to prefetch?\n\nYeah, that seems impossible.\n\nSome thoughts:\n\n* I think perhaps the subject line of this thread is misleading. It\ndoesn't seem like there is any index prefetching going on here at all,\nand there couldn't be, unless you extended the index AM API with new\nmethods. What you're actually doing is prefetching heap pages that\nwill be needed by a scan of the index. I think this confusing naming\nhas propagated itself into some parts of the patch, e.g.\nindex_prefetch() reads *from the heap* which is not at all clear from\nthe comment saying \"Prefetch the TID, unless it's sequential or\nrecently prefetched.\" You're not prefetching the TID: you're\nprefetching the heap tuple to which the TID points. That's not an\nacademic distinction IMHO -- the TID would be stored in the index, so\nif we were prefetching the TID, we'd have to be reading index pages,\nnot heap pages.\n\n* Regarding layering, my first thought was that the changes to\nindex_getnext_tid() and index_getnext_slot() are sensible: read ahead\nby some number of TIDs, keep the TIDs you've fetched in an array\nsomeplace, use that to drive prefetching of blocks on disk, and return\nthe previously-read TIDs from the queue without letting the caller\nknow that the queue exists. I think that's the obvious design for a\nfeature of this type, to the point where I don't really see that\nthere's a viable alternative design. Driving something down into the\nindividual index AMs would make sense if you wanted to prefetch *from\nthe indexes*, but it's unnecessary otherwise, and best avoided.\n\n* But that said, the skip_all_visible flag passed down to\nindex_prefetch() looks like a VERY strong sign that the layering here\nis not what it should be. Right now, when some code calls\nindex_getnext_tid(), that function does not need to know or care\nwhether the caller is going to fetch the heap tuple or not. But with\nthis patch, the code does need to care. So knowledge of the executor\nconcept of an index-only scan trickles down into indexam.c, which now\nhas to be able to make decisions that are consistent with the ones\nthat the executor will make. That doesn't seem good at all.\n\n* I think it might make sense to have two different prefetching\nschemes. Ideally they could share some structure. If a caller is using\nindex_getnext_slot(), then it's easy for prefetching to be fully\ntransparent. The caller can just ask for TIDs and the prefetching\ndistance and TID queue can be fully under the control of something\nthat is hidden from the caller. But when using index_getnext_tid(),\nthe caller needs to have an opportunity to evaluate each TID and\ndecide whether we even want the heap tuple. If yes, then we feed that\nTID to the prefetcher; if no, we don't. That way, we're not\nreplicating executor logic in lower-level code. However, that also\nmeans that the IOS logic needs to be aware that this TID queue exists\nand interact with whatever controls the prefetch distance. Perhaps\nafter calling index_getnext_tid() you call\nindex_prefetcher_put_tid(prefetcher, tid, bool fetch_heap_tuple) and\nthen you call index_prefetcher_get_tid() to drain the queue. 
Perhaps\nalso the prefetcher has a \"fill\" callback that gets invoked when the\nTID queue isn't as full as the prefetcher wants it to be. Then\nindex_getnext_slot() can just install a trivial fill callback that\nsays index_prefetecher_put_tid(prefetcher, index_getnext_tid(...),\ntrue), but IOS can use a more sophisticated callback that checks the\nVM to determine what to pass for the third argument.\n\n* I realize that I'm being a little inconsistent in what I just said,\nbecause in the first bullet point I said that this wasn't really index\nprefetching, and now I'm proposing function names that still start\nwith index_prefetch. It's not entirely clear to me what the best thing\nto do about the terminology is here -- could it be a heap prefetcher,\nor a TID prefetcher, or an index scan prefetcher? I don't really know,\nbut whatever we can do to make the naming more clear seems like a\nreally good idea. Maybe there should be a clearer separation between\nthe queue of TIDs that we're going to return from the index and the\nqueue of blocks that we want to prefetch to get the corresponding heap\ntuples -- making that separation crisper might ease some of the naming\nissues.\n\n* Not that I want to be critical because I think this is a great start\non an important project, but it does look like there's an awful lot of\nstuff here that still needs to be sorted out before it would be\nreasonable to think of committing this, both in terms of design\ndecisions and just general polish. There's a lot of stuff marked with\nXXX and I think that's great because most of those seem to be good\nquestions but that does leave the, err, small problem of figuring out\nthe answers. index_prefetch_is_sequential() makes me really nervous\nbecause it seems to depend an awful lot on whether the OS is doing\nprefetching, and how the OS is doing prefetching, and I think those\nmight not be consistent across all systems and kernel versions.\nSimilarly with index_prefetch(). There's a lot of \"magical\"\nassumptions here. Even index_prefetch_add_cache() has this problem --\nthe function assumes that it's OK if we sometimes fail to detect a\nduplicate prefetch request, which makes sense, but under what\ncircumstances is it necessary to detect duplicates and in what cases\nis it optional? The function comments are silent about that, which\nmakes it hard to assess whether the algorithm is good enough.\n\n* In terms of polish, one thing I noticed is that index_getnext_slot()\ncalls index_prefetch_tids() even when scan->xs_heap_continue is set,\nwhich seems like it must be a waste, since we can't really need to\nkick off more prefetch requests halfway through a HOT chain referenced\nby a single index tuple, can we? Also, blks_prefetch_rounds doesn't\nseem to be used anywhere, and neither that nor blks_prefetches are\ndocumented. In fact there's no new documentation at all, which seems\nprobably not right. That's partly because there are no new GUCs, which\nI feel like typically for a feature like this would be the place where\nthe feature behavior would be mentioned in the documentation. I don't\nthink it's a good idea to tie the behavior of this feature to\neffective_io_concurrency partly because it's usually a bad idea to\nmake one setting control multiple different things, but perhaps even\nmore because effective_io_concurrency doesn't actually work in a\nuseful way AFAICT and people typically have to set it to some very\nartificially large value compared to how much real I/O parallelism\nthey have. 
So probably there should be new GUCs with hopefully-better\nsemantics, but at least the documentation for any existing ones would\nneed updating, I would think.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
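For what it's worth, the interface shape proposed in the message above might look something like the following. Nothing like this exists in the tree; the function names follow the message, while the argument types, the opaque IndexPrefetcher struct and the return conventions are guesses made only to visualize the proposal (Tid stands in for PostgreSQL's ItemPointer):

#include <stdbool.h>

typedef struct { unsigned block; unsigned offset; } Tid;    /* ItemPointer stand-in */
typedef struct IndexPrefetcher IndexPrefetcher;             /* opaque, hypothetical */

/*
 * Invoked when the TID queue is not as full as the prefetcher wants it to
 * be; the callback pulls more TIDs from the index and queues them with
 * index_prefetcher_put_tid(), deciding per TID whether the heap tuple
 * will be needed.
 */
typedef void (*index_prefetcher_fill_cb) (IndexPrefetcher *prefetcher,
                                          void *callback_arg);

/*
 * Queue a TID; fetch_heap_tuple says whether its heap block should be
 * prefetched (an IOS callback would pass false for all-visible pages,
 * a plain index scan would always pass true).
 */
void index_prefetcher_put_tid(IndexPrefetcher *prefetcher,
                              Tid tid,
                              bool fetch_heap_tuple);

/*
 * Drain the queue in the original index order; returns false once the
 * queue is empty and the fill callback produced nothing more.
 */
bool index_prefetcher_get_tid(IndexPrefetcher *prefetcher, Tid *tid);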
"msg_date": "Mon, 18 Dec 2023 16:00:30 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "\n\nOn 12/18/23 22:00, Robert Haas wrote:\n> On Sat, Dec 9, 2023 at 1:08 PM Tomas Vondra\n> <[email protected]> wrote:\n>> But there's a layering problem that I don't know how to solve - I don't\n>> see how we could make indexam.c entirely oblivious to the prefetching,\n>> and move it entirely to the executor. Because how else would you know\n>> what to prefetch?\n> \n> Yeah, that seems impossible.\n> \n> Some thoughts:\n> \n> * I think perhaps the subject line of this thread is misleading. It\n> doesn't seem like there is any index prefetching going on here at all,\n> and there couldn't be, unless you extended the index AM API with new\n> methods. What you're actually doing is prefetching heap pages that\n> will be needed by a scan of the index. I think this confusing naming\n> has propagated itself into some parts of the patch, e.g.\n> index_prefetch() reads *from the heap* which is not at all clear from\n> the comment saying \"Prefetch the TID, unless it's sequential or\n> recently prefetched.\" You're not prefetching the TID: you're\n> prefetching the heap tuple to which the TID points. That's not an\n> academic distinction IMHO -- the TID would be stored in the index, so\n> if we were prefetching the TID, we'd have to be reading index pages,\n> not heap pages.\n\nYes, that's a fair complaint. I think the naming is mostly obsolete -\nthe prefetching initially happened way way lower - in the index AMs. It\nwas prefetching the heap pages, ofc, but it kinda seemed reasonable to\ncall it \"index prefetching\". And even now it's called from indexam.c\nwhere most functions start with \"index_\".\n\nBut I'll think about some better / cleared name.\n\n> \n> * Regarding layering, my first thought was that the changes to\n> index_getnext_tid() and index_getnext_slot() are sensible: read ahead\n> by some number of TIDs, keep the TIDs you've fetched in an array\n> someplace, use that to drive prefetching of blocks on disk, and return\n> the previously-read TIDs from the queue without letting the caller\n> know that the queue exists. I think that's the obvious design for a\n> feature of this type, to the point where I don't really see that\n> there's a viable alternative design.\n\nI agree.\n\n> Driving something down into the individual index AMs would make sense\n> if you wanted to prefetch *from the indexes*, but it's unnecessary\n> otherwise, and best avoided.\n> \n\nRight. In fact, the patch moved exactly in the opposite direction - it\nwas originally done at the AM level, and moved up. First to indexam.c,\nthen even more to the executor.\n\n> * But that said, the skip_all_visible flag passed down to\n> index_prefetch() looks like a VERY strong sign that the layering here\n> is not what it should be. Right now, when some code calls\n> index_getnext_tid(), that function does not need to know or care\n> whether the caller is going to fetch the heap tuple or not. But with\n> this patch, the code does need to care. So knowledge of the executor\n> concept of an index-only scan trickles down into indexam.c, which now\n> has to be able to make decisions that are consistent with the ones\n> that the executor will make. That doesn't seem good at all.\n> \n\nI agree the all_visible flag is a sign the abstraction is not quite\nright. 
I did that mostly to quickly verify whether the duplicate VM\nchecks are causing the perf regression (and they are).\n\nWhatever the right abstraction is, it probably needs to do these VM\nchecks only once.\n\n> * I think it might make sense to have two different prefetching\n> schemes. Ideally they could share some structure. If a caller is using\n> index_getnext_slot(), then it's easy for prefetching to be fully\n> transparent. The caller can just ask for TIDs and the prefetching\n> distance and TID queue can be fully under the control of something\n> that is hidden from the caller. But when using index_getnext_tid(),\n> the caller needs to have an opportunity to evaluate each TID and\n> decide whether we even want the heap tuple. If yes, then we feed that\n> TID to the prefetcher; if no, we don't. That way, we're not\n> replicating executor logic in lower-level code. However, that also\n> means that the IOS logic needs to be aware that this TID queue exists\n> and interact with whatever controls the prefetch distance. Perhaps\n> after calling index_getnext_tid() you call\n> index_prefetcher_put_tid(prefetcher, tid, bool fetch_heap_tuple) and\n> then you call index_prefetcher_get_tid() to drain the queue. Perhaps\n> also the prefetcher has a \"fill\" callback that gets invoked when the\n> TID queue isn't as full as the prefetcher wants it to be. Then\n> index_getnext_slot() can just install a trivial fill callback that\n> says index_prefetecher_put_tid(prefetcher, index_getnext_tid(...),\n> true), but IOS can use a more sophisticated callback that checks the\n> VM to determine what to pass for the third argument.\n> \n\nYeah, after you pointed out the \"leaky\" abstraction, I also started to\nthink about customizing the behavior using a callback. Not sure what\nexactly you mean by \"fully transparent\" but as I explained above I think\nwe need to allow passing some information between the prefetcher and the\nexecutor - for example results of the visibility map checks in IOS.\n\nI have imagined something like this:\n\nnodeIndexscan / index_getnext_slot()\n-> no callback, all TIDs are prefetched\n\nnodeIndexonlyscan / index_getnext_tid()\n-> callback checks VM for the TID, prefetches if not all-visible\n-> the VM check result is stored in the queue with the TID (but in an\n   extensible way, so that other callbacks can store other stuff)\n-> index_getnext_tid() also returns this extra information\n\nSo not that different from the WIP patch, but in a \"generic\" and\nextensible way. Instead of hard-coding the all-visible flag, there'd be\nsome custom information. A bit like qsort_r() has a void* arg to\npass custom context.\n\nOr if you envisioned something different, could you elaborate a bit?\n\n> * I realize that I'm being a little inconsistent in what I just said,\n> because in the first bullet point I said that this wasn't really index\n> prefetching, and now I'm proposing function names that still start\n> with index_prefetch. It's not entirely clear to me what the best thing\n> to do about the terminology is here -- could it be a heap prefetcher,\n> or a TID prefetcher, or an index scan prefetcher? I don't really know,\n> but whatever we can do to make the naming more clear seems like a\n> really good idea. 
Maybe there should be a clearer separation between\n> the queue of TIDs that we're going to return from the index and the\n> queue of blocks that we want to prefetch to get the corresponding heap\n> tuples -- making that separation crisper might ease some of the naming\n> issues.\n> \n\nI think if the code stays in indexam.c, it's sensible to keep the index_\nprefix, but then also have a more appropriate rest of the name. For\nexample it might be index_prefetch_heap_pages() or something like that.\n\n> * Not that I want to be critical because I think this is a great start\n> on an important project, but it does look like there's an awful lot of\n> stuff here that still needs to be sorted out before it would be\n> reasonable to think of committing this, both in terms of design\n> decisions and just general polish. There's a lot of stuff marked with\n> XXX and I think that's great because most of those seem to be good\n> questions but that does leave the, err, small problem of figuring out\n> the answers.\n\nAbsolutely. I certainly don't claim this is close to commit ...\n\n> index_prefetch_is_sequential() makes me really nervous\n> because it seems to depend an awful lot on whether the OS is doing\n> prefetching, and how the OS is doing prefetching, and I think those\n> might not be consistent across all systems and kernel versions.\n\nIf the OS does not have read-ahead, or it's not configured properly,\nthen the patch does not perform worse than what we have now. I'm far\nmore concerned about the opposite issue, i.e. causing regressions with\nOS-level read-ahead. And the check handles that well, I think.\n\n> Similarly with index_prefetch(). There's a lot of \"magical\"\n> assumptions here. Even index_prefetch_add_cache() has this problem --\n> the function assumes that it's OK if we sometimes fail to detect a\n> duplicate prefetch request, which makes sense, but under what\n> circumstances is it necessary to detect duplicates and in what cases\n> is it optional? The function comments are silent about that, which\n> makes it hard to assess whether the algorithm is good enough.\n> \n\nI don't quite understand what problem with duplicates you envision here.\nStrictly speaking, we don't need to detect/prevent duplicates - it's\njust that if you do posix_fadvise() for a block that's already in\nmemory, it's overhead / wasted time. The whole point is to not do that\nvery often. In this sense it's entirely optional, but desirable.\n\nI'm in no way claiming the comments are perfect, ofc.\n\n> * In terms of polish, one thing I noticed is that index_getnext_slot()\n> calls index_prefetch_tids() even when scan->xs_heap_continue is set,\n> which seems like it must be a waste, since we can't really need to\n> kick off more prefetch requests halfway through a HOT chain referenced\n> by a single index tuple, can we?\n\nYeah, I think that's true.\n\n> Also, blks_prefetch_rounds doesn't\n> seem to be used anywhere, and neither that nor blks_prefetches are\n> documented. In fact there's no new documentation at all, which seems\n> probably not right. That's partly because there are no new GUCs, which\n> I feel like typically for a feature like this would be the place where\n> the feature behavior would be mentioned in the documentation.\n\nThat's mostly because the explain fields were added to help during\ndevelopment. 
I'm not sure we actually want to make them part of EXPLAIN.\n\n> I don't\n> think it's a good idea to tie the behavior of this feature to\n> effective_io_concurrency partly because it's usually a bad idea to\n> make one setting control multiple different things, but perhaps even\n> more because effective_io_concurrency doesn't actually work in a\n> useful way AFAICT and people typically have to set it to some very\n> artificially large value compared to how much real I/O parallelism\n> they have. So probably there should be new GUCs with hopefully-better\n> semantics, but at least the documentation for any existing ones would\n> need updating, I would think.\n> \n\nI really don't want to have multiple knobs. At this point we have three\nGUCs, each tuning prefetching for a fairly large part of the system:\n\n effective_io_concurrency = regular queries\n maintenance_io_concurrency = utility commands\n recovery_prefetch = recovery / PITR\n\nThis seems sensible, but I really don't want many more GUCs tuning\nprefetching for different executor nodes or something like that.\n\nIf we have issues with how effective_io_concurrency works (and I'm not\nsure that's actually true), then perhaps we should fix that rather than\ninventing new GUCs.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 20 Dec 2023 02:41:11 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
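To make the "callback with custom context" idea in the message above a bit more concrete, here is a minimal, hypothetical sketch. The names (IndexPrefetchEntry, IndexPrefetchCallback, ios_prefetch_callback) are invented for illustration only, and the generic void * payload discussed above is simplified to a plain bool; only VM_ALL_VISIBLE and the scan-descriptor fields are existing PostgreSQL APIs.

    #include "postgres.h"
    #include "access/relscan.h"
    #include "access/visibilitymap.h"

    /* one element of the TID queue kept by the prefetcher (hypothetical) */
    typedef struct IndexPrefetchEntry
    {
        ItemPointerData tid;            /* TID returned by the index AM */
        bool            prefetch_heap;  /* should the heap page be prefetched? */
        bool            all_visible;    /* VM result, cached so IOS checks it once */
    } IndexPrefetchEntry;

    /* per-TID decision hook, supplied by the executor node (hypothetical) */
    typedef void (*IndexPrefetchCallback) (IndexScanDesc scan,
                                           IndexPrefetchEntry *entry,
                                           void *callback_arg);

    /* index-only scan: prefetch the heap page only when it is not all-visible */
    static void
    ios_prefetch_callback(IndexScanDesc scan, IndexPrefetchEntry *entry,
                          void *callback_arg)
    {
        Buffer     *vmbuffer = (Buffer *) callback_arg;

        entry->all_visible = VM_ALL_VISIBLE(scan->heapRelation,
                                            ItemPointerGetBlockNumber(&entry->tid),
                                            vmbuffer);
        entry->prefetch_heap = !entry->all_visible;
    }

A plain index scan would install no callback (or one that always sets prefetch_heap = true), while the index-only scan node would read all_visible back from the queued entry instead of re-checking the VM.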
{
"msg_contents": "On Tue, Dec 19, 2023 at 8:41 PM Tomas Vondra\n<[email protected]> wrote:\n> Whatever the right abstraction is, it probably needs to do these VM\n> checks only once.\n\nMakes sense.\n\n> Yeah, after you pointed out the \"leaky\" abstraction, I also started to\n> think about customizing the behavior using a callback. Not sure what\n> exactly you mean by \"fully transparent\" but as I explained above I think\n> we need to allow passing some information between the prefetcher and the\n> executor - for example results of the visibility map checks in IOS.\n\nAgreed.\n\n> I have imagined something like this:\n>\n> nodeIndexscan / index_getnext_slot()\n> -> no callback, all TIDs are prefetched\n>\n> nodeIndexonlyscan / index_getnext_tid()\n> -> callback checks VM for the TID, prefetches if not all-visible\n> -> the VM check result is stored in the queue with the VM (but in an\n> extensible way, so that other callback can store other stuff)\n> -> index_getnext_tid() also returns this extra information\n>\n> So not that different from the WIP patch, but in a \"generic\" and\n> extensible way. Instead of hard-coding the all-visible flag, there'd be\n> a something custom information. A bit like qsort_r() has a void* arg to\n> pass custom context.\n>\n> Or if envisioned something different, could you elaborate a bit?\n\nI can't totally follow the sketch you give above, but I think we're\nthinking along similar lines, at least.\n\n> I think if the code stays in indexam.c, it's sensible to keep the index_\n> prefix, but then also have a more appropriate rest of the name. For\n> example it might be index_prefetch_heap_pages() or something like that.\n\nYeah, that's not a bad idea.\n\n> > index_prefetch_is_sequential() makes me really nervous\n> > because it seems to depend an awful lot on whether the OS is doing\n> > prefetching, and how the OS is doing prefetching, and I think those\n> > might not be consistent across all systems and kernel versions.\n>\n> If the OS does not have read-ahead, or it's not configured properly,\n> then the patch does not perform worse than what we have now. I'm far\n> more concerned about the opposite issue, i.e. causing regressions with\n> OS-level read-ahead. And the check handles that well, I think.\n\nI'm just not sure how much I believe that it's going to work well\neverywhere. I mean, I have no evidence that it doesn't, it just kind\nof looks like guesswork to me. For instance, the behavior of the\nalgorithm depends heavily on PREFETCH_QUEUE_HISTORY and\nPREFETCH_SEQ_PATTERN_BLOCKS, but those are just magic numbers. Who is\nto say that on some system or workload you didn't test the required\nvalues aren't entirely different, or that the whole algorithm doesn't\nneed rethinking? Maybe we can't really answer that question perfectly,\nbut the patch doesn't really explain the reasoning behind this choice\nof algorithm.\n\n> > Similarly with index_prefetch(). There's a lot of \"magical\"\n> > assumptions here. Even index_prefetch_add_cache() has this problem --\n> > the function assumes that it's OK if we sometimes fail to detect a\n> > duplicate prefetch request, which makes sense, but under what\n> > circumstances is it necessary to detect duplicates and in what cases\n> > is it optional? 
The function comments are silent about that, which\n> > makes it hard to assess whether the algorithm is good enough.\n>\n> I don't quite understand what problem with duplicates you envision here.\n> Strictly speaking, we don't need to detect/prevent duplicates - it's\n> just that if you do posix_fadvise() for a block that's already in\n> memory, it's overhead / wasted time. The whole point is to not do that\n> very often. In this sense it's entirely optional, but desirable.\n\nRight ... but the patch sets up some data structure that will\neliminate duplicates in some circumstances and fail to eliminate them\nin others. So it's making a judgement that the things it catches are\nthe cases that are important enough that we need to catch them, and\nthe things that it doesn't catch are cases that aren't particularly\nimportant to catch. Here again, PREFETCH_LRU_SIZE and\nPREFETCH_LRU_COUNT seem like they will have a big impact, but why\nthese values? The comments suggest that it's because we want to cover\n~8MB of data, but it's not clear why that should be the right amount\nof data to cover. My naive thought is that we'd want to avoid\nprefetching a block during the time between we had prefetched it and\nwhen we later read it, but then the value that is here magically 8MB\nshould really be replaced by the operative prefetch distance.\n\n> I really don't want to have multiple knobs. At this point we have three\n> GUCs, each tuning prefetching for a fairly large part of the system:\n>\n> effective_io_concurrency = regular queries\n> maintenance_io_concurrency = utility commands\n> recovery_prefetch = recovery / PITR\n>\n> This seems sensible, but I really don't want many more GUCs tuning\n> prefetching for different executor nodes or something like that.\n>\n> If we have issues with how effective_io_concurrency works (and I'm not\n> sure that's actually true), then perhaps we should fix that rather than\n> inventing new GUCs.\n\nWell, that would very possibly be a good idea, but I still think using\nthe same GUC for two different purposes is likely to cause trouble. I\nthink what effective_io_concurrency currently controls is basically\nthe heap prefetch distance for bitmap scans, and what you want to\ncontrol here is the heap prefetch distance for index scans. If those\nare necessarily related in some understandable way (e.g. always the\nsame, one twice the other, one the square of the other) then it's fine\nto use the same parameter for both, but it's not clear to me that this\nis the case. I fear someone will find that if they crank up\neffective_io_concurrency high enough to get the amount of prefetching\nthey want for bitmap scans, it will be too much for index scans, or\nthe other way around.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Dec 2023 14:09:06 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
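One way to act on the "magically 8MB" point above would be to derive the size of the recently-prefetched cache from the operative prefetch distance instead of a fixed amount of data. A tiny sketch of that idea - the multiplier and the limits are arbitrary assumptions, not anything from the patch (Min/Max as in PostgreSQL's c.h):

    /* remember a few prefetch-distances worth of blocks, not a fixed ~8MB */
    #define PREFETCH_CACHE_MULTIPLIER 4

    static int
    prefetch_cache_size(int prefetch_target)
    {
        /* e.g. target 64 -> remember 256 blocks (2MB with 8kB pages) */
        return Max(8, Min(prefetch_target * PREFETCH_CACHE_MULTIPLIER, 1024));
    }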
{
"msg_contents": "On Wed, Dec 20, 2023 at 7:11 AM Tomas Vondra\n<[email protected]> wrote:\n>\nI was going through to understand the idea, couple of observations\n\n--\n+ for (int i = 0; i < PREFETCH_LRU_SIZE; i++)\n+ {\n+ entry = &prefetch->prefetchCache[lru * PREFETCH_LRU_SIZE + i];\n+\n+ /* Is this the oldest prefetch request in this LRU? */\n+ if (entry->request < oldestRequest)\n+ {\n+ oldestRequest = entry->request;\n+ oldestIndex = i;\n+ }\n+\n+ /*\n+ * If the entry is unused (identified by request being set to 0),\n+ * we're done. Notice the field is uint64, so empty entry is\n+ * guaranteed to be the oldest one.\n+ */\n+ if (entry->request == 0)\n+ continue;\n\nIf the 'entry->request == 0' then we should break instead of continue, right?\n\n---\n/*\n * Used to detect sequential patterns (and disable prefetching).\n */\n#define PREFETCH_QUEUE_HISTORY 8\n#define PREFETCH_SEQ_PATTERN_BLOCKS 4\n\nIf for sequential patterns we search only 4 blocks then why we are\nmaintaining history for 8 blocks\n\n---\n\n+ *\n+ * XXX Perhaps this should be tied to effective_io_concurrency somehow?\n+ *\n+ * XXX Could it be harmful that we read the queue backwards? Maybe memory\n+ * prefetching works better for the forward direction?\n+ */\n+ for (int i = 1; i < PREFETCH_SEQ_PATTERN_BLOCKS; i++)\n\nCorrect, I think if we fetch this forward it will have an advantage\nwith memory prefetching.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 21 Dec 2023 12:19:20 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
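For reference, the first review point above amounts to the following adjustment of the quoted loop - since the cache entries are filled left to right, the first unused entry means the rest of this LRU must be unused as well, so the scan can stop (surrounding declarations as in the patch):

    for (int i = 0; i < PREFETCH_LRU_SIZE; i++)
    {
        entry = &prefetch->prefetchCache[lru * PREFETCH_LRU_SIZE + i];

        /* Is this the oldest prefetch request in this LRU? */
        if (entry->request < oldestRequest)
        {
            oldestRequest = entry->request;
            oldestIndex = i;
        }

        /*
         * Unused entry (request == 0) - entries are filled left to right,
         * so everything after this one is unused too and we can stop.
         */
        if (entry->request == 0)
            break;
    }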
{
"msg_contents": "On 12/20/23 20:09, Robert Haas wrote:\n> On Tue, Dec 19, 2023 at 8:41 PM Tomas Vondra\n> ...\n>> I have imagined something like this:\n>>\n>> nodeIndexscan / index_getnext_slot()\n>> -> no callback, all TIDs are prefetched\n>>\n>> nodeIndexonlyscan / index_getnext_tid()\n>> -> callback checks VM for the TID, prefetches if not all-visible\n>> -> the VM check result is stored in the queue with the VM (but in an\n>> extensible way, so that other callback can store other stuff)\n>> -> index_getnext_tid() also returns this extra information\n>>\n>> So not that different from the WIP patch, but in a \"generic\" and\n>> extensible way. Instead of hard-coding the all-visible flag, there'd be\n>> a something custom information. A bit like qsort_r() has a void* arg to\n>> pass custom context.\n>>\n>> Or if envisioned something different, could you elaborate a bit?\n> \n> I can't totally follow the sketch you give above, but I think we're\n> thinking along similar lines, at least.\n> \n\nYeah, it's hard to discuss vague descriptions of code that does not\nexist yet. I'll try to do the actual patch, then we can discuss.\n\n>>> index_prefetch_is_sequential() makes me really nervous\n>>> because it seems to depend an awful lot on whether the OS is doing\n>>> prefetching, and how the OS is doing prefetching, and I think those\n>>> might not be consistent across all systems and kernel versions.\n>>\n>> If the OS does not have read-ahead, or it's not configured properly,\n>> then the patch does not perform worse than what we have now. I'm far\n>> more concerned about the opposite issue, i.e. causing regressions with\n>> OS-level read-ahead. And the check handles that well, I think.\n> \n> I'm just not sure how much I believe that it's going to work well\n> everywhere. I mean, I have no evidence that it doesn't, it just kind\n> of looks like guesswork to me. For instance, the behavior of the\n> algorithm depends heavily on PREFETCH_QUEUE_HISTORY and\n> PREFETCH_SEQ_PATTERN_BLOCKS, but those are just magic numbers. Who is\n> to say that on some system or workload you didn't test the required\n> values aren't entirely different, or that the whole algorithm doesn't\n> need rethinking? Maybe we can't really answer that question perfectly,\n> but the patch doesn't really explain the reasoning behind this choice\n> of algorithm.\n> \n\nYou're right a lot of this is a guesswork. I don't think we can do much\nbetter, because it depends on stuff that's out of our control - each OS\nmay do things differently, or perhaps it's just configured differently.\n\nBut I don't think this is really a serious issue - all the read-ahead\nimplementations need to work about the same, because they are meant to\nwork in a transparent way.\n\nSo it's about deciding at which point we think this is a sequential\npattern. Yes, the OS may use a slightly different threshold, but the\nexact value does not really matter - in the worst case we prefetch a\ncouple more/fewer blocks.\n\nThe OS read-ahead can't really prefetch anything except sequential\ncases, so the whole question is \"When does the access pattern get\nsequential enough?\". 
I don't think there's a perfect answer, and I don't\nthink we need a perfect one - we just need to be reasonably close.\n\nAlso, while I don't want to lazily dismiss valid cases that might be\naffected by this, I think that sequential access for index paths is not\nthat common (with the exception of clustered indexes).\n\nFWIW bitmap index scans have exactly the same \"problem\" except that no\none cares about it because that's how it worked from the start, so it's\nnot considered a regression.\n\n>>> Similarly with index_prefetch(). There's a lot of \"magical\"\n>>> assumptions here. Even index_prefetch_add_cache() has this problem --\n>>> the function assumes that it's OK if we sometimes fail to detect a\n>>> duplicate prefetch request, which makes sense, but under what\n>>> circumstances is it necessary to detect duplicates and in what cases\n>>> is it optional? The function comments are silent about that, which\n>>> makes it hard to assess whether the algorithm is good enough.\n>>\n>> I don't quite understand what problem with duplicates you envision here.\n>> Strictly speaking, we don't need to detect/prevent duplicates - it's\n>> just that if you do posix_fadvise() for a block that's already in\n>> memory, it's overhead / wasted time. The whole point is to not do that\n>> very often. In this sense it's entirely optional, but desirable.\n> \n> Right ... but the patch sets up some data structure that will\n> eliminate duplicates in some circumstances and fail to eliminate them\n> in others. So it's making a judgement that the things it catches are\n> the cases that are important enough that we need to catch them, and\n> the things that it doesn't catch are cases that aren't particularly\n> important to catch. Here again, PREFETCH_LRU_SIZE and\n> PREFETCH_LRU_COUNT seem like they will have a big impact, but why\n> these values? The comments suggest that it's because we want to cover\n> ~8MB of data, but it's not clear why that should be the right amount\n> of data to cover. My naive thought is that we'd want to avoid\n> prefetching a block during the time between we had prefetched it and\n> when we later read it, but then the value that is here magically 8MB\n> should really be replaced by the operative prefetch distance.\n> \n\nTrue. Ideally we'd not issue prefetch request for data that's already in\nmemory - either in shared buffers or page cache (or whatever). And we\nalready do that for shared buffers, but not for page cache. The preadv2\nexperiment was an attempt to do that, but it's too expensive to help.\n\nSo we have to approximate, and the only way I can think of is checking\nif we recently prefetched that block. Which is the whole point of this\nsimple cache - remembering which blocks we prefetched, so that we don't\nprefetch them over and over again.\n\nI don't understand what you mean by \"cases that are important enough\".\nIn a way, all the blocks are equally important, with exactly the same\nimpact of making the wrong decision.\n\nYou're certainly right the 8MB is a pretty arbitrary value, though. It\nseemed reasonable, so I used that, but I might just as well use 32MB or\nsome other sensible value. Ultimately, any hard-coded value is going to\nbe wrong, but the negative consequences are a bit asymmetrical. If the\ncache is too small, we may end up doing prefetches for data that's\nalready in cache. 
If it's too large, we may not prefetch data that's not\nin memory at that point.\n\nObviously, the latter case has much more severe impact, but it depends\non the exact workload / access pattern etc. The only \"perfect\" solution\nwould be to actually check the page cache, but well - that seems to be\nfairly expensive.\n\nWhat I was envisioning was something self-tuning, based on the I/O we\nmay do later. If the prefetcher decides to prefetch something, but finds\nit's already in cache, we'd increase the distance, to remember more\nblocks. Likewise, if a block is not prefetched but then requires I/O\nlater, decrease the distance. That'd make it adaptive, but I don't think\nwe actually have the info about I/O.\n\nA bigger \"flaw\" is that these caches are per-backend, so there's no way\nto check if a block was recently prefetched by some other backend. I\nactually wonder if maybe this cache should be in shared memory, but I\nhaven't tried.\n\nAlternatively, I was thinking about moving the prefetches into a\nseparate worker process (or multiple workers), so we'd just queue the\nrequest and all the overhead would be done by the worker. The main\nproblem is the overhead of calling posix_fadvise() for blocks that are\nalready in memory, and this would just move it to a separate backend. I\nwonder if that might even make the custom cache unnecessary / optional.\n\nAFAICS this seems similar to some of the AIO patch, I wonder what that\nplans to do. I need to check.\n\n>> I really don't want to have multiple knobs. At this point we have three\n>> GUCs, each tuning prefetching for a fairly large part of the system:\n>>\n>> effective_io_concurrency = regular queries\n>> maintenance_io_concurrency = utility commands\n>> recovery_prefetch = recovery / PITR\n>>\n>> This seems sensible, but I really don't want many more GUCs tuning\n>> prefetching for different executor nodes or something like that.\n>>\n>> If we have issues with how effective_io_concurrency works (and I'm not\n>> sure that's actually true), then perhaps we should fix that rather than\n>> inventing new GUCs.\n> \n> Well, that would very possibly be a good idea, but I still think using\n> the same GUC for two different purposes is likely to cause trouble. I\n> think what effective_io_concurrency currently controls is basically\n> the heap prefetch distance for bitmap scans, and what you want to\n> control here is the heap prefetch distance for index scans. If those\n> are necessarily related in some understandable way (e.g. always the\n> same, one twice the other, one the square of the other) then it's fine\n> to use the same parameter for both, but it's not clear to me that this\n> is the case. I fear someone will find that if they crank up\n> effective_io_concurrency high enough to get the amount of prefetching\n> they want for bitmap scans, it will be too much for index scans, or\n> the other way around.\n> \n\nI understand, but I think we should really try to keep the number of\nknobs as low as possible, unless we actually have very good arguments\nfor having separate GUCs. And I don't think we have that.\n\nThis is very much about how many concurrent requests the storage can\nhandle (or rather requires to benefit from the capabilities), and that's\npretty orthogonal to which operation is generating the requests.\n\nI think this is pretty similar to what we do with work_mem - there's one\nvalue for all possible parts of the query plan, no matter if it's sort,\ngroup by, or something else. 
We do have separate limits for maintenance\ncommands, because that's a different matter, and we have the same for\nthe two I/O GUCs.\n\nIf we come to the realization that really need two GUCs, fine with me.\nBut at this point I don't see a reason to do that.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 21 Dec 2023 13:30:42 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
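As a rough illustration of the "remember what we recently prefetched" cache described above: the actual patch uses a small partitioned LRU keyed by 64-bit request counters, so the flat ring buffer below is a deliberately simplified sketch of the principle only, not the patch's data structure.

    #include "postgres.h"
    #include "storage/block.h"

    #define RECENT_PREFETCH_SLOTS 64    /* arbitrary size for illustration */

    typedef struct RecentPrefetch
    {
        BlockNumber blocks[RECENT_PREFETCH_SLOTS]; /* init to InvalidBlockNumber */
        int         next;                          /* next slot to overwrite */
    } RecentPrefetch;

    /*
     * Return true if we already issued a prefetch for this block recently,
     * in which case another posix_fadvise() would just be wasted work.
     * Otherwise remember the block and tell the caller to go ahead, e.g.:
     *
     *     if (!recently_prefetched(cache, block))
     *         PrefetchBuffer(heapRel, MAIN_FORKNUM, block);
     */
    static bool
    recently_prefetched(RecentPrefetch *cache, BlockNumber block)
    {
        for (int i = 0; i < RECENT_PREFETCH_SLOTS; i++)
        {
            if (cache->blocks[i] == block)
                return true;
        }

        cache->blocks[cache->next] = block;
        cache->next = (cache->next + 1) % RECENT_PREFETCH_SLOTS;
        return false;
    }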
{
"msg_contents": "\n\nOn 12/21/23 07:49, Dilip Kumar wrote:\n> On Wed, Dec 20, 2023 at 7:11 AM Tomas Vondra\n> <[email protected]> wrote:\n>>\n> I was going through to understand the idea, couple of observations\n> \n> --\n> + for (int i = 0; i < PREFETCH_LRU_SIZE; i++)\n> + {\n> + entry = &prefetch->prefetchCache[lru * PREFETCH_LRU_SIZE + i];\n> +\n> + /* Is this the oldest prefetch request in this LRU? */\n> + if (entry->request < oldestRequest)\n> + {\n> + oldestRequest = entry->request;\n> + oldestIndex = i;\n> + }\n> +\n> + /*\n> + * If the entry is unused (identified by request being set to 0),\n> + * we're done. Notice the field is uint64, so empty entry is\n> + * guaranteed to be the oldest one.\n> + */\n> + if (entry->request == 0)\n> + continue;\n> \n> If the 'entry->request == 0' then we should break instead of continue, right?\n> \n\nYes, I think that's true. The small LRU caches are accessed/filled\nlinearly, so once we find an empty entry, all following entries are\ngoing to be empty too.\n\nI thought this shouldn't make any difference, because the LRUs are very\nsmall (only 8 entries, and I don't think we should make them larger).\nAnd it's going to go away once the cache gets full. But now that I think\nabout it, maybe this could matter for small queries that only ever hit a\ncouple rows. Hmmm, I'll have to check.\n\nThanks for noticing this!\n\n> ---\n> /*\n> * Used to detect sequential patterns (and disable prefetching).\n> */\n> #define PREFETCH_QUEUE_HISTORY 8\n> #define PREFETCH_SEQ_PATTERN_BLOCKS 4\n> \n> If for sequential patterns we search only 4 blocks then why we are\n> maintaining history for 8 blocks\n> \n> ---\n\nRight, I think there's no reason to keep these two separate constants. I\nbelieve this is a remnant from an earlier patch version which tried to\ndo something smarter, but I ended up abandoning that.\n\n> \n> + *\n> + * XXX Perhaps this should be tied to effective_io_concurrency somehow?\n> + *\n> + * XXX Could it be harmful that we read the queue backwards? Maybe memory\n> + * prefetching works better for the forward direction?\n> + */\n> + for (int i = 1; i < PREFETCH_SEQ_PATTERN_BLOCKS; i++)\n> \n> Correct, I think if we fetch this forward it will have an advantage\n> with memory prefetching.\n> \n\nOK, although we only really have a couple uint32 values, so it should be\nthe same cacheline I guess.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 21 Dec 2023 13:48:09 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
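And the second review point (walking the history forward, with a single constant) could look roughly like this - a sketch, not the patch's exact code; the history array is assumed to hold the most recently requested block numbers in order:

    #define SEQ_PATTERN_BLOCKS 4        /* single constant, per the discussion */

    static bool
    is_sequential_pattern(const BlockNumber *history, int nfilled)
    {
        if (nfilled < SEQ_PATTERN_BLOCKS)
            return false;               /* not enough history to decide */

        /* walk forward - friendlier to memory prefetching, same result */
        for (int i = 1; i < SEQ_PATTERN_BLOCKS; i++)
        {
            if (history[i] != history[i - 1] + 1)
                return false;           /* a gap or a backwards step */
        }

        /* looks sequential - leave it to the OS read-ahead */
        return true;
    }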
{
"msg_contents": "Hi,\n\nOn 2023-12-09 19:08:20 +0100, Tomas Vondra wrote:\n> But there's a layering problem that I don't know how to solve - I don't\n> see how we could make indexam.c entirely oblivious to the prefetching,\n> and move it entirely to the executor. Because how else would you know\n> what to prefetch?\n\n> With index_getnext_tid() I can imagine fetching XIDs ahead, stashing\n> them into a queue, and prefetching based on that. That's kinda what the\n> patch does, except that it does it from inside index_getnext_tid(). But\n> that does not work for index_getnext_slot(), because that already reads\n> the heap tuples.\n\n> We could say prefetching only works for index_getnext_tid(), but that\n> seems a bit weird because that's what regular index scans do. (There's a\n> patch to evaluate filters on index, which switches index scans to\n> index_getnext_tid(), so that'd make prefetching work too, but I'd ignore\n> that here.\n\nI think we should just switch plain index scans to index_getnext_tid(). It's\none of the primary places triggering index scans, so a few additional lines\ndon't seem problematic.\n\nI continue to think that we should not have split plain and index only scans\ninto separate files...\n\n\n> There are other index_getnext_slot() callers, and I don't\n> think we should accept does not work for those places seems wrong (e.g.\n> execIndexing/execReplication would benefit from prefetching, I think).\n\nI don't think it'd be a problem to have to opt into supporting\nprefetching. There's plenty places where it doesn't really seem likely to be\nuseful, e.g. doing prefetching during syscache lookups is very likely just a\nwaste of time.\n\nI don't think e.g. execReplication is likely to benefit from prefetching -\nyou're just fetching a single row after all. You'd need a lot of dead rows to\nmake it beneficial. I think it's similar in execIndexing.c.\n\n\nI suspect we should work on providing executor nodes with some estimates about\nthe number of rows that are likely to be consumed. If an index scan is under a\nLIMIT 1, we shoulnd't prefetch. Similar for sequential scan with the\ninfrastructure in\nhttps://postgr.es/m/CA%2BhUKGJkOiOCa%2Bmag4BF%2BzHo7qo%3Do9CFheB8%3Dg6uT5TUm2gkvA%40mail.gmail.com\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 21 Dec 2023 05:27:42 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
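For context, "switching plain index scans to index_getnext_tid()" essentially means open-coding what index_getnext_slot() does today, so the executor sees each TID before the heap fetch and has a natural place to feed a prefetch queue. A simplified sketch using the existing APIs (index_getnext_tid, index_fetch_heap, xs_heap_continue are real; the prefetch hook itself is only hinted at in a comment):

    #include "postgres.h"
    #include "access/genam.h"

    static bool
    index_scan_getnext(IndexScanDesc scan, ScanDirection dir,
                       TupleTableSlot *slot)
    {
        for (;;)
        {
            if (!scan->xs_heap_continue)
            {
                ItemPointer tid = index_getnext_tid(scan, dir);

                if (tid == NULL)
                    return false;   /* no more index entries */

                /* a prefetcher would enqueue upcoming TIDs around this point */
            }

            /* fetch the (next) visible heap tuple for the current index entry */
            if (index_fetch_heap(scan, slot))
                return true;

            /* no visible tuple for this TID - loop and advance to the next */
        }
    }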
{
"msg_contents": "Hi,\n\nOn 2023-12-21 13:30:42 +0100, Tomas Vondra wrote:\n> You're right a lot of this is a guesswork. I don't think we can do much\n> better, because it depends on stuff that's out of our control - each OS\n> may do things differently, or perhaps it's just configured differently.\n> \n> But I don't think this is really a serious issue - all the read-ahead\n> implementations need to work about the same, because they are meant to\n> work in a transparent way.\n> \n> So it's about deciding at which point we think this is a sequential\n> pattern. Yes, the OS may use a slightly different threshold, but the\n> exact value does not really matter - in the worst case we prefetch a\n> couple more/fewer blocks.\n> \n> The OS read-ahead can't really prefetch anything except sequential\n> cases, so the whole question is \"When does the access pattern get\n> sequential enough?\". I don't think there's a perfect answer, and I don't\n> think we need a perfect one - we just need to be reasonably close.\n\nFor the streaming read interface (initially backed by fadvise, to then be\nreplaced by AIO) we found that it's clearly necessary to avoid fadvises in\ncases of actual sequential IO - the overhead otherwise leads to easily\nreproducible regressions. So I don't think we have much choice.\n\n\n> Also, while I don't want to lazily dismiss valid cases that might be\n> affected by this, I think that sequential access for index paths is not\n> that common (with the exception of clustered indexes).\n\nI think sequential access is common in other cases as well. There's lots of\nindexes where heap tids are almost perfectly correlated with index entries,\nconsider insert only insert-only tables and serial PKs or inserted_at\ntimestamp columns. Even leaving those aside, for indexes with many entries\nfor the same key, we sort by tid these days, which will also result in\n\"runs\" of sequential access.\n\n\n> Obviously, the latter case has much more severe impact, but it depends\n> on the exact workload / access pattern etc. The only \"perfect\" solution\n> would be to actually check the page cache, but well - that seems to be\n> fairly expensive.\n\n> What I was envisioning was something self-tuning, based on the I/O we\n> may do later. If the prefetcher decides to prefetch something, but finds\n> it's already in cache, we'd increase the distance, to remember more\n> blocks. Likewise, if a block is not prefetched but then requires I/O\n> later, decrease the distance. That'd make it adaptive, but I don't think\n> we actually have the info about I/O.\n\nHow would the prefetcher know that hte data wasn't in cache?\n\n\n> Alternatively, I was thinking about moving the prefetches into a\n> separate worker process (or multiple workers), so we'd just queue the\n> request and all the overhead would be done by the worker. The main\n> problem is the overhead of calling posix_fadvise() for blocks that are\n> already in memory, and this would just move it to a separate backend. I\n> wonder if that might even make the custom cache unnecessary / optional.\n\nThe AIO patchset provides this.\n\n\n> AFAICS this seems similar to some of the AIO patch, I wonder what that\n> plans to do. I need to check.\n\nYes, most of this exists there. The difference that with the AIO you don't\nneed to prefetch, as you can just initiate the IO for real, and wait for it to\ncomplete.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 21 Dec 2023 05:43:14 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "\n\nOn 12/21/23 14:43, Andres Freund wrote:\n> Hi,\n> \n> On 2023-12-21 13:30:42 +0100, Tomas Vondra wrote:\n>> You're right a lot of this is a guesswork. I don't think we can do much\n>> better, because it depends on stuff that's out of our control - each OS\n>> may do things differently, or perhaps it's just configured differently.\n>>\n>> But I don't think this is really a serious issue - all the read-ahead\n>> implementations need to work about the same, because they are meant to\n>> work in a transparent way.\n>>\n>> So it's about deciding at which point we think this is a sequential\n>> pattern. Yes, the OS may use a slightly different threshold, but the\n>> exact value does not really matter - in the worst case we prefetch a\n>> couple more/fewer blocks.\n>>\n>> The OS read-ahead can't really prefetch anything except sequential\n>> cases, so the whole question is \"When does the access pattern get\n>> sequential enough?\". I don't think there's a perfect answer, and I don't\n>> think we need a perfect one - we just need to be reasonably close.\n> \n> For the streaming read interface (initially backed by fadvise, to then be\n> replaced by AIO) we found that it's clearly necessary to avoid fadvises in\n> cases of actual sequential IO - the overhead otherwise leads to easily\n> reproducible regressions. So I don't think we have much choice.\n> \n\nYeah, the regression are pretty easy to demonstrate. In fact, I didn't\nhave such detection in the first patch, but after the first round of\nbenchmarks it became obvious it's needed.\n\n> \n>> Also, while I don't want to lazily dismiss valid cases that might be\n>> affected by this, I think that sequential access for index paths is not\n>> that common (with the exception of clustered indexes).\n> \n> I think sequential access is common in other cases as well. There's lots of\n> indexes where heap tids are almost perfectly correlated with index entries,\n> consider insert only insert-only tables and serial PKs or inserted_at\n> timestamp columns. Even leaving those aside, for indexes with many entries\n> for the same key, we sort by tid these days, which will also result in\n> \"runs\" of sequential access.\n> \n\nTrue. I should have thought about those cases.\n\n> \n>> Obviously, the latter case has much more severe impact, but it depends\n>> on the exact workload / access pattern etc. The only \"perfect\" solution\n>> would be to actually check the page cache, but well - that seems to be\n>> fairly expensive.\n> \n>> What I was envisioning was something self-tuning, based on the I/O we\n>> may do later. If the prefetcher decides to prefetch something, but finds\n>> it's already in cache, we'd increase the distance, to remember more\n>> blocks. Likewise, if a block is not prefetched but then requires I/O\n>> later, decrease the distance. That'd make it adaptive, but I don't think\n>> we actually have the info about I/O.\n> \n> How would the prefetcher know that hte data wasn't in cache?\n> \n\nI don't think there's a good way to do that, unfortunately, or at least\nI'm not aware of it. That's what I meant by \"we don't have the info\" at\nthe end. Which is why I haven't tried implementing it.\n\nThe only \"solution\" I could come up with was some sort of \"timing\" for\nthe I/O requests and deducing what was cached. 
Not great, of course.\n\n> \n>> Alternatively, I was thinking about moving the prefetches into a\n>> separate worker process (or multiple workers), so we'd just queue the\n>> request and all the overhead would be done by the worker. The main\n>> problem is the overhead of calling posix_fadvise() for blocks that are\n>> already in memory, and this would just move it to a separate backend. I\n>> wonder if that might even make the custom cache unnecessary / optional.\n> \n> The AIO patchset provides this.\n> \n\nOK, I guess it's time for me to take a look at the patch again.\n\n> \n>> AFAICS this seems similar to some of the AIO patch, I wonder what that\n>> plans to do. I need to check.\n> \n> Yes, most of this exists there. The difference that with the AIO you don't\n> need to prefetch, as you can just initiate the IO for real, and wait for it to\n> complete.\n> \n\nRight, although the line where things stop being \"prefetch\" and becomes\n\"async\" seems a bit unclear to me / perhaps more a point of view.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 21 Dec 2023 16:20:45 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
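The self-tuning idea mentioned above, written out purely to illustrate the intended behaviour. It presumes feedback signals ("was the prefetched block already cached", "did a skipped block need real I/O") that, as the message notes, are not actually available today, so this is a thought experiment rather than something implementable as-is:

    static void
    adjust_prefetch_cache(int *nremembered, bool prefetched_but_cached,
                          bool skipped_but_needed_io, int max_remembered)
    {
        if (prefetched_but_cached && *nremembered < max_remembered)
            (*nremembered)++;   /* wasted fadvise - remember more blocks */
        else if (skipped_but_needed_io && *nremembered > 1)
            (*nremembered)--;   /* suppressed a useful prefetch - remember fewer */
    }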
{
"msg_contents": "\n\nOn 12/21/23 14:27, Andres Freund wrote:\n> Hi,\n> \n> On 2023-12-09 19:08:20 +0100, Tomas Vondra wrote:\n>> But there's a layering problem that I don't know how to solve - I don't\n>> see how we could make indexam.c entirely oblivious to the prefetching,\n>> and move it entirely to the executor. Because how else would you know\n>> what to prefetch?\n> \n>> With index_getnext_tid() I can imagine fetching XIDs ahead, stashing\n>> them into a queue, and prefetching based on that. That's kinda what the\n>> patch does, except that it does it from inside index_getnext_tid(). But\n>> that does not work for index_getnext_slot(), because that already reads\n>> the heap tuples.\n> \n>> We could say prefetching only works for index_getnext_tid(), but that\n>> seems a bit weird because that's what regular index scans do. (There's a\n>> patch to evaluate filters on index, which switches index scans to\n>> index_getnext_tid(), so that'd make prefetching work too, but I'd ignore\n>> that here.\n> \n> I think we should just switch plain index scans to index_getnext_tid(). It's\n> one of the primary places triggering index scans, so a few additional lines\n> don't seem problematic.\n> \n> I continue to think that we should not have split plain and index only scans\n> into separate files...\n> \n\nI do agree with that opinion. Not just because of this prefetching\nthread, but also because of the discussions about index-only filters in\na nearby thread.\n\n> \n>> There are other index_getnext_slot() callers, and I don't\n>> think we should accept does not work for those places seems wrong (e.g.\n>> execIndexing/execReplication would benefit from prefetching, I think).\n> \n> I don't think it'd be a problem to have to opt into supporting\n> prefetching. There's plenty places where it doesn't really seem likely to be\n> useful, e.g. doing prefetching during syscache lookups is very likely just a\n> waste of time.\n> \n> I don't think e.g. execReplication is likely to benefit from prefetching -\n> you're just fetching a single row after all. You'd need a lot of dead rows to\n> make it beneficial. I think it's similar in execIndexing.c.\n> \n\nYeah, systable scans are unlikely to benefit from prefetching of this\ntype. I'm not sure about execIndexing/execReplication, it wasn't clear\nto me but maybe you're right.\n\n> \n> I suspect we should work on providing executor nodes with some estimates about\n> the number of rows that are likely to be consumed. If an index scan is under a\n> LIMIT 1, we shoulnd't prefetch. Similar for sequential scan with the\n> infrastructure in\n> https://postgr.es/m/CA%2BhUKGJkOiOCa%2Bmag4BF%2BzHo7qo%3Do9CFheB8%3Dg6uT5TUm2gkvA%40mail.gmail.com\n> \n\nIsn't this mostly addressed by the incremental ramp-up at the beginning?\nEven with target set to 1000, we only start prefetching 1, 2, 3, ...\nblocks ahead, it's not like we'll prefetch 1000 blocks right away.\n\nI did initially plan to also consider the number of rows we're expected\nto need, but I think it's actually harder than it might seem. With LIMIT\nfor example we often don't know how selective the qual is, it's not like\nwe can just stop prefetching after the reading the first N tids. With\nother nodes it's good to remember those are just estimates - it'd be\nsilly to be bitten both by a wrong estimate and also prefetching doing\nthe wrong thing based on an estimate.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 21 Dec 2023 16:32:51 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
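The ramp-up mentioned above ("we only start prefetching 1, 2, 3, ... blocks ahead") is the same trick bitmap heap scans use to avoid a burst of prefetches for queries that stop after a few rows. A trivial sketch of that progression, assuming it is called once per TID returned from the index:

    static int
    next_prefetch_distance(int current, int target)
    {
        /*
         * Grow the effective look-ahead gradually (0, 1, 2, 3, ...) up to
         * the target, so a scan under LIMIT 1 issues at most a prefetch
         * or two before the query stops.
         */
        if (current < target)
            return current + 1;
        return target;
    }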
{
"msg_contents": "Hi,\n\nOn 2023-12-21 16:20:45 +0100, Tomas Vondra wrote:\n> On 12/21/23 14:43, Andres Freund wrote:\n> >> AFAICS this seems similar to some of the AIO patch, I wonder what that\n> >> plans to do. I need to check.\n> > \n> > Yes, most of this exists there. The difference that with the AIO you don't\n> > need to prefetch, as you can just initiate the IO for real, and wait for it to\n> > complete.\n> > \n> \n> Right, although the line where things stop being \"prefetch\" and becomes\n> \"async\" seems a bit unclear to me / perhaps more a point of view.\n\nAgreed. What I meant with not needing prefetching was that you'd not use\nfadvise(), because it's better to instead just asynchronously read data into\nshared buffers. That way you don't have the doubling of syscalls and you don't\nneed to care less about the buffering rate in the kernel.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 21 Dec 2023 07:43:52 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Thu, Dec 21, 2023 at 10:33 AM Tomas Vondra\n<[email protected]> wrote:\n> > I continue to think that we should not have split plain and index only scans\n> > into separate files...\n>\n> I do agree with that opinion. Not just because of this prefetching\n> thread, but also because of the discussions about index-only filters in\n> a nearby thread.\n\nFor the record, in the original patch I submitted for this feature, it\nwasn't in separate files. If memory serves, Tom changed it.\n\nSo don't blame me. :-)\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 21 Dec 2023 11:00:34 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "Hi,\n\nOn 2023-12-21 11:00:34 -0500, Robert Haas wrote:\n> On Thu, Dec 21, 2023 at 10:33 AM Tomas Vondra\n> <[email protected]> wrote:\n> > > I continue to think that we should not have split plain and index only scans\n> > > into separate files...\n> >\n> > I do agree with that opinion. Not just because of this prefetching\n> > thread, but also because of the discussions about index-only filters in\n> > a nearby thread.\n> \n> For the record, in the original patch I submitted for this feature, it\n> wasn't in separate files. If memory serves, Tom changed it.\n> \n> So don't blame me. :-)\n\nBut I'd like you to feel guilty (no, not really) and fix it (yes, really) :)\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 21 Dec 2023 08:07:57 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Thu, Dec 21, 2023 at 11:08 AM Andres Freund <[email protected]> wrote:\n> But I'd like you to feel guilty (no, not really) and fix it (yes, really) :)\n\nSadly, you're more likely to get the first one than you are to get the\nsecond one. I can't really see going back to revisit that decision as\na basis for somebody else's new work -- it'd be better if the person\ndoing the new work figured out what makes sense here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 21 Dec 2023 12:14:01 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On 12/21/23 18:14, Robert Haas wrote:\n> On Thu, Dec 21, 2023 at 11:08 AM Andres Freund <[email protected]> wrote:\n>> But I'd like you to feel guilty (no, not really) and fix it (yes, really) :)\n> \n> Sadly, you're more likely to get the first one than you are to get the\n> second one. I can't really see going back to revisit that decision as\n> a basis for somebody else's new work -- it'd be better if the person\n> doing the new work figured out what makes sense here.\n> \n\nI think it's a great example of \"hindsight is 20/20\". There were\nperfectly valid reasons to have two separate nodes, and it's not like\nthese reasons somehow disappeared. It still is a perfectly reasonable\ndecision.\n\nIt's just that allowing index-only filters for regular index scans seems\nto eliminate pretty much all executor differences between the two nodes.\nBut that's hard to predict - I certainly would not have even think about\nthat back when index-only scans were added.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 21 Dec 2023 20:05:52 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "Hi,\n\nHere's a somewhat reworked version of the patch. My initial goal was to\nsee if it could adopt the StreamingRead API proposed in [1], but that\nturned out to be less straight-forward than I hoped, for two reasons:\n\n(1) The StreamingRead API seems to be designed for pages, but the index\ncode naturally works with TIDs/tuples. Yes, the callbacks can associate\nthe blocks with custom data (in this case that'd be the TID), but it\nseemed a bit strange ...\n\n(2) The place adding requests to the StreamingRead queue is pretty far\nfrom the place actually reading the pages - for prefetching, the\nrequests would be generated in nodeIndexscan, but the page reading\nhappens somewhere deep in index_fetch_heap/heapam_index_fetch_tuple.\nSure, the TIDs would come from a callback, so it's a bit as if the\nrequests were generated in heapam_index_fetch_tuple - but it has no idea\nStreamingRead exists, so where would it get it.\n\nWe might teach it about it, but what if there are multiple places\ncalling index_fetch_heap()? Not all of which may be using StreamingRead\n(only indexscans would do that). Or if there are multiple index scans,\nthere's need to be a separate StreamingRead queues, right?\n\nIn any case, I felt a bit out of my depth here, and I chose not to do\nall this work without discussing the direction here. (Also, see the\npoint about cursors and xs_heap_continue a bit later in this post.)\n\n\nI did however like the general StreamingRead API - how it splits the\nwork between the API and the callback. The patch used to do everything,\nwhich meant it hardcoded a lot of the IOS-specific logic etc. I did plan\nto have some sort of \"callback\" for reading from the queue, but that\ndidn't quite solve this issue - a lot of the stuff remained hard-coded.\nBut the StreamingRead API made me realize that having a callback for the\nfirst phase (that adds requests to the queue) would fix that.\n\nSo I did that - there's now one simple callback in for index scans, and\na bit more complex callback for index-only scans. Thanks to this the\nhard-coded stuff mostly disappears, which is good.\n\nPerhaps a bigger change is that I decided to move this into a separate\nAPI on top of indexam.c. The original idea was to integrate this into\nindex_getnext_tid/index_getnext_slot, so that all callers benefit from\nthe prefetching automatically. Which would be nice, but it also meant\nit's need to happen in the indexam.c code, which seemed dirty.\n\nThis patch introduces an API similar to StreamingRead. It calls the\nindexam.c stuff, but does all the prefetching on top of it, not in it.\nIf a place calling index_getnext_tid() wants to allow prefetching, it\nneeds to switch to IndexPrefetchNext(). (There's no function that would\nreplace index_getnext_slot, at the moment. Maybe there should be.)\n\nNote 1: The IndexPrefetch name is a bit misleading, because it's used\neven with prefetching disabled - all index reads from the index scan\nhappen through it. 
Maybe it should be called IndexReader or something\nlike that.\n\nNote 2: I left the code in indexam.c for now, but in principle it could\n(should) be moved to a different place.\n\nI think this layering makes sense, and it's probably much closer to what\nAndres meant when he said the prefetching should happen in the executor.\nEven if the patch ends up using StreamingRead in the future, I guess\nwe'll want something like IndexPrefetch - it might use the StreamingRead\ninternally, but it would still need to do some custom stuff to detect\nI/O patterns or something that does not quite fit into the StreamingRead.\n\n\nNow, let's talk about two (mostly unrelated) problems I ran into.\n\nFirstly, I realized there's a bit of a problem with cursors. The\nprefetching works like this:\n\n1) reading TIDs from the index\n2) stashing them into a queue in IndexPrefetch\n3) doing prefetches for the new TIDs added to the queue\n4) returning the TIDs to the caller, one by one\n\nAnd all of this works ... unless the direction of the scan changes.\nWhich for cursors can happen if someone does FETCH BACKWARD or stuff\nlike that. I'm not sure how difficult it'd be to make this work. I\nsuppose we could simply discard the prefetched entries and do the right\nnumber of steps back for the index scan. But I haven't tried, and maybe\nit's more complex than I'm imagining. Also, if the cursor changes the\ndirection a lot, it'd make the prefetching harmful.\n\nThe patch simply disables prefetching for such queries, using the same\nlogic that we do for parallelism. This may be over-zealous.\n\nFWIW this is one of the things that probably should remain outside of\nStreamingRead API - it seems pretty index-specific, and I'm not sure\nwe'd even want to support these \"backward\" movements in the API.\n\n\nThe other issue I'm aware of is handling xs_heap_continue. I believe it\nworks fine for \"false\" but I need to take a look at non-MVCC snapshots\n(i.e. when xs_heap_continue=true).\n\n\nI haven't done any benchmarks with this reworked API - there's a couple\nmore allocations etc. but it did not change in a fundamental way. I\ndon't expect any major difference.\n\nregards\n\n\n\n[1]\nhttps://www.postgresql.org/message-id/CA%2BhUKGJkOiOCa%2Bmag4BF%2BzHo7qo%3Do9CFheB8%3Dg6uT5TUm2gkvA%40mail.gmail.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 4 Jan 2024 15:55:01 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
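To give a flavour of how an executor node might consume the reworked layer described above: IndexPrefetchNext() is the function named in the message, but the signature, the IndexPrefetchEntry result type and the iss_prefetch field below are guesses for illustration, not the patch's actual definitions.

    static TupleTableSlot *
    IndexNextWithPrefetch(IndexScanState *node)
    {
        EState         *estate = node->ss.ps.state;
        IndexPrefetch  *prefetch = node->iss_prefetch;  /* assumed field */
        IndexScanDesc   scandesc = node->iss_ScanDesc;

        for (;;)
        {
            /* next TID from the index, prefetching heap pages a bit ahead */
            IndexPrefetchEntry *entry = IndexPrefetchNext(scandesc, prefetch,
                                                          estate->es_direction);

            if (entry == NULL)
                return NULL;            /* end of scan */

            /* plain index scan: always fetch the heap tuple */
            if (index_fetch_heap(scandesc, node->ss.ss_ScanTupleSlot))
                return node->ss.ss_ScanTupleSlot;

            /* tuple not visible - loop and try the next TID */
        }
    }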
{
"msg_contents": "On Thu, Jan 4, 2024 at 9:55 AM Tomas Vondra\n<[email protected]> wrote:\n> Here's a somewhat reworked version of the patch. My initial goal was to\n> see if it could adopt the StreamingRead API proposed in [1], but that\n> turned out to be less straight-forward than I hoped, for two reasons:\n\nI guess we need Thomas or Andres or maybe Melanie to comment on this.\n\n> Perhaps a bigger change is that I decided to move this into a separate\n> API on top of indexam.c. The original idea was to integrate this into\n> index_getnext_tid/index_getnext_slot, so that all callers benefit from\n> the prefetching automatically. Which would be nice, but it also meant\n> it's need to happen in the indexam.c code, which seemed dirty.\n\nThis patch is hard to review right now because there's a bunch of\ncomment updating that doesn't seem to have been done for the new\ndesign. For instance:\n\n+ * XXX This does not support prefetching of heap pages. When such\nprefetching is\n+ * desirable, use index_getnext_tid().\n\nBut not any more.\n\n+ * XXX The prefetching may interfere with the patch allowing us to evaluate\n+ * conditions on the index tuple, in which case we may not need the heap\n+ * tuple. Maybe if there's such filter, we should prefetch only pages that\n+ * are not all-visible (and the same idea would also work for IOS), but\n+ * it also makes the indexing a bit \"aware\" of the visibility stuff (which\n+ * seems a somewhat wrong). Also, maybe we should consider the filter\nselectivity\n\nI'm not sure whether all the problems in this area are solved, but I\nthink you've solved enough of them that this at least needs rewording,\nif not removing.\n\n+ * XXX Comment/check seems obsolete.\n\nThis occurs in two places. I'm not sure if it's accurate or not.\n\n+ * XXX Could this be an issue for the prefetching? What if we\nprefetch something\n+ * but the direction changes before we get to the read? If that\ncould happen,\n+ * maybe we should discard the prefetched data and go back? But can we even\n+ * do that, if we already fetched some TIDs from the index? I don't think\n+ * indexorderdir can't change, but es_direction maybe can?\n\nBut your email claims that \"The patch simply disables prefetching for\nsuch queries, using the same logic that we do for parallelism.\" FWIW,\nI think that's a fine way to handle that case.\n\n+ * XXX Maybe we should enable prefetching, but prefetch only pages that\n+ * are not all-visible (but checking that from the index code seems like\n+ * a violation of layering etc).\n\nIsn't this fixed now? Note this comment occurs twice.\n\n+ * XXX We need to disable this in some cases (e.g. when using index-only\n+ * scans, we don't want to prefetch pages). Or maybe we should prefetch\n+ * only pages that are not all-visible, that'd be even better.\n\nHere again.\n\nAnd now for some comments on other parts of the patch, mostly other\nXXX comments:\n\n+ * XXX This does not support prefetching of heap pages. When such\nprefetching is\n+ * desirable, use index_getnext_tid().\n\nThere's probably no reason to write XXX here. The comment is fine.\n\n+ * XXX Notice we haven't added the block to the block queue yet, and there\n+ * is a preceding block (i.e. blockIndex-1 is valid).\n\nSame here, possibly? If this XXX indicates a defect in the code, I\ndon't know what the defect is, so I guess it needs to be more clear.\nIf it is just explaining the code, then there's no reason for the\ncomment to say XXX.\n\n+ * XXX Could it be harmful that we read the queue backwards? 
Maybe memory\n+ * prefetching works better for the forward direction?\n\nIt does. But I don't know whether that matters here or not.\n\n+ * XXX We do add the cache size to the request in order not to\n+ * have issues with uint64 underflows.\n\nI don't know what this means.\n\n+ * XXX not sure this correctly handles xs_heap_continue - see\nindex_getnext_slot,\n+ * maybe nodeIndexscan needs to do something more to handle this?\nAlthough, that\n+ * should be in the indexscan next_cb callback, probably.\n+ *\n+ * XXX If xs_heap_continue=true, we need to return the last TID.\n\nYou've got a bunch of comments about xs_heap_continue here -- and I\ndon't fully understand what the issues are here with respect to this\nparticular patch, but I think that the general purpose of\nxs_heap_continue is to handle the case where we need to return more\nthan one tuple from the same HOT chain. With an MVCC snapshot that\ndoesn't happen, but with say SnapshotAny or SnapshotDirty, it could.\nAs far as possible, the prefetcher shouldn't be involved at all when\nxs_heap_continue is set, I believe, because in that case we're just\nreturning a bunch of tuples from the same page, and the extra fetches\nfrom that heap page shouldn't trigger or require any further\nprefetching.\n\n+ * XXX Should this also look at plan.plan_rows and maybe cap the target\n+ * to that? Pointless to prefetch more than we expect to use. Or maybe\n+ * just reset to that value during prefetching, after reading the next\n+ * index page (or rather after rescan)?\n\nIt seems questionable to use plan_rows here because (1) I don't think\nwe have existing cases where we use the estimated row count in the\nexecutor for anything, we just carry it through so EXPLAIN can print\nit and (2) row count estimates can be really far off, especially if\nwe're on the inner side of a nested loop, we might like to figure that\nout eventually instead of just DTWT forever. But on the other hand\nthis does feel like an important case where we have a clue that\nprefetching might need to be done less aggressively or not at all, and\nit doesn't seem right to ignore that signal either. I wonder if we\nwant this shaped in some other way, like a Boolean that says\nare-we-under-a-potentially-row-limiting-construct e.g. limit or inner\nside of a semi-join or anti-join.\n\n+ * We reach here if the index only scan is not parallel, or if we're\n+ * serially executing an index only scan that was planned to be\n+ * parallel.\n\nWell, this seems sad.\n\n+ * XXX This might lead to IOS being slower than plain index scan, if the\n+ * table has a lot of pages that need recheck.\n\nHow?\n\n+ /*\n+ * XXX Only allow index prefetching when parallelModeOK=true. This is a bit\n+ * of a misuse of the flag, but we need to disable prefetching for cursors\n+ * (which might change direction), and parallelModeOK does that. But maybe\n+ * we might (or should) have a separate flag.\n+ */\n\nI think the correct flag to be using here is execute_once, which\ncaptures whether the executor could potentially be invoked a second\ntime for the same portal. Changes in the fetch direction are possible\nif and only if !execute_once.\n\n> Note 1: The IndexPrefetch name is a bit misleading, because it's used\n> even with prefetching disabled - all index reads from the index scan\n> happen through it. Maybe it should be called IndexReader or something\n> like that.\n\nMy biggest gripe here is the capitalization. 
This version adds, inter\nalia, IndexPrefetchAlloc, PREFETCH_QUEUE_INDEX, and\nindex_heap_prefetch_target, which seems like one or two too many\nconventions. But maybe the PREFETCH_* macros don't even belong in a\npublic header.\n\nI do like the index_heap_prefetch_* naming. Possibly that's too\nverbose to use for everything, but calling this index-heap-prefetch\nrather than index-prefetch seems clearer.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Jan 2024 15:31:39 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "Hi,\n\nHere's an improved version of this patch, finishing a lot of the stuff\nthat I alluded to earlier - moving the code from indexam.c, renaming a\nbunch of stuff, etc. I've also squashed it into a single patch, to make\nit easier to review.\n\nI'll briefly go through the main changes in the patch, and then will\nrespond in-line to Robert's points.\n\n\n1) I moved the code from indexam.c to (new) execPrefetch.c. All the\nprototypes / typedefs now live in executor.h, with only minimal changes\nin execnodes.h (adding it to scan descriptors).\n\nI believe this finally moves the code to the right place - it feels much\nnicer and cleaner than in indexam.c. And it allowed me to hide a bunch\nof internal structs and improve the general API, I think.\n\nI'm sure there's stuff that could be named differently, but the layering\nfeels about right, I think.\n\n\n2) A bunch of stuff got renamed to start with IndexPrefetch... to make\nthe naming consistent / clearer. I'm not entirely sure IndexPrefetch is\nthe right name, though - it's still a bit misleading, as it might seem\nit's about prefetching index stuff, but really it's about heap pages\nfrom indexes. Maybe IndexScanPrefetch() or something like that?\n\n\n3) If there's a way to make this work with the streaming I/O API, I'm\nnot aware of it. But the overall design seems somewhat similar (based on\n\"next\" callback etc.) so hopefully that'd make it easier to adopt it.\n\n\n4) I initially relied on parallelModeOK to disable prefetching, which\nkinda worked, but not really. Robert suggested to use the execute_once\nflag directly, and I think that's much better - not only is it cleaner,\nit also seems more appropriate (the parallel flag considers other stuff\nthat is not quite relevant to prefetching).\n\nThinking about this, I think it should be possible to make prefetching\nwork even for plans with execute_once=false. In particular, when the\nplan changes direction it should be possible to simply \"walk back\" the\nprefetch queue, to get to the \"correct\" place in in the scan. But I'm\nnot sure it's worth it, because plans that change direction often can't\nreally benefit from prefetches anyway - they'll often visit stuff they\naccessed shortly before anyway. For plans that don't change direction\nbut may pause, we don't know if the plan pauses long enough for the\nprefetched pages to get evicted or something. So I think it's OK that\nexecute_once=false means no prefetching.\n\n\n5) I haven't done anything about the xs_heap_continue=true case yet.\n\n\n6) I went through all the comments and reworked them considerably. The\nmain comment at execPrefetch.c start, with some overall design etc. And\nthen there are comments for each function, explaining that bit in more\ndetail. Or at least that's the goal - there's still work to do.\n\nThere's two trivial FIXMEs, but you can ignore those - it's not that\nthere's a bug, but that I'd like to rework something and just don't know\nhow yet.\n\nThere's also a couple of XXX comments. Some are a bit wild ideas for the\nfuture, others are somewhat \"open questions\" to be discussed during a\nreview.\n\nAnyway, there should be no outright obsolete comments - if there's\nsomething I missed, let me know.\n\n\nNow to Robert's message ...\n\n\nOn 1/9/24 21:31, Robert Haas wrote:\n> On Thu, Jan 4, 2024 at 9:55 AM Tomas Vondra\n> <[email protected]> wrote:\n>> Here's a somewhat reworked version of the patch. 
My initial goal was to\n>> see if it could adopt the StreamingRead API proposed in [1], but that\n>> turned out to be less straight-forward than I hoped, for two reasons:\n> \n> I guess we need Thomas or Andres or maybe Melanie to comment on this.\n> \n\nYeah. Or maybe Thomas if he has thoughts on how to combine this with the\nstreaming I/O stuff.\n\n>> Perhaps a bigger change is that I decided to move this into a separate\n>> API on top of indexam.c. The original idea was to integrate this into\n>> index_getnext_tid/index_getnext_slot, so that all callers benefit from\n>> the prefetching automatically. Which would be nice, but it also meant\n>> it's need to happen in the indexam.c code, which seemed dirty.\n> \n> This patch is hard to review right now because there's a bunch of\n> comment updating that doesn't seem to have been done for the new\n> design. For instance:\n> \n> + * XXX This does not support prefetching of heap pages. When such\n> prefetching is\n> + * desirable, use index_getnext_tid().\n> \n> But not any more.\n> \n\nTrue. And this is now even more obsolete, as the prefetching was moved\nfrom indexam.c layer to the executor.\n\n> + * XXX The prefetching may interfere with the patch allowing us to evaluate\n> + * conditions on the index tuple, in which case we may not need the heap\n> + * tuple. Maybe if there's such filter, we should prefetch only pages that\n> + * are not all-visible (and the same idea would also work for IOS), but\n> + * it also makes the indexing a bit \"aware\" of the visibility stuff (which\n> + * seems a somewhat wrong). Also, maybe we should consider the filter\n> selectivity\n> \n> I'm not sure whether all the problems in this area are solved, but I\n> think you've solved enough of them that this at least needs rewording,\n> if not removing.\n> \n> + * XXX Comment/check seems obsolete.\n> \n> This occurs in two places. I'm not sure if it's accurate or not.\n> \n> + * XXX Could this be an issue for the prefetching? What if we\n> prefetch something\n> + * but the direction changes before we get to the read? If that\n> could happen,\n> + * maybe we should discard the prefetched data and go back? But can we even\n> + * do that, if we already fetched some TIDs from the index? I don't think\n> + * indexorderdir can't change, but es_direction maybe can?\n> \n> But your email claims that \"The patch simply disables prefetching for\n> such queries, using the same logic that we do for parallelism.\" FWIW,\n> I think that's a fine way to handle that case.\n> \n\nTrue. I left behind this comment partly intentionally, to point out why\nwe disable the prefetching in these cases, but you're right the comment\nnow explains something that can't happen.\n\n> + * XXX Maybe we should enable prefetching, but prefetch only pages that\n> + * are not all-visible (but checking that from the index code seems like\n> + * a violation of layering etc).\n> \n> Isn't this fixed now? Note this comment occurs twice.\n> \n> + * XXX We need to disable this in some cases (e.g. when using index-only\n> + * scans, we don't want to prefetch pages). Or maybe we should prefetch\n> + * only pages that are not all-visible, that'd be even better.\n> \n> Here again.\n> \n\nSorry, you're right those comments (and a couple more nearby) were\nstale. Removed / clarified.\n\n> And now for some comments on other parts of the patch, mostly other\n> XXX comments:\n> \n> + * XXX This does not support prefetching of heap pages. 
When such\n> prefetching is\n> + * desirable, use index_getnext_tid().\n> \n> There's probably no reason to write XXX here. The comment is fine.\n> \n> + * XXX Notice we haven't added the block to the block queue yet, and there\n> + * is a preceding block (i.e. blockIndex-1 is valid).\n> \n> Same here, possibly? If this XXX indicates a defect in the code, I\n> don't know what the defect is, so I guess it needs to be more clear.\n> If it is just explaining the code, then there's no reason for the\n> comment to say XXX.\n> \n\nYeah, removed the XXX / reworded a bit.\n\n> + * XXX Could it be harmful that we read the queue backwards? Maybe memory\n> + * prefetching works better for the forward direction?\n> \n> It does. But I don't know whether that matters here or not.\n> \n> + * XXX We do add the cache size to the request in order not to\n> + * have issues with uint64 underflows.\n> \n> I don't know what this means.\n> \n\nThere's a check that does this:\n\n (x + PREFETCH_CACHE_SIZE) >= y\n\nit might also be done as \"mathematically equivalent\"\n\n x >= (y - PREFETCH_CACHE_SIZE)\n\nbut if the \"y\" is an uint64, and the value is smaller than the constant,\nthis would underflow. It'd eventually disappear, once the \"y\" gets large\nenough, ofc.\n\n> + * XXX not sure this correctly handles xs_heap_continue - see\n> index_getnext_slot,\n> + * maybe nodeIndexscan needs to do something more to handle this?\n> Although, that\n> + * should be in the indexscan next_cb callback, probably.\n> + *\n> + * XXX If xs_heap_continue=true, we need to return the last TID.\n> \n> You've got a bunch of comments about xs_heap_continue here -- and I\n> don't fully understand what the issues are here with respect to this\n> particular patch, but I think that the general purpose of\n> xs_heap_continue is to handle the case where we need to return more\n> than one tuple from the same HOT chain. With an MVCC snapshot that\n> doesn't happen, but with say SnapshotAny or SnapshotDirty, it could.\n> As far as possible, the prefetcher shouldn't be involved at all when\n> xs_heap_continue is set, I believe, because in that case we're just\n> returning a bunch of tuples from the same page, and the extra fetches\n> from that heap page shouldn't trigger or require any further\n> prefetching.\n> \n\nYes, that's correct. The current code simply ignores that flag and just\nproceeds to the next TID. Which is correct for xs_heap_continue=false,\nand thus all MVCC snapshots work fine. But for the Any/Dirty case it\nneeds to work a bit differently.\n\n> + * XXX Should this also look at plan.plan_rows and maybe cap the target\n> + * to that? Pointless to prefetch more than we expect to use. Or maybe\n> + * just reset to that value during prefetching, after reading the next\n> + * index page (or rather after rescan)?\n> \n> It seems questionable to use plan_rows here because (1) I don't think\n> we have existing cases where we use the estimated row count in the\n> executor for anything, we just carry it through so EXPLAIN can print\n> it and (2) row count estimates can be really far off, especially if\n> we're on the inner side of a nested loop, we might like to figure that\n> out eventually instead of just DTWT forever. But on the other hand\n> this does feel like an important case where we have a clue that\n> prefetching might need to be done less aggressively or not at all, and\n> it doesn't seem right to ignore that signal either. 
I wonder if we\n> want this shaped in some other way, like a Boolean that says\n> are-we-under-a-potentially-row-limiting-construct e.g. limit or inner\n> side of a semi-join or anti-join.\n> \n\nThe current code actually does look at plan_rows when calculating the\nprefetch target:\n\n prefetch_max = IndexPrefetchComputeTarget(node->ss.ss_currentRelation,\n node->ss.ps.plan->plan_rows,\n estate->es_use_prefetching);\n\nbut I agree maybe it should not, for the reasons you explain. I'm not\nattached to this part.\n\n\n> + * We reach here if the index only scan is not parallel, or if we're\n> + * serially executing an index only scan that was planned to be\n> + * parallel.\n> \n> Well, this seems sad.\n> \n\nStale comment, I believe. However, I didn't see much benefits with\nparallel index scan during testing. Having I/O from multiple workers\ngenerally had the same effect, I think.\n\n> + * XXX This might lead to IOS being slower than plain index scan, if the\n> + * table has a lot of pages that need recheck.\n> \n> How?\n> \n\nThe comment is not particularly clear what \"this\" means, but I believe\nthis was about index-only scan with many not-all-visible pages. If it\ndidn't do prefetching, a regular index scan with prefetching may be way\nfaster. But the code actually allows doing prefetching even for IOS, by\nchecking the vm in the \"next\" callback.\n\n> + /*\n> + * XXX Only allow index prefetching when parallelModeOK=true. This is a bit\n> + * of a misuse of the flag, but we need to disable prefetching for cursors\n> + * (which might change direction), and parallelModeOK does that. But maybe\n> + * we might (or should) have a separate flag.\n> + */\n> \n> I think the correct flag to be using here is execute_once, which\n> captures whether the executor could potentially be invoked a second\n> time for the same portal. Changes in the fetch direction are possible\n> if and only if !execute_once.\n> \n\nRight. The new patch version does that.\n\n>> Note 1: The IndexPrefetch name is a bit misleading, because it's used\n>> even with prefetching disabled - all index reads from the index scan\n>> happen through it. Maybe it should be called IndexReader or something\n>> like that.\n> \n> My biggest gripe here is the capitalization. This version adds, inter\n> alia, IndexPrefetchAlloc, PREFETCH_QUEUE_INDEX, and\n> index_heap_prefetch_target, which seems like one or two too many\n> conventions. But maybe the PREFETCH_* macros don't even belong in a\n> public header.\n> \n> I do like the index_heap_prefetch_* naming. Possibly that's too\n> verbose to use for everything, but calling this index-heap-prefetch\n> rather than index-prefetch seems clearer.\n> \n\nYeah. I renamed all the structs and functions to IndexPrefetchSomething,\nto keep it consistent. And then the constants are all capital, ofc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 12 Jan 2024 17:42:39 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
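A minimal sketch of the "cap the prefetch target by the row estimate" idea debated above — the function and variable names are invented and are not the patch's IndexPrefetchComputeTarget(); it only shows the shape of bounding the prefetch distance by both the I/O concurrency setting and the number of rows the plan expects to produce:

```
#include <stdint.h>

/* Hypothetical helper, not the patch's actual code. */
static int
compute_prefetch_target(int io_concurrency, double plan_rows)
{
    int     target = io_concurrency;

    /* prefetching further ahead than we ever expect to read is pointless */
    if (plan_rows > 0 && plan_rows < (double) target)
        target = (int) plan_rows;

    return target;
}
```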
{
"msg_contents": "Not a full response, but just to address a few points:\n\nOn Fri, Jan 12, 2024 at 11:42 AM Tomas Vondra\n<[email protected]> wrote:\n> Thinking about this, I think it should be possible to make prefetching\n> work even for plans with execute_once=false. In particular, when the\n> plan changes direction it should be possible to simply \"walk back\" the\n> prefetch queue, to get to the \"correct\" place in in the scan. But I'm\n> not sure it's worth it, because plans that change direction often can't\n> really benefit from prefetches anyway - they'll often visit stuff they\n> accessed shortly before anyway. For plans that don't change direction\n> but may pause, we don't know if the plan pauses long enough for the\n> prefetched pages to get evicted or something. So I think it's OK that\n> execute_once=false means no prefetching.\n\n+1.\n\n> > + * XXX We do add the cache size to the request in order not to\n> > + * have issues with uint64 underflows.\n> >\n> > I don't know what this means.\n> >\n>\n> There's a check that does this:\n>\n> (x + PREFETCH_CACHE_SIZE) >= y\n>\n> it might also be done as \"mathematically equivalent\"\n>\n> x >= (y - PREFETCH_CACHE_SIZE)\n>\n> but if the \"y\" is an uint64, and the value is smaller than the constant,\n> this would underflow. It'd eventually disappear, once the \"y\" gets large\n> enough, ofc.\n\nThe problem is, I think, that there's no particular reason that\nsomeone reading the existing code should imagine that it might have\nbeen done in that \"mathematically equivalent\" fashion. I imagined that\nyou were trying to make a point about adding the cache size to the\nrequest vs. adding nothing, whereas in reality you were trying to make\na point about adding from one side vs. subtracting from the other.\n\n> > + * We reach here if the index only scan is not parallel, or if we're\n> > + * serially executing an index only scan that was planned to be\n> > + * parallel.\n> >\n> > Well, this seems sad.\n>\n> Stale comment, I believe. However, I didn't see much benefits with\n> parallel index scan during testing. Having I/O from multiple workers\n> generally had the same effect, I think.\n\nFair point, likely worth mentioning explicitly in the comment.\n\n> Yeah. I renamed all the structs and functions to IndexPrefetchSomething,\n> to keep it consistent. And then the constants are all capital, ofc.\n\nIt'd still be nice to get table or heap in there, IMHO, but maybe we\ncan't, and consistency is certainly a good thing regardless of the\ndetails, so thanks for that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Jan 2024 11:52:53 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
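To make the underflow point above concrete, here is a small self-contained C example (the constant and the values are made up; only the arithmetic matters). With uint64 operands, the subtraction form wraps around whenever the counter is still smaller than the cache size, so the two "mathematically equivalent" checks disagree early in the scan:

```
#include <stdint.h>
#include <stdio.h>

#define CACHE_SIZE 1024            /* stand-in for PREFETCH_CACHE_SIZE */

int
main(void)
{
    uint64_t x = 10;               /* e.g. position of a cached entry */
    uint64_t y = 100;              /* e.g. request counter, still small */

    /* addition form: 10 + 1024 >= 100, prints 1 as intended */
    printf("add form: %d\n", (x + CACHE_SIZE) >= y);

    /* subtraction form: 100 - 1024 wraps to ~2^64, so this prints 0 */
    printf("sub form: %d\n", x >= (y - CACHE_SIZE));

    return 0;
}
```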
{
"msg_contents": "Hi,\n\nOn 12/01/2024 6:42 pm, Tomas Vondra wrote:\n> Hi,\n>\n> Here's an improved version of this patch, finishing a lot of the stuff\n> that I alluded to earlier - moving the code from indexam.c, renaming a\n> bunch of stuff, etc. I've also squashed it into a single patch, to make\n> it easier to review.\n\nI am thinking about testing you patch with Neon (cloud Postgres). As far \nas Neon seaprates compute and storage, prefetch is much more critical \nfor Neon\narchitecture than for vanilla Postgres.\n\nI have few complaints:\n\n1. It disables prefetch for sequential access pattern (i.e. INDEX \nMERGE), motivating it that in this case OS read-ahead will be more \nefficient than prefetch. It may be true for normal storage devices, bit \nnot for Neon storage and may be also for Postgres on top of DFS (i.e. \nAmazon RDS). I wonder if we can delegate decision whether to perform \nprefetch in this case or not to some other level. I do not know \nprecisely where is should be handled. The best candidate IMHO is \nstorager manager. But it most likely requires extension of SMGR API. Not \nsure if you want to do it... Straightforward solution is to move this \nlogic to some callback, which can be overwritten by user.\n\n2. It disables prefetch for direct_io. It seems to be even more obvious \nthan 1), because prefetching using `posix_fadvise` definitely not \npossible in case of using direct_io. But in theory if SMGR provides some \nalternative prefetch implementation (as in case of Neon), this also may \nbe not true. Still unclear why we can want to use direct_io in Neon... \nBut still I prefer to mo.ve this decision outside executor.\n\n3. It doesn't perform prefetch of leave pages for IOS, only referenced \nheap pages which are not marked as all-visible. It seems to me that if \noptimized has chosen IOS (and not bitmap heap scan for example), then \nthere should be large enough fraction for all-visible pages. Also index \nprefetch is most efficient for OLAp queries and them are used to be \nperformance for historical data which is all-visible. But IOS can be \nreally handled separately in some other PR. Frankly speaking combining \nprefetch of leave B-Tree pages and referenced heap pages seems to be \nvery challenged task.\n\n4. I think that performing prefetch at executor level is really great \nidea and so prefetch can be used by all indexes, including custom \nindexes. But prefetch will be efficient only if index can provide fast \naccess to next TID (located at the same page). I am not sure that it is \ntrue for all builtin indexes (GIN, GIST, BRIN,...) and especially for \ncustom AM. I wonder if we should extend AM API to make index make a \ndecision weather to perform prefetch of TIDs or not.\n\n5. Minor notice: there are few places where index_getnext_slot is called \nwith last NULL parameter (disabled prefetch) with the following comment\n\"XXX Would be nice to also benefit from prefetching here.\" But all this \nplaces corresponds to \"point loopkup\", i.e. unique constraint check, \nfind replication tuple by index... Prefetch seems to be unlikely useful \nhere, unlkess there is index bloating and and we have to skip a lot of \ntuples before locating right one. But should we try to optimize case of \nbloated indexes?\n\n\n\n\n\n\nHi,\n\nOn 12/01/2024 6:42 pm, Tomas Vondra\n wrote:\n\n\nHi,\n\nHere's an improved version of this patch, finishing a lot of the stuff\nthat I alluded to earlier - moving the code from indexam.c, renaming a\nbunch of stuff, etc. 
I've also squashed it into a single patch, to make\nit easier to review.\n\n\nI am thinking about testing you patch with Neon (cloud Postgres).\n As far as Neon seaprates compute and storage, prefetch is much\n more critical for Neon\n architecture than for vanilla Postgres.\n\n I have few complaints:\n1. It disables prefetch for sequential access pattern (i.e. INDEX\n MERGE), motivating it that in this case OS read-ahead will be more\n efficient than prefetch. It may be true for normal storage\n devices, bit not for Neon storage and may be also for Postgres on\n top of DFS (i.e. Amazon RDS). I wonder if we can delegate decision\n whether to perform prefetch in this case or not to some other\n level. I do not know precisely where is should be handled. The\n best candidate IMHO is storager manager. But it most likely\n requires extension of SMGR API. Not sure if you want to do it...\n Straightforward solution is to move this logic to some callback,\n which can be overwritten by user.\n\n 2. It disables prefetch for direct_io. It seems to be even more\n obvious than 1), because prefetching using `posix_fadvise`\n definitely not possible in case of using direct_io. But in theory\n if SMGR provides some alternative prefetch implementation (as in\n case of Neon), this also may be not true. Still unclear why we can\n want to use direct_io in Neon... But still I prefer to mo.ve this\n decision outside executor.\n3. It doesn't perform prefetch of leave pages for IOS, only\n referenced heap pages which are not marked as all-visible. It\n seems to me that if optimized has chosen IOS (and not bitmap heap\n scan for example), then there should be large enough fraction for\n all-visible pages. Also index prefetch is most efficient for OLAp\n queries and them are used to be performance for historical data \n which is all-visible. But IOS can be really handled separately in\n some other PR. Frankly speaking combining prefetch of leave B-Tree\n pages and referenced heap pages seems to be very challenged task.\n4. I think that performing prefetch at executor level is really\n great idea and so prefetch can be used by all indexes, including\n custom indexes. But prefetch will be efficient only if index can\n provide fast access to next TID (located at the same page). I am\n not sure that it is true for all builtin indexes (GIN, GIST,\n BRIN,...) and especially for custom AM. I wonder if we should\n extend AM API to make index make a decision weather to perform\n prefetch of TIDs or not.\n\n5. Minor notice: there are few places where index_getnext_slot is called with last NULL parameter (disabled prefetch) with the\n following comment \n\"XXX Would be nice to also benefit from prefetching here.\" But all this places corresponds to \"point loopkup\", i.e. unique constraint check, find replication tuple by index...\nPrefetch seems to be unlikely useful here, unlkess there is index bloating and and we have to skip a lot of tuples before locating right one. But should we try to optimize case of bloated indexes?",
"msg_date": "Tue, 16 Jan 2024 10:13:43 +0200",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On 1/16/24 09:13, Konstantin Knizhnik wrote:\n> Hi,\n> \n> On 12/01/2024 6:42 pm, Tomas Vondra wrote:\n>> Hi,\n>>\n>> Here's an improved version of this patch, finishing a lot of the stuff\n>> that I alluded to earlier - moving the code from indexam.c, renaming a\n>> bunch of stuff, etc. I've also squashed it into a single patch, to make\n>> it easier to review.\n> \n> I am thinking about testing you patch with Neon (cloud Postgres). As far\n> as Neon seaprates compute and storage, prefetch is much more critical\n> for Neon\n> architecture than for vanilla Postgres.\n> \n> I have few complaints:\n> \n> 1. It disables prefetch for sequential access pattern (i.e. INDEX\n> MERGE), motivating it that in this case OS read-ahead will be more\n> efficient than prefetch. It may be true for normal storage devices, bit\n> not for Neon storage and may be also for Postgres on top of DFS (i.e.\n> Amazon RDS). I wonder if we can delegate decision whether to perform\n> prefetch in this case or not to some other level. I do not know\n> precisely where is should be handled. The best candidate IMHO is\n> storager manager. But it most likely requires extension of SMGR API. Not\n> sure if you want to do it... Straightforward solution is to move this\n> logic to some callback, which can be overwritten by user.\n> \n\nInteresting point. You're right these decisions (whether to prefetch\nparticular patterns) are closely tied to the capabilities of the storage\nsystem. So it might make sense to maybe define it at that level.\n\nNot sure what exactly RDS does with the storage - my understanding is\nthat it's mostly regular Postgres code, but managed by Amazon. So how\nwould that modify the prefetching logic?\n\nHowever, I'm not against making this modular / wrapping this in some\nsort of callbacks, for example.\n\n> 2. It disables prefetch for direct_io. It seems to be even more obvious\n> than 1), because prefetching using `posix_fadvise` definitely not\n> possible in case of using direct_io. But in theory if SMGR provides some\n> alternative prefetch implementation (as in case of Neon), this also may\n> be not true. Still unclear why we can want to use direct_io in Neon...\n> But still I prefer to mo.ve this decision outside executor.\n> \n\nTrue. I think this would / should be customizable by the callback.\n\n> 3. It doesn't perform prefetch of leave pages for IOS, only referenced\n> heap pages which are not marked as all-visible. It seems to me that if\n> optimized has chosen IOS (and not bitmap heap scan for example), then\n> there should be large enough fraction for all-visible pages. Also index\n> prefetch is most efficient for OLAp queries and them are used to be\n> performance for historical data which is all-visible. But IOS can be\n> really handled separately in some other PR. Frankly speaking combining\n> prefetch of leave B-Tree pages and referenced heap pages seems to be\n> very challenged task.\n> \n\nI see prefetching of leaf pages as interesting / worthwhile improvement,\nbut out of scope for this patch. I don't think it can be done at the\nexecutor level - the prefetch requests need to be submitted from the\nindex AM code (by calling PrefetchBuffer, etc.)\n\n> 4. I think that performing prefetch at executor level is really great\n> idea and so prefetch can be used by all indexes, including custom\n> indexes. But prefetch will be efficient only if index can provide fast\n> access to next TID (located at the same page). 
I am not sure that it is\n> true for all builtin indexes (GIN, GIST, BRIN,...) and especially for\n> custom AM. I wonder if we should extend AM API to make index make a\n> decision weather to perform prefetch of TIDs or not.\n\nI'm not against having a flag to enable/disable prefetching, but the\nquestion is whether doing prefetching for such indexes can be harmful.\nI'm not sure about that.\n\n> \n> 5. Minor notice: there are few places where index_getnext_slot is called\n> with last NULL parameter (disabled prefetch) with the following comment\n> \"XXX Would be nice to also benefit from prefetching here.\" But all this\n> places corresponds to \"point loopkup\", i.e. unique constraint check,\n> find replication tuple by index... Prefetch seems to be unlikely useful\n> here, unlkess there is index bloating and and we have to skip a lot of\n> tuples before locating right one. But should we try to optimize case of\n> bloated indexes?\n> \n\nAre you sure you're looking at the last patch version? Because the\ncurrent patch does not have any new parameters in index_getnext_* and\nthe comments were removed too (I suppose you're talking about\nexecIndexing, execReplication and those places).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 16 Jan 2024 17:25:05 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
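One possible shape for the "wrap this in some sort of callbacks" idea mentioned above. Nothing below exists in PostgreSQL; the struct and function names are invented purely to illustrate how a storage layer such as Neon could override the "skip sequential patterns" heuristic:

```
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical hook: decides whether a heap block is worth prefetching. */
typedef struct PrefetchPolicy
{
    bool    (*want_prefetch) (void *private_data,
                              uint32_t block, uint32_t prev_block);
    void    *private_data;
} PrefetchPolicy;

/*
 * Default policy for local storage: assume OS read-ahead covers strictly
 * sequential access, so only prefetch blocks that break the sequence.
 */
static bool
local_want_prefetch(void *private_data, uint32_t block, uint32_t prev_block)
{
    (void) private_data;
    return block != prev_block && block != prev_block + 1;
}

/* A remote-storage build could install a policy that always says yes. */
static bool
remote_want_prefetch(void *private_data, uint32_t block, uint32_t prev_block)
{
    (void) private_data;
    (void) block;
    (void) prev_block;
    return true;
}
```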
{
"msg_contents": "On Tue, Jan 16, 2024 at 11:25 AM Tomas Vondra\n<[email protected]> wrote:\n> > 3. It doesn't perform prefetch of leave pages for IOS, only referenced\n> > heap pages which are not marked as all-visible. It seems to me that if\n> > optimized has chosen IOS (and not bitmap heap scan for example), then\n> > there should be large enough fraction for all-visible pages. Also index\n> > prefetch is most efficient for OLAp queries and them are used to be\n> > performance for historical data which is all-visible. But IOS can be\n> > really handled separately in some other PR. Frankly speaking combining\n> > prefetch of leave B-Tree pages and referenced heap pages seems to be\n> > very challenged task.\n>\n> I see prefetching of leaf pages as interesting / worthwhile improvement,\n> but out of scope for this patch. I don't think it can be done at the\n> executor level - the prefetch requests need to be submitted from the\n> index AM code (by calling PrefetchBuffer, etc.)\n\n+1. This is a good feature, and so is that, but they're not the same\nfeature, despite the naming problems.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Jan 2024 12:08:14 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On 16/01/2024 6:25 pm, Tomas Vondra wrote:\n> On 1/16/24 09:13, Konstantin Knizhnik wrote:\n>> Hi,\n>>\n>> On 12/01/2024 6:42 pm, Tomas Vondra wrote:\n>>> Hi,\n>>>\n>>> Here's an improved version of this patch, finishing a lot of the stuff\n>>> that I alluded to earlier - moving the code from indexam.c, renaming a\n>>> bunch of stuff, etc. I've also squashed it into a single patch, to make\n>>> it easier to review.\n>> I am thinking about testing you patch with Neon (cloud Postgres). As far\n>> as Neon seaprates compute and storage, prefetch is much more critical\n>> for Neon\n>> architecture than for vanilla Postgres.\n>>\n>> I have few complaints:\n>>\n>> 1. It disables prefetch for sequential access pattern (i.e. INDEX\n>> MERGE), motivating it that in this case OS read-ahead will be more\n>> efficient than prefetch. It may be true for normal storage devices, bit\n>> not for Neon storage and may be also for Postgres on top of DFS (i.e.\n>> Amazon RDS). I wonder if we can delegate decision whether to perform\n>> prefetch in this case or not to some other level. I do not know\n>> precisely where is should be handled. The best candidate IMHO is\n>> storager manager. But it most likely requires extension of SMGR API. Not\n>> sure if you want to do it... Straightforward solution is to move this\n>> logic to some callback, which can be overwritten by user.\n>>\n> Interesting point. You're right these decisions (whether to prefetch\n> particular patterns) are closely tied to the capabilities of the storage\n> system. So it might make sense to maybe define it at that level.\n>\n> Not sure what exactly RDS does with the storage - my understanding is\n> that it's mostly regular Postgres code, but managed by Amazon. So how\n> would that modify the prefetching logic?\n\nAmazon RDS is just vanilla Postgres with file system mounted on EBS \n(Amazon distributed file system).\nEBS provides good throughput but larger latencies comparing with local SSDs.\nI am not sure if read-ahead works for EBS.\n\n\n\n> 4. I think that performing prefetch at executor level is really great\n>> idea and so prefetch can be used by all indexes, including custom\n>> indexes. But prefetch will be efficient only if index can provide fast\n>> access to next TID (located at the same page). I am not sure that it is\n>> true for all builtin indexes (GIN, GIST, BRIN,...) and especially for\n>> custom AM. I wonder if we should extend AM API to make index make a\n>> decision weather to perform prefetch of TIDs or not.\n> I'm not against having a flag to enable/disable prefetching, but the\n> question is whether doing prefetching for such indexes can be harmful.\n> I'm not sure about that.\n\nI tend to agree with you - it is hard to imagine index implementation \nwhich doesn't win from prefetching heap pages.\nMay be only the filtering case you have mentioned. But it seems to me \nthat current B-Tree index scan (not IOS) implementation in Postgres\ndoesn't try to use index tuple to check extra condition - it will fetch \nheap tuple in any case.\n\n>> 5. Minor notice: there are few places where index_getnext_slot is called\n>> with last NULL parameter (disabled prefetch) with the following comment\n>> \"XXX Would be nice to also benefit from prefetching here.\" But all this\n>> places corresponds to \"point loopkup\", i.e. unique constraint check,\n>> find replication tuple by index... 
Prefetch seems to be unlikely useful\n>> here, unlkess there is index bloating and and we have to skip a lot of\n>> tuples before locating right one. But should we try to optimize case of\n>> bloated indexes?\n>>\n> Are you sure you're looking at the last patch version? Because the\n> current patch does not have any new parameters in index_getnext_* and\n> the comments were removed too (I suppose you're talking about\n> execIndexing, execReplication and those places).\n>\nSorry, I looked at v20240103-0001-prefetch-2023-12-09.patch , I didn't \nnoticed v20240112-0001-Prefetch-heap-pages-during-index-scans.patch\n\n\n> regards\n>\n\n\n\n\n\n\n\nOn 16/01/2024 6:25 pm, Tomas Vondra\n wrote:\n\n\nOn 1/16/24 09:13, Konstantin Knizhnik wrote:\n\n\nHi,\n\nOn 12/01/2024 6:42 pm, Tomas Vondra wrote:\n\n\nHi,\n\nHere's an improved version of this patch, finishing a lot of the stuff\nthat I alluded to earlier - moving the code from indexam.c, renaming a\nbunch of stuff, etc. I've also squashed it into a single patch, to make\nit easier to review.\n\n\n\nI am thinking about testing you patch with Neon (cloud Postgres). As far\nas Neon seaprates compute and storage, prefetch is much more critical\nfor Neon\narchitecture than for vanilla Postgres.\n\nI have few complaints:\n\n1. It disables prefetch for sequential access pattern (i.e. INDEX\nMERGE), motivating it that in this case OS read-ahead will be more\nefficient than prefetch. It may be true for normal storage devices, bit\nnot for Neon storage and may be also for Postgres on top of DFS (i.e.\nAmazon RDS). I wonder if we can delegate decision whether to perform\nprefetch in this case or not to some other level. I do not know\nprecisely where is should be handled. The best candidate IMHO is\nstorager manager. But it most likely requires extension of SMGR API. Not\nsure if you want to do it... Straightforward solution is to move this\nlogic to some callback, which can be overwritten by user.\n\n\n\n\nInteresting point. You're right these decisions (whether to prefetch\nparticular patterns) are closely tied to the capabilities of the storage\nsystem. So it might make sense to maybe define it at that level.\n\nNot sure what exactly RDS does with the storage - my understanding is\nthat it's mostly regular Postgres code, but managed by Amazon. So how\nwould that modify the prefetching logic?\n\n\nAmazon RDS is just vanilla Postgres with file system mounted on\n EBS (Amazon distributed file system).\n EBS provides good throughput but larger latencies comparing with\n local SSDs.\n I am not sure if read-ahead works for EBS.\n\n\n\n\n\n4. I think that performing prefetch at executor level is really great\n\nidea and so prefetch can be used by all indexes, including custom\nindexes. But prefetch will be efficient only if index can provide fast\naccess to next TID (located at the same page). I am not sure that it is\ntrue for all builtin indexes (GIN, GIST, BRIN,...) and especially for\ncustom AM. I wonder if we should extend AM API to make index make a\ndecision weather to perform prefetch of TIDs or not.\n\n\n\nI'm not against having a flag to enable/disable prefetching, but the\nquestion is whether doing prefetching for such indexes can be harmful.\nI'm not sure about that.\n\n\nI tend to agree with you - it is hard to imagine index\n implementation which doesn't win from prefetching heap pages.\n May be only the filtering case you have mentioned. 
But it seems to\n me that current B-Tree index scan (not IOS) implementation in\n Postgres\n doesn't try to use index tuple to check extra condition - it will\n fetch heap tuple in any case.\n\n\n\n\n\n\n\n\n5. Minor notice: there are few places where index_getnext_slot is called\nwith last NULL parameter (disabled prefetch) with the following comment\n\"XXX Would be nice to also benefit from prefetching here.\" But all this\nplaces corresponds to \"point loopkup\", i.e. unique constraint check,\nfind replication tuple by index... Prefetch seems to be unlikely useful\nhere, unlkess there is index bloating and and we have to skip a lot of\ntuples before locating right one. But should we try to optimize case of\nbloated indexes?\n\n\n\n\nAre you sure you're looking at the last patch version? Because the\ncurrent patch does not have any new parameters in index_getnext_* and\nthe comments were removed too (I suppose you're talking about\nexecIndexing, execReplication and those places).\n\n\n\nSorry, I looked at v20240103-0001-prefetch-2023-12-09.patch , I\n didn't noticed\n v20240112-0001-Prefetch-heap-pages-during-index-scans.patch\n\n\n\n\n\n\nregards",
"msg_date": "Tue, 16 Jan 2024 22:10:23 +0200",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On 1/16/24 2:10 PM, Konstantin Knizhnik wrote:\n> Amazon RDS is just vanilla Postgres with file system mounted on EBS \n> (Amazon distributed file system).\n> EBS provides good throughput but larger latencies comparing with local SSDs.\n> I am not sure if read-ahead works for EBS.\n\nActually, EBS only provides a block device - it's definitely not a \nfilesystem itself (*EFS* is a filesystem - but it's also significantly \ndifferent than EBS). So as long as readahead is happening somewheer \nabove the block device I would expect it to JustWork on EBS.\n\nOf course, Aurora Postgres (like Neon) is completely different. If you \nlook at page 53 of [1] you'll note that there's two different terms \nused: prefetch and batch. I'm not sure how much practical difference \nthere is, but batched IO (one IO request to Aurora Storage for many \nblocks) predates index prefetch; VACUUM in APG has used batched IO for a \nvery long time (it also *only* reads blocks that aren't marked all \nvisble/frozen; none of the \"only skip if skipping at least 32 blocks\" \nlogic is used).\n\n1: \nhttps://d1.awsstatic.com/events/reinvent/2019/REPEAT_1_Deep_dive_on_Amazon_Aurora_with_PostgreSQL_compatibility_DAT328-R1.pdf\n-- \nJim Nasby, Data Architect, Austin TX\n\n\n\n",
"msg_date": "Tue, 16 Jan 2024 15:58:42 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "\nOn 16/01/2024 11:58 pm, Jim Nasby wrote:\n> On 1/16/24 2:10 PM, Konstantin Knizhnik wrote:\n>> Amazon RDS is just vanilla Postgres with file system mounted on EBS \n>> (Amazon distributed file system).\n>> EBS provides good throughput but larger latencies comparing with \n>> local SSDs.\n>> I am not sure if read-ahead works for EBS.\n>\n> Actually, EBS only provides a block device - it's definitely not a \n> filesystem itself (*EFS* is a filesystem - but it's also significantly \n> different than EBS). So as long as readahead is happening somewheer \n> above the block device I would expect it to JustWork on EBS.\n\n\nThank you for clarification.\nYes, EBS is just block device and read-ahead can be used fir it as for \nany other local device.\nThere is actually recommendation to increase read-ahead for EBS device \nto reach better performance on some workloads:\n\nhttps://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSPerformance.html\n\nSo looks like for sequential access pattern manual prefetching at EBS is \nnot needed.\nBut at Neon situation is quite different. May be Aurora Postgres is \nusing some other mechanism for speed-up vacuum and seqscan,\nbut Neon is using Postgres prefetch mechanism for it.\n\n\n\n",
"msg_date": "Wed, 17 Jan 2024 08:10:01 +0200",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "I have integrated your prefetch patch in Neon and it actually works!\nMoreover, I combined it with prefetch of leaf pages for IOS and it also \nseems to work.\n\nJust small notice: you are reporting `blks_prefetch_rounds` in explain, \nbut it is not incremented anywhere.\nMoreover, I do not precisely understand what it mean and wonder if such \ninformation is useful for analyzing query executing plan.\nAlso your patch always report number of prefetched blocks (and rounds) \nif them are not zero.\n\nI think that adding new information to explain it may cause some \nproblems because there are a lot of different tools which parse explain \nreport to visualize it,\nmake some recommendations top improve performance, ... Certainly good \npractice for such tools is to ignore all unknown tags. But I am not sure \nthat everybody follow this practice.\nIt seems to be more safe and at the same time convenient for users to \nadd extra tag to explain to enable/disable prefetch info (as it was done \nin Neon).\n\nHere we come back to my custom explain patch;) Actually using it is not \nnecessary. You can manually add \"prefetch\" option to Postgres core (as \nit is currently done in Neon).\n\nBest regards,\nKonstantin\n\n\n\n",
"msg_date": "Wed, 17 Jan 2024 10:45:01 +0200",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On 1/16/24 21:10, Konstantin Knizhnik wrote:\n> \n> ...\n> \n>> 4. I think that performing prefetch at executor level is really great\n>>> idea and so prefetch can be used by all indexes, including custom\n>>> indexes. But prefetch will be efficient only if index can provide fast\n>>> access to next TID (located at the same page). I am not sure that it is\n>>> true for all builtin indexes (GIN, GIST, BRIN,...) and especially for\n>>> custom AM. I wonder if we should extend AM API to make index make a\n>>> decision weather to perform prefetch of TIDs or not.\n>> I'm not against having a flag to enable/disable prefetching, but the\n>> question is whether doing prefetching for such indexes can be harmful.\n>> I'm not sure about that.\n> \n> I tend to agree with you - it is hard to imagine index implementation\n> which doesn't win from prefetching heap pages.\n> May be only the filtering case you have mentioned. But it seems to me\n> that current B-Tree index scan (not IOS) implementation in Postgres\n> doesn't try to use index tuple to check extra condition - it will fetch\n> heap tuple in any case.\n> \n\nThat's true, but that's why I started working on this:\n\nhttps://commitfest.postgresql.org/46/4352/\n\nI need to think about how to combine that with the prefetching. The good\nthing is that both changes require fetching TIDs, not slots. I think the\ncondition can be simply added to the prefetch callback.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Jan 2024 16:57:47 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On 1/17/24 09:45, Konstantin Knizhnik wrote:\n> I have integrated your prefetch patch in Neon and it actually works!\n> Moreover, I combined it with prefetch of leaf pages for IOS and it also\n> seems to work.\n> \n\nCool! And do you think this is the right design/way to do this?\n\n> Just small notice: you are reporting `blks_prefetch_rounds` in explain,\n> but it is not incremented anywhere.\n> Moreover, I do not precisely understand what it mean and wonder if such\n> information is useful for analyzing query executing plan.\n> Also your patch always report number of prefetched blocks (and rounds)\n> if them are not zero.\n> \n\nRight, this needs fixing.\n\n> I think that adding new information to explain it may cause some\n> problems because there are a lot of different tools which parse explain\n> report to visualize it,\n> make some recommendations top improve performance, ... Certainly good\n> practice for such tools is to ignore all unknown tags. But I am not sure\n> that everybody follow this practice.\n> It seems to be more safe and at the same time convenient for users to\n> add extra tag to explain to enable/disable prefetch info (as it was done\n> in Neon).\n> \n\nI think we want to add this info to explain, but maybe it should be\nbehind a new flag and disabled by default.\n\n> Here we come back to my custom explain patch;) Actually using it is not\n> necessary. You can manually add \"prefetch\" option to Postgres core (as\n> it is currently done in Neon).\n> \n\nYeah, I think that's the right solution.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Jan 2024 17:00:32 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "\nOn 18/01/2024 6:00 pm, Tomas Vondra wrote:\n> On 1/17/24 09:45, Konstantin Knizhnik wrote:\n>> I have integrated your prefetch patch in Neon and it actually works!\n>> Moreover, I combined it with prefetch of leaf pages for IOS and it also\n>> seems to work.\n>>\n> Cool! And do you think this is the right design/way to do this?\n\nI like the idea of prefetching TIDs in executor.\n\nBut looking though your patch I have some questions:\n\n\n1. Why it is necessary to allocate and store all_visible flag in data \nbuffer. Why caller of IndexPrefetchNext can not look at prefetch field?\n\n+ /* store the all_visible flag in the private part of the entry */\n+ entry->data = palloc(sizeof(bool));\n+ *(bool *) entry->data = all_visible;\n\n2. Names of the functions `IndexPrefetchNext` and \n`IndexOnlyPrefetchNext` are IMHO confusing because they look similar and \none can assume that for one is used for normal index scan and last one - \nfor index only scan. But actually `IndexOnlyPrefetchNext` is callback \nand `IndexPrefetchNext` is used in both nodeIndexscan.c and \nnodeIndexonlyscan.c\n\n\n\n\n",
"msg_date": "Fri, 19 Jan 2024 10:34:42 +0200",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "\n\nOn 1/19/24 09:34, Konstantin Knizhnik wrote:\n> \n> On 18/01/2024 6:00 pm, Tomas Vondra wrote:\n>> On 1/17/24 09:45, Konstantin Knizhnik wrote:\n>>> I have integrated your prefetch patch in Neon and it actually works!\n>>> Moreover, I combined it with prefetch of leaf pages for IOS and it also\n>>> seems to work.\n>>>\n>> Cool! And do you think this is the right design/way to do this?\n> \n> I like the idea of prefetching TIDs in executor.\n> \n> But looking though your patch I have some questions:\n> \n> \n> 1. Why it is necessary to allocate and store all_visible flag in data\n> buffer. Why caller of IndexPrefetchNext can not look at prefetch field?\n> \n> + /* store the all_visible flag in the private part of the entry */\n> + entry->data = palloc(sizeof(bool));\n> + *(bool *) entry->data = all_visible;\n> \n\nWhat you mean by \"prefetch field\"? The reason why it's done like this is\nto only do the VM check once - without keeping the value, we'd have to\ndo it in the \"next\" callback, to determine if we need to prefetch the\nheap tuple, and then later in the index-only scan itself. That's a\nsignificant overhead, especially in the case when everything is visible.\n\n> 2. Names of the functions `IndexPrefetchNext` and\n> `IndexOnlyPrefetchNext` are IMHO confusing because they look similar and\n> one can assume that for one is used for normal index scan and last one -\n> for index only scan. But actually `IndexOnlyPrefetchNext` is callback\n> and `IndexPrefetchNext` is used in both nodeIndexscan.c and\n> nodeIndexonlyscan.c\n> \n\nYeah, that's a good point. The naming probably needs rethinking.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 Jan 2024 13:35:25 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
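The "do the visibilitymap check once and remember the answer" explanation above can be pictured like this. The types and the vm_all_visible callback are invented for illustration only (the actual patch keeps the flag in the prefetch entry's private data):

```
#include <stdbool.h>
#include <stdint.h>

typedef struct QueuedTid
{
    uint32_t    block;
    uint16_t    offset;
    bool        all_visible;    /* cached result of one VM probe */
} QueuedTid;

/*
 * Called when the next TID is pulled from the index: probe the VM once,
 * remember the answer, and prefetch the heap page only if we will really
 * have to read it.  The IOS code later reuses entry->all_visible instead
 * of probing the VM a second time.
 */
static void
queue_tid(QueuedTid *entry, uint32_t block, uint16_t offset,
          bool (*vm_all_visible) (uint32_t block))
{
    entry->block = block;
    entry->offset = offset;
    entry->all_visible = vm_all_visible(block);

    if (!entry->all_visible)
    {
        /* PrefetchBuffer() on the heap block would happen here */
    }
}
```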
{
"msg_contents": "On 18/01/2024 5:57 pm, Tomas Vondra wrote:\n> On 1/16/24 21:10, Konstantin Knizhnik wrote:\n>> ...\n>>\n>>> 4. I think that performing prefetch at executor level is really great\n>>>> idea and so prefetch can be used by all indexes, including custom\n>>>> indexes. But prefetch will be efficient only if index can provide fast\n>>>> access to next TID (located at the same page). I am not sure that it is\n>>>> true for all builtin indexes (GIN, GIST, BRIN,...) and especially for\n>>>> custom AM. I wonder if we should extend AM API to make index make a\n>>>> decision weather to perform prefetch of TIDs or not.\n>>> I'm not against having a flag to enable/disable prefetching, but the\n>>> question is whether doing prefetching for such indexes can be harmful.\n>>> I'm not sure about that.\n>> I tend to agree with you - it is hard to imagine index implementation\n>> which doesn't win from prefetching heap pages.\n>> May be only the filtering case you have mentioned. But it seems to me\n>> that current B-Tree index scan (not IOS) implementation in Postgres\n>> doesn't try to use index tuple to check extra condition - it will fetch\n>> heap tuple in any case.\n>>\n> That's true, but that's why I started working on this:\n>\n> https://commitfest.postgresql.org/46/4352/\n>\n> I need to think about how to combine that with the prefetching. The good\n> thing is that both changes require fetching TIDs, not slots. I think the\n> condition can be simply added to the prefetch callback.\n>\n>\n> regards\n>\nLooks like I was not true, even if it is not index-only scan but index \ncondition involves only index attributes, then heap is not accessed \nuntil we find tuple satisfying search condition.\nInclusive index case described above \n(https://commitfest.postgresql.org/46/4352/) is interesting but IMHO \nexotic case. If keys are actually used in search, then why not to create \nnormal compound index instead?\n\n\n\n\n\n\n\n\n\nOn 18/01/2024 5:57 pm, Tomas Vondra\n wrote:\n\n\nOn 1/16/24 21:10, Konstantin Knizhnik wrote:\n\n\n\n...\n\n\n\n4. I think that performing prefetch at executor level is really great\n\n\nidea and so prefetch can be used by all indexes, including custom\nindexes. But prefetch will be efficient only if index can provide fast\naccess to next TID (located at the same page). I am not sure that it is\ntrue for all builtin indexes (GIN, GIST, BRIN,...) and especially for\ncustom AM. I wonder if we should extend AM API to make index make a\ndecision weather to perform prefetch of TIDs or not.\n\n\nI'm not against having a flag to enable/disable prefetching, but the\nquestion is whether doing prefetching for such indexes can be harmful.\nI'm not sure about that.\n\n\n\nI tend to agree with you - it is hard to imagine index implementation\nwhich doesn't win from prefetching heap pages.\nMay be only the filtering case you have mentioned. But it seems to me\nthat current B-Tree index scan (not IOS) implementation in Postgres\ndoesn't try to use index tuple to check extra condition - it will fetch\nheap tuple in any case.\n\n\n\n\nThat's true, but that's why I started working on this:\n\nhttps://commitfest.postgresql.org/46/4352/\n\nI need to think about how to combine that with the prefetching. The good\nthing is that both changes require fetching TIDs, not slots. 
I think the\ncondition can be simply added to the prefetch callback.\n\n\nregards\n\n\n\nLooks like I was not true, even if it is not index-only scan but\n index condition involves only index attributes, then heap is not\n accessed until we find tuple satisfying search condition.\n Inclusive index case described above (https://commitfest.postgresql.org/46/4352/) is interesting but IMHO exotic case.\nIf keys are actually used in search, then why not to create normal compound index instead?",
"msg_date": "Fri, 19 Jan 2024 17:19:22 +0200",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 11:42 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 1/9/24 21:31, Robert Haas wrote:\n> > On Thu, Jan 4, 2024 at 9:55 AM Tomas Vondra\n> > <[email protected]> wrote:\n> >> Here's a somewhat reworked version of the patch. My initial goal was to\n> >> see if it could adopt the StreamingRead API proposed in [1], but that\n> >> turned out to be less straight-forward than I hoped, for two reasons:\n> >\n> > I guess we need Thomas or Andres or maybe Melanie to comment on this.\n> >\n>\n> Yeah. Or maybe Thomas if he has thoughts on how to combine this with the\n> streaming I/O stuff.\n\nI've been studying your patch with the intent of finding a way to\nchange it and or the streaming read API to work together. I've\nattached a very rough sketch of how I think it could work.\n\nWe fill a queue with blocks from TIDs that we fetched from the index.\nThe queue is saved in a scan descriptor that is made available to the\nstreaming read callback. Once the queue is full, we invoke the table\nAM specific index_fetch_tuple() function which calls\npg_streaming_read_buffer_get_next(). When the streaming read API\ninvokes the callback we registered, it simply dequeues a block number\nfor prefetching. The only change to the streaming read API is that\nnow, even if the callback returns InvalidBlockNumber, we may not be\nfinished, so make it resumable.\n\nStructurally, this changes the timing of when the heap blocks are\nprefetched. Your code would get a tid from the index and then prefetch\nthe heap block -- doing this until it filled a queue that had the\nactual tids saved in it. With my approach and the streaming read API,\nyou fetch tids from the index until you've filled up a queue of block\nnumbers. Then the streaming read API will prefetch those heap blocks.\n\nI didn't actually implement the block queue -- I just saved a single\nblock number and pretended it was a block queue. I was imagining we\nreplace this with something like your IndexPrefetch->blockItems --\nwhich has light deduplication. We'd probably have to flesh it out more\nthan that.\n\nThere are also table AM layering violations in my sketch which would\nhave to be worked out (not to mention some resource leakage I didn't\nbother investigating [which causes it to fail tests]).\n\n0001 is all of Thomas' streaming read API code that isn't yet in\nmaster and 0002 is my rough sketch of index prefetching using the\nstreaming read API\n\nThere are also numerous optimizations that your index prefetching\npatch set does that would need to be added in some way. I haven't\nthought much about it yet. I wanted to see what you thought of this\napproach first. Basically, is it workable?\n\n- Melanie",
"msg_date": "Fri, 19 Jan 2024 16:43:37 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
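A minimal C sketch of the two-queue flow described in the message above: TIDs pulled from the index go into a small queue, and the streaming read callback drains that queue one block number at a time. All names here (TidQueue, tid_queue_add, index_prefetch_next_block) and the callback signature are invented for illustration only; this is not the code in the attached patches, and pg_streaming_read_buffer_get_next() is only the proposed API mentioned in the thread.

```c
/* Illustrative sketch only -- TidQueue and the helpers are invented names. */
#include "postgres.h"
#include "storage/block.h"
#include "storage/itemptr.h"

#define TID_QUEUE_SIZE 64

typedef struct TidQueue
{
	ItemPointerData items[TID_QUEUE_SIZE];
	int			head;			/* next entry to dequeue */
	int			tail;			/* next free slot */
} TidQueue;

/* Enqueue a TID obtained from the index AM; returns false when full. */
static bool
tid_queue_add(TidQueue *q, ItemPointer tid)
{
	if ((q->tail + 1) % TID_QUEUE_SIZE == q->head)
		return false;			/* full - caller stops reading the index */
	q->items[q->tail] = *tid;
	q->tail = (q->tail + 1) % TID_QUEUE_SIZE;
	return true;
}

/*
 * Streaming-read callback: dequeue one TID and return its block number so
 * the heap page can be prefetched.  An empty queue yields
 * InvalidBlockNumber, which here only means "nothing queued right now",
 * hence the need for the read to be resumable.
 */
static BlockNumber
index_prefetch_next_block(void *opaque, void *per_buffer_data)
{
	TidQueue   *q = (TidQueue *) opaque;
	ItemPointer tid = (ItemPointer) per_buffer_data;

	if (q->head == q->tail)
		return InvalidBlockNumber;

	/* remember the TID so the caller knows which tuple to fetch later */
	*tid = q->items[q->head];
	q->head = (q->head + 1) % TID_QUEUE_SIZE;

	return ItemPointerGetBlockNumber(tid);
}
```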
{
"msg_contents": "\n\nOn 1/19/24 16:19, Konstantin Knizhnik wrote:\n> \n> On 18/01/2024 5:57 pm, Tomas Vondra wrote:\n>> On 1/16/24 21:10, Konstantin Knizhnik wrote:\n>>> ...\n>>>\n>>>> 4. I think that performing prefetch at executor level is really great\n>>>>> idea and so prefetch can be used by all indexes, including custom\n>>>>> indexes. But prefetch will be efficient only if index can provide fast\n>>>>> access to next TID (located at the same page). I am not sure that\n>>>>> it is\n>>>>> true for all builtin indexes (GIN, GIST, BRIN,...) and especially for\n>>>>> custom AM. I wonder if we should extend AM API to make index make a\n>>>>> decision weather to perform prefetch of TIDs or not.\n>>>> I'm not against having a flag to enable/disable prefetching, but the\n>>>> question is whether doing prefetching for such indexes can be harmful.\n>>>> I'm not sure about that.\n>>> I tend to agree with you - it is hard to imagine index implementation\n>>> which doesn't win from prefetching heap pages.\n>>> May be only the filtering case you have mentioned. But it seems to me\n>>> that current B-Tree index scan (not IOS) implementation in Postgres\n>>> doesn't try to use index tuple to check extra condition - it will fetch\n>>> heap tuple in any case.\n>>>\n>> That's true, but that's why I started working on this:\n>>\n>> https://commitfest.postgresql.org/46/4352/\n>>\n>> I need to think about how to combine that with the prefetching. The good\n>> thing is that both changes require fetching TIDs, not slots. I think the\n>> condition can be simply added to the prefetch callback.\n>>\n>>\n>> regards\n>>\n> Looks like I was not true, even if it is not index-only scan but index\n> condition involves only index attributes, then heap is not accessed\n> until we find tuple satisfying search condition.\n> Inclusive index case described above\n> (https://commitfest.postgresql.org/46/4352/) is interesting but IMHO\n> exotic case. If keys are actually used in search, then why not to create\n> normal compound index instead?\n> \n\nNot sure I follow ...\n\nFirstly, I'm not convinced the example addressed by that other patch is\nthat exotic. IMHO it's quite possible it's actually quite common, but\nthe users do no realize the possible gains.\n\nAlso, there are reasons to not want very wide indexes - it has overhead\nassociated with maintenance, disk space, etc. I think it's perfectly\nrational to design indexes in a way eliminates most heap fetches\nnecessary to evaluate conditions, but does not guarantee IOS (so the\nlast heap fetch is still needed).\n\nWhat do you mean by \"create normal compound index\"? The patch addresses\na limitation that not every condition can be translated into a proper\nscan key. Even if we improve this, there will always be such conditions.\nThe the IOS can evaluate them on index tuple, the regular index scan\ncan't do that (currently).\n\nCan you share an example demonstrating the alternative approach?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 Jan 2024 23:14:12 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On 20/01/2024 12:14 am, Tomas Vondra wrote:\n> Looks like I was not true, even if it is not index-only scan but index\n>> condition involves only index attributes, then heap is not accessed\n>> until we find tuple satisfying search condition.\n>> Inclusive index case described above\n>> (https://commitfest.postgresql.org/46/4352/) is interesting but IMHO\n>> exotic case. If keys are actually used in search, then why not to create\n>> normal compound index instead?\n>>\n> Not sure I follow ...\n>\n> Firstly, I'm not convinced the example addressed by that other patch is\n> that exotic. IMHO it's quite possible it's actually quite common, but\n> the users do no realize the possible gains.\n>\n> Also, there are reasons to not want very wide indexes - it has overhead\n> associated with maintenance, disk space, etc. I think it's perfectly\n> rational to design indexes in a way eliminates most heap fetches\n> necessary to evaluate conditions, but does not guarantee IOS (so the\n> last heap fetch is still needed).\n\nWe are comparing compound index (a,b) and covering (inclusive) index (a) \ninclude (b)\nThis indexes have exactly the same width and size and almost the same \nmaintenance overhead.\n\nFirst index has more expensive comparison function (involving two \ncolumns) but I do not think that it can significantly affect\nperformance and maintenance cost. Also if selectivity of \"a\" is good \nenough, then there is no need to compare \"b\"\n\nWhy we can prefer covering index to compound index? I see only two good \nreasons:\n1. Extra columns type do not have comparison function need for AM.\n2. The extra columns are never used in query predicate.\n\nIf you are going to use this columns in query predicates I do not see \nmuch sense in creating inclusive index rather than compound index.\nDo you?\n\n\n> What do you mean by \"create normal compound index\"? The patch addresses\n> a limitation that not every condition can be translated into a proper\n> scan key. Even if we improve this, there will always be such conditions.\n> The the IOS can evaluate them on index tuple, the regular index scan\n> can't do that (currently).\n>\n> Can you share an example demonstrating the alternative approach?\n\nMay be I missed something.\n\nThis is the example from \nhttps://www.postgresql.org/message-id/flat/N1xaIrU29uk5YxLyW55MGk5fz9s6V2FNtj54JRaVlFbPixD5z8sJ07Ite5CvbWwik8ZvDG07oSTN-usENLVMq2UAcizVTEd5b-o16ZGDIIU=@yamlcoder.me \n:\n\n```\n\nAnd here is the plan with index on (a,b).\n\nLimit (cost=0.42..4447.90 rows=1 width=12) (actual time=6.883..6.884 \nrows=0 loops=1) Output: a, b, d Buffers: shared hit=613 -> \nIndex Scan using t_a_b_idx on public.t (cost=0.42..4447.90 rows=1 \nwidth=12) (actual time=6.880..6.881 rows=0 loops=1) Output: a, \nb, d Index Cond: ((t.a > 1000000) AND (t.b = 4)) \n Buffers: shared hit=613 Planning: Buffers: shared hit=41 Planning \nTime: 0.314 ms Execution Time: 6.910 ms ```\n\n\nIsn't it an optimal plan for this query?\n\nAnd cite from self reproducible example https://dbfiddle.uk/iehtq44L :\n```\ncreate unique index t_a_include_b on t(a) include (b);\n-- I'd expecd index above to behave the same as index below for this query\n--create unique index on t(a,b);\n```\n\nI agree that it is natural to expect the same result for both indexes. 
\nSo this PR definitely makes sense.\nMy point is only that compound index (a,b) in this case is more natural \nand preferable.\n\n\n\n\n\n\n\n\n\nOn 20/01/2024 12:14 am, Tomas Vondra\n wrote:\n\nLooks like I was not true, even if it is not index-only scan but index\n\ncondition involves only index attributes, then heap is not accessed\nuntil we find tuple satisfying search condition.\nInclusive index case described above\n(https://commitfest.postgresql.org/46/4352/) is interesting but IMHO\nexotic case. If keys are actually used in search, then why not to create\nnormal compound index instead?\n\n\n\n\nNot sure I follow ...\n\nFirstly, I'm not convinced the example addressed by that other patch is\nthat exotic. IMHO it's quite possible it's actually quite common, but\nthe users do no realize the possible gains.\n\nAlso, there are reasons to not want very wide indexes - it has overhead\nassociated with maintenance, disk space, etc. I think it's perfectly\nrational to design indexes in a way eliminates most heap fetches\nnecessary to evaluate conditions, but does not guarantee IOS (so the\nlast heap fetch is still needed).\n\nWe are comparing compound index (a,b) and covering (inclusive)\n index (a) include (b)\n This indexes have exactly the same width and size and almost the\n same maintenance overhead.\nFirst index has more expensive comparison function (involving two\n columns) but I do not think that it can significantly affect \n performance and maintenance cost. Also if selectivity of \"a\" is\n good enough, then there is no need to compare \"b\"\n\n Why we can prefer covering index to compound index? I see only\n two good reasons:\n 1. Extra columns type do not have comparison function need for\n AM.\n 2. The extra columns are never used in query predicate.\n\n If you are going to use this columns in query predicates I do not\n see much sense in creating inclusive index rather than compound\n index.\n Do you?\n\n\n\n\n\n\nWhat do you mean by \"create normal compound index\"? The patch addresses\na limitation that not every condition can be translated into a proper\nscan key. Even if we improve this, there will always be such conditions.\nThe the IOS can evaluate them on index tuple, the regular index scan\ncan't do that (currently).\n\nCan you share an example demonstrating the alternative approach?\n\n\nMay be I missed something.\nThis is the example from\nhttps://www.postgresql.org/message-id/flat/N1xaIrU29uk5YxLyW55MGk5fz9s6V2FNtj54JRaVlFbPixD5z8sJ07Ite5CvbWwik8ZvDG07oSTN-usENLVMq2UAcizVTEd5b-o16ZGDIIU=@yamlcoder.me\n :\n\n```\n\nAnd here is the plan with index on (a,b).\n\n\n Limit (cost=0.42..4447.90 rows=1 width=12) (actual time=6.883..6.884 rows=0 loops=1)\n Output: a, b, d\n Buffers: shared hit=613\n -> Index Scan using t_a_b_idx on public.t (cost=0.42..4447.90 rows=1 width=12) (actual time=6.880..6.881 rows=0 loops=1)\n Output: a, b, d\n Index Cond: ((t.a > 1000000) AND (t.b = 4))\n Buffers: shared hit=613\n Planning:\n Buffers: shared hit=41\n Planning Time: 0.314 ms\n Execution Time: 6.910 ms\n```\n\n Isn't it an optimal plan for this query?\n\nAnd cite from self reproducible example\n https://dbfiddle.uk/iehtq44L :\n ```\n create unique index t_a_include_b on t(a) include (b);\n -- I'd expecd index above to behave the same as index below for\n this query\n --create unique index on t(a,b);\n ```\n\n I agree that it is natural to expect the same result for both\n indexes. 
So this PR definitely makes sense.\n My point is only that compound index (a,b) in this case is more\n natural and preferable.",
"msg_date": "Sun, 21 Jan 2024 21:50:17 +0200",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "\nOn 19/01/2024 2:35 pm, Tomas Vondra wrote:\n>\n> On 1/19/24 09:34, Konstantin Knizhnik wrote:\n>> On 18/01/2024 6:00 pm, Tomas Vondra wrote:\n>>> On 1/17/24 09:45, Konstantin Knizhnik wrote:\n>>>> I have integrated your prefetch patch in Neon and it actually works!\n>>>> Moreover, I combined it with prefetch of leaf pages for IOS and it also\n>>>> seems to work.\n>>>>\n>>> Cool! And do you think this is the right design/way to do this?\n>> I like the idea of prefetching TIDs in executor.\n>>\n>> But looking though your patch I have some questions:\n>>\n>>\n>> 1. Why it is necessary to allocate and store all_visible flag in data\n>> buffer. Why caller of IndexPrefetchNext can not look at prefetch field?\n>>\n>> + /* store the all_visible flag in the private part of the entry */\n>> + entry->data = palloc(sizeof(bool));\n>> + *(bool *) entry->data = all_visible;\n>>\n> What you mean by \"prefetch field\"?\n\n\nI mean \"prefetch\" field of IndexPrefetchEntry:\n\n+\n+typedef struct IndexPrefetchEntry\n+{\n+ ItemPointerData tid;\n+\n+ /* should we prefetch heap page for this TID? */\n+ bool prefetch;\n+\n\nYou store the same flag twice:\n\n+ /* prefetch only if not all visible */\n+ entry->prefetch = !all_visible;\n+\n+ /* store the all_visible flag in the private part of the entry */\n+ entry->data = palloc(sizeof(bool));\n+ *(bool *) entry->data = all_visible;\n\nMy question was: why do we need to allocate something in entry->data and \nstore all_visible in it, while we already stored !all-visible in \nentry->prefetch.\n\n\n\n\n",
"msg_date": "Sun, 21 Jan 2024 21:56:36 +0200",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "\n\nOn 1/21/24 20:50, Konstantin Knizhnik wrote:\n> \n> On 20/01/2024 12:14 am, Tomas Vondra wrote:\n>> Looks like I was not true, even if it is not index-only scan but index\n>>> condition involves only index attributes, then heap is not accessed\n>>> until we find tuple satisfying search condition.\n>>> Inclusive index case described above\n>>> (https://commitfest.postgresql.org/46/4352/) is interesting but IMHO\n>>> exotic case. If keys are actually used in search, then why not to create\n>>> normal compound index instead?\n>>>\n>> Not sure I follow ...\n>>\n>> Firstly, I'm not convinced the example addressed by that other patch is\n>> that exotic. IMHO it's quite possible it's actually quite common, but\n>> the users do no realize the possible gains.\n>>\n>> Also, there are reasons to not want very wide indexes - it has overhead\n>> associated with maintenance, disk space, etc. I think it's perfectly\n>> rational to design indexes in a way eliminates most heap fetches\n>> necessary to evaluate conditions, but does not guarantee IOS (so the\n>> last heap fetch is still needed).\n> \n> We are comparing compound index (a,b) and covering (inclusive) index (a)\n> include (b)\n> This indexes have exactly the same width and size and almost the same\n> maintenance overhead.\n> \n> First index has more expensive comparison function (involving two\n> columns) but I do not think that it can significantly affect\n> performance and maintenance cost. Also if selectivity of \"a\" is good\n> enough, then there is no need to compare \"b\"\n> \n> Why we can prefer covering index to compound index? I see only two good\n> reasons:\n> 1. Extra columns type do not have comparison function need for AM.\n> 2. The extra columns are never used in query predicate.\n> \n\nOr maybe you don't want to include the columns in a UNIQUE constraint?\n\n> If you are going to use this columns in query predicates I do not see\n> much sense in creating inclusive index rather than compound index.\n> Do you?\n> \n\nBut this is also about conditions that can't be translated into index\nscan keys. Consider this:\n\ncreate table t (a int, b int, c int);\ninsert into t select 1000 * random(), 1000 * random(), 1000 * random()\nfrom generate_series(1,1000000) s(i);\ncreate index on t (a,b);\nvacuum analyze t;\n\nexplain (analyze, buffers) select * from t where a = 10 and mod(b,10) =\n1111111;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------\n Index Scan using t_a_b_idx on t (cost=0.42..3670.74 rows=5 width=12)\n(actual time=4.562..4.564 rows=0 loops=1)\n Index Cond: (a = 10)\n Filter: (mod(b, 10) = 1111111)\n Rows Removed by Filter: 974\n Buffers: shared hit=980\n Prefetches: blocks=901\n Planning Time: 0.304 ms\n Execution Time: 5.146 ms\n(8 rows)\n\nNotice that this still fetched ~1000 buffers in order to evaluate the\nfilter on \"b\", because it's complex and can't be transformed into a nice\nscan key. 
Or this:\n\nexplain (analyze, buffers) select a from t where a = 10 and (b+1) < 100\n and c < 0;\n\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Index Scan using t_a_b_idx on t (cost=0.42..3673.22 rows=1 width=4)\n(actual time=4.446..4.448 rows=0 loops=1)\n Index Cond: (a = 10)\n Filter: ((c < 0) AND ((b + 1) < 100))\n Rows Removed by Filter: 974\n Buffers: shared hit=980\n Prefetches: blocks=901\n Planning Time: 0.313 ms\n Execution Time: 4.878 ms\n(8 rows)\n\nwhere it's \"broken\" by the extra unindexed column.\n\nFWIW there are the primary cases I had in mind for this patch.\n\n\n> \n>> What do you mean by \"create normal compound index\"? The patch addresses\n>> a limitation that not every condition can be translated into a proper\n>> scan key. Even if we improve this, there will always be such conditions.\n>> The the IOS can evaluate them on index tuple, the regular index scan\n>> can't do that (currently).\n>>\n>> Can you share an example demonstrating the alternative approach?\n> \n> May be I missed something.\n> \n> This is the example from\n> https://www.postgresql.org/message-id/flat/N1xaIrU29uk5YxLyW55MGk5fz9s6V2FNtj54JRaVlFbPixD5z8sJ07Ite5CvbWwik8ZvDG07oSTN-usENLVMq2UAcizVTEd5b-o16ZGDIIU=@yamlcoder.me :\n> \n> ```\n> \n> And here is the plan with index on (a,b).\n> \n> Limit (cost=0.42..4447.90 rows=1 width=12) (actual time=6.883..6.884\n> rows=0 loops=1) Output: a, b, d Buffers: shared hit=613 ->\n> Index Scan using t_a_b_idx on public.t (cost=0.42..4447.90 rows=1\n> width=12) (actual time=6.880..6.881 rows=0 loops=1) Output: a,\n> b, d Index Cond: ((t.a > 1000000) AND (t.b = 4)) \n> Buffers: shared hit=613 Planning: Buffers: shared hit=41 Planning\n> Time: 0.314 ms Execution Time: 6.910 ms ```\n> \n> \n> Isn't it an optimal plan for this query?\n> \n> And cite from self reproducible example https://dbfiddle.uk/iehtq44L :\n> ```\n> create unique index t_a_include_b on t(a) include (b);\n> -- I'd expecd index above to behave the same as index below for this query\n> --create unique index on t(a,b);\n> ```\n> \n> I agree that it is natural to expect the same result for both indexes.\n> So this PR definitely makes sense.\n> My point is only that compound index (a,b) in this case is more natural\n> and preferable.\n> \n\nYes, perhaps. But you may also see it from the other direction - if you\nalready have an index with included columns (for whatever reason), it\nwould be nice to leverage that if possible. And as I mentioned above,\nit's not always the case that move a column from \"included\" to a proper\nkey, or stuff like that.\n\nAnyway, it seems entirely unrelated to this prefetching thread.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 22 Jan 2024 00:39:14 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "\n\nOn 1/21/24 20:56, Konstantin Knizhnik wrote:\n> \n> On 19/01/2024 2:35 pm, Tomas Vondra wrote:\n>>\n>> On 1/19/24 09:34, Konstantin Knizhnik wrote:\n>>> On 18/01/2024 6:00 pm, Tomas Vondra wrote:\n>>>> On 1/17/24 09:45, Konstantin Knizhnik wrote:\n>>>>> I have integrated your prefetch patch in Neon and it actually works!\n>>>>> Moreover, I combined it with prefetch of leaf pages for IOS and it\n>>>>> also\n>>>>> seems to work.\n>>>>>\n>>>> Cool! And do you think this is the right design/way to do this?\n>>> I like the idea of prefetching TIDs in executor.\n>>>\n>>> But looking though your patch I have some questions:\n>>>\n>>>\n>>> 1. Why it is necessary to allocate and store all_visible flag in data\n>>> buffer. Why caller of IndexPrefetchNext can not look at prefetch field?\n>>>\n>>> + /* store the all_visible flag in the private part of the\n>>> entry */\n>>> + entry->data = palloc(sizeof(bool));\n>>> + *(bool *) entry->data = all_visible;\n>>>\n>> What you mean by \"prefetch field\"?\n> \n> \n> I mean \"prefetch\" field of IndexPrefetchEntry:\n> \n> +\n> +typedef struct IndexPrefetchEntry\n> +{\n> + ItemPointerData tid;\n> +\n> + /* should we prefetch heap page for this TID? */\n> + bool prefetch;\n> +\n> \n> You store the same flag twice:\n> \n> + /* prefetch only if not all visible */\n> + entry->prefetch = !all_visible;\n> +\n> + /* store the all_visible flag in the private part of the entry */\n> + entry->data = palloc(sizeof(bool));\n> + *(bool *) entry->data = all_visible;\n> \n> My question was: why do we need to allocate something in entry->data and\n> store all_visible in it, while we already stored !all-visible in\n> entry->prefetch.\n> \n\nAh, right. Well, you're right in this case we perhaps could set just one\nof those flags, but the \"purpose\" of the two places is quite different.\n\nThe \"prefetch\" flag is fully controlled by the prefetcher, and it's up\nto it to change it (e.g. I can easily imagine some new logic touching\nsetting it to \"false\" for some reason).\n\nThe \"data\" flag is fully controlled by the custom callbacks, so whatever\nthe callback stores, will be there.\n\nI don't think it's worth simplifying this. In particular, I don't think\nthe callback can assume it can rely on the \"prefetch\" flag.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 22 Jan 2024 00:47:27 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nlike there were CFbot test failures last time it was run [2]. Please\nhave a look and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4351/\n[2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4351\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 15:53:15 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On 22/01/2024 1:47 am, Tomas Vondra wrote:\n> h, right. Well, you're right in this case we perhaps could set just one\n> of those flags, but the \"purpose\" of the two places is quite different.\n>\n> The \"prefetch\" flag is fully controlled by the prefetcher, and it's up\n> to it to change it (e.g. I can easily imagine some new logic touching\n> setting it to \"false\" for some reason).\n>\n> The \"data\" flag is fully controlled by the custom callbacks, so whatever\n> the callback stores, will be there.\n>\n> I don't think it's worth simplifying this. In particular, I don't think\n> the callback can assume it can rely on the \"prefetch\" flag.\n>\nWhy not to add \"all_visible\" flag to IndexPrefetchEntry ? If will not \ncause any extra space overhead (because of alignment), but allows to \navoid dynamic memory allocation (not sure if it is critical, but nice to \navoid if possible).\n\n\n\n\n\n\n\n\n\nOn 22/01/2024 1:47 am, Tomas Vondra\n wrote:\n\nh, right. Well, you're right in this case we perhaps could set just one\nof those flags, but the \"purpose\" of the two places is quite different.\n\nThe \"prefetch\" flag is fully controlled by the prefetcher, and it's up\nto it to change it (e.g. I can easily imagine some new logic touching\nsetting it to \"false\" for some reason).\n\nThe \"data\" flag is fully controlled by the custom callbacks, so whatever\nthe callback stores, will be there.\n\nI don't think it's worth simplifying this. In particular, I don't think\nthe callback can assume it can rely on the \"prefetch\" flag.\n\n\n\nWhy not to add \"all_visible\" flag to IndexPrefetchEntry ?\nIf will not cause any extra space overhead (because of alignment), but allows to avoid dynamic memory allocation (not sure if it is critical, but nice to avoid if possible).",
"msg_date": "Mon, 22 Jan 2024 08:35:59 +0200",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On 22/01/2024 1:39 am, Tomas Vondra wrote:\n>> Why we can prefer covering index to compound index? I see only two good\n>> reasons:\n>> 1. Extra columns type do not have comparison function need for AM.\n>> 2. The extra columns are never used in query predicate.\n>>\n> Or maybe you don't want to include the columns in a UNIQUE constraint?\n>\nDo you mean that compound index (a,b) can not be used to enforce \nuniqueness of \"a\"?\nIf so, I agree.\n\n>> If you are going to use this columns in query predicates I do not see\n>> much sense in creating inclusive index rather than compound index.\n>> Do you?\n>>\n> But this is also about conditions that can't be translated into index\n> scan keys. Consider this:\n>\n> create table t (a int, b int, c int);\n> insert into t select 1000 * random(), 1000 * random(), 1000 * random()\n> from generate_series(1,1000000) s(i);\n> create index on t (a,b);\n> vacuum analyze t;\n>\n> explain (analyze, buffers) select * from t where a = 10 and mod(b,10) =\n> 1111111;\n> QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------------------------\n> Index Scan using t_a_b_idx on t (cost=0.42..3670.74 rows=5 width=12)\n> (actual time=4.562..4.564 rows=0 loops=1)\n> Index Cond: (a = 10)\n> Filter: (mod(b, 10) = 1111111)\n> Rows Removed by Filter: 974\n> Buffers: shared hit=980\n> Prefetches: blocks=901\n> Planning Time: 0.304 ms\n> Execution Time: 5.146 ms\n> (8 rows)\n>\n> Notice that this still fetched ~1000 buffers in order to evaluate the\n> filter on \"b\", because it's complex and can't be transformed into a nice\n> scan key.\n\nO yes.\nLooks like I didn't understand the logic when predicate is included in \nindex condition and when not.\nIt seems to be natural that only such predicate which specifies some \nrange can be included in index condition.\nBut it is not the case:\n\npostgres=# explain select * from t where a = 10 and b in (10,20,30);\n QUERY PLAN\n---------------------------------------------------------------------\n Index Scan using t_a_b_idx on t (cost=0.42..25.33 rows=3 width=12)\n Index Cond: ((a = 10) AND (b = ANY ('{10,20,30}'::integer[])))\n(2 rows)\n\nSo I though ANY predicate using index keys is included in index condition.\nBut it is not true (as your example shows).\n\nBut IMHO mod(b,10)=111111 or (b+1) < 100 are both quite rare predicates \nthis is why I named this use cases \"exotic\".\n\nIn any case, if we have some columns in index tuple it is desired to use \nthem for filtering before extracting heap tuple.\nBut I afraid it will be not so easy to implement...\n\n\n\n\n\n\n\n\n\nOn 22/01/2024 1:39 am, Tomas Vondra\n wrote:\n\n\n\n\nWhy we can prefer covering index to compound index? I see only two good\nreasons:\n1. Extra columns type do not have comparison function need for AM.\n2. The extra columns are never used in query predicate.\n\n\n\n\nOr maybe you don't want to include the columns in a UNIQUE constraint?\n\n\n\nDo you mean that compound index (a,b) can not be used to enforce\n uniqueness of \"a\"?\n If so, I agree.\n\n\n\nIf you are going to use this columns in query predicates I do not see\nmuch sense in creating inclusive index rather than compound index.\nDo you?\n\n\n\n\nBut this is also about conditions that can't be translated into index\nscan keys. 
Consider this:\n\ncreate table t (a int, b int, c int);\ninsert into t select 1000 * random(), 1000 * random(), 1000 * random()\nfrom generate_series(1,1000000) s(i);\ncreate index on t (a,b);\nvacuum analyze t;\n\nexplain (analyze, buffers) select * from t where a = 10 and mod(b,10) =\n1111111;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------\n Index Scan using t_a_b_idx on t (cost=0.42..3670.74 rows=5 width=12)\n(actual time=4.562..4.564 rows=0 loops=1)\n Index Cond: (a = 10)\n Filter: (mod(b, 10) = 1111111)\n Rows Removed by Filter: 974\n Buffers: shared hit=980\n Prefetches: blocks=901\n Planning Time: 0.304 ms\n Execution Time: 5.146 ms\n(8 rows)\n\nNotice that this still fetched ~1000 buffers in order to evaluate the\nfilter on \"b\", because it's complex and can't be transformed into a nice\nscan key.\n\n\nO yes.\n Looks like I didn't understand the logic when predicate is\n included in index condition and when not.\n It seems to be natural that only such predicate which specifies\n some range can be included in index condition.\n But it is not the case:\n\npostgres=# explain select * from t where a = 10 and b in (10,20,30);\n QUERY PLAN \n---------------------------------------------------------------------\n Index Scan using t_a_b_idx on t (cost=0.42..25.33 rows=3 width=12)\n Index Cond: ((a = 10) AND (b = ANY ('{10,20,30}'::integer[])))\n(2 rows)\n\nSo I though ANY predicate using index keys is included in index condition.\nBut it is not true (as your example shows).\n\n\nBut IMHO mod(b,10)=111111 or (b+1) < 100 are both quite rare\n predicates this is why I named this use cases \"exotic\".\nIn any case, if we have some columns in index tuple it is desired\n to use them for filtering before extracting heap tuple.\n But I afraid it will be not so easy to implement...",
"msg_date": "Mon, 22 Jan 2024 09:21:14 +0200",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On 1/19/24 22:43, Melanie Plageman wrote:\n> On Fri, Jan 12, 2024 at 11:42 AM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 1/9/24 21:31, Robert Haas wrote:\n>>> On Thu, Jan 4, 2024 at 9:55 AM Tomas Vondra\n>>> <[email protected]> wrote:\n>>>> Here's a somewhat reworked version of the patch. My initial goal was to\n>>>> see if it could adopt the StreamingRead API proposed in [1], but that\n>>>> turned out to be less straight-forward than I hoped, for two reasons:\n>>>\n>>> I guess we need Thomas or Andres or maybe Melanie to comment on this.\n>>>\n>>\n>> Yeah. Or maybe Thomas if he has thoughts on how to combine this with the\n>> streaming I/O stuff.\n> \n> I've been studying your patch with the intent of finding a way to\n> change it and or the streaming read API to work together. I've\n> attached a very rough sketch of how I think it could work.\n> \n\nThanks.\n\n> We fill a queue with blocks from TIDs that we fetched from the index.\n> The queue is saved in a scan descriptor that is made available to the\n> streaming read callback. Once the queue is full, we invoke the table\n> AM specific index_fetch_tuple() function which calls\n> pg_streaming_read_buffer_get_next(). When the streaming read API\n> invokes the callback we registered, it simply dequeues a block number\n> for prefetching.\n\nSo in a way there are two queues in IndexFetchTableData. One (blk_queue)\nis being filled from IndexNext, and then the queue in StreamingRead.\n\n> The only change to the streaming read API is that now, even if the\n> callback returns InvalidBlockNumber, we may not be finished, so make\n> it resumable.\n> \n\nHmm, not sure when can the callback return InvalidBlockNumber before\nreaching the end. Perhaps for the first index_fetch_heap call? Any\nreason not to fill the blk_queue before calling index_fetch_heap?\n\n\n> Structurally, this changes the timing of when the heap blocks are\n> prefetched. Your code would get a tid from the index and then prefetch\n> the heap block -- doing this until it filled a queue that had the\n> actual tids saved in it. With my approach and the streaming read API,\n> you fetch tids from the index until you've filled up a queue of block\n> numbers. Then the streaming read API will prefetch those heap blocks.\n> \n\nAnd is that a good/desirable change? I'm not saying it's not, but maybe\nwe should not be filling either queue in one go - we don't want to\noverload the prefetching.\n\n> I didn't actually implement the block queue -- I just saved a single\n> block number and pretended it was a block queue. I was imagining we\n> replace this with something like your IndexPrefetch->blockItems --\n> which has light deduplication. We'd probably have to flesh it out more\n> than that.\n> \n\nI don't understand how this passes the TID to the index_fetch_heap.\nIsn't it working only by accident, due to blk_queue only having a single\nentry? Shouldn't the first queue (blk_queue) store TIDs instead?\n\n> There are also table AM layering violations in my sketch which would\n> have to be worked out (not to mention some resource leakage I didn't\n> bother investigating [which causes it to fail tests]).\n> \n> 0001 is all of Thomas' streaming read API code that isn't yet in\n> master and 0002 is my rough sketch of index prefetching using the\n> streaming read API\n> \n> There are also numerous optimizations that your index prefetching\n> patch set does that would need to be added in some way. I haven't\n> thought much about it yet. 
I wanted to see what you thought of this\n> approach first. Basically, is it workable?\n> \n\nIt seems workable, yes. I'm not sure it's much simpler than my patch\n(considering a lot of the code is in the optimizations, which are\nmissing from this patch).\n\nI think the question is where should the optimizations happen. I suppose\nsome of them might/should happen in the StreamingRead API itself - like\nthe detection of sequential patterns, recently prefetched blocks, ...\n\nBut I'm not sure what to do about optimizations that are more specific\nto the access path. Consider for example the index-only scans. We don't\nwant to prefetch all the pages, we need to inspect the VM and prefetch\njust the not-all-visible ones. And then pass the info to the index scan,\nso that it does not need to check the VM again. It's not clear to me how\nto do this with this approach.\n\n\nThe main\n\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 23 Jan 2024 18:43:25 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Tue, Jan 23, 2024 at 12:43 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 1/19/24 22:43, Melanie Plageman wrote:\n>\n> > We fill a queue with blocks from TIDs that we fetched from the index.\n> > The queue is saved in a scan descriptor that is made available to the\n> > streaming read callback. Once the queue is full, we invoke the table\n> > AM specific index_fetch_tuple() function which calls\n> > pg_streaming_read_buffer_get_next(). When the streaming read API\n> > invokes the callback we registered, it simply dequeues a block number\n> > for prefetching.\n>\n> So in a way there are two queues in IndexFetchTableData. One (blk_queue)\n> is being filled from IndexNext, and then the queue in StreamingRead.\n\nI've changed the name from blk_queue to tid_queue to fix the issue you\nmention in your later remarks.\nI suppose there are two queues. The tid_queue is just to pass the\nblock requests to the streaming read API. The prefetch distance will\nbe the smaller of the two sizes.\n\n> > The only change to the streaming read API is that now, even if the\n> > callback returns InvalidBlockNumber, we may not be finished, so make\n> > it resumable.\n>\n> Hmm, not sure when can the callback return InvalidBlockNumber before\n> reaching the end. Perhaps for the first index_fetch_heap call? Any\n> reason not to fill the blk_queue before calling index_fetch_heap?\n\nThe callback will return InvalidBlockNumber whenever the queue is\nempty. Let's say your queue size is 5 and your effective prefetch\ndistance is 10 (some combination of the PgStreamingReadRange sizes and\nPgStreamingRead->max_ios). The first time you call index_fetch_heap(),\nthe callback returns InvalidBlockNumber. Then the tid_queue is filled\nwith 5 tids. Then index_fetch_heap() is called.\npg_streaming_read_look_ahead() will prefetch all 5 of these TID's\nblocks, emptying the queue. Once all 5 have been dequeued, the\ncallback will return InvalidBlockNumber.\npg_streaming_read_buffer_get_next() will return one of the 5 blocks in\na buffer and save the associated TID in the per_buffer_data. Before\nindex_fetch_heap() is called again, we will see that the queue is not\nfull and fill it up again with 5 TIDs. So, the callback will return\nInvalidBlockNumber 3 times in this scenario.\n\n> > Structurally, this changes the timing of when the heap blocks are\n> > prefetched. Your code would get a tid from the index and then prefetch\n> > the heap block -- doing this until it filled a queue that had the\n> > actual tids saved in it. With my approach and the streaming read API,\n> > you fetch tids from the index until you've filled up a queue of block\n> > numbers. Then the streaming read API will prefetch those heap blocks.\n>\n> And is that a good/desirable change? I'm not saying it's not, but maybe\n> we should not be filling either queue in one go - we don't want to\n> overload the prefetching.\n\nWe can focus on the prefetch distance algorithm maintained in the\nstreaming read API and then make sure that the tid_queue is larger\nthan the desired prefetch distance maintained by the streaming read\nAPI.\n\n> > I didn't actually implement the block queue -- I just saved a single\n> > block number and pretended it was a block queue. I was imagining we\n> > replace this with something like your IndexPrefetch->blockItems --\n> > which has light deduplication. 
We'd probably have to flesh it out more\n> > than that.\n>\n> I don't understand how this passes the TID to the index_fetch_heap.\n> Isn't it working only by accident, due to blk_queue only having a single\n> entry? Shouldn't the first queue (blk_queue) store TIDs instead?\n\nOh dear! Fixed in the attached v2. I've replaced the single\nBlockNumber with a single ItemPointerData. I will work on implementing\nan actual queue next week.\n\n> > There are also table AM layering violations in my sketch which would\n> > have to be worked out (not to mention some resource leakage I didn't\n> > bother investigating [which causes it to fail tests]).\n> >\n> > 0001 is all of Thomas' streaming read API code that isn't yet in\n> > master and 0002 is my rough sketch of index prefetching using the\n> > streaming read API\n> >\n> > There are also numerous optimizations that your index prefetching\n> > patch set does that would need to be added in some way. I haven't\n> > thought much about it yet. I wanted to see what you thought of this\n> > approach first. Basically, is it workable?\n>\n> It seems workable, yes. I'm not sure it's much simpler than my patch\n> (considering a lot of the code is in the optimizations, which are\n> missing from this patch).\n>\n> I think the question is where should the optimizations happen. I suppose\n> some of them might/should happen in the StreamingRead API itself - like\n> the detection of sequential patterns, recently prefetched blocks, ...\n\nSo, the streaming read API does detection of sequential patterns and\nnot prefetching things that are in shared buffers. It doesn't handle\navoiding prefetching recently prefetched blocks yet AFAIK. But I\ndaresay this would be relevant for other streaming read users and\ncould certainly be implemented there.\n\n> But I'm not sure what to do about optimizations that are more specific\n> to the access path. Consider for example the index-only scans. We don't\n> want to prefetch all the pages, we need to inspect the VM and prefetch\n> just the not-all-visible ones. And then pass the info to the index scan,\n> so that it does not need to check the VM again. It's not clear to me how\n> to do this with this approach.\n\nYea, this is an issue I'll need to think about. To really spell out\nthe problem: the callback dequeues a TID from the tid_queue and looks\nup its block in the VM. It's all visible. So, it shouldn't return that\nblock to the streaming read API to fetch from the heap because it\ndoesn't need to be read. But, where does the callback put the TID so\nthat the caller can get it? I'm going to think more about this.\n\nAs for passing around the all visible status so as to not reread the\nVM block -- that feels solvable but I haven't looked into it.\n\n- Melanie",
"msg_date": "Tue, 23 Jan 2024 19:51:24 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
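A rough sketch of the "fill the queue first, then fetch from the heap" ordering described in the message above. Every name in it is hypothetical (the thread's actual entry points are index_getnext_tid() and index_fetch_heap()); the point is only the control flow, with the streaming read layer deciding how far ahead to actually issue prefetches.

```c
/* Hypothetical driver loop -- all of these names are invented. */
static bool
prefetching_scan_getnext(PrefetchScanState *scan)
{
	ItemPointerData tid;

	/* Top up the TID queue from the index *before* touching the heap. */
	while (!tid_queue_full(scan->tid_queue) &&
		   scan_index_next_tid(scan, &tid))
		tid_queue_add(scan->tid_queue, &tid);

	/*
	 * The streaming read layer drains the queue through its callback,
	 * prefetches those heap blocks, and hands back the next tuple; it
	 * reports "done" only once both the queue and its own read-ahead
	 * are exhausted, so the caller simply loops on this function.
	 */
	return scan_fetch_next_heap_tuple(scan);
}
```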
{
"msg_contents": "On 1/24/24 01:51, Melanie Plageman wrote:\n> On Tue, Jan 23, 2024 at 12:43 PM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 1/19/24 22:43, Melanie Plageman wrote:\n>>\n>>> We fill a queue with blocks from TIDs that we fetched from the index.\n>>> The queue is saved in a scan descriptor that is made available to the\n>>> streaming read callback. Once the queue is full, we invoke the table\n>>> AM specific index_fetch_tuple() function which calls\n>>> pg_streaming_read_buffer_get_next(). When the streaming read API\n>>> invokes the callback we registered, it simply dequeues a block number\n>>> for prefetching.\n>>\n>> So in a way there are two queues in IndexFetchTableData. One (blk_queue)\n>> is being filled from IndexNext, and then the queue in StreamingRead.\n> \n> I've changed the name from blk_queue to tid_queue to fix the issue you\n> mention in your later remarks.\n> I suppose there are two queues. The tid_queue is just to pass the\n> block requests to the streaming read API. The prefetch distance will\n> be the smaller of the two sizes.\n> \n\nFWIW I think the two queues are a nice / elegant approach. In hindsight\nmy problems with trying to utilize the StreamingRead were due to trying\nto use the block-oriented API directly from places that work with TIDs,\nand this just makes that go away.\n\nI wonder what the overhead of shuffling stuff between queues will be,\nbut hopefully not too high (that's my assumption).\n\n>>> The only change to the streaming read API is that now, even if the\n>>> callback returns InvalidBlockNumber, we may not be finished, so make\n>>> it resumable.\n>>\n>> Hmm, not sure when can the callback return InvalidBlockNumber before\n>> reaching the end. Perhaps for the first index_fetch_heap call? Any\n>> reason not to fill the blk_queue before calling index_fetch_heap?\n> \n> The callback will return InvalidBlockNumber whenever the queue is\n> empty. Let's say your queue size is 5 and your effective prefetch\n> distance is 10 (some combination of the PgStreamingReadRange sizes and\n> PgStreamingRead->max_ios). The first time you call index_fetch_heap(),\n> the callback returns InvalidBlockNumber. Then the tid_queue is filled\n> with 5 tids. Then index_fetch_heap() is called.\n> pg_streaming_read_look_ahead() will prefetch all 5 of these TID's\n> blocks, emptying the queue. Once all 5 have been dequeued, the\n> callback will return InvalidBlockNumber.\n> pg_streaming_read_buffer_get_next() will return one of the 5 blocks in\n> a buffer and save the associated TID in the per_buffer_data. Before\n> index_fetch_heap() is called again, we will see that the queue is not\n> full and fill it up again with 5 TIDs. So, the callback will return\n> InvalidBlockNumber 3 times in this scenario.\n> \n\nThanks for the explanation. Yes, I didn't realize that the queues may be\nof different length, at which point it makes sense to return invalid\nblock to signal the TID queue is empty.\n\n>>> Structurally, this changes the timing of when the heap blocks are\n>>> prefetched. Your code would get a tid from the index and then prefetch\n>>> the heap block -- doing this until it filled a queue that had the\n>>> actual tids saved in it. With my approach and the streaming read API,\n>>> you fetch tids from the index until you've filled up a queue of block\n>>> numbers. Then the streaming read API will prefetch those heap blocks.\n>>\n>> And is that a good/desirable change? 
I'm not saying it's not, but maybe\n>> we should not be filling either queue in one go - we don't want to\n>> overload the prefetching.\n> \n> We can focus on the prefetch distance algorithm maintained in the\n> streaming read API and then make sure that the tid_queue is larger\n> than the desired prefetch distance maintained by the streaming read\n> API.\n> \n\nAgreed. I think I wasn't quite right when concerned about \"overloading\"\nthe prefetch, because that depends entirely on the StreamingRead API\nqueue. A lage TID queue can't cause overload of anything.\n\nWhat could happen is a TID queue being too small, so the prefetch can't\nhit the target distance. But that can happen already, e.g. indexes that\nare correlated and/or index-only scans with all-visible pages.\n\n>>> There are also table AM layering violations in my sketch which would\n>>> have to be worked out (not to mention some resource leakage I didn't\n>>> bother investigating [which causes it to fail tests]).\n>>>\n>>> 0001 is all of Thomas' streaming read API code that isn't yet in\n>>> master and 0002 is my rough sketch of index prefetching using the\n>>> streaming read API\n>>>\n>>> There are also numerous optimizations that your index prefetching\n>>> patch set does that would need to be added in some way. I haven't\n>>> thought much about it yet. I wanted to see what you thought of this\n>>> approach first. Basically, is it workable?\n>>\n>> It seems workable, yes. I'm not sure it's much simpler than my patch\n>> (considering a lot of the code is in the optimizations, which are\n>> missing from this patch).\n>>\n>> I think the question is where should the optimizations happen. I suppose\n>> some of them might/should happen in the StreamingRead API itself - like\n>> the detection of sequential patterns, recently prefetched blocks, ...\n> \n> So, the streaming read API does detection of sequential patterns and\n> not prefetching things that are in shared buffers. It doesn't handle\n> avoiding prefetching recently prefetched blocks yet AFAIK. But I\n> daresay this would be relevant for other streaming read users and\n> could certainly be implemented there.\n> \n\nYes, the \"recently prefetched stuff\" cache seems like a fairly natural\ncomplement to the pattern detection and shared-buffers check.\n\nFWIW I wonder if we should make some of this customizable, so that\nsystems with customized storage (e.g. neon or with direct I/O) can e.g.\ndisable some of these checks. Or replace them with their version.\n\n>> But I'm not sure what to do about optimizations that are more specific\n>> to the access path. Consider for example the index-only scans. We don't\n>> want to prefetch all the pages, we need to inspect the VM and prefetch\n>> just the not-all-visible ones. And then pass the info to the index scan,\n>> so that it does not need to check the VM again. It's not clear to me how\n>> to do this with this approach.\n> \n> Yea, this is an issue I'll need to think about. To really spell out\n> the problem: the callback dequeues a TID from the tid_queue and looks\n> up its block in the VM. It's all visible. So, it shouldn't return that\n> block to the streaming read API to fetch from the heap because it\n> doesn't need to be read. But, where does the callback put the TID so\n> that the caller can get it? I'm going to think more about this.\n> \n\nYes, that's the problem for index-only scans. 
I'd generalize it so that\nit's about the callback being able to (a) decide if it needs to read the\nheap page, and (b) store some custom info for the TID.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 24 Jan 2024 10:19:44 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
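One way the "recently prefetched blocks" idea mentioned above could look: a tiny direct-mapped filter consulted before issuing a prefetch. This is a guess at the shape of such a cache, not the data structure used in the attached patches; RecentPrefetch and recently_prefetched() are invented names.

```c
#include "postgres.h"
#include "storage/block.h"

#define PREFETCH_CACHE_SIZE 128		/* keep it a power of two */

typedef struct RecentPrefetch
{
	/* slots must be initialized to InvalidBlockNumber before first use */
	BlockNumber blocks[PREFETCH_CACHE_SIZE];
} RecentPrefetch;

/*
 * Returns true if the block was prefetched recently (so the caller can
 * skip it), and records the block either way.  Collisions simply evict
 * the previous occupant, which at worst causes a redundant prefetch.
 */
static bool
recently_prefetched(RecentPrefetch *cache, BlockNumber block)
{
	int			slot = block % PREFETCH_CACHE_SIZE;

	if (cache->blocks[slot] == block)
		return true;

	cache->blocks[slot] = block;
	return false;
}
```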
{
"msg_contents": "On 1/22/24 08:21, Konstantin Knizhnik wrote:\n> \n> On 22/01/2024 1:39 am, Tomas Vondra wrote:\n>>> Why we can prefer covering index to compound index? I see only two good\n>>> reasons:\n>>> 1. Extra columns type do not have comparison function need for AM.\n>>> 2. The extra columns are never used in query predicate.\n>>>\n>> Or maybe you don't want to include the columns in a UNIQUE constraint?\n>>\n> Do you mean that compound index (a,b) can not be used to enforce\n> uniqueness of \"a\"?\n> If so, I agree.\n> \n\nYes.\n\n>>> If you are going to use this columns in query predicates I do not see\n>>> much sense in creating inclusive index rather than compound index.\n>>> Do you?\n>>>\n>> But this is also about conditions that can't be translated into index\n>> scan keys. Consider this:\n>>\n>> create table t (a int, b int, c int);\n>> insert into t select 1000 * random(), 1000 * random(), 1000 * random()\n>> from generate_series(1,1000000) s(i);\n>> create index on t (a,b);\n>> vacuum analyze t;\n>>\n>> explain (analyze, buffers) select * from t where a = 10 and mod(b,10) =\n>> 1111111;\n>> QUERY PLAN\n>>\n>> -----------------------------------------------------------------------------------------------------------------\n>> Index Scan using t_a_b_idx on t (cost=0.42..3670.74 rows=5 width=12)\n>> (actual time=4.562..4.564 rows=0 loops=1)\n>> Index Cond: (a = 10)\n>> Filter: (mod(b, 10) = 1111111)\n>> Rows Removed by Filter: 974\n>> Buffers: shared hit=980\n>> Prefetches: blocks=901\n>> Planning Time: 0.304 ms\n>> Execution Time: 5.146 ms\n>> (8 rows)\n>>\n>> Notice that this still fetched ~1000 buffers in order to evaluate the\n>> filter on \"b\", because it's complex and can't be transformed into a nice\n>> scan key.\n> \n> O yes.\n> Looks like I didn't understand the logic when predicate is included in\n> index condition and when not.\n> It seems to be natural that only such predicate which specifies some\n> range can be included in index condition.\n> But it is not the case:\n> \n> postgres=# explain select * from t where a = 10 and b in (10,20,30);\n> QUERY PLAN\n> ---------------------------------------------------------------------\n> Index Scan using t_a_b_idx on t (cost=0.42..25.33 rows=3 width=12)\n> Index Cond: ((a = 10) AND (b = ANY ('{10,20,30}'::integer[])))\n> (2 rows)\n> \n> So I though ANY predicate using index keys is included in index condition.\n> But it is not true (as your example shows).\n> \n> But IMHO mod(b,10)=111111 or (b+1) < 100 are both quite rare predicates\n> this is why I named this use cases \"exotic\".\n\nNot sure I agree with describing this as \"exotic\".\n\nThe same thing applies to an arbitrary function call. And those are\npretty common in conditions - date_part/date_trunc. Arithmetic\nexpressions are not that uncommon either. Also, users sometimes have\nconditions comparing multiple keys (a<b) etc.\n\nBut even if it was \"uncommon\", the whole point of this patch is to\neliminate these corner cases where a user does something minor (like\nadding an output column), and the executor disables an optimization\nunnecessarily, causing unexpected regressions.\n\n> \n> In any case, if we have some columns in index tuple it is desired to use\n> them for filtering before extracting heap tuple.\n> But I afraid it will be not so easy to implement...\n> \n\nI'm not sure what you mean. The patch does that, more or less. There's\nissues that need to be solved (e.g. 
to decide when not to do this), and\nhow to integrate that into the scan interface (where the quals are\nevaluated at the end).\n\nWhat do you mean when you say \"will not be easy to implement\"? What\nproblems do you foresee?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 24 Jan 2024 19:08:12 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On 1/22/24 07:35, Konstantin Knizhnik wrote:\n> \n> On 22/01/2024 1:47 am, Tomas Vondra wrote:\n>> h, right. Well, you're right in this case we perhaps could set just one\n>> of those flags, but the \"purpose\" of the two places is quite different.\n>>\n>> The \"prefetch\" flag is fully controlled by the prefetcher, and it's up\n>> to it to change it (e.g. I can easily imagine some new logic touching\n>> setting it to \"false\" for some reason).\n>>\n>> The \"data\" flag is fully controlled by the custom callbacks, so whatever\n>> the callback stores, will be there.\n>>\n>> I don't think it's worth simplifying this. In particular, I don't think\n>> the callback can assume it can rely on the \"prefetch\" flag.\n>>\n> Why not to add \"all_visible\" flag to IndexPrefetchEntry ? If will not\n> cause any extra space overhead (because of alignment), but allows to\n> avoid dynamic memory allocation (not sure if it is critical, but nice to\n> avoid if possible).\n> \n\nBecause it's specific to index-only scans, while IndexPrefetchEntry is a\ngeneric thing, for all places.\n\nHowever:\n\n(1) Melanie actually presented a very different way to implement this,\nrelying on the StreamingRead API. So chances are this struct won't\nactually be used.\n\n(2) After going through Melanie's patch, I realized this is actually\nbroken. The IOS case needs to keep more stuff, not just the all-visible\nflag, but also the index tuple. Otherwise it'll just operate on the last\ntuple read from the index, which happens to be in xs_ituple. Attached is\na patch with a trivial fix.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 24 Jan 2024 19:13:24 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Wed, Jan 24, 2024 at 4:19 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 1/24/24 01:51, Melanie Plageman wrote:\n>\n> >>> There are also table AM layering violations in my sketch which would\n> >>> have to be worked out (not to mention some resource leakage I didn't\n> >>> bother investigating [which causes it to fail tests]).\n> >>>\n> >>> 0001 is all of Thomas' streaming read API code that isn't yet in\n> >>> master and 0002 is my rough sketch of index prefetching using the\n> >>> streaming read API\n> >>>\n> >>> There are also numerous optimizations that your index prefetching\n> >>> patch set does that would need to be added in some way. I haven't\n> >>> thought much about it yet. I wanted to see what you thought of this\n> >>> approach first. Basically, is it workable?\n> >>\n> >> It seems workable, yes. I'm not sure it's much simpler than my patch\n> >> (considering a lot of the code is in the optimizations, which are\n> >> missing from this patch).\n> >>\n> >> I think the question is where should the optimizations happen. I suppose\n> >> some of them might/should happen in the StreamingRead API itself - like\n> >> the detection of sequential patterns, recently prefetched blocks, ...\n> >\n> > So, the streaming read API does detection of sequential patterns and\n> > not prefetching things that are in shared buffers. It doesn't handle\n> > avoiding prefetching recently prefetched blocks yet AFAIK. But I\n> > daresay this would be relevant for other streaming read users and\n> > could certainly be implemented there.\n> >\n>\n> Yes, the \"recently prefetched stuff\" cache seems like a fairly natural\n> complement to the pattern detection and shared-buffers check.\n>\n> FWIW I wonder if we should make some of this customizable, so that\n> systems with customized storage (e.g. neon or with direct I/O) can e.g.\n> disable some of these checks. Or replace them with their version.\n\nThat's a promising idea.\n\n> >> But I'm not sure what to do about optimizations that are more specific\n> >> to the access path. Consider for example the index-only scans. We don't\n> >> want to prefetch all the pages, we need to inspect the VM and prefetch\n> >> just the not-all-visible ones. And then pass the info to the index scan,\n> >> so that it does not need to check the VM again. It's not clear to me how\n> >> to do this with this approach.\n> >\n> > Yea, this is an issue I'll need to think about. To really spell out\n> > the problem: the callback dequeues a TID from the tid_queue and looks\n> > up its block in the VM. It's all visible. So, it shouldn't return that\n> > block to the streaming read API to fetch from the heap because it\n> > doesn't need to be read. But, where does the callback put the TID so\n> > that the caller can get it? I'm going to think more about this.\n> >\n>\n> Yes, that's the problem for index-only scans. I'd generalize it so that\n> it's about the callback being able to (a) decide if it needs to read the\n> heap page, and (b) store some custom info for the TID.\n\nActually, I think this is no big deal. See attached. I just don't\nenqueue tids whose blocks are all visible. I had to switch the order\nfrom fetch heap then fill queue to fill queue then fetch heap.\n\nWhile doing this I noticed some wrong results in the regression tests\n(like in the alter table test), so I suspect I have some kind of\ncontrol flow issue. 
Perhaps I should fix the resource leak so I can\nactually see the failing tests :)\n\nAs for your a) and b) above.\n\nRegarding a): We discussed allowing speculative prefetching and\nseparating the logic for prefetching from actually reading blocks (so\nyou can prefetch blocks you ultimately don't read). We decided this\nmay not belong in a streaming read API. What do you think?\n\nRegarding b): We can store per buffer data for anything that actually\ngoes down through the streaming read API, but, in the index only case,\nwe don't want the streaming read API to know about blocks that it\ndoesn't actually need to read.\n\n- Melanie",
"msg_date": "Wed, 24 Jan 2024 15:20:28 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
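To make the "fill queue, then fetch heap" change described in the message above concrete, here is a minimal sketch of how an index-only scan could avoid handing all-visible heap pages to the prefetch machinery. VM_ALL_VISIBLE(), index_getnext_tid() and scan->xs_heaptid are the existing PostgreSQL APIs; the tid_queue_* helpers, stream_enqueue_block() and the ios->tid_queue / vmbuffer / stream fields are assumptions made purely for illustration.

    static void
    ios_fill_tid_queue(IndexOnlyScanState *ios, IndexScanDesc scan,
                       ScanDirection dir)
    {
        while (!tid_queue_full(ios->tid_queue) &&
               index_getnext_tid(scan, dir) != NULL)
        {
            BlockNumber blk = ItemPointerGetBlockNumber(&scan->xs_heaptid);
            bool        all_visible = VM_ALL_VISIBLE(scan->heapRelation, blk,
                                                     &ios->vmbuffer);

            /* remember the TID, and whether the heap page needs a visit at all */
            tid_queue_add(ios->tid_queue, &scan->xs_heaptid, all_visible);

            /* only heap pages that are not all-visible are worth prefetching */
            if (!all_visible)
                stream_enqueue_block(ios->stream, blk);
        }
    }

The point is only the ordering: the visibility map is consulted once while filling the queue, so the caller never has to ask where a skipped TID "went" later on.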
{
"msg_contents": "On Wed, Jan 24, 2024 at 11:43 PM Tomas Vondra\n<[email protected]> wrote:\n\n> On 1/22/24 07:35, Konstantin Knizhnik wrote:\n> >\n> > On 22/01/2024 1:47 am, Tomas Vondra wrote:\n> >> h, right. Well, you're right in this case we perhaps could set just one\n> >> of those flags, but the \"purpose\" of the two places is quite different.\n> >>\n> >> The \"prefetch\" flag is fully controlled by the prefetcher, and it's up\n> >> to it to change it (e.g. I can easily imagine some new logic touching\n> >> setting it to \"false\" for some reason).\n> >>\n> >> The \"data\" flag is fully controlled by the custom callbacks, so whatever\n> >> the callback stores, will be there.\n> >>\n> >> I don't think it's worth simplifying this. In particular, I don't think\n> >> the callback can assume it can rely on the \"prefetch\" flag.\n> >>\n> > Why not to add \"all_visible\" flag to IndexPrefetchEntry ? If will not\n> > cause any extra space overhead (because of alignment), but allows to\n> > avoid dynamic memory allocation (not sure if it is critical, but nice to\n> > avoid if possible).\n> >\n>\nWhile reading through the first patch I got some questions, I haven't\nread it complete yet but this is what I got so far.\n\n1.\n+static bool\n+IndexPrefetchBlockIsSequential(IndexPrefetch *prefetch, BlockNumber block)\n+{\n+ int idx;\n...\n+ if (prefetch->blockItems[idx] != (block - i))\n+ return false;\n+\n+ /* Don't prefetch if the block happens to be the same. */\n+ if (prefetch->blockItems[idx] == block)\n+ return false;\n+ }\n+\n+ /* not sequential, not recently prefetched */\n+ return true;\n+}\n\nThe above function name is BlockIsSequential but at the end, it\nreturns true if it is not sequential, seem like a problem?\nAlso other 2 checks right above the end of the function are returning\nfalse if the block is the same or the pattern is sequential I think\nthose are wrong too.\n\n\n 2.\n I have noticed that the prefetch history is maintained at the backend\nlevel, but what if multiple backends are trying to fetch the same heap\nblocks maybe scanning the same index, so should that be in some shared\nstructure? I haven't thought much deeper about this from the\nimplementation POV, but should we think about it, or it doesn't\nmatter?\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Jan 2024 16:15:31 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "\n\nOn 1/25/24 11:45, Dilip Kumar wrote:\n> On Wed, Jan 24, 2024 at 11:43 PM Tomas Vondra\n> <[email protected]> wrote:\n> \n>> On 1/22/24 07:35, Konstantin Knizhnik wrote:\n>>>\n>>> On 22/01/2024 1:47 am, Tomas Vondra wrote:\n>>>> h, right. Well, you're right in this case we perhaps could set just one\n>>>> of those flags, but the \"purpose\" of the two places is quite different.\n>>>>\n>>>> The \"prefetch\" flag is fully controlled by the prefetcher, and it's up\n>>>> to it to change it (e.g. I can easily imagine some new logic touching\n>>>> setting it to \"false\" for some reason).\n>>>>\n>>>> The \"data\" flag is fully controlled by the custom callbacks, so whatever\n>>>> the callback stores, will be there.\n>>>>\n>>>> I don't think it's worth simplifying this. In particular, I don't think\n>>>> the callback can assume it can rely on the \"prefetch\" flag.\n>>>>\n>>> Why not to add \"all_visible\" flag to IndexPrefetchEntry ? If will not\n>>> cause any extra space overhead (because of alignment), but allows to\n>>> avoid dynamic memory allocation (not sure if it is critical, but nice to\n>>> avoid if possible).\n>>>\n>>\n> While reading through the first patch I got some questions, I haven't\n> read it complete yet but this is what I got so far.\n> \n> 1.\n> +static bool\n> +IndexPrefetchBlockIsSequential(IndexPrefetch *prefetch, BlockNumber block)\n> +{\n> + int idx;\n> ...\n> + if (prefetch->blockItems[idx] != (block - i))\n> + return false;\n> +\n> + /* Don't prefetch if the block happens to be the same. */\n> + if (prefetch->blockItems[idx] == block)\n> + return false;\n> + }\n> +\n> + /* not sequential, not recently prefetched */\n> + return true;\n> +}\n> \n> The above function name is BlockIsSequential but at the end, it\n> returns true if it is not sequential, seem like a problem?\n\nActually, I think it's the comment that's wrong - the last return is\nreached only for a sequential pattern (and when the block was not\naccessed recently).\n\n> Also other 2 checks right above the end of the function are returning\n> false if the block is the same or the pattern is sequential I think\n> those are wrong too.\n> \n\nHmmm. You're right this is partially wrong. There are two checks:\n\n /*\n * For a sequential pattern, blocks \"k\" step ago needs to have block\n * number by \"k\" smaller compared to the current block.\n */\n if (prefetch->blockItems[idx] != (block - i))\n return false;\n\n /* Don't prefetch if the block happens to be the same. */\n if (prefetch->blockItems[idx] == block)\n return false;\n\nThe first condition is correct - we want to return \"false\" when the\npattern is not sequential.\n\nBut the second condition is wrong - we want to skip prefetching when the\nblock was already prefetched recently, so this should return true (which\nis a bit misleading, as it seems to imply the pattern is sequential,\nwhen it's not).\n\nHowever, this is harmless, because we then identify this block as\nrecently prefetched in the \"full\" cache check, so we won't prefetch it\nanyway. So it's harmless, although a bit more expensive.\n\nThere's another inefficiency - we stop looking for the same block once\nwe find the first block breaking the non-sequential pattern. Imagine a\nsequence of blocks 1, 2, 3, 1, 2, 3, ... in which case we never notice\nthe block was recently prefetched, because we always find the break of\nthe sequential pattern. 
But again, it's harmless, thanks to the full\ncache of recently prefetched blocks.\n\n> 2.\n> I have noticed that the prefetch history is maintained at the backend\n> level, but what if multiple backends are trying to fetch the same heap\n> blocks maybe scanning the same index, so should that be in some shared\n> structure? I haven't thought much deeper about this from the\n> implementation POV, but should we think about it, or it doesn't\n> matter?\n\nYes, the cache is at the backend level - it's a known limitation, but I\nsee it more as a conscious tradeoff.\n\nFirstly, while the LRU cache is at backend level, PrefetchBuffer also\nchecks shared buffers for each prefetch request. So with sufficiently\nlarge shared buffers we're likely to find it there (and for direct I/O\nthere won't be any other place to check).\n\nSecondly, the only other place to check is page cache, but there's no\ngood (sufficiently cheap) way to check that. See the preadv2/nowait\nexperiment earlier in this thread.\n\nI suppose we could implement a similar LRU cache for shared memory (and\nI don't think it'd be very complicated), but I did not plan to do that\nin this patch unless absolutely necessary.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 25 Jan 2024 17:47:06 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
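Restating the intended semantics from the reply above as a sketch may help: "true" means "do not issue a prefetch", either because the trailing blocks form a sequential run (OS read-ahead covers it) or because the identical block was requested a moment ago. The blockItems/blockIndex names follow the quoted patch fragment, while the loop bounds, the wraparound arithmetic and the two constants are assumptions.

    static bool
    IndexPrefetchSkipBlock(IndexPrefetch *prefetch, BlockNumber block)
    {
        for (int i = 1; i <= PREFETCH_SEQ_PATTERN_BLOCKS; i++)
        {
            int     idx = (prefetch->blockIndex + PREFETCH_QUEUE_HISTORY - i)
                          % PREFETCH_QUEUE_HISTORY;

            /* the very same block was prefetched a moment ago - skip it */
            if (prefetch->blockItems[idx] == block)
                return true;

            /* the trailing blocks do not form a sequential run - prefetch */
            if (prefetch->blockItems[idx] != (block - i))
                return false;
        }

        /* sequential pattern - rely on OS read-ahead instead of prefetching */
        return true;
    }

Checking the equality first also sidesteps the "we stop looking once the sequential pattern breaks" inefficiency for the block in the most recent slot, although, as noted above, the full cache of recently prefetched blocks makes that mostly harmless anyway.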
{
"msg_contents": "On Wed, Jan 24, 2024 at 3:20 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Wed, Jan 24, 2024 at 4:19 AM Tomas Vondra\n> <[email protected]> wrote:\n> >\n> > On 1/24/24 01:51, Melanie Plageman wrote:\n> > >> But I'm not sure what to do about optimizations that are more specific\n> > >> to the access path. Consider for example the index-only scans. We don't\n> > >> want to prefetch all the pages, we need to inspect the VM and prefetch\n> > >> just the not-all-visible ones. And then pass the info to the index scan,\n> > >> so that it does not need to check the VM again. It's not clear to me how\n> > >> to do this with this approach.\n> > >\n> > > Yea, this is an issue I'll need to think about. To really spell out\n> > > the problem: the callback dequeues a TID from the tid_queue and looks\n> > > up its block in the VM. It's all visible. So, it shouldn't return that\n> > > block to the streaming read API to fetch from the heap because it\n> > > doesn't need to be read. But, where does the callback put the TID so\n> > > that the caller can get it? I'm going to think more about this.\n> > >\n> >\n> > Yes, that's the problem for index-only scans. I'd generalize it so that\n> > it's about the callback being able to (a) decide if it needs to read the\n> > heap page, and (b) store some custom info for the TID.\n>\n> Actually, I think this is no big deal. See attached. I just don't\n> enqueue tids whose blocks are all visible. I had to switch the order\n> from fetch heap then fill queue to fill queue then fetch heap.\n>\n> While doing this I noticed some wrong results in the regression tests\n> (like in the alter table test), so I suspect I have some kind of\n> control flow issue. Perhaps I should fix the resource leak so I can\n> actually see the failing tests :)\n\nAttached is a patch which implements a real queue and fixes some of\nthe issues with the previous version. It doesn't pass tests yet and\nhas issues. Some are bugs in my implementation I need to fix. Some are\nissues we would need to solve in the streaming read API. Some are\nissues with index prefetching generally.\n\nNote that these two patches have to be applied before 21d9c3ee4e\nbecause Thomas hasn't released a rebased version of the streaming read\nAPI patches yet.\n\nIssues\n---\n- kill prior tuple\n\nThis optimization doesn't work with index prefetching with the current\ndesign. Kill prior tuple relies on alternating between fetching a\nsingle index tuple and visiting the heap. After visiting the heap we\ncan potentially kill the immediately preceding index tuple. Once we\nfetch multiple index tuples, enqueue their TIDs, and later visit the\nheap, the next index page we visit may not contain all of the index\ntuples deemed killable by our visit to the heap.\n\nIn our case, we could try and fix this by prefetching only heap blocks\nreferred to by index tuples on the same index page. Or we could try\nand keep a pool of index pages pinned and go back and kill index\ntuples on those pages.\n\nHaving disabled kill_prior_tuple is why the mvcc test fails. 
Perhaps\nthere is an easier way to fix this, as I don't think the mvcc test\nfailed on Tomas' version.\n\n- switching scan directions\n\nIf the index scan switches directions on a given invocation of\nIndexNext(), heap blocks may have already been prefetched and read for\nblocks containing tuples beyond the point at which we want to switch\ndirections.\n\nWe could fix this by having some kind of streaming read \"reset\"\ncallback to drop all of the buffers which have been prefetched which\nare now no longer needed. We'd have to go backwards from the last TID\nwhich was yielded to the caller and figure out which buffers in the\npgsr buffer ranges are associated with all of the TIDs which were\nprefetched after that TID. The TIDs are in the per_buffer_data\nassociated with each buffer in pgsr. The issue would be searching\nthrough those efficiently.\n\nThe other issue is that the streaming read API does not currently\nsupport backwards scans. So, if we switch to a backwards scan from a\nforwards scan, we would need to fallback to the non streaming read\nmethod. We could do this by just setting the TID queue size to 1\n(which is what I have currently implemented). Or we could add\nbackwards scan support to the streaming read API.\n\n- mark and restore\n\nSimilar to the issue with switching the scan direction, mark and\nrestore requires us to reset the TID queue and streaming read queue.\nFor now, I've hacked in something to the PlannerInfo and Plan to set\nthe TID queue size to 1 for plans containing a merge join (yikes).\n\n- multiple executions\n\nFor reasons I don't entirely understand yet, multiple executions (not\nrescan -- see ExecutorRun(...execute_once)) do not work. As in Tomas'\npatch, I have disabled prefetching (and made the TID queue size 1)\nwhen execute_once is false.\n\n- Index Only Scans need to return IndexTuples\n\nBecause index only scans return either the IndexTuple pointed to by\nIndexScanDesc->xs_itup or the HeapTuple pointed to by\nIndexScanDesc->xs_hitup -- both of which are populated by the index\nAM, we have to save copies of those IndexTupleData and HeapTupleDatas\nfor every TID whose block we prefetch.\n\nThis might be okay, but it is a bit sad to have to make copies of those tuples.\n\nIn this patch, I still haven't figured out the memory management part.\nI copy over the tuples when enqueuing a TID queue item and then copy\nthem back again when the streaming read API returns the\nper_buffer_data to us. Something is still not quite right here. I\nsuspect this is part of the reason why some of the other tests are\nfailing.\n\nOther issues/gaps in my implementation:\n\nDetermining where to allocate the memory for the streaming read object\nand the TID queue is an outstanding TODO. To implement a fallback\nmethod for cases in which streaming read doesn't work, I set the queue\nsize to 1. This is obviously not good.\n\nRight now, I allocate the TID queue and streaming read objects in\nIndexNext() and IndexOnlyNext(). This doesn't seem ideal. Doing it in\nindex_beginscan() (and index_beginscan_parallel()) is tricky though\nbecause we don't know the scan direction at that point (and the scan\ndirection can change). There are also callers of index_beginscan() who\ndo not call Index[Only]Next() (like systable_getnext() which calls\nindex_getnext_slot() directly).\n\nAlso, my implementation does not yet have the optimization Tomas does\nto skip prefetching recently prefetched blocks. 
As he has said, it\nprobably makes sense to add something to do this in a lower layer --\nsuch as in the streaming read API or even in bufmgr.c (maybe in\nPrefetchSharedBuffer()).\n\n- Melanie",
"msg_date": "Wed, 7 Feb 2024 16:48:18 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
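For readers trying to follow the memory-management discussion above, this is roughly the shape of bookkeeping the message implies; every name here is hypothetical. Each queued entry remembers the TID handed out by the index AM plus whatever has to survive until the read stream returns the heap buffer, which for index-only scans means copies of the AM-provided tuples.

    typedef struct TidQueueItem
    {
        ItemPointerData tid;            /* heap TID returned by the index AM */
        bool            all_visible;    /* index-only scan: no heap visit needed */
        IndexTuple      itup;           /* copy of xs_itup, if the AM set it */
        HeapTuple       htup;           /* copy of xs_hitup, if the AM set it */
    } TidQueueItem;

    typedef struct TidQueue
    {
        int             head;           /* next slot to dequeue */
        int             tail;           /* next free slot */
        int             size;           /* allocated number of slots */
        TidQueueItem    items[FLEXIBLE_ARRAY_MEMBER];
    } TidQueue;

Whether the itup/htup copies are palloc'd per entry or kept in a per-queue memory context that is reset as entries are consumed is exactly the open memory-management question mentioned above.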
{
"msg_contents": "On 2/7/24 22:48, Melanie Plageman wrote:\n> ...\n> \n> Attached is a patch which implements a real queue and fixes some of\n> the issues with the previous version. It doesn't pass tests yet and\n> has issues. Some are bugs in my implementation I need to fix. Some are\n> issues we would need to solve in the streaming read API. Some are\n> issues with index prefetching generally.\n> \n> Note that these two patches have to be applied before 21d9c3ee4e\n> because Thomas hasn't released a rebased version of the streaming read\n> API patches yet.\n> \n\nThanks for working on this, and for investigating the various issues.\n\n> Issues\n> ---\n> - kill prior tuple\n> \n> This optimization doesn't work with index prefetching with the current\n> design. Kill prior tuple relies on alternating between fetching a\n> single index tuple and visiting the heap. After visiting the heap we\n> can potentially kill the immediately preceding index tuple. Once we\n> fetch multiple index tuples, enqueue their TIDs, and later visit the\n> heap, the next index page we visit may not contain all of the index\n> tuples deemed killable by our visit to the heap.\n> \n\nI admit I haven't thought about kill_prior_tuple until you pointed out.\nYeah, prefetching separates (de-synchronizes) the two scans (index and\nheap) in a way that prevents this optimization. Or at least makes it\nmuch more complex :-(\n\n> In our case, we could try and fix this by prefetching only heap blocks\n> referred to by index tuples on the same index page. Or we could try\n> and keep a pool of index pages pinned and go back and kill index\n> tuples on those pages.\n> \n\nI think restricting the prefetching to a single index page would not be\na huge issue performance-wise - that's what the initial patch version\n(implemented at the index AM level) did, pretty much. The prefetch queue\nwould get drained as we approach the end of the index page, but luckily\nindex pages tend to have a lot of entries. But it'd put an upper bound\non the prefetch distance (much lower than the e_i_c maximum 1000, but\nI'd say common values are 10-100 anyway).\n\nBut how would we know we're on the same index page? That knowledge is\nnot available outside the index AM - the executor or indexam.c does not\nknow this, right? Presumably we could expose this, somehow, but it seems\nlike a violation of the abstraction ...\n\nThe same thing affects keeping multiple index pages pinned, for TIDs\nthat are yet to be used by the index scan. We'd need to know when to\nrelease a pinned page, once we're done with processing all items.\n\nFWIW I haven't tried to implementing any of this, so maybe I'm missing\nsomething and it can be made to work in a nice way.\n\n\n> Having disabled kill_prior_tuple is why the mvcc test fails. Perhaps\n> there is an easier way to fix this, as I don't think the mvcc test\n> failed on Tomas' version.\n> \n\nI kinda doubt it worked correctly, considering I simply ignored the\noptimization. It's far more likely it just worked by luck.\n\n\n> - switching scan directions\n> \n> If the index scan switches directions on a given invocation of\n> IndexNext(), heap blocks may have already been prefetched and read for\n> blocks containing tuples beyond the point at which we want to switch\n> directions.\n> \n> We could fix this by having some kind of streaming read \"reset\"\n> callback to drop all of the buffers which have been prefetched which\n> are now no longer needed. 
We'd have to go backwards from the last TID\n> which was yielded to the caller and figure out which buffers in the\n> pgsr buffer ranges are associated with all of the TIDs which were\n> prefetched after that TID. The TIDs are in the per_buffer_data\n> associated with each buffer in pgsr. The issue would be searching\n> through those efficiently.\n> \n\nYeah, that's roughly what I envisioned in one of my previous messages\nabout this issue - walking back the TIDs read from the index and added\nto the prefetch queue.\n\n> The other issue is that the streaming read API does not currently\n> support backwards scans. So, if we switch to a backwards scan from a\n> forwards scan, we would need to fallback to the non streaming read\n> method. We could do this by just setting the TID queue size to 1\n> (which is what I have currently implemented). Or we could add\n> backwards scan support to the streaming read API.\n> \n\nWhat do you mean by \"support for backwards scans\" in the streaming read\nAPI? I imagined it naively as\n\n1) drop all requests in the streaming read API queue\n\n2) walk back all \"future\" requests in the TID queue\n\n3) start prefetching as if from scratch\n\nMaybe there's a way to optimize this and reuse some of the work more\nefficiently, but my assumption is that the scan direction does not\nchange very often, and that we process many items in between.\n\n\n> - mark and restore\n> \n> Similar to the issue with switching the scan direction, mark and\n> restore requires us to reset the TID queue and streaming read queue.\n> For now, I've hacked in something to the PlannerInfo and Plan to set\n> the TID queue size to 1 for plans containing a merge join (yikes).\n> \n\nHaven't thought about this very much, will take a closer look.\n\n\n> - multiple executions\n> \n> For reasons I don't entirely understand yet, multiple executions (not\n> rescan -- see ExecutorRun(...execute_once)) do not work. As in Tomas'\n> patch, I have disabled prefetching (and made the TID queue size 1)\n> when execute_once is false.\n> \n\nDon't work in what sense? What is (not) happening?\n\n\n> - Index Only Scans need to return IndexTuples\n> \n> Because index only scans return either the IndexTuple pointed to by\n> IndexScanDesc->xs_itup or the HeapTuple pointed to by\n> IndexScanDesc->xs_hitup -- both of which are populated by the index\n> AM, we have to save copies of those IndexTupleData and HeapTupleDatas\n> for every TID whose block we prefetch.\n> \n> This might be okay, but it is a bit sad to have to make copies of those tuples.\n> \n> In this patch, I still haven't figured out the memory management part.\n> I copy over the tuples when enqueuing a TID queue item and then copy\n> them back again when the streaming read API returns the\n> per_buffer_data to us. Something is still not quite right here. I\n> suspect this is part of the reason why some of the other tests are\n> failing.\n> \n\nIt's not clear to me what you need to copy the tuples back - shouldn't\nit be enough to copy the tuple just once?\n\nFWIW if we decide to pin multiple index pages (to make kill_prior_tuple\nwork), that would also mean we don't need to copy any tuples, right? We\ncould point into the buffers for all of them, right?\n\n> Other issues/gaps in my implementation:\n> \n> Determining where to allocate the memory for the streaming read object\n> and the TID queue is an outstanding TODO. To implement a fallback\n> method for cases in which streaming read doesn't work, I set the queue\n> size to 1. 
This is obviously not good.\n> \n\nI think IndexFetchTableData seems like a not entirely terrible place for\nallocating the pgsr, but I wonder what Andres thinks about this. IIRC he\nadvocated for doing the prefetching in executor, and I'm not sure\nheapam_handled.c + relscan.h is what he imagined ...\n\nAlso, when you say \"obviously not good\" - why? Are you concerned about\nthe extra overhead of shuffling stuff between queues, or something else?\n\n\n> Right now, I allocate the TID queue and streaming read objects in\n> IndexNext() and IndexOnlyNext(). This doesn't seem ideal. Doing it in\n> index_beginscan() (and index_beginscan_parallel()) is tricky though\n> because we don't know the scan direction at that point (and the scan\n> direction can change). There are also callers of index_beginscan() who\n> do not call Index[Only]Next() (like systable_getnext() which calls\n> index_getnext_slot() directly).\n> \n\nYeah, not sure this is the right layering ... the initial patch did\neverything in individual index AMs, then it moved to indexam.c, then to\nexecutor. And this seems to move it to lower layers again ...\n\n> Also, my implementation does not yet have the optimization Tomas does\n> to skip prefetching recently prefetched blocks. As he has said, it\n> probably makes sense to add something to do this in a lower layer --\n> such as in the streaming read API or even in bufmgr.c (maybe in\n> PrefetchSharedBuffer()).\n> \n\nI agree this should happen in lower layers. I'd probably do this in the\nstreaming read API, because that would define \"scope\" of the cache\n(pages prefetched for that read). Doing it in PrefetchSharedBuffer seems\nlike it would do a single cache (for that particular backend).\n\nBut that's just an initial thought ...\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 13 Feb 2024 20:00:59 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
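The three-step reset sketched in the message above could be written down roughly like this. Every function and field name is an assumption (the streaming read API has no such reset entry point today); the interesting part is only the order of operations.

    static void
    index_prefetch_reset(IndexPrefetchState *ps)
    {
        /* 1) drop whatever the read stream already has in flight */
        stream_reset(ps->stream);

        /*
         * 2) discard the "future" TIDs that were read ahead but never handed
         * to the caller.  (Walking the index scan position back to the last
         * TID that actually was returned is the open problem noted above.)
         */
        tid_queue_reset(ps->tid_queue);

        /* 3) start ramping the prefetch distance up again from scratch */
        ps->prefetch_distance = 1;
    }

If direction changes and mark/restore are rare relative to the number of items processed, throwing the read-ahead work away like this should be an acceptable cost.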
{
"msg_contents": "On Tue, Feb 13, 2024 at 2:01 PM Tomas Vondra\n<[email protected]> wrote:\n> On 2/7/24 22:48, Melanie Plageman wrote:\n> I admit I haven't thought about kill_prior_tuple until you pointed out.\n> Yeah, prefetching separates (de-synchronizes) the two scans (index and\n> heap) in a way that prevents this optimization. Or at least makes it\n> much more complex :-(\n\nAnother thing that argues against doing this is that we might not need\nto visit any more B-Tree leaf pages when there is a LIMIT n involved.\nWe could end up scanning a whole extra leaf page (including all of its\ntuples) for want of the ability to \"push down\" a LIMIT to the index AM\n(that's not what happens right now, but it isn't really needed at all\nright now).\n\nThis property of index scans is fundamental to how index scans work.\nPinning an index page as an interlock against concurrently TID\nrecycling by VACUUM is directly described by the index API docs [1],\neven (the docs actually use terms like \"buffer pin\" rather than\nsomething more abstract sounding). I don't think that anything\naffecting that behavior should be considered an implementation detail\nof the nbtree index AM as such (nor any particular index AM).\n\nI think that it makes sense to put the index AM in control here --\nthat almost follows from what I said about the index AM API. The index\nAM already needs to be in control, in about the same way, to deal with\nkill_prior_tuple (plus it helps with the LIMIT issue I described).\n\nThere doesn't necessarily need to be much code duplication to make\nthat work. Offhand I suspect it would be kind of similar to how\ndeletion of LP_DEAD-marked index tuples by non-nbtree index AMs gets\nby with generic logic implemented by\nindex_compute_xid_horizon_for_tuples -- that's all that we need to\ndetermine a snapshotConflictHorizon value for recovery conflict\npurposes. Note that index_compute_xid_horizon_for_tuples() reads\n*index* pages, despite not being aware of the caller's index AM and\nindex tuple format.\n\n(The only reason why nbtree needs a custom solution is because it has\nposting list tuples to worry about, unlike GiST and unlike Hash, which\nconsistently use unadorned generic IndexTuple structs with heap TID\nrepresented in the standard/generic way only. While these concepts\nprobably all originated in nbtree, they're still not nbtree\nimplementation details.)\n\n> > Having disabled kill_prior_tuple is why the mvcc test fails. Perhaps\n> > there is an easier way to fix this, as I don't think the mvcc test\n> > failed on Tomas' version.\n> >\n>\n> I kinda doubt it worked correctly, considering I simply ignored the\n> optimization. It's far more likely it just worked by luck.\n\nThe test that did fail will have only revealed that the\nkill_prior_tuple wasn't operating as expected -- which isn't the same\nthing as giving wrong answers.\n\nNote that there are various ways that concurrent TID recycling might\nprevent _bt_killitems() from setting LP_DEAD bits. It's totally\nunsurprising that breaking kill_prior_tuple in some way could be\nmissed. Andres wrote the MVCC test in question precisely because\ncertain aspects of kill_prior_tuple were broken for months without\nanybody noticing.\n\n[1] https://www.postgresql.org/docs/devel/index-locking.html\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 13 Feb 2024 14:54:14 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Thu, Feb 8, 2024 at 3:18 AM Melanie Plageman\n<[email protected]> wrote:\n> - kill prior tuple\n>\n> This optimization doesn't work with index prefetching with the current\n> design. Kill prior tuple relies on alternating between fetching a\n> single index tuple and visiting the heap. After visiting the heap we\n> can potentially kill the immediately preceding index tuple. Once we\n> fetch multiple index tuples, enqueue their TIDs, and later visit the\n> heap, the next index page we visit may not contain all of the index\n> tuples deemed killable by our visit to the heap.\n\nIs this maybe just a bookkeeping problem? A Boolean that says \"you can\nkill the prior tuple\" is well-suited if and only if the prior tuple is\nwell-defined. But perhaps it could be replaced with something more\nsophisticated that tells you which tuples are eligible to be killed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Feb 2024 12:40:14 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On 2/13/24 20:54, Peter Geoghegan wrote:\n> On Tue, Feb 13, 2024 at 2:01 PM Tomas Vondra\n> <[email protected]> wrote:\n>> On 2/7/24 22:48, Melanie Plageman wrote:\n>> I admit I haven't thought about kill_prior_tuple until you pointed out.\n>> Yeah, prefetching separates (de-synchronizes) the two scans (index and\n>> heap) in a way that prevents this optimization. Or at least makes it\n>> much more complex :-(\n> \n> Another thing that argues against doing this is that we might not need\n> to visit any more B-Tree leaf pages when there is a LIMIT n involved.\n> We could end up scanning a whole extra leaf page (including all of its\n> tuples) for want of the ability to \"push down\" a LIMIT to the index AM\n> (that's not what happens right now, but it isn't really needed at all\n> right now).\n> \n\nI'm not quite sure I understand what is \"this\" that you argue against.\nAre you saying we should not separate the two scans? If yes, is there a\nbetter way to do this?\n\nThe LIMIT problem is not very clear to me either. Yes, if we get close\nto the end of the leaf page, we may need to visit the next leaf page.\nBut that's kinda the whole point of prefetching - reading stuff ahead,\nand reading too far ahead is an inherent risk. Isn't that a problem we\nhave even without LIMIT? The prefetch distance ramp up is meant to limit\nthe impact.\n\n> This property of index scans is fundamental to how index scans work.\n> Pinning an index page as an interlock against concurrently TID\n> recycling by VACUUM is directly described by the index API docs [1],\n> even (the docs actually use terms like \"buffer pin\" rather than\n> something more abstract sounding). I don't think that anything\n> affecting that behavior should be considered an implementation detail\n> of the nbtree index AM as such (nor any particular index AM).\n> \n\nGood point.\n\n> I think that it makes sense to put the index AM in control here --\n> that almost follows from what I said about the index AM API. The index\n> AM already needs to be in control, in about the same way, to deal with\n> kill_prior_tuple (plus it helps with the LIMIT issue I described).\n> \n\nIn control how? What would be the control flow - what part would be\nmanaged by the index AM?\n\nI initially did the prefetching entirely in each index AM, but it was\nsuggested doing this in the executor would be better. So I gradually\nmoved it to executor. But the idea to combine this with the streaming\nread API seems as a move from executor back to the lower levels ... and\nnow you're suggesting to make the index AM responsible for this again.\n\nI'm not saying any of those layering options is wrong, but it's not\nclear to me which is the right one.\n\n> There doesn't necessarily need to be much code duplication to make\n> that work. Offhand I suspect it would be kind of similar to how\n> deletion of LP_DEAD-marked index tuples by non-nbtree index AMs gets\n> by with generic logic implemented by\n> index_compute_xid_horizon_for_tuples -- that's all that we need to\n> determine a snapshotConflictHorizon value for recovery conflict\n> purposes. 
Note that index_compute_xid_horizon_for_tuples() reads\n> *index* pages, despite not being aware of the caller's index AM and\n> index tuple format.\n> \n> (The only reason why nbtree needs a custom solution is because it has\n> posting list tuples to worry about, unlike GiST and unlike Hash, which\n> consistently use unadorned generic IndexTuple structs with heap TID\n> represented in the standard/generic way only. While these concepts\n> probably all originated in nbtree, they're still not nbtree\n> implementation details.)\n> \n\nI haven't looked at the details, but I agree the LP_DEAD deletion seems\nlike a sensible inspiration.\n\n>>> Having disabled kill_prior_tuple is why the mvcc test fails. Perhaps\n>>> there is an easier way to fix this, as I don't think the mvcc test\n>>> failed on Tomas' version.\n>>>\n>>\n>> I kinda doubt it worked correctly, considering I simply ignored the\n>> optimization. It's far more likely it just worked by luck.\n> \n> The test that did fail will have only revealed that the\n> kill_prior_tuple wasn't operating as expected -- which isn't the same\n> thing as giving wrong answers.\n> \n\nPossible. But AFAIK it did fail for Melanie, and I don't have a very\ngood explanation for the difference in behavior.\n\n> Note that there are various ways that concurrent TID recycling might\n> prevent _bt_killitems() from setting LP_DEAD bits. It's totally\n> unsurprising that breaking kill_prior_tuple in some way could be\n> missed. Andres wrote the MVCC test in question precisely because\n> certain aspects of kill_prior_tuple were broken for months without\n> anybody noticing.\n> \n> [1] https://www.postgresql.org/docs/devel/index-locking.html\n\nYeah. There's clearly plenty of space for subtle issues.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 14 Feb 2024 14:34:40 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "\n\nOn 2/14/24 08:10, Robert Haas wrote:\n> On Thu, Feb 8, 2024 at 3:18 AM Melanie Plageman\n> <[email protected]> wrote:\n>> - kill prior tuple\n>>\n>> This optimization doesn't work with index prefetching with the current\n>> design. Kill prior tuple relies on alternating between fetching a\n>> single index tuple and visiting the heap. After visiting the heap we\n>> can potentially kill the immediately preceding index tuple. Once we\n>> fetch multiple index tuples, enqueue their TIDs, and later visit the\n>> heap, the next index page we visit may not contain all of the index\n>> tuples deemed killable by our visit to the heap.\n> \n> Is this maybe just a bookkeeping problem? A Boolean that says \"you can\n> kill the prior tuple\" is well-suited if and only if the prior tuple is\n> well-defined. But perhaps it could be replaced with something more\n> sophisticated that tells you which tuples are eligible to be killed.\n> \n\nI don't think it's just a bookkeeping problem. In a way, nbtree already\ndoes keep an array of tuples to kill (see btgettuple), but it's always\nfor the current index page. So it's not that we immediately go and kill\nthe prior tuple - nbtree already stashes it in an array, and kills all\nthose tuples when moving to the next index page.\n\nThe way I understand the problem is that with prefetching we're bound to\ndetermine the kill_prior_tuple flag with a delay, in which case we might\nhave already moved to the next index page ...\n\n\nSo to make this work, we'd need to:\n\n1) keep index pages pinned for all \"in flight\" TIDs (read from the\nindex, not yet consumed by the index scan)\n\n2) keep a separate array of \"to be killed\" index tuples for each page\n\n3) have a more sophisticated way to decide when to kill tuples and unpin\nthe index page (instead of just doing it when moving to the next index page)\n\nMaybe that's what you meant by \"more sophisticated bookkeeping\", ofc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 14 Feb 2024 15:13:12 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
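One possible shape for the per-page bookkeeping listed in the three points above, with all names hypothetical: each index leaf page that still has "in flight" TIDs keeps its pin, heap visits append killable offsets to that page's own list, and the pin is only dropped once the last queued TID from the page has been consumed.

    typedef struct PrefetchedIndexPage
    {
        Buffer          buf;            /* pin held until nfly drops to zero */
        int             nfly;           /* TIDs from this page still queued */
        int             nkilled;        /* entries used in killedItems[] */
        OffsetNumber    killedItems[MaxIndexTuplesPerPage];
    } PrefetchedIndexPage;

    /* heap visit found the tuple dead to everyone: remember the offset */
    static void
    prefetch_mark_killed(PrefetchedIndexPage *page, OffsetNumber offnum)
    {
        if (page->nkilled < MaxIndexTuplesPerPage)
            page->killedItems[page->nkilled++] = offnum;
    }

    /* last in-flight TID from this page has been consumed */
    static void
    prefetch_release_page(IndexScanDesc scan, PrefetchedIndexPage *page)
    {
        if (page->nkilled > 0)
            index_kill_items(scan, page);   /* assumed hand-off to the index AM */
        ReleaseBuffer(page->buf);
    }

This is only meant to make point 3 above tangible; who owns these structs (executor, indexam.c, or the AM itself) is the layering question the thread keeps circling back to.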
{
"msg_contents": "On Tue, Feb 13, 2024 at 2:01 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 2/7/24 22:48, Melanie Plageman wrote:\n> > ...\nIssues\n> > ---\n> > - kill prior tuple\n> >\n> > This optimization doesn't work with index prefetching with the current\n> > design. Kill prior tuple relies on alternating between fetching a\n> > single index tuple and visiting the heap. After visiting the heap we\n> > can potentially kill the immediately preceding index tuple. Once we\n> > fetch multiple index tuples, enqueue their TIDs, and later visit the\n> > heap, the next index page we visit may not contain all of the index\n> > tuples deemed killable by our visit to the heap.\n> >\n>\n> I admit I haven't thought about kill_prior_tuple until you pointed out.\n> Yeah, prefetching separates (de-synchronizes) the two scans (index and\n> heap) in a way that prevents this optimization. Or at least makes it\n> much more complex :-(\n>\n> > In our case, we could try and fix this by prefetching only heap blocks\n> > referred to by index tuples on the same index page. Or we could try\n> > and keep a pool of index pages pinned and go back and kill index\n> > tuples on those pages.\n> >\n>\n> I think restricting the prefetching to a single index page would not be\n> a huge issue performance-wise - that's what the initial patch version\n> (implemented at the index AM level) did, pretty much. The prefetch queue\n> would get drained as we approach the end of the index page, but luckily\n> index pages tend to have a lot of entries. But it'd put an upper bound\n> on the prefetch distance (much lower than the e_i_c maximum 1000, but\n> I'd say common values are 10-100 anyway).\n>\n> But how would we know we're on the same index page? That knowledge is\n> not available outside the index AM - the executor or indexam.c does not\n> know this, right? Presumably we could expose this, somehow, but it seems\n> like a violation of the abstraction ...\n\nThe easiest way to do this would be to have the index AM amgettuple()\nfunctions set a new member in the IndexScanDescData which is either\nthe index page identifier or a boolean that indicates we have moved on\nto the next page. Then, when filling the queue, we would stop doing so\nwhen the page switches. Now, this wouldn't really work for the first\nindex tuple on each new page, so, perhaps we would need the index AMs\nto implement some kind of \"peek\" functionality.\n\nOr, we could provide the index AM with a max queue size and allow it\nto fill up the queue with the TIDs it wants (which it could keep to\nthe same index page). And, for the index-only scan case, could have\nsome kind of flag which indicates if the caller is putting\nTIDs+HeapTuples or TIDS+IndexTuples on the queue, which might reduce\nthe amount of space we need. I'm not sure who manages the memory here.\n\nI wasn't quite sure how we could use\nindex_compute_xid_horizon_for_tuples() for inspiration -- per Peter's\nsuggestion. But, I'd like to understand.\n\n> > - switching scan directions\n> >\n> > If the index scan switches directions on a given invocation of\n> > IndexNext(), heap blocks may have already been prefetched and read for\n> > blocks containing tuples beyond the point at which we want to switch\n> > directions.\n> >\n> > We could fix this by having some kind of streaming read \"reset\"\n> > callback to drop all of the buffers which have been prefetched which\n> > are now no longer needed. 
We'd have to go backwards from the last TID\n> > which was yielded to the caller and figure out which buffers in the\n> > pgsr buffer ranges are associated with all of the TIDs which were\n> > prefetched after that TID. The TIDs are in the per_buffer_data\n> > associated with each buffer in pgsr. The issue would be searching\n> > through those efficiently.\n> >\n>\n> Yeah, that's roughly what I envisioned in one of my previous messages\n> about this issue - walking back the TIDs read from the index and added\n> to the prefetch queue.\n>\n> > The other issue is that the streaming read API does not currently\n> > support backwards scans. So, if we switch to a backwards scan from a\n> > forwards scan, we would need to fallback to the non streaming read\n> > method. We could do this by just setting the TID queue size to 1\n> > (which is what I have currently implemented). Or we could add\n> > backwards scan support to the streaming read API.\n> >\n>\n> What do you mean by \"support for backwards scans\" in the streaming read\n> API? I imagined it naively as\n>\n> 1) drop all requests in the streaming read API queue\n>\n> 2) walk back all \"future\" requests in the TID queue\n>\n> 3) start prefetching as if from scratch\n>\n> Maybe there's a way to optimize this and reuse some of the work more\n> efficiently, but my assumption is that the scan direction does not\n> change very often, and that we process many items in between.\n\nYes, the steps you mention for resetting the queues make sense. What I\nmeant by \"backwards scan is not supported by the streaming read API\"\nis that Thomas/Andres had mentioned that the streaming read API does\nnot support backwards scans right now. Though, since the callback just\nreturns a block number, I don't know how it would break.\n\nWhen switching between a forwards and backwards scan, does it go\nbackwards from the current position or start at the end (or beginning)\nof the relation? If it is the former, then the blocks would most\nlikely be in shared buffers -- which the streaming read API handles.\nIt is not obvious to me from looking at the code what the gap is, so\nperhaps Thomas could weigh in.\n\nAs for handling this in index prefetching, if you think a TID queue\nsize of 1 is a sufficient fallback method, then resetting the pgsr\nqueue and resizing the TID queue to 1 would work with no issues. If\nthe fallback method requires the streaming read code path not be used\nat all, then that is more work.\n\n> > - multiple executions\n> >\n> > For reasons I don't entirely understand yet, multiple executions (not\n> > rescan -- see ExecutorRun(...execute_once)) do not work. As in Tomas'\n> > patch, I have disabled prefetching (and made the TID queue size 1)\n> > when execute_once is false.\n> >\n>\n> Don't work in what sense? What is (not) happening?\n\nI got wrong results for this. I'll have to do more investigation, but\nI assumed that not resetting the TID queue and pgsr queue was also the\nsource of this issue.\n\nWhat I imagined we would do is figure out if there is a viable\nsolution for the larger design issues and then investigate what seemed\nlike smaller issues. 
But, perhaps I should dig into this first to\nensure there isn't a larger issue.\n\n> > - Index Only Scans need to return IndexTuples\n> >\n> > Because index only scans return either the IndexTuple pointed to by\n> > IndexScanDesc->xs_itup or the HeapTuple pointed to by\n> > IndexScanDesc->xs_hitup -- both of which are populated by the index\n> > AM, we have to save copies of those IndexTupleData and HeapTupleDatas\n> > for every TID whose block we prefetch.\n> >\n> > This might be okay, but it is a bit sad to have to make copies of those tuples.\n> >\n> > In this patch, I still haven't figured out the memory management part.\n> > I copy over the tuples when enqueuing a TID queue item and then copy\n> > them back again when the streaming read API returns the\n> > per_buffer_data to us. Something is still not quite right here. I\n> > suspect this is part of the reason why some of the other tests are\n> > failing.\n> >\n>\n> It's not clear to me what you need to copy the tuples back - shouldn't\n> it be enough to copy the tuple just once?\n\nWhen enqueueing it, IndexTuple has to be copied from the scan\ndescriptor to somewhere in memory with a TIDQueueItem pointing to it.\nOnce we do this, the IndexTuple memory should stick around until we\nfree it, so yes, I'm not sure why I was seeing the IndexTuple no\nlonger be valid when I tried to put it in a slot. I'll have to do more\ninvestigation.\n\n> FWIW if we decide to pin multiple index pages (to make kill_prior_tuple\n> work), that would also mean we don't need to copy any tuples, right? We\n> could point into the buffers for all of them, right?\n\nYes, this would be a nice benefit.\n\n> > Other issues/gaps in my implementation:\n> >\n> > Determining where to allocate the memory for the streaming read object\n> > and the TID queue is an outstanding TODO. To implement a fallback\n> > method for cases in which streaming read doesn't work, I set the queue\n> > size to 1. This is obviously not good.\n> >\n>\n> I think IndexFetchTableData seems like a not entirely terrible place for\n> allocating the pgsr, but I wonder what Andres thinks about this. IIRC he\n> advocated for doing the prefetching in executor, and I'm not sure\n> heapam_handled.c + relscan.h is what he imagined ...\n>\n> Also, when you say \"obviously not good\" - why? Are you concerned about\n> the extra overhead of shuffling stuff between queues, or something else?\n\nWell, I didn't resize the queue, I just limited how much of it we can\nuse to a single member (thus wasting the other memory). But resizing a\nqueue isn't free either. Also, I wondered if a queue size of 1 for\nindex AMs using the fallback method is too confusing (like it is a\nfake queue?). But, I'd really, really rather not maintain both a queue\nand non-queue control flow for Index[Only]Next(). The maintenance\noverhead seems like it would outweigh the potential downsides.\n\n> > Right now, I allocate the TID queue and streaming read objects in\n> > IndexNext() and IndexOnlyNext(). This doesn't seem ideal. Doing it in\n> > index_beginscan() (and index_beginscan_parallel()) is tricky though\n> > because we don't know the scan direction at that point (and the scan\n> > direction can change). There are also callers of index_beginscan() who\n> > do not call Index[Only]Next() (like systable_getnext() which calls\n> > index_getnext_slot() directly).\n> >\n>\n> Yeah, not sure this is the right layering ... the initial patch did\n> everything in individual index AMs, then it moved to indexam.c, then to\n> executor. 
And this seems to move it to lower layers again ...\n\nIf we do something like make the index AM responsible for the TID\nqueue (as mentioned above as a potential solution to the kill prior\ntuple issue), then we might be able to allocate the TID queue in the\nindex AMs?\n\nAs for the streaming read object, if we were able to solve the issue\nwhere callers of index_beginscan() don't call Index[Only]Next() (and\nthus shouldn't allocate a streaming read object), then it seems easy\nenough to move the streaming read object allocation into the table\nAM-specific begin scan method.\n\n> > Also, my implementation does not yet have the optimization Tomas does\n> > to skip prefetching recently prefetched blocks. As he has said, it\n> > probably makes sense to add something to do this in a lower layer --\n> > such as in the streaming read API or even in bufmgr.c (maybe in\n> > PrefetchSharedBuffer()).\n> >\n>\n> I agree this should happen in lower layers. I'd probably do this in the\n> streaming read API, because that would define \"scope\" of the cache\n> (pages prefetched for that read). Doing it in PrefetchSharedBuffer seems\n> like it would do a single cache (for that particular backend).\n\nHmm. I wonder if there are any upsides to having the cache be\nper-backend. Though, that does sound like a whole other project...\n\n- Melanie\n\n\n",
"msg_date": "Wed, 14 Feb 2024 11:40:18 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
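The "stop filling the queue at a leaf page boundary" idea from the message above could look roughly like the loop below. xs_page_continues is a hypothetical new IndexScanDescData field that the index AM would set while it still has more TIDs to return from the current leaf page; index_getnext_tid() and xs_heaptid are the existing interfaces, and tid_queue_* is assumed.

    while (!tid_queue_full(queue))
    {
        if (index_getnext_tid(scan, dir) == NULL)
            break;                          /* end of scan */

        tid_queue_add(queue, &scan->xs_heaptid);

        if (!scan->xs_page_continues)
            break;                          /* last TID on this leaf page */
    }

Because filling stops after the last TID of the page rather than after the first TID of the next one, this variant would not need the "peek" functionality mentioned above, at the cost of exposing one more bit of index AM state through the scan descriptor.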
{
"msg_contents": "On Wed, Feb 14, 2024 at 8:34 AM Tomas Vondra\n<[email protected]> wrote:\n> > Another thing that argues against doing this is that we might not need\n> > to visit any more B-Tree leaf pages when there is a LIMIT n involved.\n> > We could end up scanning a whole extra leaf page (including all of its\n> > tuples) for want of the ability to \"push down\" a LIMIT to the index AM\n> > (that's not what happens right now, but it isn't really needed at all\n> > right now).\n> >\n>\n> I'm not quite sure I understand what is \"this\" that you argue against.\n> Are you saying we should not separate the two scans? If yes, is there a\n> better way to do this?\n\nWhat I'm concerned about is the difficulty and complexity of any\ndesign that requires revising \"63.4. Index Locking Considerations\",\nsince that's pretty subtle stuff. In particular, if prefetching\n\"de-synchronizes\" (to use your term) the index leaf page level scan\nand the heap page scan, then we'll probably have to totally revise the\nbasic API.\n\nMaybe that'll actually turn out to be the right thing to do -- it\ncould just be the only thing that can unleash the full potential of\nprefetching. But I'm not aware of any evidence that points in that\ndirection. Are you? (I might have just missed it.)\n\n> The LIMIT problem is not very clear to me either. Yes, if we get close\n> to the end of the leaf page, we may need to visit the next leaf page.\n> But that's kinda the whole point of prefetching - reading stuff ahead,\n> and reading too far ahead is an inherent risk. Isn't that a problem we\n> have even without LIMIT? The prefetch distance ramp up is meant to limit\n> the impact.\n\nRight now, the index AM doesn't know anything about LIMIT at all. That\ndoesn't matter, since the index AM can only read/scan one full leaf\npage before returning control back to the executor proper. The\nexecutor proper can just shut down the whole index scan upon finding\nthat we've already returned N tuples for a LIMIT N.\n\nWe don't do prefetching right now, but we also don't risk reading a\nleaf page that'll just never be needed. Those two things are in\ntension, but I don't think that that's quite the same thing as the\nusual standard prefetching tension/problem. Here there is uncertainty\nabout whether what we're prefetching will *ever* be required -- not\nuncertainty about when exactly it'll be required. (Perhaps this\ndistinction doesn't mean much to you. I'm just telling you how I think\nabout it, in case it helps move the discussion forward.)\n\n> > This property of index scans is fundamental to how index scans work.\n> > Pinning an index page as an interlock against concurrently TID\n> > recycling by VACUUM is directly described by the index API docs [1],\n> > even (the docs actually use terms like \"buffer pin\" rather than\n> > something more abstract sounding). I don't think that anything\n> > affecting that behavior should be considered an implementation detail\n> > of the nbtree index AM as such (nor any particular index AM).\n> >\n>\n> Good point.\n\nThe main reason why the index AM docs require this interlock is\nbecause we need such an interlock to make non-MVCC snapshot scans\nsafe. If you remove the interlock (the buffer pin interlock that\nprotects against TID recycling by VACUUM), you can still avoid the\nsame race condition by using an MVCC snapshot. This is why using an\nMVCC snapshot is a requirement for bitmap index scans. 
I believe that\nit's also a requirement for index-only scans, but the index AM docs\ndon't spell that out.\n\nAnother factor that complicates things here is mark/restore\nprocessing. The design for that has the idea of processing one page at\na time baked-in. Kinda like with the kill_prior_tuple issue.\n\nIt's certainly possible that you could figure out various workarounds\nfor each of these issues (plus the kill_prior_tuple issue) with a\nprefetching design that \"de-synchronizes\" the index access and the\nheap access. But it might well be better to extend the existing design\nin a way that just avoids all these problems in the first place. Maybe\n\"de-synchronization\" really can pay for itself (because the benefits\nwill outweigh these costs), but if you go that way then I'd really\nprefer it that way.\n\n> > I think that it makes sense to put the index AM in control here --\n> > that almost follows from what I said about the index AM API. The index\n> > AM already needs to be in control, in about the same way, to deal with\n> > kill_prior_tuple (plus it helps with the LIMIT issue I described).\n> >\n>\n> In control how? What would be the control flow - what part would be\n> managed by the index AM?\n\nISTM that prefetching for an index scan is about the index scan\nitself, first and foremost. The heap accesses are usually the dominant\ncost, of course, but sometimes the index leaf page accesses really do\nmake up a significant fraction of the overall cost of the index scan.\nEspecially with an expensive index qual. So if you just assume that\nthe TIDs returned by the index scan are the only thing that matters,\nyou might have a model that's basically correct on average, but is\noccasionally very wrong. That's one reason for \"putting the index AM\nin control\".\n\nAs I said back in June, we should probably be marrying information\nfrom the index scan with information from the heap. This is something\nthat is arguably a modularity violation. But it might just be that you\nreally do need to take information from both places to consistently\nmake the right trade-off.\n\nPerhaps the best arguments for \"putting the index AM in control\" only\nwork when you go to fix the problems that \"naive de-synchronization\"\ncreates. Thinking about that side of things some more might make\n\"putting the index AM in control\" seem more natural.\n\nSuppose, for example, you try to make a prefetching design based on\n\"de-synchronization\" work with kill_prior_tuple -- suppose you try to\nfix that problem. You're likely going to need to make some kind of\ntrade-off that gets you most of the advantages that that approach\noffers (assuming that there really are significant advantages), while\nstill retaining most of the advantages that we already get from\nkill_prior_tuple (basically we want to LP_DEAD-mark index tuples with\nalmost or exactly the same consistency as we manage today). Maybe your\napproach involves tracking multiple LSNs for each prefetch-pending\nleaf page, or perhaps you hold on to a pin on some number of leaf\npages instead (right now nbtree does both [1], which I go into more\nbelow). Either way, you're pushing stuff down into the index AM.\n\nNote that we already hang onto more than one pin at a time in rare\ncases involving mark/restore processing. For example, it can happen\nfor a merge join that happens to involve an unlogged index, if the\nmarkpos and curpos are a certain way relative to the current leaf page\n(yeah, really). 
So putting stuff like that under the control of the\nindex AM (while also applying basic information that comes from the\nheap) in order to fix the kill_prior_tuple issue is arguably something\nthat has a kind of a precedent for us to follow.\n\nEven if you disagree with me here (\"precedent\" might be overstating\nit), perhaps you still get some general sense of why I have an inkling\nthat putting prefetching in the index AM is the way to go. It's very\nhard to provide one really strong justification for all this, and I'm\ncertainly not expecting you to just agree with me right away. I'm also\nnot trying to impose any conditions on committing this patch.\n\nThinking about this some more, \"making kill_prior_tuple work with\nde-synchronization\" is a bit of a misleading way of putting it. The\nway that you'd actually work around this is (at a very high level)\n*dynamically* making some kind of *trade-off* between synchronization\nand desynchronization. Up until now, we've been talking in terms of a\nstrict dichotomy between the old index AM API design\n(index-page-at-a-time synchronization), and a \"de-synchronizing\"\nprefetching design that\nembraces the opposite extreme -- a design where we only think in terms\nof heap TIDs, and completely ignore anything that happens in the index\nstructure (and consequently makes kill_prior_tuple ineffective). That\nnow seems like a false dichotomy.\n\n> I initially did the prefetching entirely in each index AM, but it was\n> suggested doing this in the executor would be better. So I gradually\n> moved it to executor. But the idea to combine this with the streaming\n> read API seems as a move from executor back to the lower levels ... and\n> now you're suggesting to make the index AM responsible for this again.\n\nI did predict that there'd be lots of difficulties around the layering\nback in June. :-)\n\n> I'm not saying any of those layering options is wrong, but it's not\n> clear to me which is the right one.\n\nI don't claim to know what the right trade-off is myself. The fact\nthat all of these things are in tension doesn't surprise me. It's just\na hard problem.\n\n> Possible. But AFAIK it did fail for Melanie, and I don't have a very\n> good explanation for the difference in behavior.\n\nIf you take a look at _bt_killitems(), you'll see that it actually has\ntwo fairly different strategies for avoiding TID recycling race\ncondition issues, applied in each of two different cases:\n\n1. Cases where we really have held onto a buffer pin, per the index AM\nAPI -- the \"inde AM orthodox\" approach. (The aforementioned issue\nwith unlogged indexes exists because with an unlogged index we must\nuse approach 1, per the nbtree README section [1]).\n\n2. Cases where we drop the pin as an optimization (also per [1]), and\nnow have to detect the possibility of concurrent modifications by\nVACUUM (that could have led to concurrent TID recycling). We\nconservatively do nothing (don't mark any index tuples LP_DEAD),\nunless the LSN is exactly the same as it was back when the page was\nscanned/read by _bt_readpage().\n\nSo some accidental detail with LSNs (like using or not using an\nunlogged index) could cause bugs in this area to \"accidentally fail to\nfail\". Since the nbtree index AM has its own optimizations here, which\nprobably has a tendency to mask problems/bugs. 
(I sometimes use\nunlogged indexes for some of my nbtree related test cases, just to\nreduce certain kinds of variability, including variability in this\narea.)\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/nbtree/README;h=52e646c7f759a5d9cfdc32b86f6aff8460891e12;hb=3e8235ba4f9cc3375b061fb5d3f3575434539b5f#l443\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 14 Feb 2024 13:21:00 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 11:40 AM Melanie Plageman\n<[email protected]> wrote:\n> I wasn't quite sure how we could use\n> index_compute_xid_horizon_for_tuples() for inspiration -- per Peter's\n> suggestion. But, I'd like to understand.\n\nThe point I was trying to make with that example was: a highly generic\nmechanism can sometimes work across disparate index AMs (that all at\nleast support plain index scans) when it just so happens that these\nAMs don't actually differ in a way that could possibly matter to that\nmechanism. While it's true that (say) nbtree and hash are very\ndifferent at a high level, it's nevertheless also true that the way\nthings work at the level of individual index pages is much more\nsimilar than different.\n\nWith index deletion, we know that we're differences between each\nsupported index AM either don't matter at all (which is what obviates\nthe need for index_compute_xid_horizon_for_tuples() to be directly\naware of which index AM the page it is passed comes from), or matter\nonly in small, incidental ways (e.g., nbtree stores posting lists in\nits tuples, despite using IndexTuple structs).\n\nWith prefetching, it seems reasonable to suppose that an index-AM\nspecific approach would end up needing very little truly custom code.\nThis is pretty strongly suggested by the fact that the rules around\nbuffer pins (as an interlock against concurrent TID recycling by\nVACUUM) are standardized by the index AM API itself. Those rules might\nbe slightly more natural with nbtree, but that's kinda beside the\npoint. While the basic organizing principle for where each index tuple\ngoes can vary enormously, it doesn't necessarily matter at all -- in\nthe end, you're really just reading each index page (that has TIDs to\nread) exactly once per scan, in some fixed order, with interlaced\ninline heap accesses (that go fetch heap tuples for each individual\nTID read from each index page).\n\nIn general I don't accept that we need to do things outside the index\nAM, because software architecture encapsulation something something. I\nsuspect that we'll need to share some limited information across\ndifferent layers of abstraction, because that's just fundamentally\nwhat's required by the constraints we're operating under. Can't really\nprove it, though.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 14 Feb 2024 14:21:49 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 11:40 AM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Tue, Feb 13, 2024 at 2:01 PM Tomas Vondra\n> <[email protected]> wrote:\n> >\n> > On 2/7/24 22:48, Melanie Plageman wrote:\n> > > ...\n> > > - switching scan directions\n> > >\n> > > If the index scan switches directions on a given invocation of\n> > > IndexNext(), heap blocks may have already been prefetched and read for\n> > > blocks containing tuples beyond the point at which we want to switch\n> > > directions.\n> > >\n> > > We could fix this by having some kind of streaming read \"reset\"\n> > > callback to drop all of the buffers which have been prefetched which\n> > > are now no longer needed. We'd have to go backwards from the last TID\n> > > which was yielded to the caller and figure out which buffers in the\n> > > pgsr buffer ranges are associated with all of the TIDs which were\n> > > prefetched after that TID. The TIDs are in the per_buffer_data\n> > > associated with each buffer in pgsr. The issue would be searching\n> > > through those efficiently.\n> > >\n> >\n> > Yeah, that's roughly what I envisioned in one of my previous messages\n> > about this issue - walking back the TIDs read from the index and added\n> > to the prefetch queue.\n> >\n> > > The other issue is that the streaming read API does not currently\n> > > support backwards scans. So, if we switch to a backwards scan from a\n> > > forwards scan, we would need to fallback to the non streaming read\n> > > method. We could do this by just setting the TID queue size to 1\n> > > (which is what I have currently implemented). Or we could add\n> > > backwards scan support to the streaming read API.\n> > >\n> >\n> > What do you mean by \"support for backwards scans\" in the streaming read\n> > API? I imagined it naively as\n> >\n> > 1) drop all requests in the streaming read API queue\n> >\n> > 2) walk back all \"future\" requests in the TID queue\n> >\n> > 3) start prefetching as if from scratch\n> >\n> > Maybe there's a way to optimize this and reuse some of the work more\n> > efficiently, but my assumption is that the scan direction does not\n> > change very often, and that we process many items in between.\n>\n> Yes, the steps you mention for resetting the queues make sense. What I\n> meant by \"backwards scan is not supported by the streaming read API\"\n> is that Thomas/Andres had mentioned that the streaming read API does\n> not support backwards scans right now. Though, since the callback just\n> returns a block number, I don't know how it would break.\n>\n> When switching between a forwards and backwards scan, does it go\n> backwards from the current position or start at the end (or beginning)\n> of the relation?\n\nOkay, well I answered this question for myself, by, um, trying it :).\nFETCH backward will go backwards from the current cursor position. So,\nI don't see exactly why this would be an issue.\n\n> If it is the former, then the blocks would most\n> likely be in shared buffers -- which the streaming read API handles.\n> It is not obvious to me from looking at the code what the gap is, so\n> perhaps Thomas could weigh in.\n\nI have the same problem with the sequential scan streaming read user,\nso I am going to try and figure this backwards scan and switching scan\ndirection thing there (where we don't have other issues).\n\n- Melanie\n\n\n",
"msg_date": "Wed, 14 Feb 2024 16:02:29 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 1:21 PM Peter Geoghegan <[email protected]> wrote:\n>\n> On Wed, Feb 14, 2024 at 8:34 AM Tomas Vondra\n> <[email protected]> wrote:\n> > > Another thing that argues against doing this is that we might not need\n> > > to visit any more B-Tree leaf pages when there is a LIMIT n involved.\n> > > We could end up scanning a whole extra leaf page (including all of its\n> > > tuples) for want of the ability to \"push down\" a LIMIT to the index AM\n> > > (that's not what happens right now, but it isn't really needed at all\n> > > right now).\n> > >\n> >\n> > I'm not quite sure I understand what is \"this\" that you argue against.\n> > Are you saying we should not separate the two scans? If yes, is there a\n> > better way to do this?\n>\n> What I'm concerned about is the difficulty and complexity of any\n> design that requires revising \"63.4. Index Locking Considerations\",\n> since that's pretty subtle stuff. In particular, if prefetching\n> \"de-synchronizes\" (to use your term) the index leaf page level scan\n> and the heap page scan, then we'll probably have to totally revise the\n> basic API.\n\nSo, a pin on the index leaf page is sufficient to keep line pointers\nfrom being reused? If we stick to prefetching heap blocks referred to\nby index tuples in a single index leaf page, and we keep that page\npinned, will we still have a problem?\n\n> > The LIMIT problem is not very clear to me either. Yes, if we get close\n> > to the end of the leaf page, we may need to visit the next leaf page.\n> > But that's kinda the whole point of prefetching - reading stuff ahead,\n> > and reading too far ahead is an inherent risk. Isn't that a problem we\n> > have even without LIMIT? The prefetch distance ramp up is meant to limit\n> > the impact.\n>\n> Right now, the index AM doesn't know anything about LIMIT at all. That\n> doesn't matter, since the index AM can only read/scan one full leaf\n> page before returning control back to the executor proper. The\n> executor proper can just shut down the whole index scan upon finding\n> that we've already returned N tuples for a LIMIT N.\n>\n> We don't do prefetching right now, but we also don't risk reading a\n> leaf page that'll just never be needed. Those two things are in\n> tension, but I don't think that that's quite the same thing as the\n> usual standard prefetching tension/problem. Here there is uncertainty\n> about whether what we're prefetching will *ever* be required -- not\n> uncertainty about when exactly it'll be required. (Perhaps this\n> distinction doesn't mean much to you. I'm just telling you how I think\n> about it, in case it helps move the discussion forward.)\n\nI don't think that the LIMIT problem is too different for index scans\nthan heap scans. We will need some advice from planner to come down to\nprevent over-eager prefetching in all cases.\n\n> Another factor that complicates things here is mark/restore\n> processing. The design for that has the idea of processing one page at\n> a time baked-in. Kinda like with the kill_prior_tuple issue.\n\nYes, I mentioned this in my earlier email. I think we can resolve\nmark/restore by resetting the prefetch and TID queues and restoring\nthe last used heap TID in the index scan descriptor.\n\n> It's certainly possible that you could figure out various workarounds\n> for each of these issues (plus the kill_prior_tuple issue) with a\n> prefetching design that \"de-synchronizes\" the index access and the\n> heap access. 
But it might well be better to extend the existing design\n> in a way that just avoids all these problems in the first place. Maybe\n> \"de-synchronization\" really can pay for itself (because the benefits\n> will outweigh these costs), but if you go that way then I'd really\n> prefer it that way.\n\nForcing each index access to be synchronous and interleaved with each\ntable access seems like an unprincipled design constraint. While it is\ntrue that we rely on that in our current implementation (when using\nnon-MVCC snapshots), it doesn't seem like a principle inherent to\naccessing indexes and tables.\n\n> > > I think that it makes sense to put the index AM in control here --\n> > > that almost follows from what I said about the index AM API. The index\n> > > AM already needs to be in control, in about the same way, to deal with\n> > > kill_prior_tuple (plus it helps with the LIMIT issue I described).\n> > >\n> >\n> > In control how? What would be the control flow - what part would be\n> > managed by the index AM?\n>\n> ISTM that prefetching for an index scan is about the index scan\n> itself, first and foremost. The heap accesses are usually the dominant\n> cost, of course, but sometimes the index leaf page accesses really do\n> make up a significant fraction of the overall cost of the index scan.\n> Especially with an expensive index qual. So if you just assume that\n> the TIDs returned by the index scan are the only thing that matters,\n> you might have a model that's basically correct on average, but is\n> occasionally very wrong. That's one reason for \"putting the index AM\n> in control\".\n\nI don't think the fact that it would also be valuable to do index\nprefetching is a reason not to do prefetching of heap pages. And,\nwhile it is true that were you to add index interior or leaf page\nprefetching, it would impact the heap prefetching, at the end of the\nday, the table AM needs some TID or TID-equivalents that whose blocks\nit can go fetch. The index AM has to produce something that the table\nAM will consume. So, if we add prefetching of heap pages and get the\ntable AM input right, it shouldn't require a full redesign to add\nindex page prefetching later.\n\nYou could argue that my suggestion to have the index AM manage and\npopulate a queue of TIDs for use by the table AM puts the index AM in\ncontrol. I do think having so many members of the IndexScanDescriptor\nwhich imply a one-at-a-time (xs_heaptid, xs_itup, etc) synchronous\ninterplay between fetching an index tuple and fetching a heap tuple is\nconfusing and error prone.\n\n> As I said back in June, we should probably be marrying information\n> from the index scan with information from the heap. This is something\n> that is arguably a modularity violation. But it might just be that you\n> really do need to take information from both places to consistently\n> make the right trade-off.\n\nAgreed that we are going to need to mix information from both places.\n\n> If you take a look at _bt_killitems(), you'll see that it actually has\n> two fairly different strategies for avoiding TID recycling race\n> condition issues, applied in each of two different cases:\n>\n> 1. Cases where we really have held onto a buffer pin, per the index AM\n> API -- the \"inde AM orthodox\" approach. (The aforementioned issue\n> with unlogged indexes exists because with an unlogged index we must\n> use approach 1, per the nbtree README section [1]).\n>\n> 2. 
Cases where we drop the pin as an optimization (also per [1]), and\n> now have to detect the possibility of concurrent modifications by\n> VACUUM (that could have led to concurrent TID recycling). We\n> conservatively do nothing (don't mark any index tuples LP_DEAD),\n> unless the LSN is exactly the same as it was back when the page was\n> scanned/read by _bt_readpage().\n\nRe 2: so the LSN could have been changed by some other process (i.e.\nnot vacuum), so how often in practice is the LSN actually the same as\nwhen the page was scanned/read? Do you think we would catch a\nmeaningful number of kill prior tuple opportunities if we used an LSN\ntracking method like this? Something that let us drop the pin on the\npage would obviously be better.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 14 Feb 2024 16:45:57 -0500",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 4:46 PM Melanie Plageman\n<[email protected]> wrote:\n> So, a pin on the index leaf page is sufficient to keep line pointers\n> from being reused? If we stick to prefetching heap blocks referred to\n> by index tuples in a single index leaf page, and we keep that page\n> pinned, will we still have a problem?\n\nThat's certainly one way of dealing with it. Obviously, there are\nquestions about how you do that in a way that consistently avoids\ncreating new problems.\n\n> I don't think that the LIMIT problem is too different for index scans\n> than heap scans. We will need some advice from planner to come down to\n> prevent over-eager prefetching in all cases.\n\nI think that I'd rather use information at execution time instead, if\nat all possible (perhaps in addition to a hint given by the planner).\nBut it seems a bit premature to discuss this problem now, except to\nsay that it might indeed be a problem.\n\n> > It's certainly possible that you could figure out various workarounds\n> > for each of these issues (plus the kill_prior_tuple issue) with a\n> > prefetching design that \"de-synchronizes\" the index access and the\n> > heap access. But it might well be better to extend the existing design\n> > in a way that just avoids all these problems in the first place. Maybe\n> > \"de-synchronization\" really can pay for itself (because the benefits\n> > will outweigh these costs), but if you go that way then I'd really\n> > prefer it that way.\n>\n> Forcing each index access to be synchronous and interleaved with each\n> table access seems like an unprincipled design constraint. While it is\n> true that we rely on that in our current implementation (when using\n> non-MVCC snapshots), it doesn't seem like a principle inherent to\n> accessing indexes and tables.\n\nThere is nothing sacred about the way plain index scans work right now\n-- especially the part about buffer pins as an interlock.\n\nIf the pin thing really was sacred, then we could never have allowed\nnbtree to selectively opt-out in cases where it's possible to provide\nan equivalent correctness guarantee without holding onto buffer pins,\nwhich, as I went into, is how it actually works in nbtree's\n_bt_killitems() today (see commit 2ed5b87f96 for full details). And so\nin principle I have no problem with the idea of revising the basic\ndefinition of plain index scans -- especially if it's to make the\ndefinition more abstract, without fundamentally changing it (e.g., to\nmake it no longer reference buffer pins, making life easier for\nprefetching, while at the same time still implying the same underlying\nguarantees sufficient to allow nbtree to mostly work the same way as\ntoday).\n\nAll I'm really saying is:\n\n1. The sort of tricks that we can do in nbtree's _bt_killitems() are\nquite useful, and ought to be preserved in something like their\ncurrent form, even when prefetching is in use.\n\nThis seems to push things in the direction of centralizing control of\nthe process in index scan code. For example, it has to understand that\n_bt_killitems() will be called at some regular cadence that is well\ndefined and sensible from an index point of view.\n\n2. Are you sure that the leaf-page-at-a-time thing is such a huge\nhindrance to effective prefetching?\n\nI suppose that it might be much more important than I imagine it is\nright now, but it'd be nice to have something a bit more concrete to\ngo on.\n\n3. 
Even if it is somewhat important, do you really need to get that\npart working in v1?\n\nTomas' original prototype worked with the leaf-page-at-a-time thing,\nand that still seemed like a big improvement to me. While being less\ninvasive, in effect. If we can agree that something like that\nrepresents a useful step in the right direction (not an evolutionary\ndead end), then we can make good incremental progress within a single\nrelease.\n\n> I don't think the fact that it would also be valuable to do index\n> prefetching is a reason not to do prefetching of heap pages. And,\n> while it is true that were you to add index interior or leaf page\n> prefetching, it would impact the heap prefetching, at the end of the\n> day, the table AM needs some TID or TID-equivalents that whose blocks\n> it can go fetch.\n\nI wasn't really thinking of index page prefetching at all. Just the\ncost of applying index quals to read leaf pages that might never\nactually need to be read, due to the presence of a LIMIT. That is kind\nof a new problem created by eagerly reading (without actually\nprefetching) leaf pages.\n\n> You could argue that my suggestion to have the index AM manage and\n> populate a queue of TIDs for use by the table AM puts the index AM in\n> control. I do think having so many members of the IndexScanDescriptor\n> which imply a one-at-a-time (xs_heaptid, xs_itup, etc) synchronous\n> interplay between fetching an index tuple and fetching a heap tuple is\n> confusing and error prone.\n\nBut that's kinda how amgettuple is supposed to work -- cursors need it\nto work that way. Having some kind of general notion of scan order is\nalso important to avoid returning duplicate TIDs to the scan. In\ncontrast, GIN heavily relies on the fact that it only supports bitmap\nscans -- that allows it to not have to reason about returning\nduplicate TIDs (when dealing with a concurrently merged pending list,\nand other stuff like that).\n\nAnd so nbtree (and basically every other index AM that supports plain\nindex scans) kinda pretends to process a single tuple at a time, in\nsome fixed order that's convenient for the scan to work with (that's\nhow the executor thinks of things). In reality these index AMs\nactually process batches consisting of a single leaf page worth of\ntuples.\n\nI don't see how the IndexScanDescData side of things makes life any\nharder for this patch -- ISTM that you'll always need to pretend to\nreturn one tuple at a time from the index scan, regardless of what\nhappens under the hood, with pins and whatnot. The page-at-a-time\nthing is more or less an implementation detail that's private to index\nAMs (albeit in a way that follows certain standard conventions across\nindex AMs) -- it's a leaky abstraction only due to the interactions\nwith VACUUM/TID recycle safety.\n\n> Re 2: so the LSN could have been changed by some other process (i.e.\n> not vacuum), so how often in practice is the LSN actually the same as\n> when the page was scanned/read?\n\nIt seems very hard to make generalizations about that sort of thing.\n\nIt doesn't help that we now have batching logic inside\n_bt_simpledel_pass() that will make up for the problem of not setting\nas many LP_DEAD bits as we could in many important cases. (I recall\nthat that was one factor that allowed the bug that Andres fixed in\ncommit 90c885cd to go undetected for months. 
I recall discussing the\nissue with Andres around that time.)\n\n> Do you think we would catch a\n> meaningful number of kill prior tuple opportunities if we used an LSN\n> tracking method like this? Something that let us drop the pin on the\n> page would obviously be better.\n\nQuite possibly, yes. But it's hard to say for sure without far more\ndetailed analysis. Plus you have problems with things like unlogged\nindexes not having an LSN to use as a canary condition, which makes it\na bit messy (it's already kind of weird that we treat unlogged indexes\ndifferently here IMV).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 14 Feb 2024 18:06:52 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-14 16:45:57 -0500, Melanie Plageman wrote:\n> > > The LIMIT problem is not very clear to me either. Yes, if we get close\n> > > to the end of the leaf page, we may need to visit the next leaf page.\n> > > But that's kinda the whole point of prefetching - reading stuff ahead,\n> > > and reading too far ahead is an inherent risk. Isn't that a problem we\n> > > have even without LIMIT? The prefetch distance ramp up is meant to limit\n> > > the impact.\n> >\n> > Right now, the index AM doesn't know anything about LIMIT at all. That\n> > doesn't matter, since the index AM can only read/scan one full leaf\n> > page before returning control back to the executor proper. The\n> > executor proper can just shut down the whole index scan upon finding\n> > that we've already returned N tuples for a LIMIT N.\n> >\n> > We don't do prefetching right now, but we also don't risk reading a\n> > leaf page that'll just never be needed. Those two things are in\n> > tension, but I don't think that that's quite the same thing as the\n> > usual standard prefetching tension/problem. Here there is uncertainty\n> > about whether what we're prefetching will *ever* be required -- not\n> > uncertainty about when exactly it'll be required. (Perhaps this\n> > distinction doesn't mean much to you. I'm just telling you how I think\n> > about it, in case it helps move the discussion forward.)\n>\n> I don't think that the LIMIT problem is too different for index scans\n> than heap scans. We will need some advice from planner to come down to\n> prevent over-eager prefetching in all cases.\n\nI'm not sure that that's really true. I think the more common and more\nproblematic case for partially executing a sub-tree of a query are nested\nloops (worse because that happens many times within a query). Particularly for\nanti-joins prefetching too aggressively could lead to a significant IO\namplification.\n\nAt the same time it's IMO more important to ramp up prefetching distance\nfairly aggressively for index scans than it is for sequential scans. For\nsequential scans it's quite likely that either the whole scan takes quite a\nwhile (thus slowly ramping doesn't affect overall time that much) or that the\ndata is cached anyway because the tables are small and frequently used (in\nwhich case we don't need to ramp). And even if smaller tables aren't cached,\nbecause it's sequential IO, the IOs are cheaper as they're sequential.\nContrast that to index scans, where it's much more likely that you have cache\nmisses in queries that do an overall fairly small number of IOs and where that\nIO is largely random.\n\nI think we'll need some awareness at ExecInitNode() time about how the results\nof the nodes are used. I see a few \"classes\":\n\n1) All rows are needed, because the node is below an Agg, Hash, Materialize,\n Sort, .... Can be determined purely by the plan shape.\n\n2) All rows are needed, because the node is completely consumed by the\n top-level (i.e. no limit, anti-joins or such inbetween) and the top-level\n wants to run the whole query. Unfortunately I don't think we know this at\n plan time at the moment (it's just determined by what's passed to\n ExecutorRun()).\n\n3) Some rows are needed, but it's hard to know the precise number. E.g. because\n of a LIMIT further up.\n\n4) Only a single row is going to be needed, albeit possibly after filtering on\n the node level. E.g. 
the anti-join case.\n\n\nThere are different times at which we could determine how each node is\nconsumed:\n\na) Determine node consumption \"class\" purely within ExecInit*, via different\n eflags.\n\n Today that couldn't deal with 2), but I think it'd not too hard to modify\n callers that consume query results completely to tell that ExecutorStart(),\n not just ExecutorRun().\n\n A disadvantage would be that this prevents us from taking IO depth into\n account during costing. There very well might be plans that are cheaper\n than others because the plan shape allows more concurrent IO.\n\n\nb) Determine node consumption class at plan time.\n\n This also couldn't deal with 2), but fixing that probably would be harder,\n because we'll often not know at plan time how the query will be\n executed. And in fact the same plan might be executed multiple ways, in\n case of prepared statements.\n\n The obvious advantage is of course that we can influence the choice of\n paths.\n\n\nI suspect we'd eventually want a mix of both. Plan time to be able to\ninfluence plan shape, ExecInit* to deal with not knowing how the query will be\nconsumed at plan time. Which suggests that we could start with whichever is\neasier and extend later.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 14 Feb 2024 15:43:02 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-13 14:54:14 -0500, Peter Geoghegan wrote:\n> This property of index scans is fundamental to how index scans work.\n> Pinning an index page as an interlock against concurrently TID\n> recycling by VACUUM is directly described by the index API docs [1],\n> even (the docs actually use terms like \"buffer pin\" rather than\n> something more abstract sounding). I don't think that anything\n> affecting that behavior should be considered an implementation detail\n> of the nbtree index AM as such (nor any particular index AM).\n\nGiven that the interlock is only needed for non-mvcc scans, that non-mvcc\nscans are rare due to catalog accesses using snapshots these days and that\nmost non-mvcc scans do single-tuple lookups, it might be viable to be more\nrestrictive about prefetching iff non-mvcc snapshots are in use and to use\nmethod of cleanup that allows multiple pages to be cleaned up otherwise.\n\nHowever, I don't think we would necessarily have to relax the IAM pinning\nrules, just to be able to do prefetching of more than one index leaf\npage. Restricting prefetching to entries within a single leaf page obviously\nhas the disadvantage of not being able to benefit from concurrent IO whenever\ncrossing a leaf page boundary, but at the same time processing entries from\njust two leaf pages would often allow for a sufficiently aggressive\nprefetching. Pinning a small number of leaf pages instead of a single leaf\npage shouldn't be a problem.\n\n\nOne argument for loosening the tight coupling between kill_prior_tuples and\nindex scan progress is that the lack of kill_prior_tuples for bitmap scans is\nquite problematic. I've seen numerous production issues with bitmap scans\ncaused by subsequent scans processing a growing set of dead tuples, where\nplain index scans were substantially slower initially but didn't get much\nslower over time. We might be able to design a system where the bitmap\ncontains a certain number of back-references to the index, allowing later\ncleanup if there weren't any page splits or such.\n\n\n\n> I think that it makes sense to put the index AM in control here --\n> that almost follows from what I said about the index AM API. The index\n> AM already needs to be in control, in about the same way, to deal with\n> kill_prior_tuple (plus it helps with the LIMIT issue I described).\n\nDepending on what \"control\" means I'm doubtful:\n\nImo there are decisions influencing prefetching that an index AM shouldn't\nneed to know about directly, e.g. how the plan shape influences how many\ntuples are actually going to be consumed. Of course that determination could\nbe made in planner/executor and handed to IAMs, for the IAM to then \"control\"\nthe prefetching.\n\nAnother aspect is that *long* term I think we want to be able to execute\ndifferent parts of the plan tree when one part is blocked for IO. Of course\nthat's not always possible. But particularly with partitioned queries it often\nis. Depending on the form of \"control\" that's harder if IAMs are in control,\nbecause control flow needs to return to the executor to be able to switch to a\ndifferent node, so we can't wait for IO inside the AM.\n\nThere probably are ways IAMs could be in \"control\" that would be compatible\nwith such constraints however.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 14 Feb 2024 16:28:51 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 7:28 PM Andres Freund <[email protected]> wrote:\n> On 2024-02-13 14:54:14 -0500, Peter Geoghegan wrote:\n> > This property of index scans is fundamental to how index scans work.\n> > Pinning an index page as an interlock against concurrently TID\n> > recycling by VACUUM is directly described by the index API docs [1],\n> > even (the docs actually use terms like \"buffer pin\" rather than\n> > something more abstract sounding). I don't think that anything\n> > affecting that behavior should be considered an implementation detail\n> > of the nbtree index AM as such (nor any particular index AM).\n>\n> Given that the interlock is only needed for non-mvcc scans, that non-mvcc\n> scans are rare due to catalog accesses using snapshots these days and that\n> most non-mvcc scans do single-tuple lookups, it might be viable to be more\n> restrictive about prefetching iff non-mvcc snapshots are in use and to use\n> method of cleanup that allows multiple pages to be cleaned up otherwise.\n\nI agree, but don't think that it matters all that much.\n\nIf you have an MVCC snapshot, that doesn't mean that TID recycle\nsafety problems automatically go away. It only means that you have one\nknown and supported alternative approach to dealing with such\nproblems. It's not like you just get that for free, just by using an\nMVCC snapshot, though -- it has downsides. Downsides such as the\ncurrent _bt_killitems() behavior with a concurrently-modified leaf\npage (modified when we didn't hold a leaf page pin). It'll just give\nup on setting any LP_DEAD bits due to noticing that the leaf page's\nLSN changed. (Plus there are implementation restrictions that I won't\nrepeat again now.)\n\nWhen I refer to the buffer pin interlock, I'm mostly referring to the\ngeneral need for something like that in the context of index scans.\nPrincipally in order to make kill_prior_tuple continue to work in\nsomething more or less like its current form.\n\n> However, I don't think we would necessarily have to relax the IAM pinning\n> rules, just to be able to do prefetching of more than one index leaf\n> page.\n\nTo be clear, we already do relax the IAM pinning rules. Or at least\nnbtree selectively opts out, as I've gone into already.\n\n> Restricting prefetching to entries within a single leaf page obviously\n> has the disadvantage of not being able to benefit from concurrent IO whenever\n> crossing a leaf page boundary, but at the same time processing entries from\n> just two leaf pages would often allow for a sufficiently aggressive\n> prefetching. Pinning a small number of leaf pages instead of a single leaf\n> page shouldn't be a problem.\n\nYou're probably right. I just don't see any need to solve that problem in v1.\n\n> One argument for loosening the tight coupling between kill_prior_tuples and\n> index scan progress is that the lack of kill_prior_tuples for bitmap scans is\n> quite problematic. I've seen numerous production issues with bitmap scans\n> caused by subsequent scans processing a growing set of dead tuples, where\n> plain index scans were substantially slower initially but didn't get much\n> slower over time.\n\nI've seen production issues like that too. 
No doubt it's a problem.\n\n> We might be able to design a system where the bitmap\n> contains a certain number of back-references to the index, allowing later\n> cleanup if there weren't any page splits or such.\n\nThat does seem possible, but do you really want a design for index\nprefetching that relies on that massive enhancement (a total redesign\nof kill_prior_tuple) happening at some point in the not-too-distant\nfuture? Seems risky, from a project management point of view.\n\nThis back-references idea seems rather complicated, especially if it\nneeds to work with very large bitmap index scans. Since you'll still\nhave the basic problem of TID recycle safety to deal with (even with\nan MVCC snapshot), you don't just have to revisit the leaf pages. You\nalso have to revisit the corresponding heap pages (generally they'll\nbe a lot more numerous than leaf pages). You'll have traded one\nproblem for another (which is not to say that it's not a good\ntrade-off).\n\nRight now the executor uses a amgettuple interface, and knows nothing\nabout index related costs (e.g., pages accessed in any index, index\nqual costs). While the index AM has some limited understanding of heap\naccess costs. So the index AM kinda knows a small bit about both types\nof costs (possibly not enough, but something). That informs the\nlanguage I'm using to describe all this.\n\nTo do something like your \"back-references to the index\" thing well, I\nthink that you need more dynamic behavior around when you visit the\nheap to get heap tuples pointed to by TIDs from index pages (i.e.\ndynamic behavior that determines how many leaf pages to go before\ngoing to the heap to get pointed-to TIDs). That is basically what I\nmeant by \"put the index AM in control\" -- it doesn't *strictly*\nrequire that the index AM actually do that. Just that a single piece\nof code has to have access to the full context, in order to make the\nright trade-offs around how both index and heap accesses are\nscheduled.\n\n> > I think that it makes sense to put the index AM in control here --\n> > that almost follows from what I said about the index AM API. The index\n> > AM already needs to be in control, in about the same way, to deal with\n> > kill_prior_tuple (plus it helps with the LIMIT issue I described).\n>\n> Depending on what \"control\" means I'm doubtful:\n>\n> Imo there are decisions influencing prefetching that an index AM shouldn't\n> need to know about directly, e.g. how the plan shape influences how many\n> tuples are actually going to be consumed. Of course that determination could\n> be made in planner/executor and handed to IAMs, for the IAM to then \"control\"\n> the prefetching.\n\nI agree with all this.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 14 Feb 2024 20:37:54 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 7:43 PM Tomas Vondra\n<[email protected]> wrote:\n> I don't think it's just a bookkeeping problem. In a way, nbtree already\n> does keep an array of tuples to kill (see btgettuple), but it's always\n> for the current index page. So it's not that we immediately go and kill\n> the prior tuple - nbtree already stashes it in an array, and kills all\n> those tuples when moving to the next index page.\n>\n> The way I understand the problem is that with prefetching we're bound to\n> determine the kill_prior_tuple flag with a delay, in which case we might\n> have already moved to the next index page ...\n\nWell... I'm not clear on all of the details of how this works, but\nthis sounds broken to me, for the reasons that Peter G. mentions in\nhis comments about desynchronization. If we currently have a rule that\nyou hold a pin on the index page while processing the heap tuples it\nreferences, you can't just throw that out the window and expect things\nto keep working. Saying that kill_prior_tuple doesn't work when you\nthrow that rule out the window is probably understating the extent of\nthe problem very considerably.\n\nI would have thought that the way this prefetching would work is that\nwe would bring pages into shared_buffers sooner than we currently do,\nbut not actually pin them until we're ready to use them, so that it's\npossible they might be evicted again before we get around to them, if\nwe prefetch too far and the system is too busy. Alternately, it also\nseems OK to read those later pages and pin them right away, as long as\n(1) we don't also give up pins that we would have held in the absence\nof prefetching and (2) we have some mechanism for limiting the number\nof extra pins that we're holding to a reasonable number given the size\nof shared_buffers.\n\nHowever, it doesn't seem OK at all to give up pins that the current\ncode holds sooner than the current code would do.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 15 Feb 2024 09:59:27 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-15 09:59:27 +0530, Robert Haas wrote:\n> I would have thought that the way this prefetching would work is that\n> we would bring pages into shared_buffers sooner than we currently do,\n> but not actually pin them until we're ready to use them, so that it's\n> possible they might be evicted again before we get around to them, if\n> we prefetch too far and the system is too busy.\n\nThe issue here is that we need to read index leaf pages (synchronously for\nnow!) to get the tids to do readahead of table data. What you describe is done\nfor the table data (IMO not a good idea medium term [1]), but the problem at\nhand is that once we've done readahead for all the tids on one index page, we\ncan't do more readahead without looking at the next index leaf page.\n\nObviously that would lead to a sawtooth like IO pattern, where you'd regularly\nhave to wait for IO for the first tuples referenced by an index leaf page.\n\nHowever, if we want to issue table readahead for tids on the neighboring index\nleaf page, we'll - as the patch stands - not hold a pin on the \"current\" index\nleaf page. Which makes index prefetching as currently implemented incompatible\nwith kill_prior_tuple, as that requires the index leaf page pin being held.\n\n\n> Alternately, it also seems OK to read those later pages and pin them right\n> away, as long as (1) we don't also give up pins that we would have held in\n> the absence of prefetching and (2) we have some mechanism for limiting the\n> number of extra pins that we're holding to a reasonable number given the\n> size of shared_buffers.\n\nFWIW, there's already some logic for (2) in LimitAdditionalPins(). Currently\nused to limit how many buffers a backend may pin for bulk relation extension.\n\nGreetings,\n\nAndres Freund\n\n\n[1] The main reasons that I think that just doing readahead without keeping a\npin is a bad idea, at least medium term, are:\n\na) To do AIO you need to hold a pin on the page while the IO is in progress,\nas the target buffer contents will be modified at some moment you don't\ncontrol, so that buffer should better not be replaced while IO is in\nprogress. So at the very least you need to hold a pin until the IO is over.\n\nb) If you do not keep a pin until you actually use the page, you need to\neither do another buffer lookup (expensive!) or you need to remember the\nbuffer id and revalidate that it's still pointing to the same block (cheaper,\nbut still not cheap). That's not just bad because it's slow in an absolute\nsense, more importantly it increases the potential performance downside of\ndoing readahead for fully cached workloads, because you don't gain anything,\nbut pay the price of two lookups/revalidation.\n\nNote that these reasons really just apply to cases where we read ahead because\nwe are quite certain we'll need exactly those blocks (leaving errors or\nqueries ending early aside), not for \"heuristic\" prefetching. If we e.g. were\nto issue prefetch requests for neighboring index pages while descending during\nan ordered index scan, without checking that we'll need those, it'd make sense\nto just do a \"throway\" prefetch request.\n\n\n",
"msg_date": "Wed, 14 Feb 2024 21:03:11 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Thu, Feb 15, 2024 at 10:33 AM Andres Freund <[email protected]> wrote:\n> The issue here is that we need to read index leaf pages (synchronously for\n> now!) to get the tids to do readahead of table data. What you describe is done\n> for the table data (IMO not a good idea medium term [1]), but the problem at\n> hand is that once we've done readahead for all the tids on one index page, we\n> can't do more readahead without looking at the next index leaf page.\n\nOh, right.\n\n> However, if we want to issue table readahead for tids on the neighboring index\n> leaf page, we'll - as the patch stands - not hold a pin on the \"current\" index\n> leaf page. Which makes index prefetching as currently implemented incompatible\n> with kill_prior_tuple, as that requires the index leaf page pin being held.\n\nBut I think it probably also breaks MVCC, as Peter was saying.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 15 Feb 2024 10:35:13 +0530",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On 2/15/24 00:06, Peter Geoghegan wrote:\n> On Wed, Feb 14, 2024 at 4:46 PM Melanie Plageman\n> <[email protected]> wrote:\n>\n>> ...\n> \n> 2. Are you sure that the leaf-page-at-a-time thing is such a huge\n> hindrance to effective prefetching?\n> \n> I suppose that it might be much more important than I imagine it is\n> right now, but it'd be nice to have something a bit more concrete to\n> go on.\n> \n\nThis probably depends on which corner cases are considered important.\n\nThe page-at-a-time approach essentially means index items at the\nbeginning of the page won't get prefetched (or vice versa, prefetch\ndistance drops to 0 when we get to end of index page).\n\nThat may be acceptable, considering we can usually fit 200+ index items\non a single page. Even then it limits what effective_io_concurrency\nvalues are sensible, but in my experience quickly diminish past ~32.\n\n\n> 3. Even if it is somewhat important, do you really need to get that\n> part working in v1?\n> \n> Tomas' original prototype worked with the leaf-page-at-a-time thing,\n> and that still seemed like a big improvement to me. While being less\n> invasive, in effect. If we can agree that something like that\n> represents a useful step in the right direction (not an evolutionary\n> dead end), then we can make good incremental progress within a single\n> release.\n> \n\nIt certainly was a great improvement, no doubt about that. I dislike the\nrestriction, but that's partially for aesthetic reasons - it just seems\nit'd be nice to not have this.\n\nThat being said, I'd be OK with having this restriction if it makes v1\nfeasible. For me, the big question is whether it'd mean we're stuck with\nthis restriction forever, or whether there's a viable way to improve\nthis in v2.\n\nAnd I don't have answer to that :-( I got completely lost in the ongoing\ndiscussion about the locking implications (which I happily ignored while\nworking on the PoC patch), layering tensions and questions which part\nshould be \"in control\".\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 15 Feb 2024 15:36:19 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Thu, Feb 15, 2024 at 9:36 AM Tomas Vondra\n<[email protected]> wrote:\n> On 2/15/24 00:06, Peter Geoghegan wrote:\n> > I suppose that it might be much more important than I imagine it is\n> > right now, but it'd be nice to have something a bit more concrete to\n> > go on.\n> >\n>\n> This probably depends on which corner cases are considered important.\n>\n> The page-at-a-time approach essentially means index items at the\n> beginning of the page won't get prefetched (or vice versa, prefetch\n> distance drops to 0 when we get to end of index page).\n\nI don't think that's true. At least not for nbtree scans.\n\nAs I went into last year, you'd get the benefit of the work I've done\non \"boundary cases\" (most recently in commit c9c0589f from just a\ncouple of months back), which helps us get the most out of suffix\ntruncation. This maximizes the chances of only having to scan a single\nindex leaf page in many important cases. So I can see no reason why\nindex items at the beginning of the page are at any particular\ndisadvantage (compared to those from the middle or the end of the\npage).\n\nWhere you might have a problem is cases where it's just inherently\nnecessary to visit more than a single leaf page, despite the best\nefforts of the nbtsplitloc.c logic -- cases where the scan just\ninherently needs to return tuples that \"straddle the boundary between\ntwo neighboring pages\". That isn't a particularly natural restriction,\nbut it's also not obvious that it's all that much of a disadvantage in\npractice.\n\n> It certainly was a great improvement, no doubt about that. I dislike the\n> restriction, but that's partially for aesthetic reasons - it just seems\n> it'd be nice to not have this.\n>\n> That being said, I'd be OK with having this restriction if it makes v1\n> feasible. For me, the big question is whether it'd mean we're stuck with\n> this restriction forever, or whether there's a viable way to improve\n> this in v2.\n\nI think that there is no question that this will need to not\ncompletely disable kill_prior_tuple -- I'd be surprised if one single\nperson disagreed with me on this point. There is also a more nuanced\nway of describing this same restriction, but we don't necessarily need\nto agree on what exactly that is right now.\n\n> And I don't have answer to that :-( I got completely lost in the ongoing\n> discussion about the locking implications (which I happily ignored while\n> working on the PoC patch), layering tensions and questions which part\n> should be \"in control\".\n\nHonestly, I always thought that it made sense to do things on the\nindex AM side. When you went the other way I was surprised. Perhaps I\nshould have said more about that, sooner, but I'd already said quite a\nbit at that point, so...\n\nAnyway, I think that it's pretty clear that \"naive desynchronization\"\nis just not acceptable, because that'll disable kill_prior_tuple\naltogether. So you're going to have to do this in a way that more or\nless preserves something like the current kill_prior_tuple behavior.\nIt's going to have some downsides, but those can be managed. They can\nbe managed from within the index AM itself, a bit like the\n_bt_killitems() no-pin stuff does things already.\n\nObviously this interpretation suggests that doing things at the index\nAM level is indeed the right way to go, layering-wise. Does it make\nsense to you, though?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 15 Feb 2024 11:42:07 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "\n\nOn 2/15/24 17:42, Peter Geoghegan wrote:\n> On Thu, Feb 15, 2024 at 9:36 AM Tomas Vondra\n> <[email protected]> wrote:\n>> On 2/15/24 00:06, Peter Geoghegan wrote:\n>>> I suppose that it might be much more important than I imagine it is\n>>> right now, but it'd be nice to have something a bit more concrete to\n>>> go on.\n>>>\n>>\n>> This probably depends on which corner cases are considered important.\n>>\n>> The page-at-a-time approach essentially means index items at the\n>> beginning of the page won't get prefetched (or vice versa, prefetch\n>> distance drops to 0 when we get to end of index page).\n> \n> I don't think that's true. At least not for nbtree scans.\n> \n> As I went into last year, you'd get the benefit of the work I've done\n> on \"boundary cases\" (most recently in commit c9c0589f from just a\n> couple of months back), which helps us get the most out of suffix\n> truncation. This maximizes the chances of only having to scan a single\n> index leaf page in many important cases. So I can see no reason why\n> index items at the beginning of the page are at any particular\n> disadvantage (compared to those from the middle or the end of the\n> page).\n> \n\nI may be missing something, but it seems fairly self-evident to me an\nentry at the beginning of an index page won't get prefetched (assuming\nthe page-at-a-time thing).\n\nIf I understand your point about boundary cases / suffix truncation,\nthat helps us by (a) picking the split in a way to minimize a single key\nspanning multiple pages, if possible and (b) increasing the number of\nentries that fit onto a single index page.\n\nThat's certainly true / helpful, and it makes the \"first entry\" issue\nmuch less common. But the issue is still there. Of course, this says\nnothing about the importance of the issue - the impact may easily be so\nsmall it's not worth worrying about.\n\n> Where you might have a problem is cases where it's just inherently\n> necessary to visit more than a single leaf page, despite the best\n> efforts of the nbtsplitloc.c logic -- cases where the scan just\n> inherently needs to return tuples that \"straddle the boundary between\n> two neighboring pages\". That isn't a particularly natural restriction,\n> but it's also not obvious that it's all that much of a disadvantage in\n> practice.\n> \n\nOne case I've been thinking about is sorting using index, where we often\nread large part of the index.\n\n>> It certainly was a great improvement, no doubt about that. I dislike the\n>> restriction, but that's partially for aesthetic reasons - it just seems\n>> it'd be nice to not have this.\n>>\n>> That being said, I'd be OK with having this restriction if it makes v1\n>> feasible. For me, the big question is whether it'd mean we're stuck with\n>> this restriction forever, or whether there's a viable way to improve\n>> this in v2.\n> \n> I think that there is no question that this will need to not\n> completely disable kill_prior_tuple -- I'd be surprised if one single\n> person disagreed with me on this point. There is also a more nuanced\n> way of describing this same restriction, but we don't necessarily need\n> to agree on what exactly that is right now.\n> \n\nEven for the page-at-a-time approach? 
Or are you talking about the v2?\n\n>> And I don't have answer to that :-( I got completely lost in the ongoing\n>> discussion about the locking implications (which I happily ignored while\n>> working on the PoC patch), layering tensions and questions which part\n>> should be \"in control\".\n> \n> Honestly, I always thought that it made sense to do things on the\n> index AM side. When you went the other way I was surprised. Perhaps I\n> should have said more about that, sooner, but I'd already said quite a\n> bit at that point, so...\n> \n> Anyway, I think that it's pretty clear that \"naive desynchronization\"\n> is just not acceptable, because that'll disable kill_prior_tuple\n> altogether. So you're going to have to do this in a way that more or\n> less preserves something like the current kill_prior_tuple behavior.\n> It's going to have some downsides, but those can be managed. They can\n> be managed from within the index AM itself, a bit like the\n> _bt_killitems() no-pin stuff does things already.\n> \n> Obviously this interpretation suggests that doing things at the index\n> AM level is indeed the right way to go, layering-wise. Does it make\n> sense to you, though?\n> \n\nYeah. The basic idea was that by moving this above index AM it will work\nfor all indexes automatically - but given the current discussion about\nkill_prior_tuple, locking etc. I'm not sure that's really feasible.\n\nThe index AM clearly needs to have more control over this.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 15 Feb 2024 18:26:09 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Thu, Feb 15, 2024 at 12:26 PM Tomas Vondra\n<[email protected]> wrote:\n> I may be missing something, but it seems fairly self-evident to me an\n> entry at the beginning of an index page won't get prefetched (assuming\n> the page-at-a-time thing).\n\nSure, if the first item on the page is also the first item that we\nneed the scan to return (having just descended the tree), then it\nwon't get prefetched under a scheme that sticks with the current\npage-at-a-time behavior (at least in v1). Just like when the first\nitem that we need the scan to return is from the middle of the page,\nor more towards the end of the page.\n\nIt is of course also true that we can't prefetch the next page's\nfirst item until we actually visit the next page -- clearly that's\nsuboptimal. Just like we can't prefetch any other, later tuples from\nthe next page (until such time as we have determined for sure that\nthere really will be a next page, and have called _bt_readpage for\nthat next page.)\n\nThis is why I don't think that the tuples with lower page offset\nnumbers are in any way significant here. The significant part is\nwhether or not you'll actually need to visit more than one leaf page\nin the first place (plus the penalty from not being able to reorder\nthe work across page boundaries in your initial v1 of prefetching).\n\n> If I understand your point about boundary cases / suffix truncation,\n> that helps us by (a) picking the split in a way to minimize a single key\n> spanning multiple pages, if possible and (b) increasing the number of\n> entries that fit onto a single index page.\n\nMore like it makes the boundaries between leaf pages (i.e. high keys)\nalign with the \"natural boundaries of the key space\". Simple point\nqueries should practically never require more than a single leaf page\naccess as a result. Even somewhat complicated index scans that are\nreasonably selective (think tens to low hundreds of matches) don't\ntend to need to read more than a single leaf page match, at least with\nequality type scan keys for the index qual.\n\n> That's certainly true / helpful, and it makes the \"first entry\" issue\n> much less common. But the issue is still there. Of course, this says\n> nothing about the importance of the issue - the impact may easily be so\n> small it's not worth worrying about.\n\nRight. And I want to be clear: I'm really *not* sure how much it\nmatters. I just doubt that it's worth worrying about in v1 -- time\ngrows short. Although I agree that we should commit a v1 that leaves\nthe door open to improving matters in this area in v2.\n\n> One case I've been thinking about is sorting using index, where we often\n> read large part of the index.\n\nThat definitely seems like a case where reordering\nwork/desynchronization of the heap and index scans might be relatively\nimportant.\n\n> > I think that there is no question that this will need to not\n> > completely disable kill_prior_tuple -- I'd be surprised if one single\n> > person disagreed with me on this point. There is also a more nuanced\n> > way of describing this same restriction, but we don't necessarily need\n> > to agree on what exactly that is right now.\n> >\n>\n> Even for the page-at-a-time approach? Or are you talking about the v2?\n\nI meant that the current kill_prior_tuple behavior isn't sacred, and\ncan be revised in v2, for the benefit of lifting the restriction on\nprefetching. But that's going to involve a trade-off of some kind. And\nnot a particularly simple one.\n\n> Yeah. 
The basic idea was that by moving this above index AM it will work\n> for all indexes automatically - but given the current discussion about\n> kill_prior_tuple, locking etc. I'm not sure that's really feasible.\n>\n> The index AM clearly needs to have more control over this.\n\nCool. I think that that makes the layering question a lot clearer, then.\n\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 15 Feb 2024 12:53:10 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-15 12:53:10 -0500, Peter Geoghegan wrote:\n> On Thu, Feb 15, 2024 at 12:26 PM Tomas Vondra\n> <[email protected]> wrote:\n> > I may be missing something, but it seems fairly self-evident to me an\n> > entry at the beginning of an index page won't get prefetched (assuming\n> > the page-at-a-time thing).\n> \n> Sure, if the first item on the page is also the first item that we\n> need the scan to return (having just descended the tree), then it\n> won't get prefetched under a scheme that sticks with the current\n> page-at-a-time behavior (at least in v1). Just like when the first\n> item that we need the scan to return is from the middle of the page,\n> or more towards the end of the page.\n> \n> It is of course also true that we can't prefetch the next page's\n> first item until we actually visit the next page -- clearly that's\n> suboptimal. Just like we can't prefetch any other, later tuples from\n> the next page (until such time as we have determined for sure that\n> there really will be a next page, and have called _bt_readpage for\n> that next page.)\n>\n> This is why I don't think that the tuples with lower page offset\n> numbers are in any way significant here. The significant part is\n> whether or not you'll actually need to visit more than one leaf page\n> in the first place (plus the penalty from not being able to reorder\n> the work across page boundaries in your initial v1 of prefetching).\n\nTo me this your phrasing just seems to reformulate the issue.\n\nIn practical terms you'll have to wait for the full IO latency when fetching\nthe table tuple corresponding to the first tid on a leaf page. Of course\nthat's also the moment you had to visit another leaf page. Whether the stall\nis due to visit another leaf page or due to processing the first entry on such\na leaf page is a distinction without a difference.\n\n\n> > That's certainly true / helpful, and it makes the \"first entry\" issue\n> > much less common. But the issue is still there. Of course, this says\n> > nothing about the importance of the issue - the impact may easily be so\n> > small it's not worth worrying about.\n> \n> Right. And I want to be clear: I'm really *not* sure how much it\n> matters. I just doubt that it's worth worrying about in v1 -- time\n> grows short. Although I agree that we should commit a v1 that leaves\n> the door open to improving matters in this area in v2.\n\nI somewhat doubt that it's realistic to aim for 17 at this point. We seem to\nstill be doing fairly fundamental architectual work. I think it might be the\nright thing even for 18 to go for the simpler only-a-single-leaf-page\napproach though.\n\nI wonder if there are prerequisites that can be tackled for 17. One idea is to\nwork on infrastructure to provide executor nodes with information about the\nnumber of tuples likely to be fetched - I suspect we'll trigger regressions\nwithout that in place.\n\n\n\nOne way to *sometimes* process more than a single leaf page, without having to\nredesign kill_prior_tuple, would be to use the visibilitymap to check if the\ntarget pages are all-visible. If all the table pages on a leaf page are\nall-visible, we know that we don't need to kill index entries, and thus can\nmove on to the next leaf page\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 15 Feb 2024 12:13:37 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Thu, Feb 15, 2024 at 3:13 PM Andres Freund <[email protected]> wrote:\n> > This is why I don't think that the tuples with lower page offset\n> > numbers are in any way significant here. The significant part is\n> > whether or not you'll actually need to visit more than one leaf page\n> > in the first place (plus the penalty from not being able to reorder\n> > the work across page boundaries in your initial v1 of prefetching).\n>\n> To me this your phrasing just seems to reformulate the issue.\n\nWhat I said to Tomas seems very obvious to me. I think that there\nmight have been some kind of miscommunication (not a real\ndisagreement). I was just trying to work through that.\n\n> In practical terms you'll have to wait for the full IO latency when fetching\n> the table tuple corresponding to the first tid on a leaf page. Of course\n> that's also the moment you had to visit another leaf page. Whether the stall\n> is due to visit another leaf page or due to processing the first entry on such\n> a leaf page is a distinction without a difference.\n\nI don't think anybody said otherwise?\n\n> > > That's certainly true / helpful, and it makes the \"first entry\" issue\n> > > much less common. But the issue is still there. Of course, this says\n> > > nothing about the importance of the issue - the impact may easily be so\n> > > small it's not worth worrying about.\n> >\n> > Right. And I want to be clear: I'm really *not* sure how much it\n> > matters. I just doubt that it's worth worrying about in v1 -- time\n> > grows short. Although I agree that we should commit a v1 that leaves\n> > the door open to improving matters in this area in v2.\n>\n> I somewhat doubt that it's realistic to aim for 17 at this point.\n\nThat's a fair point. Tomas?\n\n> We seem to\n> still be doing fairly fundamental architectual work. I think it might be the\n> right thing even for 18 to go for the simpler only-a-single-leaf-page\n> approach though.\n\nI definitely think it's a good idea to have that as a fall back\noption. And to not commit ourselves to having something better than\nthat for v1 (though we probably should commit to making that possible\nin v2).\n\n> I wonder if there are prerequisites that can be tackled for 17. One idea is to\n> work on infrastructure to provide executor nodes with information about the\n> number of tuples likely to be fetched - I suspect we'll trigger regressions\n> without that in place.\n\nI don't think that there'll be regressions if we just take the simpler\nonly-a-single-leaf-page approach. At least it seems much less likely.\n\n> One way to *sometimes* process more than a single leaf page, without having to\n> redesign kill_prior_tuple, would be to use the visibilitymap to check if the\n> target pages are all-visible. If all the table pages on a leaf page are\n> all-visible, we know that we don't need to kill index entries, and thus can\n> move on to the next leaf page\n\nIt's possible that we'll need a variety of different strategies.\nnbtree already has two such strategies in _bt_killitems(), in a way.\nThough its \"Modified while not pinned means hinting is not safe\" path\n(LSN doesn't match canary value path) seems pretty naive. The\nprefetching stuff might present us with a good opportunity to replace\nthat with something fundamentally better.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 15 Feb 2024 15:30:06 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Wed, Jan 24, 2024 at 7:13 PM Tomas Vondra\n<[email protected]> wrote:\n[\n>\n> (1) Melanie actually presented a very different way to implement this,\n> relying on the StreamingRead API. So chances are this struct won't\n> actually be used.\n\nGiven lots of effort already spent on this and the fact that is thread\nis actually two:\n\na. index/table prefetching since Jun 2023 till ~Jan 2024\nb. afterwards index/table prefetching with Streaming API, but there\nare some doubts of whether it could happen for v17 [1]\n\n... it would be pitty to not take benefits of such work (even if\nStreaming API wouldn't be ready for this; although there's lots of\nmovement in the area), so I've played a little with with the earlier\nimplementation from [2] without streaming API as it already received\nfeedback, it demonstrated big benefits, and earlier it got attention\non pgcon unconference. Perhaps, some of those comment might be passed\nlater to the \"b\"-patch (once that's feasible):\n\n1. v20240124-0001-Prefetch-heap-pages-during-index-scans.patch does\nnot apply cleanly anymore, due show_buffer_usage() being quite\nrecently refactored in 5de890e3610d5a12cdaea36413d967cf5c544e20 :\n\npatching file src/backend/commands/explain.c\nHunk #1 FAILED at 3568.\nHunk #2 FAILED at 3679.\n2 out of 2 hunks FAILED -- saving rejects to file\nsrc/backend/commands/explain.c.rej\n\n2. v2 applies (fixup), but it would nice to see that integrated into\nmain patch (it adds IndexOnlyPrefetchInfo) into one patch\n\n3. execMain.c :\n\n + * XXX It might be possible to improve the prefetching code\nto handle this\n + * by \"walking back\" the TID queue, but it's not clear if\nit's worth it.\n\nShouldn't we just remove the XXX? The walking-back seems to be niche\nso are fetches using cursors when looking at real world users queries\n? (support cases bias here when looking at peopel's pg_stat_activity)\n\n4. Wouldn't it be better to leave PREFETCH_LRU_SIZE at static of 8,\nbut base PREFETCH_LRU_COUNT on effective_io_concurrency instead?\n(allowing it to follow dynamically; the more prefetches the user wants\nto perform, the more you spread them across shared LRUs and the more\nmemory for history is required?)\n\n + * XXX Maybe we could consider effective_cache_size when sizing the cache?\n + * Not to size the cache for that, ofc, but maybe as a guidance of how many\n + * heap pages it might keep. Maybe just a fraction fraction of the value,\n + * say Max(8MB, effective_cache_size / max_connections) or something.\n + */\n +#define PREFETCH_LRU_SIZE 8 /* slots in one LRU */\n +#define PREFETCH_LRU_COUNT 128 /* number of LRUs */\n +#define PREFETCH_CACHE_SIZE (PREFETCH_LRU_SIZE *\nPREFETCH_LRU_COUNT)\n\nBTW:\n + * heap pages it might keep. Maybe just a fraction fraction of the value,\nthat's a duplicated \"fraction\" word over there.\n\n5.\n + * XXX Could it be harmful that we read the queue backwards?\nMaybe memory\n + * prefetching works better for the forward direction?\n\nI wouldn't care, we are optimizing I/O (and context-switching) which\nweighs much more than memory access direction impact and Dilipi\nearlier also expressed no concern, so maybe it could be also removed\n(one less \"XXX\" to care about)\n\n6. in IndexPrefetchFillQueue()\n\n + while (!PREFETCH_QUEUE_FULL(prefetch))\n + {\n + IndexPrefetchEntry *entry\n + = prefetch->next_cb(scan, direction, prefetch->data);\n\nIf we are at it... that's a strange split and assignment not indented :^)\n\n7. 
in IndexPrefetchComputeTarget()\n\n + * XXX We cap the target to plan_rows, becausse it's pointless to prefetch\n + * more than we expect to use.\n\nThat's a nice fact that's already in patch, so XXX isn't needed?\n\n8.\n + * XXX Maybe we should reduce the value with parallel workers?\n\nI was assuming it could be a good idea, but the same doesn't seem\n(eic/actual_parallel_works_per_gather) to be performed for bitmap heap\nscan prefetches, so no?\n\n9.\n + /*\n + * No prefetching for direct I/O.\n + *\n + * XXX Shouldn't we do prefetching even for direct I/O? We would only\n + * pretend doing it now, ofc, because we'd not do posix_fadvise(), but\n + * once the code starts loading into shared buffers, that'd work.\n + */\n + if ((io_direct_flags & IO_DIRECT_DATA) != 0)\n + return 0;\n\nIt's redundant (?) and could be removed as\nPrefetchBuffer()->PrefetchSharedBuffer() already has this at line 571:\n\n 5 #ifdef USE_PREFETCH\n 4 │ │ /*\n 3 │ │ │* Try to initiate an asynchronous read. This\nreturns false in\n 2 │ │ │* recovery if the relation file doesn't exist.\n 1 │ │ │*/\n 571 │ │ if ((io_direct_flags & IO_DIRECT_DATA) == 0 &&\n 1 │ │ │ smgrprefetch(smgr_reln, forkNum, blockNum, 1))\n 2 │ │ {\n 3 │ │ │ result.initiated_io = true;\n 4 │ │ }\n 5 #endif> > > > > > > /* USE_PREFETCH */\n\n11. in IndexPrefetchStats() and ExecReScanIndexScan()\n\n + * FIXME Should be only in debug builds, or something like that.\n\n + /* XXX Print some debug stats. Should be removed. */\n + IndexPrefetchStats(indexScanDesc, node->iss_prefetch);\n\nHmm, but it could be useful in tuning the real world systems, no? E.g.\nrecovery prefetcher gives some info through pg_stat_recovery_prefetch\nview, but e.g. bitmap heap scans do not provide us with anything at\nall. I don't have a strong opinion. Exposing such stuff would take\naway your main doubt (XXX) from execPrefetch.c\n``auto-tuning/self-adjustment\". And if we are at it, we could think in\nfar future about adding new session GUC track_cachestat or EXPLAIN\n(cachestat/prefetch, analyze) (this new syscall for Linux >= 6.5)\nwhere we could present both index stats (as what IndexPrefetchStats()\ndoes) *and* cachestat() results there for interested users. Of course\nit would have to be generic enough for the bitmap heap scan case too.\nSuch insight would also allow fine tuning eic, PREFETCH_LRU_COUNT,\nPREFETCH_QUEUE_HISTORY. Just an idea.\n\n12.\n\n + * XXX Maybe we should reduce the target in case this is\na parallel index\n + * scan. We don't want to issue a multiple of\neffective_io_concurrency.\n\nin IndexOnlyPrefetchCleanup() and IndexNext()\n\n+ * XXX Maybe we should reduce the value with parallel workers?\n\nIt's redundant XXX-comment (there are two for the same), as you it was\nalready there just before IndexPrefetchComputeTarget()\n\n13. The previous bitmap prefetch code uses #ifdef USE_PREFETCH, maybe\nit would make some sense to follow the consistency pattern , to avoid\nadding implementation on platforms without prefetching ?\n\n14. The patch is missing documentation, so how about just this?\n\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -2527,7 +2527,8 @@ include_dir 'conf.d'\n operations that any individual\n<productname>PostgreSQL</productname> session\n attempts to initiate in parallel. The allowed range is 1 to 1000,\n or zero to disable issuance of asynchronous I/O requests. 
Currently,\n- this setting only affects bitmap heap scans.\n+ this setting only enables prefetching for HEAP data blocks\nwhen performing\n+ bitmap heap scans and index (only) scans.\n </para>\n\nSome further tests, given data:\n\nCREATE TABLE test (id bigint, val bigint, str text);\nALTER TABLE test ALTER COLUMN str SET STORAGE EXTERNAL;\nINSERT INTO test SELECT g, g, repeat(chr(65 + (10*random())::int),\n3000) FROM generate_series(1, 10000) g;\n-- or INSERT INTO test SELECT x.r, x.r, repeat(chr(65 +\n(10*random())::int), 3000) from (select 10000 * random() as r from\ngenerate_series(1, 10000)) x;\nVACUUM ANALYZE test;\nCREATE INDEX on test (id) ;\n\n1. the patch correctly detects sequential access (e.g. we issue up to\n6 fadvise() syscalls (8kB each) out and 17 preads() to heap fd for\nquery like `SELECT sum(val) FROM test WHERE id BETWEEN 10 AND 2000;`\n-- offset of fadvise calls and pread match), so that's good.\n\n2. Prefetching for TOASTed heap seems to be not implemented at all,\ncorrect? (Is my assumption that we should go like this:\nt_index->t->toast_idx->toast_heap)?, but I'm too newbie to actually\nsee the code path where it could be added - certainly it's not blocker\n-- but maybe in commit message a list of improvements for future could\nbe listed?):\n\n2024-02-29 11:45:14.259 CET [11098] LOG: index prefetch stats:\nrequests 1990 prefetches 17 (0.854271) skip cached 0 sequential 1973\n2024-02-29 11:45:14.259 CET [11098] STATEMENT: SELECT\nmd5(string_agg(md5(str),',')) FROM test WHERE id BETWEEN 10 AND 2000;\n\nfadvise64(37, 40960, 8192, POSIX_FADV_WILLNEED) = 0\npread64(50, \"\\0\\0\\0\\0\\350Jv\\1\\0\\0\\4\\0(\\0\\0\\10\\0 \\4\n\\0\\0\\0\\0\\20\\230\\340\\17\\0\\224 \\10\"..., 8192, 2998272) = 8192\npread64(49, \"\\0\\0\\0\\0@Hw\\1\\0\\0\\0\\0\\324\\5\\0\\t\\360\\37\\4 \\0\\0\\0\\0\\340\\237\n\\0\\320\\237 \\0\"..., 8192, 40960) = 8192\npread64(50, \"\\0\\0\\0\\0\\2200v\\1\\0\\0\\4\\0(\\0\\0\\10\\0 \\4\n\\0\\0\\0\\0\\20\\230\\340\\17\\0\\224 \\10\"..., 8192, 2990080) = 8192\npread64(50, \"\\0\\0\\0\\08\\26v\\1\\0\\0\\4\\0(\\0\\0\\10\\0 \\4\n\\0\\0\\0\\0\\20\\230\\340\\17\\0\\224 \\10\"..., 8192, 2981888) = 8192\npread64(50, \"\\0\\0\\0\\0\\340\\373u\\1\\0\\0\\4\\0(\\0\\0\\10\\0 \\4\n\\0\\0\\0\\0\\20\\230\\340\\17\\0\\224 \\10\"..., 8192, 2973696) = 8192\n[..no fadvises for fd=50 which was pg_toast_rel..]\n\n3. I'm not sure if I got good-enough results for DESCending index\n`create index on test (id DESC);`- with eic=16 it doesnt seem to be\nbe able prefetch 16 blocks in advance? (e.g. 
highlight offset 557056\nbelow in some text editor and it's distance is far lower between that\nfadvise<->pread):\n\npread64(45, \"\\0\\0\\0\\0x\\305b\\3\\0\\0\\4\\0\\370\\1\\0\\2\\0 \\4\n\\0\\0\\0\\0\\300\\237t\\0\\200\\237t\\0\"..., 8192, 0) = 8192\nfadvise64(45, 417792, 8192, POSIX_FADV_WILLNEED) = 0\npread64(45, \"\\0\\0\\0\\0\\370\\330\\235\\4\\0\\0\\4\\0\\370\\1\\0\\2\\0 \\4\n\\0\\0\\0\\0\\300\\237t\\0\\200\\237t\\0\"..., 8192, 417792) = 8192\nfadvise64(45, 671744, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(45, 237568, 8192, POSIX_FADV_WILLNEED) = 0\npread64(45, \"\\0\\0\\0\\08`]\\5\\0\\0\\4\\0\\370\\1\\0\\2\\0 \\4\n\\0\\0\\0\\0\\300\\237t\\0\\200\\237t\\0\"..., 8192, 671744) = 8192\nfadvise64(45, 491520, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(45, 360448, 8192, POSIX_FADV_WILLNEED) = 0\npread64(45, \"\\0\\0\\0\\0\\200\\357\\25\\4\\0\\0\\4\\0\\370\\1\\0\\2\\0 \\4\n\\0\\0\\0\\0\\300\\237t\\0\\200\\237t\\0\"..., 8192, 237568) = 8192\nfadvise64(45, 557056, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(45, 106496, 8192, POSIX_FADV_WILLNEED) = 0\npread64(45, \"\\0\\0\\0\\0\\240s\\325\\4\\0\\0\\4\\0\\370\\1\\0\\2\\0 \\4\n\\0\\0\\0\\0\\300\\237t\\0\\200\\237t\\0\"..., 8192, 491520) = 8192\nfadvise64(45, 401408, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(45, 335872, 8192, POSIX_FADV_WILLNEED) = 0\npread64(45, \"\\0\\0\\0\\0\\250\\233r\\4\\0\\0\\4\\0\\370\\1\\0\\2\\0 \\4\n\\0\\0\\0\\0\\300\\237t\\0\\200\\237t\\0\"..., 8192, 360448) = 8192\nfadvise64(45, 524288, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(45, 352256, 8192, POSIX_FADV_WILLNEED) = 0\npread64(45, \"\\0\\0\\0\\0\\240\\342\\6\\5\\0\\0\\4\\0\\370\\1\\0\\2\\0 \\4\n\\0\\0\\0\\0\\300\\237t\\0\\200\\237t\\0\"..., 8192, 557056) = 8192\n\n-Jakub Wartak.\n\n[1] - https://www.postgresql.org/message-id/20240215201337.7amzw3hpvng7wphb%40awork3.anarazel.de\n[2] - https://www.postgresql.org/message-id/777e981c-bf0c-4eb9-a9e0-42d677e94327%40enterprisedb.com\n\n\n",
"msg_date": "Fri, 1 Mar 2024 09:20:30 +0100",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "Hi,\n\nThanks for looking at the patch!\n\n\nOn 3/1/24 09:20, Jakub Wartak wrote:\n> On Wed, Jan 24, 2024 at 7:13 PM Tomas Vondra\n> <[email protected]> wrote:\n> [\n>>\n>> (1) Melanie actually presented a very different way to implement this,\n>> relying on the StreamingRead API. So chances are this struct won't\n>> actually be used.\n> \n> Given lots of effort already spent on this and the fact that is thread\n> is actually two:\n> \n> a. index/table prefetching since Jun 2023 till ~Jan 2024\n> b. afterwards index/table prefetching with Streaming API, but there\n> are some doubts of whether it could happen for v17 [1]\n> \n> ... it would be pitty to not take benefits of such work (even if\n> Streaming API wouldn't be ready for this; although there's lots of\n> movement in the area), so I've played a little with with the earlier\n> implementation from [2] without streaming API as it already received\n> feedback, it demonstrated big benefits, and earlier it got attention\n> on pgcon unconference. Perhaps, some of those comment might be passed\n> later to the \"b\"-patch (once that's feasible):\n> \n\nTBH I don't have a clear idea what to do. It'd be cool to have at least\nsome benefits in v17, but I don't know how to do that in a way that\nwould be useful in the future.\n\nFor example, the v20240124 patch implements this in the executor, but\nbased on the recent discussions it seems that's not the right layer -\nthe index AM needs to have some control, and I'm not convinced it's\npossible to improve it in that direction (even ignoring the various\nissues we identified in the executor-based approach).\n\nI think it might be more practical to do this from the index AM, even if\nit has various limitations. Ironically, that's what I proposed at pgcon,\nbut mostly because it was the quick&dirty way to do this.\n\n> 1. v20240124-0001-Prefetch-heap-pages-during-index-scans.patch does\n> not apply cleanly anymore, due show_buffer_usage() being quite\n> recently refactored in 5de890e3610d5a12cdaea36413d967cf5c544e20 :\n> \n> patching file src/backend/commands/explain.c\n> Hunk #1 FAILED at 3568.\n> Hunk #2 FAILED at 3679.\n> 2 out of 2 hunks FAILED -- saving rejects to file\n> src/backend/commands/explain.c.rej\n> \n> 2. v2 applies (fixup), but it would nice to see that integrated into\n> main patch (it adds IndexOnlyPrefetchInfo) into one patch\n> \n\nYeah, but I think it was an old patch version, no point in rebasing that\nforever. Also, I'm not really convinced the executor-level approach is\nthe right path forward.\n\n> 3. execMain.c :\n> \n> + * XXX It might be possible to improve the prefetching code\n> to handle this\n> + * by \"walking back\" the TID queue, but it's not clear if\n> it's worth it.\n> \n> Shouldn't we just remove the XXX? The walking-back seems to be niche\n> so are fetches using cursors when looking at real world users queries\n> ? (support cases bias here when looking at peopel's pg_stat_activity)\n> \n> 4. Wouldn't it be better to leave PREFETCH_LRU_SIZE at static of 8,\n> but base PREFETCH_LRU_COUNT on effective_io_concurrency instead?\n> (allowing it to follow dynamically; the more prefetches the user wants\n> to perform, the more you spread them across shared LRUs and the more\n> memory for history is required?)\n> \n> + * XXX Maybe we could consider effective_cache_size when sizing the cache?\n> + * Not to size the cache for that, ofc, but maybe as a guidance of how many\n> + * heap pages it might keep. 
Maybe just a fraction fraction of the value,\n> + * say Max(8MB, effective_cache_size / max_connections) or something.\n> + */\n> +#define PREFETCH_LRU_SIZE 8 /* slots in one LRU */\n> +#define PREFETCH_LRU_COUNT 128 /* number of LRUs */\n> +#define PREFETCH_CACHE_SIZE (PREFETCH_LRU_SIZE *\n> PREFETCH_LRU_COUNT)\n> \n\nI don't see why would this be related to effective_io_concurrency? It's\nmerely about how many recently accessed pages we expect to find in the\npage cache. It's entirely separate from the prefetch distance.\n\n> BTW:\n> + * heap pages it might keep. Maybe just a fraction fraction of the value,\n> that's a duplicated \"fraction\" word over there.\n> \n> 5.\n> + * XXX Could it be harmful that we read the queue backwards?\n> Maybe memory\n> + * prefetching works better for the forward direction?\n> \n> I wouldn't care, we are optimizing I/O (and context-switching) which\n> weighs much more than memory access direction impact and Dilipi\n> earlier also expressed no concern, so maybe it could be also removed\n> (one less \"XXX\" to care about)\n> \n\nYeah, I think it's negligible. Probably a microoptimization we can\ninvestigate later, I don't want to complicate the code unnecessarily.\n\n> 6. in IndexPrefetchFillQueue()\n> \n> + while (!PREFETCH_QUEUE_FULL(prefetch))\n> + {\n> + IndexPrefetchEntry *entry\n> + = prefetch->next_cb(scan, direction, prefetch->data);\n> \n> If we are at it... that's a strange split and assignment not indented :^)\n> \n> 7. in IndexPrefetchComputeTarget()\n> \n> + * XXX We cap the target to plan_rows, becausse it's pointless to prefetch\n> + * more than we expect to use.\n> \n> That's a nice fact that's already in patch, so XXX isn't needed?\n> \n\nRight, which is why it's not a TODO/FIXME. But I think it's good to\npoint this out - I'm not 100% convinced we should be using plan_rows\nlike this (because what happens if the estimate happens to be wrong?).\n\n> 8.\n> + * XXX Maybe we should reduce the value with parallel workers?\n> \n> I was assuming it could be a good idea, but the same doesn't seem\n> (eic/actual_parallel_works_per_gather) to be performed for bitmap heap\n> scan prefetches, so no?\n> \n\nYeah, if we don't do that now, I'm not sure this patch should change\nthat behavior.\n\n> 9.\n> + /*\n> + * No prefetching for direct I/O.\n> + *\n> + * XXX Shouldn't we do prefetching even for direct I/O? We would only\n> + * pretend doing it now, ofc, because we'd not do posix_fadvise(), but\n> + * once the code starts loading into shared buffers, that'd work.\n> + */\n> + if ((io_direct_flags & IO_DIRECT_DATA) != 0)\n> + return 0;\n> \n> It's redundant (?) and could be removed as\n> PrefetchBuffer()->PrefetchSharedBuffer() already has this at line 571:\n> \n> 5 #ifdef USE_PREFETCH\n> 4 │ │ /*\n> 3 │ │ │* Try to initiate an asynchronous read. This\n> returns false in\n> 2 │ │ │* recovery if the relation file doesn't exist.\n> 1 │ │ │*/\n> 571 │ │ if ((io_direct_flags & IO_DIRECT_DATA) == 0 &&\n> 1 │ │ │ smgrprefetch(smgr_reln, forkNum, blockNum, 1))\n> 2 │ │ {\n> 3 │ │ │ result.initiated_io = true;\n> 4 │ │ }\n> 5 #endif> > > > > > > /* USE_PREFETCH */\n> \n\nYeah, I think it might be redundant. I think it allowed skipping a bunch\nthings without prefetching (like initialization of the prefetcher), but\nafter the reworks that's no longer true.\n\n> 11. in IndexPrefetchStats() and ExecReScanIndexScan()\n> \n> + * FIXME Should be only in debug builds, or something like that.\n> \n> + /* XXX Print some debug stats. Should be removed. 
*/\n> + IndexPrefetchStats(indexScanDesc, node->iss_prefetch);\n> \n> Hmm, but it could be useful in tuning the real world systems, no? E.g.\n> recovery prefetcher gives some info through pg_stat_recovery_prefetch\n> view, but e.g. bitmap heap scans do not provide us with anything at\n> all. I don't have a strong opinion. Exposing such stuff would take\n> away your main doubt (XXX) from execPrefetch.c\n\nYou're right it'd be good to collect/expose such statistics, to help\nwith monitoring/tuning, etc. But I think there are better / more\nconvenient ways to do this - exposing that in EXPLAIN, and adding a\ncounter to pgstat_all_tables / pgstat_all_indexes.\n\n> ``auto-tuning/self-adjustment\". And if we are at it, we could think in\n> far future about adding new session GUC track_cachestat or EXPLAIN\n> (cachestat/prefetch, analyze) (this new syscall for Linux >= 6.5)\n> where we could present both index stats (as what IndexPrefetchStats()\n> does) *and* cachestat() results there for interested users. Of course\n> it would have to be generic enough for the bitmap heap scan case too.\n> Such insight would also allow fine tuning eic, PREFETCH_LRU_COUNT,\n> PREFETCH_QUEUE_HISTORY. Just an idea.\n> \n\nI haven't really thought about this, but I agree some auto-tuning would\nbe very helpful (assuming it's sufficiently reliable).\n\n> 12.\n> \n> + * XXX Maybe we should reduce the target in case this is\n> a parallel index\n> + * scan. We don't want to issue a multiple of\n> effective_io_concurrency.\n> \n> in IndexOnlyPrefetchCleanup() and IndexNext()\n> \n> + * XXX Maybe we should reduce the value with parallel workers?\n> \n> It's redundant XXX-comment (there are two for the same), as you it was\n> already there just before IndexPrefetchComputeTarget()\n> \n> 13. The previous bitmap prefetch code uses #ifdef USE_PREFETCH, maybe\n> it would make some sense to follow the consistency pattern , to avoid\n> adding implementation on platforms without prefetching ?\n> \n\nPerhaps, but I'm not sure how to do that with the executor-based\napproach, where essentially everything goes through the prefetch queue\n(except that the prefetch distance is 0). So the amount of code that\nwould be disabled by the ifdef would be tiny.\n\n> 14. The patch is missing documentation, so how about just this?\n> \n> --- a/doc/src/sgml/config.sgml\n> +++ b/doc/src/sgml/config.sgml\n> @@ -2527,7 +2527,8 @@ include_dir 'conf.d'\n> operations that any individual\n> <productname>PostgreSQL</productname> session\n> attempts to initiate in parallel. The allowed range is 1 to 1000,\n> or zero to disable issuance of asynchronous I/O requests. Currently,\n> - this setting only affects bitmap heap scans.\n> + this setting only enables prefetching for HEAP data blocks\n> when performing\n> + bitmap heap scans and index (only) scans.\n> </para>\n> \n> Some further tests, given data:\n> \n> CREATE TABLE test (id bigint, val bigint, str text);\n> ALTER TABLE test ALTER COLUMN str SET STORAGE EXTERNAL;\n> INSERT INTO test SELECT g, g, repeat(chr(65 + (10*random())::int),\n> 3000) FROM generate_series(1, 10000) g;\n> -- or INSERT INTO test SELECT x.r, x.r, repeat(chr(65 +\n> (10*random())::int), 3000) from (select 10000 * random() as r from\n> generate_series(1, 10000)) x;\n> VACUUM ANALYZE test;\n> CREATE INDEX on test (id) ;\n> \n\nIt's not clear to me what's the purpose of this test? Can you explain?\n\n> 1. the patch correctly detects sequential access (e.g. 
we issue up to\n> 6 fadvise() syscalls (8kB each) out and 17 preads() to heap fd for\n> query like `SELECT sum(val) FROM test WHERE id BETWEEN 10 AND 2000;`\n> -- offset of fadvise calls and pread match), so that's good.\n> \n> 2. Prefetching for TOASTed heap seems to be not implemented at all,\n> correct? (Is my assumption that we should go like this:\n> t_index->t->toast_idx->toast_heap)?, but I'm too newbie to actually\n> see the code path where it could be added - certainly it's not blocker\n> -- but maybe in commit message a list of improvements for future could\n> be listed?):\n> \n\nYes, that's true. I haven't thought about TOAST very much, but with\nprefetching happening in executor, that does not work. There'd need to\nbe some extra code for TOAST prefetching. I'm not sure how beneficial\nthat would be, considering most TOAST values tend to be stored on\nconsecutive heap pages.\n\n> 2024-02-29 11:45:14.259 CET [11098] LOG: index prefetch stats:\n> requests 1990 prefetches 17 (0.854271) skip cached 0 sequential 1973\n> 2024-02-29 11:45:14.259 CET [11098] STATEMENT: SELECT\n> md5(string_agg(md5(str),',')) FROM test WHERE id BETWEEN 10 AND 2000;\n> \n> fadvise64(37, 40960, 8192, POSIX_FADV_WILLNEED) = 0\n> pread64(50, \"\\0\\0\\0\\0\\350Jv\\1\\0\\0\\4\\0(\\0\\0\\10\\0 \\4\n> \\0\\0\\0\\0\\20\\230\\340\\17\\0\\224 \\10\"..., 8192, 2998272) = 8192\n> pread64(49, \"\\0\\0\\0\\0@Hw\\1\\0\\0\\0\\0\\324\\5\\0\\t\\360\\37\\4 \\0\\0\\0\\0\\340\\237\n> \\0\\320\\237 \\0\"..., 8192, 40960) = 8192\n> pread64(50, \"\\0\\0\\0\\0\\2200v\\1\\0\\0\\4\\0(\\0\\0\\10\\0 \\4\n> \\0\\0\\0\\0\\20\\230\\340\\17\\0\\224 \\10\"..., 8192, 2990080) = 8192\n> pread64(50, \"\\0\\0\\0\\08\\26v\\1\\0\\0\\4\\0(\\0\\0\\10\\0 \\4\n> \\0\\0\\0\\0\\20\\230\\340\\17\\0\\224 \\10\"..., 8192, 2981888) = 8192\n> pread64(50, \"\\0\\0\\0\\0\\340\\373u\\1\\0\\0\\4\\0(\\0\\0\\10\\0 \\4\n> \\0\\0\\0\\0\\20\\230\\340\\17\\0\\224 \\10\"..., 8192, 2973696) = 8192\n> [..no fadvises for fd=50 which was pg_toast_rel..]\n> \n> 3. I'm not sure if I got good-enough results for DESCending index\n> `create index on test (id DESC);`- with eic=16 it doesnt seem to be\n> be able prefetch 16 blocks in advance? (e.g. 
highlight offset 557056\n> below in some text editor and it's distance is far lower between that\n> fadvise<->pread):\n> \n> pread64(45, \"\\0\\0\\0\\0x\\305b\\3\\0\\0\\4\\0\\370\\1\\0\\2\\0 \\4\n> \\0\\0\\0\\0\\300\\237t\\0\\200\\237t\\0\"..., 8192, 0) = 8192\n> fadvise64(45, 417792, 8192, POSIX_FADV_WILLNEED) = 0\n> pread64(45, \"\\0\\0\\0\\0\\370\\330\\235\\4\\0\\0\\4\\0\\370\\1\\0\\2\\0 \\4\n> \\0\\0\\0\\0\\300\\237t\\0\\200\\237t\\0\"..., 8192, 417792) = 8192\n> fadvise64(45, 671744, 8192, POSIX_FADV_WILLNEED) = 0\n> fadvise64(45, 237568, 8192, POSIX_FADV_WILLNEED) = 0\n> pread64(45, \"\\0\\0\\0\\08`]\\5\\0\\0\\4\\0\\370\\1\\0\\2\\0 \\4\n> \\0\\0\\0\\0\\300\\237t\\0\\200\\237t\\0\"..., 8192, 671744) = 8192\n> fadvise64(45, 491520, 8192, POSIX_FADV_WILLNEED) = 0\n> fadvise64(45, 360448, 8192, POSIX_FADV_WILLNEED) = 0\n> pread64(45, \"\\0\\0\\0\\0\\200\\357\\25\\4\\0\\0\\4\\0\\370\\1\\0\\2\\0 \\4\n> \\0\\0\\0\\0\\300\\237t\\0\\200\\237t\\0\"..., 8192, 237568) = 8192\n> fadvise64(45, 557056, 8192, POSIX_FADV_WILLNEED) = 0\n> fadvise64(45, 106496, 8192, POSIX_FADV_WILLNEED) = 0\n> pread64(45, \"\\0\\0\\0\\0\\240s\\325\\4\\0\\0\\4\\0\\370\\1\\0\\2\\0 \\4\n> \\0\\0\\0\\0\\300\\237t\\0\\200\\237t\\0\"..., 8192, 491520) = 8192\n> fadvise64(45, 401408, 8192, POSIX_FADV_WILLNEED) = 0\n> fadvise64(45, 335872, 8192, POSIX_FADV_WILLNEED) = 0\n> pread64(45, \"\\0\\0\\0\\0\\250\\233r\\4\\0\\0\\4\\0\\370\\1\\0\\2\\0 \\4\n> \\0\\0\\0\\0\\300\\237t\\0\\200\\237t\\0\"..., 8192, 360448) = 8192\n> fadvise64(45, 524288, 8192, POSIX_FADV_WILLNEED) = 0\n> fadvise64(45, 352256, 8192, POSIX_FADV_WILLNEED) = 0\n> pread64(45, \"\\0\\0\\0\\0\\240\\342\\6\\5\\0\\0\\4\\0\\370\\1\\0\\2\\0 \\4\n> \\0\\0\\0\\0\\300\\237t\\0\\200\\237t\\0\"..., 8192, 557056) = 8192\n> \n\nI'm not sure I understand these strace snippets. Can you elaborate a\nbit, explain what the strace log says?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 1 Mar 2024 15:58:38 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On 2/15/24 21:30, Peter Geoghegan wrote:\n> On Thu, Feb 15, 2024 at 3:13 PM Andres Freund <[email protected]> wrote:\n>>> This is why I don't think that the tuples with lower page offset\n>>> numbers are in any way significant here. The significant part is\n>>> whether or not you'll actually need to visit more than one leaf page\n>>> in the first place (plus the penalty from not being able to reorder\n>>> the work across page boundaries in your initial v1 of prefetching).\n>>\n>> To me this your phrasing just seems to reformulate the issue.\n> \n> What I said to Tomas seems very obvious to me. I think that there\n> might have been some kind of miscommunication (not a real\n> disagreement). I was just trying to work through that.\n> \n>> In practical terms you'll have to wait for the full IO latency when fetching\n>> the table tuple corresponding to the first tid on a leaf page. Of course\n>> that's also the moment you had to visit another leaf page. Whether the stall\n>> is due to visit another leaf page or due to processing the first entry on such\n>> a leaf page is a distinction without a difference.\n> \n> I don't think anybody said otherwise?\n> \n>>>> That's certainly true / helpful, and it makes the \"first entry\" issue\n>>>> much less common. But the issue is still there. Of course, this says\n>>>> nothing about the importance of the issue - the impact may easily be so\n>>>> small it's not worth worrying about.\n>>>\n>>> Right. And I want to be clear: I'm really *not* sure how much it\n>>> matters. I just doubt that it's worth worrying about in v1 -- time\n>>> grows short. Although I agree that we should commit a v1 that leaves\n>>> the door open to improving matters in this area in v2.\n>>\n>> I somewhat doubt that it's realistic to aim for 17 at this point.\n> \n> That's a fair point. Tomas?\n> \n\nI think that's a fair assessment.\n\nTo me it seems doing the prefetching solely at the executor level is not\nreally workable. And if it can be made to work, there's far too many\nopen questions to do that in the last commitfest.\n\nI think the consensus is at least some of the logic/control needs to\nmove back to the index AM. Maybe there's some minimal part that we could\ndo for v17, even if it has various limitations, and then improve that in\nv18. Say, doing the leaf-page-at-a-time and passing a little bit of\ninformation from the index scan to drive this.\n\nBut I have very hard time figuring out what the MVP version should be,\nbecause I have very limited understanding on how much control the index\nAM ought to have :-( And it'd be a bit silly to do something in v17,\nonly to have to rip it out in v18 because it turned out to not get the\nsplit right.\n\n>> We seem to\n>> still be doing fairly fundamental architectual work. I think it might be the\n>> right thing even for 18 to go for the simpler only-a-single-leaf-page\n>> approach though.\n> \n> I definitely think it's a good idea to have that as a fall back\n> option. And to not commit ourselves to having something better than\n> that for v1 (though we probably should commit to making that possible\n> in v2).\n> \n\nYeah, I agree with that.\n\n>> I wonder if there are prerequisites that can be tackled for 17. 
One idea is to\n>> work on infrastructure to provide executor nodes with information about the\n>> number of tuples likely to be fetched - I suspect we'll trigger regressions\n>> without that in place.\n> \n> I don't think that there'll be regressions if we just take the simpler\n> only-a-single-leaf-page approach. At least it seems much less likely.\n> \n\nI'm sure we could pass additional information from the index scans to\nimprove that further. But I think the gradual ramp-up would deal with\nmost regressions. At least that's my experience from benchmarking the\nearly version.\n\nThe hard thing is what to do about cases where neither of this helps.\nThe example I keep thinking about is IOS - if we don't do prefetching,\nit's not hard to construct cases where regular index scan gets much\nfaster than IOS (with many not-all-visible pages). But we can't just\nprefetch all pages, because that'd hurt IOS cases with most pages fully\nvisible (when we don't need to actually access the heap).\n\nI managed to deal with this in the executor-level version, but I'm not\nsure how to do this if the control moves closer to the index AM.\n\n>> One way to *sometimes* process more than a single leaf page, without having to\n>> redesign kill_prior_tuple, would be to use the visibilitymap to check if the\n>> target pages are all-visible. If all the table pages on a leaf page are\n>> all-visible, we know that we don't need to kill index entries, and thus can\n>> move on to the next leaf page\n> \n> It's possible that we'll need a variety of different strategies.\n> nbtree already has two such strategies in _bt_killitems(), in a way.\n> Though its \"Modified while not pinned means hinting is not safe\" path\n> (LSN doesn't match canary value path) seems pretty naive. The\n> prefetching stuff might present us with a good opportunity to replace\n> that with something fundamentally better.\n> \n\nNo opinion.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 1 Mar 2024 16:18:54 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Fri, Mar 1, 2024 at 10:18 AM Tomas Vondra\n<[email protected]> wrote:\n> But I have very hard time figuring out what the MVP version should be,\n> because I have very limited understanding on how much control the index\n> AM ought to have :-( And it'd be a bit silly to do something in v17,\n> only to have to rip it out in v18 because it turned out to not get the\n> split right.\n\nI suspect that you're overestimating the difficulty of getting the\nlayering right (at least relative to the difficulty of everything\nelse).\n\nThe executor proper doesn't know anything about pins on leaf pages\n(and in reality nbtree usually doesn't hold any pins these days). All\nthe executor knows is that it had better not be possible for an\nin-flight index scan to get confused by concurrent TID recycling by\nVACUUM. When amgettuple/btgettuple is called, nbtree usually just\nreturns TIDs it collected from a just-scanned leaf page.\n\nThis sort of stuff already lives in the index AM. It seems to me that\neverything at the API and executor level can continue to work in\nessentially the same way as it always has, with only minimal revision\nto the wording around buffer pins (in fact that really should have\nhappened back in 2015, as part of commit 2ed5b87f). The hard part\nwill be figuring out how to make the physical index scan prefetch\noptimally, in a way that balances various considerations. These\ninclude:\n\n* Managing heap prefetch distance.\n\n* Avoiding making kill_prior_tuple significantly less effective\n(perhaps the new design could even make it more effective, in some\nscenarios, by holding onto multiple buffer pins based on a dynamic\nmodel).\n\n* Figuring out how many leaf pages it makes sense to read ahead of\naccessing the heap, since there is no fixed relationship between the\nnumber of leaf pages we need to scan to collect a given number of\ndistinct heap blocks that we need for prefetching. (This is made more\ncomplicated by things like LIMIT, but is actually an independent\nproblem.)\n\nSo I think that you need to teach index AMs to behave roughly as if\nmultiple leaf pages were read as one single leaf page, at least in\nterms of things like how the BTScanOpaqueData.currPos state is\nmanaged. I imagine that currPos will need to be filled with TIDs from\nmultiple index pages, instead of just one, with entries that are\norganized in a way that preserves the illusion of one continuous scan\nfrom the point of view of the executor proper. By the time we actually\nstart really returning TIDs via btgettuple, it looks like we scanned\none giant leaf page instead of several (the exact number of leaf pages\nscanned will probably have to be indeterminate, because it'll depend\non things like heap prefetch distance).\n\nThe good news (assuming that I'm right here) is that you don't need to\nhave specific answers to most of these questions in order to commit a\nv1 of index prefeteching. ISTM that all you really need is to have\nconfidence that the general approach that I've outlined is the right\napproach, long term (certainly not nothing, but I'm at least\nreasonably confident here).\n\n> The hard thing is what to do about cases where neither of this helps.\n> The example I keep thinking about is IOS - if we don't do prefetching,\n> it's not hard to construct cases where regular index scan gets much\n> faster than IOS (with many not-all-visible pages). 
But we can't just\n> prefetch all pages, because that'd hurt IOS cases with most pages fully\n> visible (when we don't need to actually access the heap).\n>\n> I managed to deal with this in the executor-level version, but I'm not\n> sure how to do this if the control moves closer to the index AM.\n\nThe reality is that nbtree already knows about index-only scans. It\nhas to, because it wouldn't be safe to drop the pin on a leaf page's\nbuffer when the scan is \"between pages\" in the specific case of\nindex-only scans (so the _bt_killitems code path used when\nkill_prior_tuple has index tuples to kill knows about index-only\nscans).\n\nI actually added commentary to the nbtree README that goes into TID\nrecycling by VACUUM not too long ago. This includes stuff about how\nLP_UNUSED items in the heap are considered dead to all index scans\n(which can actually try to look at a TID that just became LP_UNUSED in\nthe heap!), even though LP_UNUSED items don't prevent VACUUM from\nsetting heap pages all-visible. This seemed like the only way of\nexplaining the _bt_killitems IOS issue, that actually seemed to make\nsense.\n\nWhat you really want to do here is to balance costs and benefits.\nThat's just what's required. The fact that those costs and benefits\nspan multiple levels of abstractions makes it a bit awkward, but\ndoesn't (and can't) change the basic shape of the problem.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 1 Mar 2024 12:47:32 -0500",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "On Fri, Mar 1, 2024 at 3:58 PM Tomas Vondra\n<[email protected]> wrote:\n[..]\n> TBH I don't have a clear idea what to do. It'd be cool to have at least\n> some benefits in v17, but I don't know how to do that in a way that\n> would be useful in the future.\n>\n> For example, the v20240124 patch implements this in the executor, but\n> based on the recent discussions it seems that's not the right layer -\n> the index AM needs to have some control, and I'm not convinced it's\n> possible to improve it in that direction (even ignoring the various\n> issues we identified in the executor-based approach).\n>\n> I think it might be more practical to do this from the index AM, even if\n> it has various limitations. Ironically, that's what I proposed at pgcon,\n> but mostly because it was the quick&dirty way to do this.\n\n... that's a pity! :( Well, then let's just finish that subthread, I\ngave some explanations, but I'll try to take a look in future\nrevisions.\n\n> > 4. Wouldn't it be better to leave PREFETCH_LRU_SIZE at static of 8,\n> > but base PREFETCH_LRU_COUNT on effective_io_concurrency instead?\n> > (allowing it to follow dynamically; the more prefetches the user wants\n> > to perform, the more you spread them across shared LRUs and the more\n> > memory for history is required?)\n> >\n> > + * XXX Maybe we could consider effective_cache_size when sizing the cache?\n> > + * Not to size the cache for that, ofc, but maybe as a guidance of how many\n> > + * heap pages it might keep. Maybe just a fraction fraction of the value,\n> > + * say Max(8MB, effective_cache_size / max_connections) or something.\n> > + */\n> > +#define PREFETCH_LRU_SIZE 8 /* slots in one LRU */\n> > +#define PREFETCH_LRU_COUNT 128 /* number of LRUs */\n> > +#define PREFETCH_CACHE_SIZE (PREFETCH_LRU_SIZE *\n> > PREFETCH_LRU_COUNT)\n> >\n>\n> I don't see why would this be related to effective_io_concurrency? It's\n> merely about how many recently accessed pages we expect to find in the\n> page cache. It's entirely separate from the prefetch distance.\n\nWell, my thought was the higher eic is - the more I/O parallelism we\nare introducing - in such a case, the more requests we need to\nremember from the past to avoid prefetching the same (N * eic, where N\nwould be some multiplier)\n\n> > 7. in IndexPrefetchComputeTarget()\n> >\n> > + * XXX We cap the target to plan_rows, becausse it's pointless to prefetch\n> > + * more than we expect to use.\n> >\n> > That's a nice fact that's already in patch, so XXX isn't needed?\n> >\n>\n> Right, which is why it's not a TODO/FIXME.\n\nOH! That explains it to me. I've taken all of the XXXs as literally\nFIXME that you wanted to go away (things to be removed before the\npatch is considered mature).\n\n> But I think it's good to\n> point this out - I'm not 100% convinced we should be using plan_rows\n> like this (because what happens if the estimate happens to be wrong?).\n\nWell, somewhat similiar problematic pattern was present in different\ncodepath - get_actual_variable_endpoint() - see [1], 9c6ad5eaa95. So\nthe final fix was to get away without adding new GUC (which always an\noption...), but just introduce a sensible hard-limit (fence) and stick\nto the 100 heap visited pages limit. 
Here we could have similiar\nheuristics same from start: if (plan_rows <\nwe_have_already_visited_pages * avgRowsPerBlock) --> ignore plan_rows\nand rampup prefetches back to the full eic value.\n\n> > Some further tests, given data:\n> >\n> > CREATE TABLE test (id bigint, val bigint, str text);\n> > ALTER TABLE test ALTER COLUMN str SET STORAGE EXTERNAL;\n> > INSERT INTO test SELECT g, g, repeat(chr(65 + (10*random())::int),\n> > 3000) FROM generate_series(1, 10000) g;\n> > -- or INSERT INTO test SELECT x.r, x.r, repeat(chr(65 +\n> > (10*random())::int), 3000) from (select 10000 * random() as r from\n> > generate_series(1, 10000)) x;\n> > VACUUM ANALYZE test;\n> > CREATE INDEX on test (id) ;\n> >\n>\n> It's not clear to me what's the purpose of this test? Can you explain?\n\nIt's just schema&data preparation for the tests below:\n\n> >\n> > 2. Prefetching for TOASTed heap seems to be not implemented at all,\n> > correct? (Is my assumption that we should go like this:\n> > t_index->t->toast_idx->toast_heap)?, but I'm too newbie to actually\n> > see the code path where it could be added - certainly it's not blocker\n> > -- but maybe in commit message a list of improvements for future could\n> > be listed?):\n> >\n>\n> Yes, that's true. I haven't thought about TOAST very much, but with\n> prefetching happening in executor, that does not work. There'd need to\n> be some extra code for TOAST prefetching. I'm not sure how beneficial\n> that would be, considering most TOAST values tend to be stored on\n> consecutive heap pages.\n\nAssuming that in the above I've generated data using cyclic / random\nversion and I run:\n\nSELECT md5(string_agg(md5(str),',')) FROM test WHERE id BETWEEN 10 AND 2000;\n\n(btw: I wanted to use octet_length() at first instead of string_agg()\nbut that's not enough)\n\nwhere fd 45,54,55 correspond to :\n lrwx------ 1 postgres postgres 64 Mar 5 12:56 /proc/8221/fd/45 ->\n/tmp/blah/base/5/16384 // \"test\"\n lrwx------ 1 postgres postgres 64 Mar 5 12:56 /proc/8221/fd/54 ->\n/tmp/blah/base/5/16388 // \"pg_toast_16384_index\"\n lrwx------ 1 postgres postgres 64 Mar 5 12:56 /proc/8221/fd/55 ->\n/tmp/blah/base/5/16387 // \"pg_toast_16384\"\n\nI've got for the following data:\n- 83 pread64 and 83x fadvise() for random offsets for fd=45 - the main\nintent of this patch (main relation heap prefetching), works good\n- 54 pread64 calls for fd=54 (no favdises())\n- 1789 (!) 
calls to pread64 for fd=55 for RANDOM offsets (TOAST heap,\nno prefetch)\n\nso at least in theory it makes a lot of sense to prefetch TOAST too,\npattern looks like cyclic random:\n\n// pread(fd, \"\", blocksz, offset)\nfadvise64(45, 40960, 8192, POSIX_FADV_WILLNEED) = 0\npread64(55, \"\"..., 8192, 38002688) = 8192\npread64(55, \"\"..., 8192, 12034048) = 8192\npread64(55, \"\"..., 8192, 36560896) = 8192\npread64(55, \"\"..., 8192, 8871936) = 8192\npread64(55, \"\"..., 8192, 17965056) = 8192\npread64(55, \"\"..., 8192, 18710528) = 8192\npread64(55, \"\"..., 8192, 35635200) = 8192\npread64(55, \"\"..., 8192, 23379968) = 8192\npread64(55, \"\"..., 8192, 25141248) = 8192\npread64(55, \"\"..., 8192, 3457024) = 8192\npread64(55, \"\"..., 8192, 24633344) = 8192\npread64(55, \"\"..., 8192, 36462592) = 8192\npread64(55, \"\"..., 8192, 18120704) = 8192\npread64(55, \"\"..., 8192, 27066368) = 8192\npread64(45, \"\"..., 8192, 40960) = 8192\npread64(55, \"\"..., 8192, 2768896) = 8192\npread64(55, \"\"..., 8192, 10846208) = 8192\npread64(55, \"\"..., 8192, 30179328) = 8192\npread64(55, \"\"..., 8192, 7700480) = 8192\npread64(55, \"\"..., 8192, 38846464) = 8192\npread64(55, \"\"..., 8192, 1040384) = 8192\npread64(55, \"\"..., 8192, 10985472) = 8192\n\nIt's probably a separate feature (prefetching blocks from TOAST), but\nit could be mentioned that this patch is not doing that (I was\nassuming it could).\n\n> > 3. I'm not sure if I got good-enough results for DESCending index\n> > `create index on test (id DESC);`- with eic=16 it doesnt seem to be\n> > be able prefetch 16 blocks in advance? (e.g. highlight offset 557056\n> > below in some text editor and it's distance is far lower between that\n> > fadvise<->pread):\n> >\n[..]\n> >\n>\n> I'm not sure I understand these strace snippets. Can you elaborate a\n> bit, explain what the strace log says?\n\nset enable_seqscan to off;\nset enable_bitmapscan to off;\ndrop index test_id_idx;\ncreate index on test (id DESC); -- DESC one\nSELECT sum(val) FROM test WHERE id BETWEEN 10 AND 2000;\n\nOk, so cleaner output of strace -s 0 for PID doing that SELECT with\neic=16, annotated with [*]:\n\nlseek(45, 0, SEEK_END) = 688128\nlseek(47, 0, SEEK_END) = 212992\npread64(47, \"\"..., 8192, 172032) = 8192\npread64(45, \"\"..., 8192, 90112) = 8192\nfadvise64(45, 172032, 8192, POSIX_FADV_WILLNEED) = 0\npread64(45, \"\"..., 8192, 172032) = 8192\nfadvise64(45, 319488, 8192, POSIX_FADV_WILLNEED) = 0 [*off 319488 start]\nfadvise64(45, 335872, 8192, POSIX_FADV_WILLNEED) = 0\npread64(45, \"\"..., 8192, 319488) = 8192 [*off 319488,\nread, distance=1 fadvises]\nfadvise64(45, 466944, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(45, 393216, 8192, POSIX_FADV_WILLNEED) = 0\npread64(45, \"\"..., 8192, 335872) = 8192\nfadvise64(45, 540672, 8192, POSIX_FADV_WILLNEED) = 0 [*off 540672 start]\nfadvise64(45, 262144, 8192, POSIX_FADV_WILLNEED) = 0\npread64(45, \"\"..., 8192, 466944) = 8192\nfadvise64(45, 491520, 8192, POSIX_FADV_WILLNEED) = 0\npread64(45, \"\"..., 8192, 393216) = 8192\nfadvise64(45, 163840, 8192, POSIX_FADV_WILLNEED) = 0\nfadvise64(45, 385024, 8192, POSIX_FADV_WILLNEED) = 0\npread64(45, \"\"..., 8192, 540672) = 8192 [*off 540672,\nread, distance=4 fadvises]\nfadvise64(45, 417792, 8192, POSIX_FADV_WILLNEED) = 0\n[..]\nI was wondering why the distance never got >4 in such case for eic=16,\nit should spawn more fadvises calls, shouldn't it? 
(it was happening\nonly for DESC, in a normal ASC index the prefetching distance easily\nachieves ~~ eic values) and I think today I've got the answer -- after\ndropping/creating the DESC index I did NOT execute ANALYZE, so probably the\nMin(..., plan_rows) was kicking in and preventing the full\nprefetching.\n\nHitting the above makes me think that the XXX for plan_rows should\nreally be a real FIXME.\n\n-J.\n\n[1] - https://www.postgresql.org/message-id/CAKZiRmznOwi0oaV%3D4PHOCM4ygcH4MgSvt8%3D5cu_vNCfc8FSUug%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 5 Mar 2024 14:00:12 +0100",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "Hi,\n\nHere's an updated (and pretty fundamentally reworked) patch to add\nprefetching to regular index scans. I'm far happier with this approach\nthan with either of the two earlier ones, and I actually think it might\neven be easier to combine this with the streaming read (which the patch\ndoes not use at all for now). I feeling cautiously optimistic.\n\nThe patch is still WIP, but everything should be working fine (including\noptimizations like kill_prior_tuple etc.). The patch actually passes\n\"make check-world\" (even with valgrind) and I'm not aware of any bugs.\nThere are a couple limitations and things that need cleanup, ofc. Those\nare mentioned at the end of this message.\n\n\nThe index prefetching had two prior patch versions, with very different\napproaches, each having different drawbacks. The first one (posted\nshortly before pgcon 2023) did the prefetching at a very low level, in\neach index AM. We'd call amgettuple() -> btgettuple(), and that issued\nprefetches for \"future\" TIDs from the same leaf page (in the correct\nprefetch distance, etc).\n\nThat mostly worked ... sort of. Every index AM had to reimplement the\nlogic, but the main problem was that it had no idea what happened above\nthe index AM. So it regressed cases that are unlikely to benefit from\nprefetches - like IOS, where we don't need the heap page at all if it's\nall-visible. And if we disabled prefetching for IOS, it could easily\nlead to cases where regular index scan is much faster than IOS (which\nfor users would seem quite bizarre).\n\nWe'd either need to teach the index AM about visibility checks (seems it\nshould not need to know about that), or inject the information in some\nway, but then also cache the visibility check results (because checking\nvisibility map is not free, and doing it repeatedly can regresses the\n\"cached\" case of IOS).\n\nPerhaps that was solvable, but it felt uglier and uglier, and in the end\nmy conclusion was it's not the right place to do the prefetches. Why\nshould an index AM initiate prefetches against a heap? It seems the\nright place to do prefetches is somewhere higher, where we actually have\nthe information to decide if the heap page is needed. (I believe this\nuncertainty made it harder to adopt streaming read API too.)\n\nThis led to the second patch, which did pretty much everything in the\nexecutor. The Index(Only)Scans simply called index_getnext_tid() in a\nloop to fill a local \"queue\" driving the prefetching, and then also\nconsumed the TIDs from it again. The nice thing was this seemed to work\nwith any index AM as long as it had the amgettuple() callback.\n\nUnfortunately, this complete separation of prefetching from index AM\nturned out to be a problem. The ultimate issue that killed this was the\nkill_prior_tuple, which we use to \"remove\" pointers to provably dead\nheap tuples from the index early. With the single-tuple approach the\nindex AM processes the information before it unpins the leaf page, but\nwith a batch snapping multiple leaf pages, we can't rely on that - we\nmight have unpinned the page long before we get to process the list of\ntuples to kill.\n\n\nWe have discussed different ways to deal with this - an obvious option\nis to rework the index AMs to hold pins on all leaf pages needed by the\ncurrent batch. But despite the \"obviousness\" it's a pretty unattractive\noption. 
It would require a lot of complexity and reworks in each index\nAM to support this, which directly contradicts the primary benefit of\ndoing this in the executor - not having to do anything in the index AMs\nand working for all index AMs.\n\nAlso, locking/pinning resources accessed asynchronously seems like a\ngreat place for subtle bugs.\n\n\nHowever, I had a bit of a lightbulb moment at pgconf.dev, when talking\nto Andres about something only very remotely related, something to do\nwith accessing batches of items instead of individually.\n\nWhat if we didn't get the TIDs from the index one by one, but in larger\nbatches, and the index AM never gave us a batch spanning multiple leaf\npages? A sort of a \"contract\" for the API.\n\nYes, this requires extending the index AM. The existing amgettuple()\ncallback is not sufficient for that, because we don't know when leaf\npages change. Or will change, which makes it hard to communicate\ninformation about past tuples.\n\nThere's a fairly long comment in indexam.c before the chunk of new code,\ntrying to explain how this is supposed to work. There's also a lot of\nXXX comments scattered around, with open questions / ideas about various\nparts of this.\n\nBut let me share a brief overview here ...\n\nThe patch adds a new callback amgettuplebatch() which loads an array of\nitems (into IndexScanDesc). It also adds index_batch_getnext() and\nindex_batch_getnext_tid() wrappers to access the batch.\n\nThis means if we have a loop reading tuples from an indexscan\n\n while ((tid = index_getnext_slot(scan, dir, slot)) != NULL)\n {\n ... process the slot ...\n }\n\nwe could replace it with something like\n\n while (index_batch_getnext(scan, dir))\n {\n while ((tid = index_batch_getnext_slot(scan, dir, slot)) != NULL)\n {\n ... process the slot ...\n }\n }\n\nObviously, nodeIndexscan.c does that a bit differently, but I think the\nidea is clear. For index-only scans it'd be more complicated, due to\nvisibility checks etc. but the overall idea is the same.\n\nFor kill_prior_tuple, the principle is about the same, except that we\ncollect information about which tuples to kill in the batch, and the AM\nonly gets the information before reading the next batch - at which point\nit simply adds them to the private list and kills them when switching to\nthe next leaf page.\n\nObviously, this requires some new code in the index AM - I don't think\nthere's a way around that, the index AM has to have a say in this one\nway or the other. Either it has to keep multiple leaf pages pinned, or\nit needs to generate batches in a way that works with a single pin.\n\nI've only done this for btree for now, but the amount of code needed is\npretty small - essentially I needed the btgettuplebatch, which is maybe\n20 lines plus comments, and then _bt_first_batch/_bt_next_batch, which\nare just simplified versions of _bt_first/_bt_next.\n\nThe _bt_first_batch/_bt_next_batch are a bit long, but there's a lot of\nredundancy and it shouldn't be hard to cut them down to ~1/2 with a bit\nof effort. I'm pretty sure other index AMs (e.g. hash) can do a very\nsimilar approach to implement this.\n\nA detail worth mentioning - the batches start small and gradually grow\nover time, up to some maximum size (the patch hardcodes these limits as\n8 and 64, at the moment). The reasons are similar to why we do this for\nprefetching - not harming queries that only need a single row.\n\nThe changes to nodeIndexscan.c and nodeIndexonlyscan.c have a lot of\nduplicate code too. 
That's partially intentional - I wanted to retain\nthe ability to test the \"old\" code easily, so I added a GUC to switch\nbetween the two.\n\nFor plain indexscans it might even be possible to \"unite\" the two paths\nby tweaking index_getnext_slot to either get the TID from the index or\ndo the batch loop (with batching enabled). Not sure about IOS, we don't\nwant to repeat the visibility check in that case :-(\n\nActually, couldn't we have a per-batch cache of visibility checks? I\ndon't think we can get different answers to visibility checks for two\nTIDs (for the same block) within the same batch, right? It'd simplify\nthe code I think, and perhaps it'd be useful even without prefetching.\n\n\nI think the main priority is clarifying the boundary between indexam and\nthe AM code. Right now, it's a bit messy and not quite clear which code\nis responsible for which fields. Sometimes a field is set by indexam,\nbut then one random place in nbtsearch.c sets it too, etc.\n\n\nFinally, two things that might be an issue / I'm not quite sure about.\n\nFirstly, do we need to support mixing batched and non-batched calls?\nThat is, given an index scan, should it be possible to interleave calls\nto index_getnext_tid and index_batch_getnext/index_batch_getnext_tid?\n\nI'm pretty sure that doesn't work, at least not right now. Because with\nbatching the index AM does not have an exact idea \"where\" on the page we\nactually are / which item is \"current\". I believe it might be possible\nto improve this by \"synchronizing\" whenever we switch between the two\napproaches. But I'm not sure it's something we need/want to support. I\ncan't quite imagine why would I need this.\n\nThe other thing is mark/restore. At the moment this does not work, for\npretty much the same reason - the index AM has no idea what's the exact\n\"current\" item on the page, so mark/restore does unexpected things. In\nthe patch I \"fixed\" this by disabling batching/prefetching for plans\nwith EXEC_FLAG_MARK, so e.g. mergejoins won't benefit from this.\n\nIt did seem like an acceptable limitation to me, but now that I think\nabout it, if we could \"synchronize\" the position from the batch (if the\nindex AM requests it), I think this might work correctly.\n\nI'm yet to do a comprehensive benchmark, but the tests I've done during\ndevelopment suggest the gains are in line with what we saw for the\nearlier versions.\n\n\nregards\n\n-- \nTomas Vondra",
"msg_date": "Sat, 31 Aug 2024 22:37:31 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "Hi,\n\nhere's an updated version of this patch series, with a couple major\nimprovements:\n\n\n1) adding batching/prefetching to relevant built-in index AMs\n\nThis means btree, hash, gist and sp-gist, i.e. index types that can\nreturn tuples. For gin/brin it's irrelevant (it'd be more correct to\nexplicitly set amgetbatch to null, I guess).\n\nAnyway, those patches are fairly small, maybe 10kB each, with 150-300\nnew lines. And the patches are pretty similar, thanks to the fact that\nall the index AMs mirror btree (especially hash).\n\nThe main differences are in ordered scans in gist/spgist, where the\napproach is quite different, but not that much. There's also the\nbusiness of returning orderbyvals/orderbynulls, and index-only scans,\nbut that should work too, now.\n\n\n2) simplify / cleanup of the btree batching\n\nThere was a lot of duplication and copy-pasted code in the functions\nthat load the first/next batch, this version gets rid of that and\nreplaces this \"common\" code with _bt_copy_batch() utility function. The\nother index AMs have pretty much the same thing, but adjusted for the\nscan opaque struct specific for that index type.\n\nI'm not saying it's perfect as it is, but it's way better, IMHO.\n\n\n3) making mark/restore work for btree\n\nThis was one of the main limitations - the patch simply disabled\nbatching for plans requiring EXEC_FLAG_MARK, because of issues with\ndetermining the correct position on the page in markpos(). I suggested\nit should be possible to make this work by considering the batch index\nin those calls, and restoring the proper batch in restrpos(), and this\nupdated patch does exactly that.\n\nI haven't done any performance evaluation if batching helps in these\nplans - if we restore to a position we already visited, we may not need\nto prefetch those pages, it might even make things slow. Need some more\nthinking, I guess.\n\nAlso, I'm not quite happy with how the two layers interact. The index AM\nshould not know this much the implementation details of batching, so I\nplan to maybe replace those accesses with a function in indexam.c, or\nsomething like that.\n\nIt's still a bit rough, so I kept it in a separate patch.\n\n\n\nThis now passes \"make check-world\" with asserts, valgrind and all that.\nI still need to put it through some stress testing and benchmarking to\nsee how it performs.\n\nThe layering still needs some more work. I've been quite unhappy with\nhow how much the index AM needs to know about the \"implementation\ndetails\" of the batching, and how unclear it was which layer manages\nwhich fields. I think it's much better now - the goal is that:\n\n* indexam.c updates the scandesc->xs_batch fields, and knows nothing\nabout the internal state of the index AM\n\n* the AM can read scandesc->xs_batch data (perhaps by a function in\nindexam.c), but never updates it\n\nThere are still a couple places where this is violated (e.g. in the\nbtrestrpos which manipulates the batch index directly), but I believe\nthat's fairly easy to solve.\n\n\nFinally, I wrote that the basic contract that makes this possible is\n\"batch should never span multiple leaf pages\". I realized that's\nactually not quite correct - it's perfectly fine for the AM to return\nbatches spanning multiple leaf pages, as long as the AM knows to also\nkeep all the resources (pins, ...) 
until the next batch is requested.\n\nIt would also need to know how to handle kill_prior_tuples (which we now\naccumulate per batch, and process before returning the next one), and\nstuff like that.\n\nIt's just that with the restriction that a batch must not span multiple\nleaf pages, it's fairly trivial to make this work. The changes required\nby the current AM code are very limited, as demonstrated by the patches\nadding this to gist/spgist/hash.\n\nI can imagine the AMs being improved in this direction in the future. We\nalready have a place to keep track of this extra info - the scan opaque\nstruct. The AM could keep information about all the resources needed by\nthe last batch - in a way, we already do that, except that we need only\nexactly the same resources as for regular non-batched scans.\n\nThinking about this a bit more, we'd probably want to allow multiple\nin-flight batches. One of the shortcomings of the current approach with\na single batch is that as we're getting close to the end of the batch,\nwe can't issue prefetches. Only after we're done with that batch, we can\nprefetch more pages. Essentially, there are \"pipeline stall\". I imagine\nwe could allow reading \"future\" batches so that we can issue prefetches,\nand then eventually we'd process those.\n\nBut that would also require some ability to inform the index AM which\nbatches are no longer needed, and can be de-allocated. Hmmm, perhaps it\nwould be possible to make this work with just two batches, as long as\nthey are sized for the proper prefetch distance.\n\nIn any case, that would be a future patch. I'm only mentioning this to\nclarify that I believe the proposed approach does not really have the\n\"single leaf page\" restriction (the AM can do whatever it wants). And\nthat it could even be extended to handle multiple batches.\n\n\n\nregards\n\n-- \nTomas Vondra",
"msg_date": "Fri, 6 Sep 2024 23:49:43 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
},
{
"msg_contents": "Hi,\n\nHere's another version of this patch series, with a couple significant\nimprovements, mostly in the indexam.c and executor layers. The AM code\nremains almost untouched.\n\nI have focused on the simplification / cleanup of the executor code\n(nodeIndexscan and nodeIndexonlyscan). In the previous version there was\nquite a bit of duplicated code - both for the \"regular\" index scans and\nindex-only scans, the \"while getnext\" block was copied, calling either\nthe non-batched or batched functions.\n\nThat is now mostly gone. I managed to move 99% of the differences to the\nindexam.c layer, so that the executor simply calls index_getnext_tid()\nor index_getnext_slot(), and that decides *internally* whether to use\nthe batched version, or not. This means the only new function added to\nthe indexam API is index_batch_add(), which the index AMs use to add\nitems into the batch. For the executor the code remains the same.\n\nThe only exception is that index-only scans need a way to guide the\nprefetching based on the visibility map (we don't want to prefetch\nall-visible pages, because skipping those is the whole point of IOS).\nAnd we also want a way to share the VM check, so that it doesn't need to\nhappen twice. Because for fully-cached workloads this is too expensive.\n\nDoing the first part is trivial - we simply define a callback for the\nbatching, responsible for inspecting the VM and making a decision.\nThat's easy, and fairly clean. Passing the VM check result back is a bit\nawkward, though. The current patch deals with it by just executing the\ncallback again (which just returns the cached result), or doing the VM\ncheck locally (for non-batched version). It's not pretty, because it\nleaks knowledge of the batching into the executor.\n\nI'd appreciate ideas how to solve this in a nicer way.\n\nI've also split the nbtree changes into a separate patch. It used to be\nincluded in the first patch, but I've decided to keep it separate, just\nlike for the other AMs.\n\nI'm now fairly happy with both the executor layer and the (much smaller)\nindexam.c code, and I think it's in a good enough shape for a review.\n\nThe next item on my TODO is cleanup of the nbtree code, particularly the\nmark/restore part in patch 0003. So I'll work on that next. I also plan\nto get back to the index_batch_prefetch() code, which is not wrong but\nwould benefit from a bit of cleanup / clarification etc.\n\n\nregards\n\n-- \nTomas Vondra",
"msg_date": "Mon, 30 Sep 2024 23:16:25 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index prefetching"
}
] |
[
{
"msg_contents": "Hi All:\n\nI upgraded to postgres v15, and I am getting intermittent failures for\nsome of the bin regression tests when building on Windows 10. Example:\n\nperl vcregress.pl bincheck\n\nInstallation complete.\nt/001_initdb.pl .. ok\nAll tests successful.\nFiles=1, Tests=25, 12 wallclock secs ( 0.03 usr + 0.01 sys = 0.05 CPU)\nResult: PASS\nt/001_basic.pl ........... ok\nt/002_nonesuch.pl ........ 1/?\n# Failed test 'checking a non-existent database stderr /(?^:FATAL:\ndatabase \"qqq\" does not exist)/'\n# at t/002_nonesuch.pl line 25.\n# 'pg_amcheck: error: connection to server at\n\"127.0.0.1\", port 49393 failed: server closed the connection\nunexpectedly\n# This probably means the server terminated abnormally\n# before or while processing the request.\n# '\n# doesn't match '(?^:FATAL: database \"qqq\" does not exist)'\nt/002_nonesuch.pl ........ 97/? # Looks like you failed 1 test of 100.\nt/002_nonesuch.pl ........ Dubious, test returned 1 (wstat 256, 0x100)\nFailed 1/100 subtests\nt/003_check.pl ........... ok\nt/004_verify_heapam.pl ... ok\nt/005_opclass_damage.pl .. ok\n\nTest Summary Report\n-------------------\nt/002_nonesuch.pl (Wstat: 256 Tests: 100 Failed: 1)\n Failed test: 3\n Non-zero exit status: 1\nFiles=5, Tests=196, 86 wallclock secs ( 0.11 usr + 0.08 sys = 0.19 CPU)\nResult: FAIL\n...\n\nI see a similar failure on the build farm at:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-06-03%2020%3A03%3A07\n\nI have also received the same error in the pg_dump test as the build\nserver above. Are these errors expected? Are they due to the fact that\nwindows tests use SSPI? It seems to work correctly if I recreate all\nof the steps with an HBA that does not use SSPI.\n\nthanks,\nRussell\n\n\n",
"msg_date": "Thu, 8 Jun 2023 13:41:36 -0400",
"msg_from": "Russell Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres v15 windows bincheck regression test failures"
},
{
"msg_contents": "On 2023-06-08 Th 13:41, Russell Foster wrote:\n> Hi All:\n>\n> I upgraded to postgres v15, and I am getting intermittent failures for\n> some of the bin regression tests when building on Windows 10. Example:\n>\n> perl vcregress.pl bincheck\n>\n> Installation complete.\n> t/001_initdb.pl .. ok\n> All tests successful.\n> Files=1, Tests=25, 12 wallclock secs ( 0.03 usr + 0.01 sys = 0.05 CPU)\n> Result: PASS\n> t/001_basic.pl ........... ok\n> t/002_nonesuch.pl ........ 1/?\n> # Failed test 'checking a non-existent database stderr /(?^:FATAL:\n> database \"qqq\" does not exist)/'\n> # at t/002_nonesuch.pl line 25.\n> # 'pg_amcheck: error: connection to server at\n> \"127.0.0.1\", port 49393 failed: server closed the connection\n> unexpectedly\n> # This probably means the server terminated abnormally\n> # before or while processing the request.\n> # '\n> # doesn't match '(?^:FATAL: database \"qqq\" does not exist)'\n> t/002_nonesuch.pl ........ 97/? # Looks like you failed 1 test of 100.\n> t/002_nonesuch.pl ........ Dubious, test returned 1 (wstat 256, 0x100)\n> Failed 1/100 subtests\n> t/003_check.pl ........... ok\n> t/004_verify_heapam.pl ... ok\n> t/005_opclass_damage.pl .. ok\n>\n> Test Summary Report\n> -------------------\n> t/002_nonesuch.pl (Wstat: 256 Tests: 100 Failed: 1)\n> Failed test: 3\n> Non-zero exit status: 1\n> Files=5, Tests=196, 86 wallclock secs ( 0.11 usr + 0.08 sys = 0.19 CPU)\n> Result: FAIL\n> ...\n>\n> I see a similar failure on the build farm at:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-06-03%2020%3A03%3A07\n>\n> I have also received the same error in the pg_dump test as the build\n> server above. Are these errors expected? Are they due to the fact that\n> windows tests use SSPI? It seems to work correctly if I recreate all\n> of the steps with an HBA that does not use SSPI.\n>\n\nIn general you're better off using something like this\n\n\nset PG_TEST_USE_UNIX_SOCKETS=1\nset PG_REGRESS_SOCK_DIR=%LOCALAPPDATA%\\Local\\temp\n\n\nThat avoids several sorts of issues.\n\n\ncheers\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-08 Th 13:41, Russell Foster\n wrote:\n\n\nHi All:\n\nI upgraded to postgres v15, and I am getting intermittent failures for\nsome of the bin regression tests when building on Windows 10. Example:\n\nperl vcregress.pl bincheck\n\nInstallation complete.\nt/001_initdb.pl .. ok\nAll tests successful.\nFiles=1, Tests=25, 12 wallclock secs ( 0.03 usr + 0.01 sys = 0.05 CPU)\nResult: PASS\nt/001_basic.pl ........... ok\nt/002_nonesuch.pl ........ 1/?\n# Failed test 'checking a non-existent database stderr /(?^:FATAL:\ndatabase \"qqq\" does not exist)/'\n# at t/002_nonesuch.pl line 25.\n# 'pg_amcheck: error: connection to server at\n\"127.0.0.1\", port 49393 failed: server closed the connection\nunexpectedly\n# This probably means the server terminated abnormally\n# before or while processing the request.\n# '\n# doesn't match '(?^:FATAL: database \"qqq\" does not exist)'\nt/002_nonesuch.pl ........ 97/? # Looks like you failed 1 test of 100.\nt/002_nonesuch.pl ........ Dubious, test returned 1 (wstat 256, 0x100)\nFailed 1/100 subtests\nt/003_check.pl ........... ok\nt/004_verify_heapam.pl ... ok\nt/005_opclass_damage.pl .. 
ok\n\nTest Summary Report\n-------------------\nt/002_nonesuch.pl (Wstat: 256 Tests: 100 Failed: 1)\n Failed test: 3\n Non-zero exit status: 1\nFiles=5, Tests=196, 86 wallclock secs ( 0.11 usr + 0.08 sys = 0.19 CPU)\nResult: FAIL\n...\n\nI see a similar failure on the build farm at:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-06-03%2020%3A03%3A07\n\nI have also received the same error in the pg_dump test as the build\nserver above. Are these errors expected? Are they due to the fact that\nwindows tests use SSPI? It seems to work correctly if I recreate all\nof the steps with an HBA that does not use SSPI.\n\n\n\n\n\nIn general you're better off using something like this\n\n\nset PG_TEST_USE_UNIX_SOCKETS=1\nset PG_REGRESS_SOCK_DIR=%LOCALAPPDATA%\\Local\\temp\n\n\nThat avoids several sorts of issues.\n\n\ncheers\n\nandrew\n\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 8 Jun 2023 15:33:51 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres v15 windows bincheck regression test failures"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 3:33 PM Andrew Dunstan <[email protected]> wrote:\n>\n>\n> On 2023-06-08 Th 13:41, Russell Foster wrote:\n>\n> Hi All:\n>\n> I upgraded to postgres v15, and I am getting intermittent failures for\n> some of the bin regression tests when building on Windows 10. Example:\n>\n> perl vcregress.pl bincheck\n>\n> Installation complete.\n> t/001_initdb.pl .. ok\n> All tests successful.\n> Files=1, Tests=25, 12 wallclock secs ( 0.03 usr + 0.01 sys = 0.05 CPU)\n> Result: PASS\n> t/001_basic.pl ........... ok\n> t/002_nonesuch.pl ........ 1/?\n> # Failed test 'checking a non-existent database stderr /(?^:FATAL:\n> database \"qqq\" does not exist)/'\n> # at t/002_nonesuch.pl line 25.\n> # 'pg_amcheck: error: connection to server at\n> \"127.0.0.1\", port 49393 failed: server closed the connection\n> unexpectedly\n> # This probably means the server terminated abnormally\n> # before or while processing the request.\n> # '\n> # doesn't match '(?^:FATAL: database \"qqq\" does not exist)'\n> t/002_nonesuch.pl ........ 97/? # Looks like you failed 1 test of 100.\n> t/002_nonesuch.pl ........ Dubious, test returned 1 (wstat 256, 0x100)\n> Failed 1/100 subtests\n> t/003_check.pl ........... ok\n> t/004_verify_heapam.pl ... ok\n> t/005_opclass_damage.pl .. ok\n>\n> Test Summary Report\n> -------------------\n> t/002_nonesuch.pl (Wstat: 256 Tests: 100 Failed: 1)\n> Failed test: 3\n> Non-zero exit status: 1\n> Files=5, Tests=196, 86 wallclock secs ( 0.11 usr + 0.08 sys = 0.19 CPU)\n> Result: FAIL\n> ...\n>\n> I see a similar failure on the build farm at:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-06-03%2020%3A03%3A07\n>\n> I have also received the same error in the pg_dump test as the build\n> server above. Are these errors expected? Are they due to the fact that\n> windows tests use SSPI? It seems to work correctly if I recreate all\n> of the steps with an HBA that does not use SSPI.\n>\n>\n> In general you're better off using something like this\n>\n>\n> set PG_TEST_USE_UNIX_SOCKETS=1\n> set PG_REGRESS_SOCK_DIR=%LOCALAPPDATA%\\Local\\temp\n>\n>\n> That avoids several sorts of issues.\n>\n>\n> cheers\n>\n> andrew\n>\nThanks for responding! This does indeed work, but again it is no\nlonger using SSPI, nor the sockets that are used in the runtime. Plus\nthere is this scary comment in code:\n\n/*\n* We don't use Unix-domain sockets on Windows by default, even if the\n* build supports them. (See comment at remove_temp() for a reason.)\n* Override at your own risk.\n*/\n\nIs there some sort of race condition in the SSPI code that sometimes\ndoesn't gracefully finish/close the connection when the backend\ndecides to exit due to error?\n\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Jun 2023 07:49:52 -0400",
"msg_from": "Russell Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres v15 windows bincheck regression test failures"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 07:49:52AM -0400, Russell Foster wrote:\n> /*\n> * We don't use Unix-domain sockets on Windows by default, even if the\n> * build supports them. (See comment at remove_temp() for a reason.)\n> * Override at your own risk.\n> */\n> \n> Is there some sort of race condition in the SSPI code that sometimes\n> doesn't gracefully finish/close the connection when the backend\n> decides to exit due to error?\n\nNo. remove_temp() is part of test driver \"pg_regress\". Non-test usage is\nunaffected. Even for test usage, folks have reported no failures from the\ncause mentioned in the remove_temp() comment.\n\n\n",
"msg_date": "Thu, 27 Jul 2023 19:17:15 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres v15 windows bincheck regression test failures"
},
{
"msg_contents": "Hello,\n\n28.07.2023 05:17, Noah Misch wrote:\n> On Tue, Jun 20, 2023 at 07:49:52AM -0400, Russell Foster wrote:\n>> /*\n>> * We don't use Unix-domain sockets on Windows by default, even if the\n>> * build supports them. (See comment at remove_temp() for a reason.)\n>> * Override at your own risk.\n>> */\n>>\n>> Is there some sort of race condition in the SSPI code that sometimes\n>> doesn't gracefully finish/close the connection when the backend\n>> decides to exit due to error?\n> No. remove_temp() is part of test driver \"pg_regress\". Non-test usage is\n> unaffected. Even for test usage, folks have reported no failures from the\n> cause mentioned in the remove_temp() comment.\n\nIt seems to me that it's just another manifestation of bug #16678 ([1]).\nSee also commits 6051857fc and 29992a6a5.\n\n[1] https://www.postgresql.org/message-id/flat/16678-253e48d34dc0c376%40postgresql.org\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 28 Jul 2023 07:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres v15 windows bincheck regression test failures"
},
{
"msg_contents": "On Fri, Jul 28, 2023 at 07:00:01AM +0300, Alexander Lakhin wrote:\n> 28.07.2023 05:17, Noah Misch wrote:\n> >On Tue, Jun 20, 2023 at 07:49:52AM -0400, Russell Foster wrote:\n> >>/*\n> >>* We don't use Unix-domain sockets on Windows by default, even if the\n> >>* build supports them. (See comment at remove_temp() for a reason.)\n> >>* Override at your own risk.\n> >>*/\n> >>\n> >>Is there some sort of race condition in the SSPI code that sometimes\n> >>doesn't gracefully finish/close the connection when the backend\n> >>decides to exit due to error?\n> >No. remove_temp() is part of test driver \"pg_regress\". Non-test usage is\n> >unaffected. Even for test usage, folks have reported no failures from the\n> >cause mentioned in the remove_temp() comment.\n> \n> It seems to me that it's just another manifestation of bug #16678 ([1]).\n> See also commits 6051857fc and 29992a6a5.\n> \n> [1] https://www.postgresql.org/message-id/flat/16678-253e48d34dc0c376%40postgresql.org\n\nThat was about a bug that appears when using TCP sockets. The remove_temp()\ncomment involves code that doesn't run when using TCP sockets. I don't think\nthey can be manifestations of the same phenomenon.\n\n\n",
"msg_date": "Fri, 28 Jul 2023 04:42:14 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres v15 windows bincheck regression test failures"
},
{
"msg_contents": "28.07.2023 14:42, Noah Misch wrpte:\n> That was about a bug that appears when using TCP sockets. ...\n\nYes, and according to the failed test output, TCP sockets were used:\n\n# 'pg_amcheck: error: connection to server at\n\"127.0.0.1\", port 49393 failed: server closed the connection\nunexpectedly\n# This probably means the server terminated abnormally\n# before or while processing the request.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 28 Jul 2023 16:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres v15 windows bincheck regression test failures"
},
{
"msg_contents": "On Fri, Jul 28, 2023 at 04:00:00PM +0300, Alexander Lakhin wrote:\n> 28.07.2023 14:42, Noah Misch wrpte:\n> >That was about a bug that appears when using TCP sockets. ...\n> \n> Yes, and according to the failed test output, TCP sockets were used:\n> \n> #������������������ 'pg_amcheck: error: connection to server at\n> \"127.0.0.1\", port 49393 failed: server closed the connection\n> unexpectedly\n> #������ This probably means the server terminated abnormally\n> #������ before or while processing the request.\n\nI think we were talking about different details. Agreed, bug #16678 probably\ndid cause the failure in the original post. I was saying that bug has no\nconnection to the \"scary comment\", though.\n\n\n",
"msg_date": "Sat, 29 Jul 2023 04:24:51 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres v15 windows bincheck regression test failures"
}
] |
[
{
"msg_contents": "Pushing SELECT statements at socket speeds with prepared statements is a\nsynthetic benchmark that normally demos big pgbench numbers. My benchmark\nfarm moved to Ubuntu 23.04/kernel 6.2.0-20 last month, and that test is\nbadly broken on the system PG15 at larger core counts, with as much as an\n85% drop from expectations. Since this is really just a benchmark workload\nthe user impact is very narrow, probably zero really, but as the severity\nof the problem is high we should get to the bottom of what's going on.\n\nFirst round of profile data suggests the lost throughput is going here:\nOverhead Shared Object Symbol\n 74.34% [kernel] [k] osq_lock\n 2.26% [kernel] [k] mutex_spin_on_owner\n\nWhile I'd like to just say this is a Linux issue and that's early adopter\nlife with non-LTS Ubuntu releases, that doesn't explain why a PGDG PG14\nworks perfectly on the same systems?\n\nQuick test to find if you're impacted: on the server and using sockets,\nrun a 10 second SELECT test with/without preparation using 1 or 2\nclients/[core|thread] and see if preparation is the slower result. Here's\na PGDG PG14 on port 5434 as a baseline, next to Ubuntu 23.04's regular\nPG15, all using the PG15 pgbench on AMD 5950X:\n\n$ pgbench -i -s 100 pgbench -p 5434\n$ pgbench -S -T 10 -c 32 -j 32 -M prepared -p 5434 pgbench\npgbench (14.8 (Ubuntu 14.8-1.pgdg23.04+1))\ntps = 1058195.197298 (without initial connection time)\n$ pgbench -S -T 10 -c 32 -j 32 -p 5434 pgbench\npgbench (14.8 (Ubuntu 14.8-1.pgdg23.04+1))\ntps = 553120.142503 (without initial connection time)\n\n$ pgbench -i -s 100 pgbench\n$ pgbench -S -T 10 -c 32 -j 32 -M prepared pgbench\npgbench (15.3 (Ubuntu 15.3-0ubuntu0.23.04.1))\ntps = 170952.097609 (without initial connection time)\n$ pgbench -S -T 10 -c 32 -j 32 pgbench\npgbench (15.3 (Ubuntu 15.3-0ubuntu0.23.04.1))\ntps = 314585.347022 (without initial connection time)\n\nConnecting over sockets with preparation is usually a cheat code that lets\nnewer/bigger processors clear a million TPS like I did here. I don't think\nthat reflects any real use case given the unpopularity of preparation in\nORMs, plus needing a local sockets connection to reach top rates.\n\nAttached are full scaling graphs for all 4 combinations on this AMD 32\nthread 5950X, and an Intel i5-13600K with 20 threads and similar impact.\nThe regular, unprepared sockets peak speeds took a solid hit in PG15 from\nthis issue too. I could use some confirmation of where this happens from\nother tester's hardware and Linux kernels.\n\nFor completeness sake, peaking at \"perf top\" shows the hottest code\nsections for the bad results are:\n\n$ pgbench -S -T 10 -c 32 -j 32 -M prepared pgbench\npgbench (15.3 (Ubuntu 15.3-0ubuntu0.23.04.1))\ntps = 170952.097609 (without initial connection time)\nOverhead Shared Object Symbol\n 74.34% [kernel] [k] osq_lock\n 2.26% [kernel] [k] mutex_spin_on_owner\n 0.40% postgres [.] _bt_compare\n 0.27% libc.so.6 [.] __dcigettext\n 0.24% postgres [.] PostgresMain\n\n$ pgbench -S -T 10 -c 32 -j 32 pgbench\npgbench (15.3 (Ubuntu 15.3-0ubuntu0.23.04.1))\ntps = 314585.347022 (without initial connection time)\n 36.24% [kernel] [k] osq_lock\n 2.73% [kernel] [k] mutex_spin_on_owner\n 1.41% postgres [.] base_yyparse\n 0.73% postgres [.] _bt_compare\n 0.70% postgres [.] hash_search_with_hash_value\n 0.62% postgres [.] 
core_yylex\n\nHere's what good ones look like:\n\n$ pgbench -S -T 10 -c 32 -j 32 -M prepared -p 5434 pgbench\npgbench (14.8 (Ubuntu 14.8-1.pgdg23.04+1))\ntps = 1058195.197298 (without initial connection time)\nOverhead Shared Object Symbol\n 2.37% postgres [.] _bt_compare\n 2.07% [kernel] [k] psi_group_change\n 1.42% postgres [.] PostgresMain\n 1.31% postgres [.] hash_search_with_hash_value\n 1.08% [kernel] [k] __update_load_avg_se\n\n$ pgbench -S -T 10 -c 32 -j 32 -p 5434 pgbench\npgbench (14.8 (Ubuntu 14.8-1.pgdg23.04+1))\ntps = 553120.142503 (without initial connection time)\n 2.35% postgres [.] base_yyparse\n 1.37% postgres [.] _bt_compare\n 1.11% postgres [.] core_yylex\n 1.09% [kernel] [k] psi_group_change\n 0.99% postgres [.] hash_search_with_hash_value\n\nThere's been plenty of recent chatter on LKML about *osq_lock*, in January\nIntel reported a 20% benchmark regression on UnixBench that might be\nrelated. Work is still ongoing this week:\n\nhttps://lore.kernel.org/linux-mm/[email protected]/\nhttps://lkml.org/lkml/2023/6/6/706\n\nSeems time to join that party! Probably going to roll back the Intel\nsystem to 22.04 just so I can finish 16b1 tests on schedule on that one.\n(I only moved to 23.04 to get a major update to AMD's pstate kernel driver,\nwhich went great until hitting this test) Also haven't checked yet if the\nPGDG PG15 is any different from the stock Ubuntu one; wanted to get this\nreport out first.\n\n--\nGreg Smith [email protected]\nDirector of Open Source Strategy",
"msg_date": "Thu, 8 Jun 2023 15:08:57 -0400",
"msg_from": "Gregory Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Major pgbench synthetic SELECT workload regression, Ubuntu 23.04+PG15"
},
{
"msg_contents": "Gregory Smith <[email protected]> writes:\n> Pushing SELECT statements at socket speeds with prepared statements is a\n> synthetic benchmark that normally demos big pgbench numbers. My benchmark\n> farm moved to Ubuntu 23.04/kernel 6.2.0-20 last month, and that test is\n> badly broken on the system PG15 at larger core counts, with as much as an\n> 85% drop from expectations.\n> ... I could use some confirmation of where this happens from\n> other tester's hardware and Linux kernels.\n\nFWIW, I can't reproduce any such effect with PG HEAD on RHEL 8.8\n(4.18.0 kernel) or Fedora 37 (6.2.14 kernel). Admittedly this is\nwith less-beefy hardware than you're using, but your graphs say\nthis should be obvious with even a dozen clients, and I don't\nsee that. I'm getting results like\n\n$ pgbench -S -T 10 -c 16 -j 16 -M prepared pgbench\ntps = 472503.628370 (without initial connection time)\n$ pgbench -S -T 10 -c 16 -j 16 pgbench\ntps = 297844.301319 (without initial connection time)\n\nwhich is about what I'd expect.\n\nCould it be that the breakage is Ubuntu-specific? Seems unlikely,\nbut ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Jun 2023 15:52:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major pgbench synthetic SELECT workload regression,\n Ubuntu 23.04+PG15"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 3:09 PM Gregory Smith <[email protected]> wrote:\n> Pushing SELECT statements at socket speeds with prepared statements is a synthetic benchmark that normally demos big pgbench numbers. My benchmark farm moved to Ubuntu 23.04/kernel 6.2.0-20 last month, and that test is badly broken on the system PG15 at larger core counts, with as much as an 85% drop from expectations.\n> Attached are full scaling graphs for all 4 combinations on this AMD 32 thread 5950X, and an Intel i5-13600K with 20 threads and similar impact. The regular, unprepared sockets peak speeds took a solid hit in PG15 from this issue too. I could use some confirmation of where this happens from other tester's hardware and Linux kernels.\n\nSince it doesn't look like you included results on pre-23x Ubuntu, I\nthought I would reply with my own results using your example. I also\nhave a 32 thread AMD 5950X but am on Ubuntu 22.10 (kernel 5.19). I did\nnot see the regression you mention.\n\nHEAD\n pgbench -S -T 10 -c 32 -j 32 -M prepared -p 5432 pgbench\n tps = 837819.220854 (without initial connection time)\n\n pgbench -S -T 10 -c 32 -j 32 -M simple -p 5432 pgbench\n tps = 576845.930845 (without initial connection time)\n\nREL_15_STABLE\n pgbench -S -T 10 -c 32 -j 32 -M prepared -p 5432 pgbench\n tps = 794380.991666 (without initial connection time)\n\n pgbench -S -T 10 -c 32 -j 32 -M simple -p 5432 pgbench\n tps = 534358.379838 (without initial connection time)\n\n- Melanie\n\n\n",
"msg_date": "Thu, 8 Jun 2023 17:26:30 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major pgbench synthetic SELECT workload regression,\n Ubuntu 23.04+PG15"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-08 15:08:57 -0400, Gregory Smith wrote:\n> Pushing SELECT statements at socket speeds with prepared statements is a\n> synthetic benchmark that normally demos big pgbench numbers. My benchmark\n> farm moved to Ubuntu 23.04/kernel 6.2.0-20 last month, and that test is\n> badly broken on the system PG15 at larger core counts, with as much as an\n> 85% drop from expectations. Since this is really just a benchmark workload\n> the user impact is very narrow, probably zero really, but as the severity\n> of the problem is high we should get to the bottom of what's going on.\n\n\n> First round of profile data suggests the lost throughput is going here:\n> Overhead Shared Object Symbol\n> 74.34% [kernel] [k] osq_lock\n> 2.26% [kernel] [k] mutex_spin_on_owner\n\nCould you get a profile with call graphs? We need to know what leads to all\nthose osq_lock calls.\n\nperf record --call-graph dwarf -a sleep 1\n\nor such should do the trick, if run while the workload is running.\n\n\n> Quick test to find if you're impacted: on the server and using sockets,\n> run a 10 second SELECT test with/without preparation using 1 or 2\n> clients/[core|thread] and see if preparation is the slower result. Here's\n> a PGDG PG14 on port 5434 as a baseline, next to Ubuntu 23.04's regular\n> PG15, all using the PG15 pgbench on AMD 5950X:\n\nI think it's unwise to compare builds of such different vintage. The compiler\noptions and compiler version can have substantial effects.\n\n\n> $ pgbench -i -s 100 pgbench -p 5434\n> $ pgbench -S -T 10 -c 32 -j 32 -M prepared -p 5434 pgbench\n> pgbench (14.8 (Ubuntu 14.8-1.pgdg23.04+1))\n> tps = 1058195.197298 (without initial connection time)\n\nI recommend also using -P1. Particularly when using unix sockets, the\nspecifics of how client threads and server threads are scheduled plays a huge\nrole. How large a role can change significantly between runs and between\nfairly minor changes to how things are executed (e.g. between major PG\nversions).\n\nE.g. on my workstation (two sockets, 10 cores/20 threads each), with 32\nclients, performance changes back and forth between ~600k and ~850k. Whereas\nwith 42 clients, it's steadily at 1.1M, with little variance.\n\nI also have seen very odd behaviour on larger machines when\n/proc/sys/kernel/sched_autogroup_enabled is set to 1.\n\n\n> There's been plenty of recent chatter on LKML about *osq_lock*, in January\n> Intel reported a 20% benchmark regression on UnixBench that might be\n> related. Work is still ongoing this week:\n\nI've seen such issues in the past, primarily due to contention internal to\ncgroups, when the memory controller is enabled. IIRC that could be alleviated\nto a substantial degree with cgroup.memory=nokmem.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 8 Jun 2023 15:18:07 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major pgbench synthetic SELECT workload regression, Ubuntu\n 23.04+PG15"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-08 15:18:07 -0700, Andres Freund wrote:\n> E.g. on my workstation (two sockets, 10 cores/20 threads each), with 32\n> clients, performance changes back and forth between ~600k and ~850k. Whereas\n> with 42 clients, it's steadily at 1.1M, with little variance.\n\nFWIW, this is with linux 6.2.12, compiled by myself though.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 8 Jun 2023 15:23:13 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major pgbench synthetic SELECT workload regression, Ubuntu\n 23.04+PG15"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 6:18 PM Andres Freund <[email protected]> wrote:\n\n> Could you get a profile with call graphs? We need to know what leads to all\n> those osq_lock calls.\n> perf record --call-graph dwarf -a sleep 1\n> or such should do the trick, if run while the workload is running.\n>\n\nI'm doing something wrong because I can't find the slow part in the perf\ndata; I'll get back to you on this one.\n\n\n> I think it's unwise to compare builds of such different vintage. The\n> compiler\n> options and compiler version can have substantial effects.\n>\nI recommend also using -P1. Particularly when using unix sockets, the\n> specifics of how client threads and server threads are scheduled plays a\n> huge\n> role.\n\n\nFair suggestions, those graphs come out of pgbench-tools where I profile\nall the latency, fast results for me are ruler flat. It's taken me several\ngenerations of water cooling experiments to reach that point, but even that\nonly buys me 10 seconds before I can overload a CPU to higher latency with\ntougher workloads. Here's a few seconds of slightly updated examples, now\nwith matching PGDG sourced 14+15 on the 5950X and with\nsched_autogroup_enabled=0 too:\n\n$ pgbench -S -T 10 -c 32 -j 32 -M prepared -p 5434 -P 1 pgbench\npgbench (14.8 (Ubuntu 14.8-1.pgdg23.04+1))\nprogress: 1.0 s, 1032929.3 tps, lat 0.031 ms stddev 0.004\nprogress: 2.0 s, 1051239.0 tps, lat 0.030 ms stddev 0.001\nprogress: 3.0 s, 1047528.9 tps, lat 0.030 ms stddev 0.008...\n$ pgbench -S -T 10 -c 32 -j 32 -M prepared -p 5432 -P 1 pgbench\npgbench (15.3 (Ubuntu 15.3-1.pgdg23.04+1))\nprogress: 1.0 s, 171816.4 tps, lat 0.184 ms stddev 0.029, 0 failed\nprogress: 2.0 s, 173501.0 tps, lat 0.184 ms stddev 0.024, 0 failed...\n\nOn the slow runs it will even do this, watch my 5950X accomplish 0 TPS for\na second!\n\nprogress: 38.0 s, 177376.9 tps, lat 0.180 ms stddev 0.039, 0 failed\nprogress: 39.0 s, 35861.5 tps, lat 0.181 ms stddev 0.032, 0 failed\nprogress: 40.0 s, 0.0 tps, lat 0.000 ms stddev 0.000, 0 failed\nprogress: 41.0 s, 222.1 tps, lat 304.500 ms stddev 741.413, 0 failed\nprogress: 42.0 s, 101199.6 tps, lat 0.530 ms stddev 18.862, 0 failed\nprogress: 43.0 s, 98286.9 tps, lat 0.328 ms stddev 8.156, 0 failed\n\nGonna have to measure seconds/transaction if this gets any worse.\n\n\n> I've seen such issues in the past, primarily due to contention internal to\n> cgroups, when the memory controller is enabled. IIRC that could be\n> alleviated\n> to a substantial degree with cgroup.memory=nokmem.\n>\n\nI cannot express on-list how much I dislike everything about the cgroups\ncode. Let me dig up the right call graph data first and will know more\nthen. The thing that keeps me from chasing kernel tuning too hard is\nseeing the PG14 go perfectly every time. This is a really weird one. All\nthe suggestions much appreciated.\n\nOn Thu, Jun 8, 2023 at 6:18 PM Andres Freund <[email protected]> wrote:Could you get a profile with call graphs? We need to know what leads to all\nthose osq_lock calls.\nperf record --call-graph dwarf -a sleep 1\nor such should do the trick, if run while the workload is running.I'm doing something wrong because I can't find the slow part in the perf data; I'll get back to you on this one. I think it's unwise to compare builds of such different vintage. The compiler\noptions and compiler version can have substantial effects. \nI recommend also using -P1. 
Particularly when using unix sockets, the\nspecifics of how client threads and server threads are scheduled plays a huge\nrole.Fair suggestions, those graphs come out of pgbench-tools where I profile all the latency, fast results for me are ruler flat. It's taken me several generations of water cooling experiments to reach that point, but even that only buys me 10 seconds before I can overload a CPU to higher latency with tougher workloads. Here's a few seconds of slightly updated examples, now with matching PGDG sourced 14+15 on the 5950X and with sched_autogroup_enabled=0 too:$ pgbench -S -T 10 -c 32 -j 32 -M prepared -p 5434 -P 1 pgbenchpgbench (14.8 (Ubuntu 14.8-1.pgdg23.04+1))progress: 1.0 s, 1032929.3 tps, lat 0.031 ms stddev 0.004progress: 2.0 s, 1051239.0 tps, lat 0.030 ms stddev 0.001progress: 3.0 s, 1047528.9 tps, lat 0.030 ms stddev 0.008...$ pgbench -S -T 10 -c 32 -j 32 -M prepared -p 5432 -P 1 pgbenchpgbench (15.3 (Ubuntu 15.3-1.pgdg23.04+1))progress: 1.0 s, 171816.4 tps, lat 0.184 ms stddev 0.029, 0 failedprogress: 2.0 s, 173501.0 tps, lat 0.184 ms stddev 0.024, 0 failed...On the slow runs it will even do this, watch my 5950X accomplish 0 TPS for a second!progress: 38.0 s, 177376.9 tps, lat 0.180 ms stddev 0.039, 0 failedprogress: 39.0 s, 35861.5 tps, lat 0.181 ms stddev 0.032, 0 failedprogress: 40.0 s, 0.0 tps, lat 0.000 ms stddev 0.000, 0 failedprogress: 41.0 s, 222.1 tps, lat 304.500 ms stddev 741.413, 0 failedprogress: 42.0 s, 101199.6 tps, lat 0.530 ms stddev 18.862, 0 failedprogress: 43.0 s, 98286.9 tps, lat 0.328 ms stddev 8.156, 0 failedGonna have to measure seconds/transaction if this gets any worse. I've seen such issues in the past, primarily due to contention internal to\ncgroups, when the memory controller is enabled. IIRC that could be alleviated\nto a substantial degree with cgroup.memory=nokmem.I cannot express on-list how much I dislike everything about the cgroups code. Let me dig up the right call graph data first and will know more then. The thing that keeps me from chasing kernel tuning too hard is seeing the PG14 go perfectly every time. This is a really weird one. All the suggestions much appreciated.",
"msg_date": "Thu, 8 Jun 2023 20:20:18 -0400",
"msg_from": "Gregory Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major pgbench synthetic SELECT workload regression,\n Ubuntu 23.04+PG15"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-08 20:20:18 -0400, Gregory Smith wrote:\n> On Thu, Jun 8, 2023 at 6:18 PM Andres Freund <[email protected]> wrote:\n> \n> > Could you get a profile with call graphs? We need to know what leads to all\n> > those osq_lock calls.\n> > perf record --call-graph dwarf -a sleep 1\n> > or such should do the trick, if run while the workload is running.\n> >\n> \n> I'm doing something wrong because I can't find the slow part in the perf\n> data; I'll get back to you on this one.\n\nYou might need to add --no-children to the perf report invocation, otherwise\nit'll show you the call graph inverted.\n\n- Andres\n\n\n",
"msg_date": "Thu, 8 Jun 2023 18:21:47 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major pgbench synthetic SELECT workload regression, Ubuntu\n 23.04+PG15"
},
{
"msg_contents": "Let me start with the happy ending to this thread:\n\n$ pgbench -S -T 10 -c 32 -j 32 -M prepared -P 1 pgbench\npgbench (15.3 (Ubuntu 15.3-1.pgdg23.04+1))\nprogress: 1.0 s, 1015713.0 tps, lat 0.031 ms stddev 0.007, 0 failed\nprogress: 2.0 s, 1083780.4 tps, lat 0.029 ms stddev 0.007, 0 failed...\nprogress: 8.0 s, 1084574.1 tps, lat 0.029 ms stddev 0.001, 0 failed\nprogress: 9.0 s, 1082665.1 tps, lat 0.029 ms stddev 0.001, 0 failed\ntps = 1077739.910163 (without initial connection time)\n\nWhich even seems a whole 0.9% faster than 14 on this hardware! The wonders\nnever cease.\n\nOn Thu, Jun 8, 2023 at 9:21 PM Andres Freund <[email protected]> wrote:\n\n> You might need to add --no-children to the perf report invocation,\n> otherwise\n> it'll show you the call graph inverted.\n>\n\nMy problem was not writing kernel symbols out, I was only getting addresses\nfor some reason. This worked:\n\n sudo perf record -g --call-graph dwarf -d --phys-data -a sleep 1\n perf report --stdio\n\nAnd once I looked at the stack trace I immediately saw the problem, fixed\nthe config option, and this report is now closed as PEBKAC on my part.\nSomehow I didn't notice the 15 installs on both systems had\nlog_min_duration_statement=0, and that's why the performance kept dropping\n*only* on the fastest runs.\n\nWhat I've learned today then is that if someone sees osq_lock in simple\nperf top out on oddly slow server, it's possible they are overloading a\ndevice writing out log file data, and leaving out the boring parts the call\ntrace you might see is:\n\nEmitErrorReport\n __GI___libc_write\n ksys_write\n __fdget_pos\n mutex_lock\n __mutex_lock_slowpath\n __mutex_lock.constprop.0\n 71.20% osq_lock\n\nEveryone was stuck trying to find the end of the log file to write to it,\nand that was the entirety of the problem. Hope that call trace and info\nhelps out some future goofball making the same mistake. I'd wager this\nwill come up again.\n\nThanks to everyone who helped out and I'm looking forward to PG16 testing\nnow that I have this rusty, embarrassing warm-up out of the way.\n\n--\nGreg Smith [email protected]\nDirector of Open Source Strategy\n\nLet me start with the happy ending to this thread:$ pgbench -S -T 10 -c 32 -j 32 -M prepared -P 1 pgbenchpgbench (15.3 (Ubuntu 15.3-1.pgdg23.04+1))progress: 1.0 s, 1015713.0 tps, lat 0.031 ms stddev 0.007, 0 failedprogress: 2.0 s, 1083780.4 tps, lat 0.029 ms stddev 0.007, 0 failed...progress: 8.0 s, 1084574.1 tps, lat 0.029 ms stddev 0.001, 0 failedprogress: 9.0 s, 1082665.1 tps, lat 0.029 ms stddev 0.001, 0 failedtps = 1077739.910163 (without initial connection time)Which even seems a whole 0.9% faster than 14 on this hardware! The wonders never cease.On Thu, Jun 8, 2023 at 9:21 PM Andres Freund <[email protected]> wrote:You might need to add --no-children to the perf report invocation, otherwise\nit'll show you the call graph inverted.My problem was not writing kernel symbols out, I was only getting addresses for some reason. This worked: sudo perf record -g --call-graph dwarf -d --phys-data -a sleep 1 perf report --stdioAnd once I looked at the stack trace I immediately saw the problem, fixed the config option, and this report is now closed as PEBKAC on my part. 
Somehow I didn't notice the 15 installs on both systems had log_min_duration_statement=0, and that's why the performance kept dropping *only* on the fastest runs.What I've learned today then is that if someone sees osq_lock in simple perf top out on oddly slow server, it's possible they are overloading a device writing out log file data, and leaving out the boring parts the call trace you might see is:EmitErrorReport __GI___libc_write ksys_write __fdget_pos mutex_lock __mutex_lock_slowpath __mutex_lock.constprop.0 71.20% osq_lockEveryone was stuck trying to find the end of the log file to write to it, and that was the entirety of the problem. Hope that call trace and info helps out some future goofball making the same mistake. I'd wager this will come up again.Thanks to everyone who helped out and I'm looking forward to PG16 testing now that I have this rusty, embarrassing warm-up out of the way.--Greg Smith [email protected] of Open Source Strategy",
"msg_date": "Fri, 9 Jun 2023 03:27:51 -0400",
"msg_from": "Gregory Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major pgbench synthetic SELECT workload regression,\n Ubuntu 23.04+PG15"
},
{
"msg_contents": "On Fri, Jun 9, 2023 at 12:28 AM Gregory Smith <[email protected]> wrote:\n>\n> Let me start with the happy ending to this thread:\n\nPhew! I'm sure everyone would be relieved to know this was a false alarm.\n\n> On Thu, Jun 8, 2023 at 9:21 PM Andres Freund <[email protected]> wrote:\n>>\n>> You might need to add --no-children to the perf report invocation, otherwise\n>> it'll show you the call graph inverted.\n>\n>\n> My problem was not writing kernel symbols out, I was only getting addresses for some reason. This worked:\n>\n> sudo perf record -g --call-graph dwarf -d --phys-data -a sleep 1\n> perf report --stdio\n\nThere is no mention of perf or similar utilities in pgbench-tools\ndocs. I'm guessing Linux is the primary platform pgbench-tools gets\nused on most. If so, I think it'd be useful to mention these tools and\nsnippets in there to make others lives easier, when they find\nthemselves scratching heads.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Fri, 9 Jun 2023 01:05:43 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major pgbench synthetic SELECT workload regression,\n Ubuntu 23.04+PG15"
},
{
"msg_contents": "On Fri, Jun 9, 2023 at 4:06 AM Gurjeet Singh <[email protected]> wrote:\n\n> There is no mention of perf or similar utilities in pgbench-tools\n> docs. I'm guessing Linux is the primary platform pgbench-tools gets\n> used on most. If so, I think it'd be useful to mention these tools and\n> snippets in there to make others lives easier.\n\n\nThat's a good idea. I've written out guides multiple times for customers\nwho are crashing and need to learn about stack traces, to help them becomes\nself-sufficient with the troubleshooting parts it's impractical for me to\ndo for them. If I can talk people through gdb, I can teach them perf. I\nhave a lot of time to work on pgbench-tools set aside this summer, gonna\nfinally deprecate the gnuplot backend and make every graph as nice as the\nones I shared here.\n\nI haven't been aggressive about pushing perf because a lot of customers at\nCrunchy--a disproportionately larger number than typical I suspect--have\noperations restrictions that just don't allow DBAs direct access to a\nserver's command line. So perf commands are just out of reach before we\neven get to the permissions it requires. I may have to do something really\nwild to help them, like see if the right permissions setup would allow\nPL/python3 or similar to orchestrate a perf session in a SQL function.\n\nOn Fri, Jun 9, 2023 at 4:06 AM Gurjeet Singh <[email protected]> wrote:\nThere is no mention of perf or similar utilities in pgbench-tools\ndocs. I'm guessing Linux is the primary platform pgbench-tools gets\nused on most. If so, I think it'd be useful to mention these tools and\nsnippets in there to make others lives easier.That's a good idea. I've written out guides multiple times for customers who are crashing and need to learn about stack traces, to help them becomes self-sufficient with the troubleshooting parts it's impractical for me to do for them. If I can talk people through gdb, I can teach them perf. I have a lot of time to work on pgbench-tools set aside this summer, gonna finally deprecate the gnuplot backend and make every graph as nice as the ones I shared here.I haven't been aggressive about pushing perf because a lot of customers at Crunchy--a disproportionately larger number than typical I suspect--have operations restrictions that just don't allow DBAs direct access to a server's command line. So perf commands are just out of reach before we even get to the permissions it requires. I may have to do something really wild to help them, like see if the right permissions setup would allow PL/python3 or similar to orchestrate a perf session in a SQL function.",
"msg_date": "Fri, 9 Jun 2023 08:52:33 -0400",
"msg_from": "Gregory Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major pgbench synthetic SELECT workload regression,\n Ubuntu 23.04+PG15"
},
{
"msg_contents": "On Fri, Jun 9, 2023 at 3:28 AM Gregory Smith <[email protected]> wrote:\n> On Thu, Jun 8, 2023 at 9:21 PM Andres Freund <[email protected]> wrote:\n>>\n>> You might need to add --no-children to the perf report invocation, otherwise\n>> it'll show you the call graph inverted.\n>\n>\n> My problem was not writing kernel symbols out, I was only getting addresses for some reason. This worked:\n>\n> sudo perf record -g --call-graph dwarf -d --phys-data -a sleep 1\n> perf report --stdio\n\nDo you know why using phys-data would have solved the problem in your\nparticular setup? I find figuring out what perf options I need\nmystifying.\nI end up trying random things from\nhttps://wiki.postgresql.org/wiki/Profiling_with_perf, the perf man\npage, and https://www.brendangregg.com/perf.html\n\nThe pg wiki page actually has a lot of detail. If you think your\nparticular problem is something others would encounter, it could be\ngood to add it there.\n\nFWIW, I think it is helpful to have hackers threads like this where\npeople work through unexplained performance results with others in the\ncommunity.\n\n- Melanie\n\n\n",
"msg_date": "Mon, 12 Jun 2023 10:28:05 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major pgbench synthetic SELECT workload regression,\n Ubuntu 23.04+PG15"
}
] |
[
{
"msg_contents": "Hi,\n\nSomeone made https://git.postgresql.org/ depressed\n\nAfter not being able to pull, I dropped and tried to clone again:\n\n\"\"\"\njcasanov@DangerBox:/opt/pgdg$ git clone\nhttps://git.postgresql.org/git/postgresql.git\nClonando en 'postgresql'...\nfatal: unable to access\n'https://git.postgresql.org/git/postgresql.git/': Empty reply from\nserver\n\"\"\"\n\n--\nJaime Casanova\nDirector de Servicios Profesionales\nSYSTEMGUARDS - Consultores de PostgreSQL\n\n\n",
"msg_date": "Thu, 8 Jun 2023 19:02:34 -0500",
"msg_from": "Jaime Casanova <[email protected]>",
"msg_from_op": true,
"msg_subject": "git.postgresql.org seems to be down"
},
{
"msg_contents": "On 6/8/23 20:02, Jaime Casanova wrote:\n> Hi,\n> \n> Someone made https://git.postgresql.org/ depressed\n> \n> After not being able to pull, I dropped and tried to clone again:\n\n\nTry now\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Thu, 8 Jun 2023 20:22:49 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: git.postgresql.org seems to be down"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 7:22 PM Joe Conway <[email protected]> wrote:\n>\n> On 6/8/23 20:02, Jaime Casanova wrote:\n> > Hi,\n> >\n> > Someone made https://git.postgresql.org/ depressed\n> >\n> > After not being able to pull, I dropped and tried to clone again:\n>\n>\n> Try now\n>\n\nIt's up again... thanks!\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSYSTEMGUARDS - Consultores de PostgreSQL\n\n\n",
"msg_date": "Thu, 8 Jun 2023 19:37:13 -0500",
"msg_from": "Jaime Casanova <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: git.postgresql.org seems to be down"
}
] |
[
{
"msg_contents": "Hi,\n\nIn logical decoding, we don't need to collect decoded changes of\naborted transactions. While streaming changes, we can detect\nconcurrent abort of the (sub)transaction but there is no mechanism to\nskip decoding changes of transactions that are known to already be\naborted. With the attached WIP patch, we check CLOG when decoding the\ntransaction for the first time. If it's already known to be aborted,\nwe skip collecting decoded changes of such transactions. That way,\nwhen the logical replication is behind or restarts, we don't need to\ndecode large transactions that already aborted, which helps improve\nthe decoding performance.\n\nFeedback is very welcome.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 9 Jun 2023 14:16:44 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Skip collecting decoded changes of already-aborted transactions"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-09 14:16:44 +0900, Masahiko Sawada wrote:\n> In logical decoding, we don't need to collect decoded changes of\n> aborted transactions. While streaming changes, we can detect\n> concurrent abort of the (sub)transaction but there is no mechanism to\n> skip decoding changes of transactions that are known to already be\n> aborted. With the attached WIP patch, we check CLOG when decoding the\n> transaction for the first time. If it's already known to be aborted,\n> we skip collecting decoded changes of such transactions. That way,\n> when the logical replication is behind or restarts, we don't need to\n> decode large transactions that already aborted, which helps improve\n> the decoding performance.\n\nIt's very easy to get uses of TransactionIdDidAbort() wrong. For one, it won't\nreturn true when a transaction was implicitly aborted due to a crash /\nrestart. You're also supposed to use it only after a preceding\nTransactionIdIsInProgress() call.\n\nI'm not sure there are issues with not checking TransactionIdIsInProgress()\nfirst in this case, but I'm also not sure there aren't.\n\nA separate issue is that TransactionIdDidAbort() can end up being very slow if\na lot of transactions are in progress concurrently. As soon as the clog\nbuffers are extended all time is spent copying pages from the kernel\npagecache. I'd not at all be surprised if this changed causes a substantial\nslowdown in workloads with lots of small transactions, where most transactions\ncommit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 10 Jun 2023 13:31:17 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Skip collecting decoded changes of already-aborted transactions"
},
{
"msg_contents": "On Sun, Jun 11, 2023 at 5:31 AM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-06-09 14:16:44 +0900, Masahiko Sawada wrote:\n> > In logical decoding, we don't need to collect decoded changes of\n> > aborted transactions. While streaming changes, we can detect\n> > concurrent abort of the (sub)transaction but there is no mechanism to\n> > skip decoding changes of transactions that are known to already be\n> > aborted. With the attached WIP patch, we check CLOG when decoding the\n> > transaction for the first time. If it's already known to be aborted,\n> > we skip collecting decoded changes of such transactions. That way,\n> > when the logical replication is behind or restarts, we don't need to\n> > decode large transactions that already aborted, which helps improve\n> > the decoding performance.\n>\n\nThank you for the comment.\n\n> It's very easy to get uses of TransactionIdDidAbort() wrong. For one, it won't\n> return true when a transaction was implicitly aborted due to a crash /\n> restart. You're also supposed to use it only after a preceding\n> TransactionIdIsInProgress() call.\n>\n> I'm not sure there are issues with not checking TransactionIdIsInProgress()\n> first in this case, but I'm also not sure there aren't.\n\nYeah, it seems to be better to use !TransactionIdDidCommit() with a\npreceding TransactionIdIsInProgress() check.\n\n>\n> A separate issue is that TransactionIdDidAbort() can end up being very slow if\n> a lot of transactions are in progress concurrently. As soon as the clog\n> buffers are extended all time is spent copying pages from the kernel\n> pagecache. I'd not at all be surprised if this changed causes a substantial\n> slowdown in workloads with lots of small transactions, where most transactions\n> commit.\n>\n\nIndeed. So it should check the transaction status less frequently. It\ndoesn't benefit much even if we can skip collecting decoded changes of\nsmall transactions. Another idea is that we check the status of only\nlarge transactions. That is, when the size of decoded changes of an\naborted transaction exceeds logical_decoding_work_mem, we mark it as\naborted , free its changes decoded so far, and skip further\ncollection.\n\nRegards\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 13 Jun 2023 17:35:45 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Skip collecting decoded changes of already-aborted transactions"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 2:06 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Sun, Jun 11, 2023 at 5:31 AM Andres Freund <[email protected]> wrote:\n> >\n> > A separate issue is that TransactionIdDidAbort() can end up being very slow if\n> > a lot of transactions are in progress concurrently. As soon as the clog\n> > buffers are extended all time is spent copying pages from the kernel\n> > pagecache. I'd not at all be surprised if this changed causes a substantial\n> > slowdown in workloads with lots of small transactions, where most transactions\n> > commit.\n> >\n>\n> Indeed. So it should check the transaction status less frequently. It\n> doesn't benefit much even if we can skip collecting decoded changes of\n> small transactions. Another idea is that we check the status of only\n> large transactions. That is, when the size of decoded changes of an\n> aborted transaction exceeds logical_decoding_work_mem, we mark it as\n> aborted , free its changes decoded so far, and skip further\n> collection.\n>\n\nYour idea might work for large transactions but I have not come across\nreports where this is reported as a problem. Do you see any such\nreports and can we see how much is the benefit with large\ntransactions? Because we do have the handling of concurrent aborts\nduring sys table scans and that might help sometimes for large\ntransactions.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 15 Jun 2023 16:20:12 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Skip collecting decoded changes of already-aborted transactions"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 7:50 PM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Jun 13, 2023 at 2:06 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Sun, Jun 11, 2023 at 5:31 AM Andres Freund <[email protected]> wrote:\n> > >\n> > > A separate issue is that TransactionIdDidAbort() can end up being very slow if\n> > > a lot of transactions are in progress concurrently. As soon as the clog\n> > > buffers are extended all time is spent copying pages from the kernel\n> > > pagecache. I'd not at all be surprised if this changed causes a substantial\n> > > slowdown in workloads with lots of small transactions, where most transactions\n> > > commit.\n> > >\n> >\n> > Indeed. So it should check the transaction status less frequently. It\n> > doesn't benefit much even if we can skip collecting decoded changes of\n> > small transactions. Another idea is that we check the status of only\n> > large transactions. That is, when the size of decoded changes of an\n> > aborted transaction exceeds logical_decoding_work_mem, we mark it as\n> > aborted , free its changes decoded so far, and skip further\n> > collection.\n> >\n>\n> Your idea might work for large transactions but I have not come across\n> reports where this is reported as a problem. Do you see any such\n> reports and can we see how much is the benefit with large\n> transactions? Because we do have the handling of concurrent aborts\n> during sys table scans and that might help sometimes for large\n> transactions.\n\nI've heard there was a case where a user had 29 million deletes in a\nsingle transaction with each one wrapped in a savepoint and rolled it\nback, which led to 11TB of spill files. If decoding such a large\ntransaction fails for some reasons (e.g. a disk full), it would try\ndecoding the same transaction again and again.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 21 Jun 2023 11:41:52 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Skip collecting decoded changes of already-aborted transactions"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 8:12 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Thu, Jun 15, 2023 at 7:50 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Jun 13, 2023 at 2:06 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Sun, Jun 11, 2023 at 5:31 AM Andres Freund <[email protected]> wrote:\n> > > >\n> > > > A separate issue is that TransactionIdDidAbort() can end up being very slow if\n> > > > a lot of transactions are in progress concurrently. As soon as the clog\n> > > > buffers are extended all time is spent copying pages from the kernel\n> > > > pagecache. I'd not at all be surprised if this changed causes a substantial\n> > > > slowdown in workloads with lots of small transactions, where most transactions\n> > > > commit.\n> > > >\n> > >\n> > > Indeed. So it should check the transaction status less frequently. It\n> > > doesn't benefit much even if we can skip collecting decoded changes of\n> > > small transactions. Another idea is that we check the status of only\n> > > large transactions. That is, when the size of decoded changes of an\n> > > aborted transaction exceeds logical_decoding_work_mem, we mark it as\n> > > aborted , free its changes decoded so far, and skip further\n> > > collection.\n> > >\n> >\n> > Your idea might work for large transactions but I have not come across\n> > reports where this is reported as a problem. Do you see any such\n> > reports and can we see how much is the benefit with large\n> > transactions? Because we do have the handling of concurrent aborts\n> > during sys table scans and that might help sometimes for large\n> > transactions.\n>\n> I've heard there was a case where a user had 29 million deletes in a\n> single transaction with each one wrapped in a savepoint and rolled it\n> back, which led to 11TB of spill files. If decoding such a large\n> transaction fails for some reasons (e.g. a disk full), it would try\n> decoding the same transaction again and again.\n>\n\nI was thinking why the existing handling of concurrent aborts doesn't\nhandle such a case and it seems that we check that only on catalog\naccess. However, in your case, the user probably is accessing the same\nrelation without any concurrent DDL on the same table, so it would\njust be a cache look-up for catalogs. Your idea of checking aborts\nevery logical_decoding_work_mem should work for such cases.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 22 Jun 2023 14:37:42 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Skip collecting decoded changes of already-aborted transactions"
},
{
"msg_contents": "On Fri, Jun 9, 2023 at 10:47 AM Masahiko Sawada <[email protected]> wrote:\n>\n> Hi,\n>\n> In logical decoding, we don't need to collect decoded changes of\n> aborted transactions. While streaming changes, we can detect\n> concurrent abort of the (sub)transaction but there is no mechanism to\n> skip decoding changes of transactions that are known to already be\n> aborted. With the attached WIP patch, we check CLOG when decoding the\n> transaction for the first time. If it's already known to be aborted,\n> we skip collecting decoded changes of such transactions. That way,\n> when the logical replication is behind or restarts, we don't need to\n> decode large transactions that already aborted, which helps improve\n> the decoding performance.\n>\n+1 for the idea of checking the transaction status only when we need\nto flush it to the disk or send it downstream (if streaming in\nprogress is enabled). Although this check is costly since we are\nplanning only for large transactions then it is worth it if we can\noccasionally avoid disk or network I/O for the aborted transactions.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 23 Jun 2023 09:08:57 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Skip collecting decoded changes of already-aborted transactions"
},
{
"msg_contents": "On Fri, Jun 23, 2023 at 12:39 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Fri, Jun 9, 2023 at 10:47 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > In logical decoding, we don't need to collect decoded changes of\n> > aborted transactions. While streaming changes, we can detect\n> > concurrent abort of the (sub)transaction but there is no mechanism to\n> > skip decoding changes of transactions that are known to already be\n> > aborted. With the attached WIP patch, we check CLOG when decoding the\n> > transaction for the first time. If it's already known to be aborted,\n> > we skip collecting decoded changes of such transactions. That way,\n> > when the logical replication is behind or restarts, we don't need to\n> > decode large transactions that already aborted, which helps improve\n> > the decoding performance.\n> >\n> +1 for the idea of checking the transaction status only when we need\n> to flush it to the disk or send it downstream (if streaming in\n> progress is enabled). Although this check is costly since we are\n> planning only for large transactions then it is worth it if we can\n> occasionally avoid disk or network I/O for the aborted transactions.\n>\n\nThanks.\n\nI've attached the updated patch. With this patch, we check the\ntransaction status for only large-transactions when eviction. For\nregression test purposes, I disable this transaction status check when\nlogical_replication_mode is set to 'immediate'.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 3 Jul 2023 10:45:52 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Skip collecting decoded changes of already-aborted transactions"
},
{
"msg_contents": "On Mon, 3 Jul 2023 at 07:16, Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Jun 23, 2023 at 12:39 PM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Fri, Jun 9, 2023 at 10:47 AM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > Hi,\n> > >\n> > > In logical decoding, we don't need to collect decoded changes of\n> > > aborted transactions. While streaming changes, we can detect\n> > > concurrent abort of the (sub)transaction but there is no mechanism to\n> > > skip decoding changes of transactions that are known to already be\n> > > aborted. With the attached WIP patch, we check CLOG when decoding the\n> > > transaction for the first time. If it's already known to be aborted,\n> > > we skip collecting decoded changes of such transactions. That way,\n> > > when the logical replication is behind or restarts, we don't need to\n> > > decode large transactions that already aborted, which helps improve\n> > > the decoding performance.\n> > >\n> > +1 for the idea of checking the transaction status only when we need\n> > to flush it to the disk or send it downstream (if streaming in\n> > progress is enabled). Although this check is costly since we are\n> > planning only for large transactions then it is worth it if we can\n> > occasionally avoid disk or network I/O for the aborted transactions.\n> >\n>\n> Thanks.\n>\n> I've attached the updated patch. With this patch, we check the\n> transaction status for only large-transactions when eviction. For\n> regression test purposes, I disable this transaction status check when\n> logical_replication_mode is set to 'immediate'.\n\nMay be there is some changes that are missing in the patch, which is\ngiving the following errors:\nreorderbuffer.c: In function ‘ReorderBufferCheckTXNAbort’:\nreorderbuffer.c:3584:22: error: ‘logical_replication_mode’ undeclared\n(first use in this function)\n 3584 | if (unlikely(logical_replication_mode ==\nLOGICAL_REP_MODE_IMMEDIATE))\n | ^~~~~~~~~~~~~~~~~~~~~~~~\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 3 Oct 2023 15:54:17 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Skip collecting decoded changes of already-aborted transactions"
},
{
"msg_contents": "On Tue, 3 Oct 2023 at 15:54, vignesh C <[email protected]> wrote:\n>\n> On Mon, 3 Jul 2023 at 07:16, Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Fri, Jun 23, 2023 at 12:39 PM Dilip Kumar <[email protected]> wrote:\n> > >\n> > > On Fri, Jun 9, 2023 at 10:47 AM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > Hi,\n> > > >\n> > > > In logical decoding, we don't need to collect decoded changes of\n> > > > aborted transactions. While streaming changes, we can detect\n> > > > concurrent abort of the (sub)transaction but there is no mechanism to\n> > > > skip decoding changes of transactions that are known to already be\n> > > > aborted. With the attached WIP patch, we check CLOG when decoding the\n> > > > transaction for the first time. If it's already known to be aborted,\n> > > > we skip collecting decoded changes of such transactions. That way,\n> > > > when the logical replication is behind or restarts, we don't need to\n> > > > decode large transactions that already aborted, which helps improve\n> > > > the decoding performance.\n> > > >\n> > > +1 for the idea of checking the transaction status only when we need\n> > > to flush it to the disk or send it downstream (if streaming in\n> > > progress is enabled). Although this check is costly since we are\n> > > planning only for large transactions then it is worth it if we can\n> > > occasionally avoid disk or network I/O for the aborted transactions.\n> > >\n> >\n> > Thanks.\n> >\n> > I've attached the updated patch. With this patch, we check the\n> > transaction status for only large-transactions when eviction. For\n> > regression test purposes, I disable this transaction status check when\n> > logical_replication_mode is set to 'immediate'.\n>\n> May be there is some changes that are missing in the patch, which is\n> giving the following errors:\n> reorderbuffer.c: In function ‘ReorderBufferCheckTXNAbort’:\n> reorderbuffer.c:3584:22: error: ‘logical_replication_mode’ undeclared\n> (first use in this function)\n> 3584 | if (unlikely(logical_replication_mode ==\n> LOGICAL_REP_MODE_IMMEDIATE))\n> | ^~~~~~~~~~~~~~~~~~~~~~~~\n\nWith no update to the thread and the compilation still failing I'm\nmarking this as returned with feedback. Please feel free to resubmit\nto the next CF when there is a new version of the patch.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 1 Feb 2024 21:17:59 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Skip collecting decoded changes of already-aborted transactions"
},
{
"msg_contents": "On Fri, Feb 2, 2024 at 12:48 AM vignesh C <[email protected]> wrote:\n>\n> On Tue, 3 Oct 2023 at 15:54, vignesh C <[email protected]> wrote:\n> >\n> > On Mon, 3 Jul 2023 at 07:16, Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Fri, Jun 23, 2023 at 12:39 PM Dilip Kumar <[email protected]> wrote:\n> > > >\n> > > > On Fri, Jun 9, 2023 at 10:47 AM Masahiko Sawada <[email protected]> wrote:\n> > > > >\n> > > > > Hi,\n> > > > >\n> > > > > In logical decoding, we don't need to collect decoded changes of\n> > > > > aborted transactions. While streaming changes, we can detect\n> > > > > concurrent abort of the (sub)transaction but there is no mechanism to\n> > > > > skip decoding changes of transactions that are known to already be\n> > > > > aborted. With the attached WIP patch, we check CLOG when decoding the\n> > > > > transaction for the first time. If it's already known to be aborted,\n> > > > > we skip collecting decoded changes of such transactions. That way,\n> > > > > when the logical replication is behind or restarts, we don't need to\n> > > > > decode large transactions that already aborted, which helps improve\n> > > > > the decoding performance.\n> > > > >\n> > > > +1 for the idea of checking the transaction status only when we need\n> > > > to flush it to the disk or send it downstream (if streaming in\n> > > > progress is enabled). Although this check is costly since we are\n> > > > planning only for large transactions then it is worth it if we can\n> > > > occasionally avoid disk or network I/O for the aborted transactions.\n> > > >\n> > >\n> > > Thanks.\n> > >\n> > > I've attached the updated patch. With this patch, we check the\n> > > transaction status for only large-transactions when eviction. For\n> > > regression test purposes, I disable this transaction status check when\n> > > logical_replication_mode is set to 'immediate'.\n> >\n> > May be there is some changes that are missing in the patch, which is\n> > giving the following errors:\n> > reorderbuffer.c: In function ‘ReorderBufferCheckTXNAbort’:\n> > reorderbuffer.c:3584:22: error: ‘logical_replication_mode’ undeclared\n> > (first use in this function)\n> > 3584 | if (unlikely(logical_replication_mode ==\n> > LOGICAL_REP_MODE_IMMEDIATE))\n> > | ^~~~~~~~~~~~~~~~~~~~~~~~\n>\n> With no update to the thread and the compilation still failing I'm\n> marking this as returned with feedback. Please feel free to resubmit\n> to the next CF when there is a new version of the patch.\n>\n\nI resumed working on this item. I've attached the new version patch.\n\nI rebased the patch to the current HEAD and updated comments and\ncommit messages. The patch is straightforward and I'm somewhat\nsatisfied with it, but I'm thinking of adding some tests for it.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 15 Feb 2024 14:49:55 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Skip collecting decoded changes of already-aborted transactions"
},
{
"msg_contents": "On Fri, Mar 15, 2024 at 3:17 PM Masahiko Sawada <[email protected]>\nwrote:\n\n>\n> I resumed working on this item. I've attached the new version patch.\n>\n> I rebased the patch to the current HEAD and updated comments and\n> commit messages. The patch is straightforward and I'm somewhat\n> satisfied with it, but I'm thinking of adding some tests for it.\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> Amazon Web Services: https://aws.amazon.com\n\n\nI just had a look at the patch, the patch no longer applies because of a\nremoval of a header in a recent commit. Overall the patch looks fine, and I\ndidn't find any issues. Some cosmetic comments:\nin ReorderBufferCheckTXNAbort()\n+ /* Quick return if we've already knew the transaction status */\n+ if (txn->aborted)\n+ return true;\n\nknew/know\n\n/*\n+ * If logical_replication_mode is \"immediate\", we don't check the\n+ * transaction status so the caller always process this transaction.\n+ */\n+ if (debug_logical_replication_streaming ==\nDEBUG_LOGICAL_REP_STREAMING_IMMEDIATE)\n+ return false;\n\n/process/processes\n\nregards,\nAjin Cherian\nFujitsu Australia\n\nOn Fri, Mar 15, 2024 at 3:17 PM Masahiko Sawada <[email protected]> wrote:\n\nI resumed working on this item. I've attached the new version patch.\n\nI rebased the patch to the current HEAD and updated comments and\ncommit messages. The patch is straightforward and I'm somewhat\nsatisfied with it, but I'm thinking of adding some tests for it.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.comI just had a look at the patch, the patch no longer applies because of a removal of a header in a recent commit. Overall the patch looks fine, and I didn't find any issues. Some cosmetic comments:in ReorderBufferCheckTXNAbort()+\t/* Quick return if we've already knew the transaction status */+\tif (txn->aborted)+\t\treturn true;knew/know/*+\t * If logical_replication_mode is \"immediate\", we don't check the+\t * transaction status so the caller always process this transaction.+\t */+\tif (debug_logical_replication_streaming == DEBUG_LOGICAL_REP_STREAMING_IMMEDIATE)+\t\treturn false;/process/processesregards,Ajin CherianFujitsu Australia",
"msg_date": "Fri, 15 Mar 2024 15:20:51 +1100",
"msg_from": "Ajin Cherian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Skip collecting decoded changes of already-aborted transactions"
},
{
"msg_contents": "On Fri, Mar 15, 2024 at 1:21 PM Ajin Cherian <[email protected]> wrote:\n>\n>\n>\n> On Fri, Mar 15, 2024 at 3:17 PM Masahiko Sawada <[email protected]> wrote:\n>>\n>>\n>> I resumed working on this item. I've attached the new version patch.\n>>\n>> I rebased the patch to the current HEAD and updated comments and\n>> commit messages. The patch is straightforward and I'm somewhat\n>> satisfied with it, but I'm thinking of adding some tests for it.\n>>\n>> Regards,\n>>\n>> --\n>> Masahiko Sawada\n>> Amazon Web Services: https://aws.amazon.com\n>\n>\n> I just had a look at the patch, the patch no longer applies because of a removal of a header in a recent commit. Overall the patch looks fine, and I didn't find any issues. Some cosmetic comments:\n\nThank you for your review comments.\n\n> in ReorderBufferCheckTXNAbort()\n> + /* Quick return if we've already knew the transaction status */\n> + if (txn->aborted)\n> + return true;\n>\n> knew/know\n\nMaybe it should be \"known\"?\n\n>\n> /*\n> + * If logical_replication_mode is \"immediate\", we don't check the\n> + * transaction status so the caller always process this transaction.\n> + */\n> + if (debug_logical_replication_streaming == DEBUG_LOGICAL_REP_STREAMING_IMMEDIATE)\n> + return false;\n>\n> /process/processes\n>\n\nFixed.\n\nIn addition to these changes, I've made some changes to the latest\npatch. Here is the summary:\n\n- Use txn_flags field to record the transaction status instead of two\n'committed' and 'aborted' flags.\n- Add regression tests.\n- Update commit message.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 18 Mar 2024 17:49:41 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Skip collecting decoded changes of already-aborted transactions"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 7:50 PM Masahiko Sawada <[email protected]>\nwrote:\n\n>\n> In addition to these changes, I've made some changes to the latest\n> patch. Here is the summary:\n>\n> - Use txn_flags field to record the transaction status instead of two\n> 'committed' and 'aborted' flags.\n> - Add regression tests.\n> - Update commit message.\n>\n> Regards,\n>\n>\nHi Sawada-san,\n\nThanks for the updated patch. Some comments:\n\n1.\n+ * already aborted, we discards all changes accumulated so far and ignore\n+ * future changes, and return true. Otherwise return false.\n\nwe discards/we discard\n\n2. In function ReorderBufferCheckTXNAbort(): I haven't tested this but I\nwonder how prepared transactions would be considered, they are neither\ncommitted, nor in progress.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\nOn Mon, Mar 18, 2024 at 7:50 PM Masahiko Sawada <[email protected]> wrote:\nIn addition to these changes, I've made some changes to the latest\npatch. Here is the summary:\n\n- Use txn_flags field to record the transaction status instead of two\n'committed' and 'aborted' flags.\n- Add regression tests.\n- Update commit message.\n\nRegards,\nHi Sawada-san,Thanks for the updated patch. Some comments:1. + * already aborted, we discards all changes accumulated so far and ignore+ * future changes, and return true. Otherwise return false.we discards/we discard2. In function ReorderBufferCheckTXNAbort(): I haven't tested this but I wonder how prepared transactions would be considered, they are neither committed, nor in progress.regards,Ajin CherianFujitsu Australia",
"msg_date": "Wed, 27 Mar 2024 22:49:14 +1100",
"msg_from": "Ajin Cherian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Skip collecting decoded changes of already-aborted transactions"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 8:49 PM Ajin Cherian <[email protected]> wrote:\n>\n>\n>\n> On Mon, Mar 18, 2024 at 7:50 PM Masahiko Sawada <[email protected]> wrote:\n>>\n>>\n>> In addition to these changes, I've made some changes to the latest\n>> patch. Here is the summary:\n>>\n>> - Use txn_flags field to record the transaction status instead of two\n>> 'committed' and 'aborted' flags.\n>> - Add regression tests.\n>> - Update commit message.\n>>\n>> Regards,\n>>\n>\n> Hi Sawada-san,\n>\n> Thanks for the updated patch. Some comments:\n\nThank you for the view comments!\n\n>\n> 1.\n> + * already aborted, we discards all changes accumulated so far and ignore\n> + * future changes, and return true. Otherwise return false.\n>\n> we discards/we discard\n\nWill fix it.\n\n>\n> 2. In function ReorderBufferCheckTXNAbort(): I haven't tested this but I wonder how prepared transactions would be considered, they are neither committed, nor in progress.\n\nThe transaction that is prepared but not resolved yet is considered as\nin-progress.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 27 Mar 2024 21:22:43 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Skip collecting decoded changes of already-aborted transactions"
},
{
"msg_contents": "Hi, here are some review comments for your patch v4-0001.\n\n======\ncontrib/test_decoding/sql/stats.sql\n\n1.\nHuh? The test fails because the \"expected results\" file for these new\ntests is missing from the patch.\n\n======\n.../replication/logical/reorderbuffer.c\n\n2.\n static void ReorderBufferTruncateTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,\n- bool txn_prepared);\n+ bool txn_prepared, bool mark_streamed);\n\nIIUC this new 'mark_streamed' parameter is more like a prerequisite\nfor the other conditions to decide to mark the tx as streamed -- i.e.\nit is more like 'can_mark_streamed', so I felt the name should be\nchanged to be like that (everywhere it is used).\n\n~~~\n\n3. ReorderBufferTruncateTXN\n\n- * 'txn_prepared' indicates that we have decoded the transaction at prepare\n- * time.\n+ * If mark_streamed is true, we could mark the transaction as streamed.\n+ *\n+ * 'streaming_txn' indicates that the given transaction is a\nstreaming transaction.\n */\n static void\n-ReorderBufferTruncateTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,\nbool txn_prepared)\n+ReorderBufferTruncateTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,\nbool txn_prepared,\n+ bool mark_streamed)\n\n~\n\nWhat's that new comment about 'streaming_txn' for? It seemed unrelated\nto the patch code.\n\n~~~\n\n4.\n/*\n* Mark the transaction as streamed.\n*\n* The top-level transaction, is marked as streamed always, even if it\n* does not contain any changes (that is, when all the changes are in\n* subtransactions).\n*\n* For subtransactions, we only mark them as streamed when there are\n* changes in them.\n*\n* We do it this way because of aborts - we don't want to send aborts for\n* XIDs the downstream is not aware of. And of course, it always knows\n* about the toplevel xact (we send the XID in all messages), but we never\n* stream XIDs of empty subxacts.\n*/\nif (mark_streamed && (!txn_prepared) &&\n(rbtxn_is_toptxn(txn) || (txn->nentries_mem != 0)))\ntxn->txn_flags |= RBTXN_IS_STREAMED;\n\n~~\n\nWith the patch introduction of the new parameter, I felt this code\nmight be better if it was refactored as follows:\n\n/* Mark the transaction as streamed, if appropriate. */\nif (can_mark_streamed)\n{\n /*\n ... large comment\n */\n if ((!txn_prepared) && (rbtxn_is_toptxn(txn) || (txn->nentries_mem != 0)))\n txn->txn_flags |= RBTXN_IS_STREAMED;\n}\n\n~~~\n\n5. ReorderBufferPrepare\n\n- if (txn->concurrent_abort && !rbtxn_is_streamed(txn))\n+ if (!txn_aborted && rbtxn_did_abort(txn) && !rbtxn_is_streamed(txn))\n rb->prepare(rb, txn, txn->final_lsn);\n\n~\n\nMaybe I misunderstood this logic, but won't a \"concurrent abort\" cause\nyour new Assert added in ReorderBufferProcessTXN to fail?\n\n+ /* Update transaction status */\n+ Assert((curtxn->txn_flags & (RBTXN_COMMITTED | RBTXN_ABORTED)) == 0);\n\n~~~\n\n6. ReorderBufferCheckTXNAbort\n\n+ /* Check the transaction status using CLOG lookup */\n+ if (TransactionIdIsInProgress(txn->xid))\n+ return false;\n+\n+ if (TransactionIdDidCommit(txn->xid))\n+ {\n+ /*\n+ * Remember the transaction is committed so that we can skip CLOG\n+ * check next time, avoiding the pressure on CLOG lookup.\n+ */\n+ txn->txn_flags |= RBTXN_COMMITTED;\n+ return false;\n+ }\n\nIIUC the purpose of the TransactionIdDidCommit() was to avoid the\noverhead of calling the TransactionIdIsInProgress(). So, shouldn't the\norder of these checks be swapped? 
Otherwise, there might be 1 extra\nunnecessary call to TransactionIdIsInProgress() next time.\n\n======\nsrc/include/replication/reorderbuffer.h\n\n7.\n #define RBTXN_PREPARE 0x0040\n #define RBTXN_SKIPPED_PREPARE 0x0080\n #define RBTXN_HAS_STREAMABLE_CHANGE 0x0100\n+#define RBTXN_COMMITTED 0x0200\n+#define RBTXN_ABORTED 0x0400\n\nFor consistency with the existing bitmask names, I guess these should be named:\n- RBTXN_COMMITTED --> RBTXN_IS_COMMITTED\n- RBTXN_ABORTED --> RBTXN_IS_ABORTED\n\n~~~\n\n8.\nSimilarly, IMO the macros should have the same names as the bitmasks,\nlike the other nearby ones generally seem to.\n\nrbtxn_did_commit --> rbtxn_is_committed\nrbtxn_did_abort --> rbtxn_is_aborted\n\n======\n\n9.\nAlso, attached is a top-up patch for other cosmetic nitpicks:\n- comment wording\n- typos in comments\n- excessive or missing blank lines\n- etc.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 12 Jun 2024 12:41:02 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Skip collecting decoded changes of already-aborted transactions"
}
] |
[
{
"msg_contents": "Hackers,\n\nWhile updating pgAudit for PG16 I found the following (from our \nperspective) regression.\n\nIn prior versions of Postgres, views were listed in rangeTabls when \nExecutorCheckPerms_hook() was called but in PG16 the views are no longer \nin this list. The permissions have been broken out into permInfos as of \na61b1f748 and this list does include the view.\n\nIt seems the thing to do here would be to scan permInfos instead, which \nworks fine except that we also need access to rellockmode, which is only \nincluded in rangeTabls. We can add a scan of rangeTabls to get \nrellockmode when needed and we might be better off overall since \npermInfos will generally have fewer entries. I have not implemented this \nyet but it seems like it will work.\n\n From reading the discussion it appears this change to rangeTabls was \nintentional, but I wonder if I am missing something. For instance, is \nthere a better way to get rellockmode when scanning permInfos?\n\nIt seems unlikely that we are the only ones using rangeTabls in an \nextension, so others might benefit from having an answer to this on list.\n\nThanks,\n-David\n\n\n",
"msg_date": "Fri, 9 Jun 2023 11:28:48 +0300",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Views no longer in rangeTabls?"
},
{
"msg_contents": "On 6/9/23 11:28, David Steele wrote:\n> \n> It seems the thing to do here would be to scan permInfos instead, which \n> works fine except that we also need access to rellockmode, which is only \n> included in rangeTabls. We can add a scan of rangeTabls to get \n> rellockmode when needed and we might be better off overall since \n> permInfos will generally have fewer entries. I have not implemented this \n> yet but it seems like it will work.\n\nI implemented this and it does work, but it was not as straight forward \nas I would have liked. To make the relationship from RTEPermissionInfo \nback to RangeTblEntry I was forced to generate my own perminfoindex.\n\nThis was not hard to do but seems a bit fragile. Perhaps we need an \nrteindex in RTEPermissionInfo? This would also avoid the need to scan \nrangeTabls.\n\nRegards,\n-David\n\n\n",
"msg_date": "Fri, 9 Jun 2023 12:04:55 +0300",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "Hi David,\n\nOn Fri, Jun 9, 2023 at 17:28 David Steele <[email protected]> wrote:\n\n> Hackers,\n>\n> While updating pgAudit for PG16 I found the following (from our\n> perspective) regression.\n>\n> In prior versions of Postgres, views were listed in rangeTabls when\n> ExecutorCheckPerms_hook() was called but in PG16 the views are no longer\n> in this list.\n\n\nI’m not exactly sure how pgAudit’s code is searching for view relations in\nthe range table, but if the code involves filtering on rtekind ==\nRTE_RELATION, then yes, such code won’t find views anymore. That’s because\nthe rewriter no longer adds extraneous RTE_RELATION RTEs for views into the\nrange table. Views are still there, it’s just that their RTEs are of kind\nRTE_SUBQUERY, but they do contain some RELATION fields like relid,\nrellockmode, etc. So an extension hook’s relation RTE filtering code\nshould also consider relid, not just rtekind.\n\nI’m away from a computer atm, so I am not able to easily copy-paste an\nexample of that from the core code, but maybe you can search for code sites\nthat need to filter out relation RTEs from the range table.\n\nPerhaps, we are missing a comment near the hook definition mentioning this\ndetail about views.\n\n> --\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\nHi David,On Fri, Jun 9, 2023 at 17:28 David Steele <[email protected]> wrote:Hackers,\n\nWhile updating pgAudit for PG16 I found the following (from our \nperspective) regression.\n\nIn prior versions of Postgres, views were listed in rangeTabls when \nExecutorCheckPerms_hook() was called but in PG16 the views are no longer \nin this list.I’m not exactly sure how pgAudit’s code is searching for view relations in the range table, but if the code involves filtering on rtekind == RTE_RELATION, then yes, such code won’t find views anymore. That’s because the rewriter no longer adds extraneous RTE_RELATION RTEs for views into the range table. Views are still there, it’s just that their RTEs are of kind RTE_SUBQUERY, but they do contain some RELATION fields like relid, rellockmode, etc. So an extension hook’s relation RTE filtering code should also consider relid, not just rtekind.I’m away from a computer atm, so I am not able to easily copy-paste an example of that from the core code, but maybe you can search for code sites that need to filter out relation RTEs from the range table.Perhaps, we are missing a comment near the hook definition mentioning this detail about views.-- Thanks, Amit LangoteEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 9 Jun 2023 20:25:32 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "Hi Amit,\n\nOn 6/9/23 14:25, Amit Langote wrote:\n> On Fri, Jun 9, 2023 at 17:28 David Steele <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> In prior versions of Postgres, views were listed in rangeTabls when\n> ExecutorCheckPerms_hook() was called but in PG16 the views are no\n> longer\n> in this list.\n> \n> I’m not exactly sure how pgAudit’s code is searching for view relations \n> in the range table, but if the code involves filtering on rtekind == \n> RTE_RELATION, then yes, such code won’t find views anymore. That’s \n> because the rewriter no longer adds extraneous RTE_RELATION RTEs for \n> views into the range table. Views are still there, it’s just that their \n> RTEs are of kind RTE_SUBQUERY, but they do contain some RELATION fields \n> like relid, rellockmode, etc. So an extension hook’s relation RTE \n> filtering code should also consider relid, not just rtekind.\n\nThank you, this was very helpful. I am able to get the expected result \nnow with:\n\n/* We only care about tables/views and can ignore subqueries, etc. */\nif (!(rte->rtekind == RTE_RELATION ||\n (rte->rtekind == RTE_SUBQUERY && OidIsValid(rte->relid))))\n continue;\n\nOne thing, though, rte->relkind is not set for views, so I still need to \ncall get_rel_relkind(rte->relid). Not a big deal, but do you think it \nwould make sense to set rte->relkind for views?\n\n> Perhaps, we are missing a comment near the hook definition mentioning \n> this detail about views.\n\nI don't see any meaningful comments near the hook definition. That would \ncertainly be helpful.\n\nThanks!\n-David\n\n\n",
"msg_date": "Fri, 9 Jun 2023 17:46:48 +0300",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "David Steele <[email protected]> writes:\n> Thank you, this was very helpful. I am able to get the expected result \n> now with:\n\n> /* We only care about tables/views and can ignore subqueries, etc. */\n> if (!(rte->rtekind == RTE_RELATION ||\n> (rte->rtekind == RTE_SUBQUERY && OidIsValid(rte->relid))))\n> continue;\n\nRight, that matches places like add_rtes_to_flat_rtable().\n\n> One thing, though, rte->relkind is not set for views, so I still need to \n> call get_rel_relkind(rte->relid). Not a big deal, but do you think it \n> would make sense to set rte->relkind for views?\n\nIf you see \"rte->rtekind == RTE_SUBQUERY && OidIsValid(rte->relid)\",\nit's dead certain that relid refers to a view, so you could just wire\nin that knowledge.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Jun 2023 12:14:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "On 6/9/23 19:14, Tom Lane wrote:\n> David Steele <[email protected]> writes:\n>> Thank you, this was very helpful. I am able to get the expected result\n>> now with:\n> \n>> /* We only care about tables/views and can ignore subqueries, etc. */\n>> if (!(rte->rtekind == RTE_RELATION ||\n>> (rte->rtekind == RTE_SUBQUERY && OidIsValid(rte->relid))))\n>> continue;\n> \n> Right, that matches places like add_rtes_to_flat_rtable().\n\nGood to have confirmation of that, thanks!\n\n>> One thing, though, rte->relkind is not set for views, so I still need to\n>> call get_rel_relkind(rte->relid). Not a big deal, but do you think it\n>> would make sense to set rte->relkind for views?\n> \n> If you see \"rte->rtekind == RTE_SUBQUERY && OidIsValid(rte->relid)\",\n> it's dead certain that relid refers to a view, so you could just wire\n> in that knowledge.\n\nYeah, that's a good trick. Even so, I wonder why relkind is not set when \nrelid is set?\n\nRegards,\n-David\n\n\n",
"msg_date": "Sat, 10 Jun 2023 09:51:34 +0300",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "On Sat, Jun 10, 2023 at 15:51 David Steele <[email protected]> wrote:\n\n> On 6/9/23 19:14, Tom Lane wrote:\n> > David Steele <[email protected]> writes:\n> >> Thank you, this was very helpful. I am able to get the expected result\n> >> now with:\n> >\n> >> /* We only care about tables/views and can ignore subqueries, etc. */\n> >> if (!(rte->rtekind == RTE_RELATION ||\n> >> (rte->rtekind == RTE_SUBQUERY && OidIsValid(rte->relid))))\n> >> continue;\n> >\n> > Right, that matches places like add_rtes_to_flat_rtable().\n>\n> Good to have confirmation of that, thanks!\n>\n> >> One thing, though, rte->relkind is not set for views, so I still need to\n> >> call get_rel_relkind(rte->relid). Not a big deal, but do you think it\n> >> would make sense to set rte->relkind for views?\n> >\n> > If you see \"rte->rtekind == RTE_SUBQUERY && OidIsValid(rte->relid)\",\n> > it's dead certain that relid refers to a view, so you could just wire\n> > in that knowledge.\n>\n> Yeah, that's a good trick. Even so, I wonder why relkind is not set when\n> relid is set?\n\n\nI too have been thinking that setting relkind might be a good idea, even if\nonly as a crosscheck that only view relations can look like that in the\nrange table.\n\n> --\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\nOn Sat, Jun 10, 2023 at 15:51 David Steele <[email protected]> wrote:On 6/9/23 19:14, Tom Lane wrote:\n> David Steele <[email protected]> writes:\n>> Thank you, this was very helpful. I am able to get the expected result\n>> now with:\n> \n>> /* We only care about tables/views and can ignore subqueries, etc. */\n>> if (!(rte->rtekind == RTE_RELATION ||\n>> (rte->rtekind == RTE_SUBQUERY && OidIsValid(rte->relid))))\n>> continue;\n> \n> Right, that matches places like add_rtes_to_flat_rtable().\n\nGood to have confirmation of that, thanks!\n\n>> One thing, though, rte->relkind is not set for views, so I still need to\n>> call get_rel_relkind(rte->relid). Not a big deal, but do you think it\n>> would make sense to set rte->relkind for views?\n> \n> If you see \"rte->rtekind == RTE_SUBQUERY && OidIsValid(rte->relid)\",\n> it's dead certain that relid refers to a view, so you could just wire\n> in that knowledge.\n\nYeah, that's a good trick. Even so, I wonder why relkind is not set when \nrelid is set?I too have been thinking that setting relkind might be a good idea, even if only as a crosscheck that only view relations can look like that in the range table.-- Thanks, Amit LangoteEDB: http://www.enterprisedb.com",
"msg_date": "Sat, 10 Jun 2023 15:57:22 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "On 6/10/23 09:57, Amit Langote wrote:\n> On Sat, Jun 10, 2023 at 15:51 David Steele <[email protected] \n> <mailto:[email protected]>> wrote:\n> On 6/9/23 19:14, Tom Lane wrote:\n> >\n> > If you see \"rte->rtekind == RTE_SUBQUERY && OidIsValid(rte->relid)\",\n> > it's dead certain that relid refers to a view, so you could just wire\n> > in that knowledge.\n> \n> Yeah, that's a good trick. Even so, I wonder why relkind is not set\n> when\n> relid is set?\n> \n> I too have been thinking that setting relkind might be a good idea, even \n> if only as a crosscheck that only view relations can look like that in \n> the range table.\n\n+1. Even better if we can do it for PG16.\n\nRegards,\n-David\n\n\n",
"msg_date": "Sat, 10 Jun 2023 10:30:29 +0300",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "David Steele <[email protected]> writes:\n> On 6/10/23 09:57, Amit Langote wrote:\n>> I too have been thinking that setting relkind might be a good idea, even \n>> if only as a crosscheck that only view relations can look like that in \n>> the range table.\n\n> +1. Even better if we can do it for PG16.\n\nWell, if we're gonna do it we should do it for v16, rather than\nchange the data structure twice. It wouldn't be hard exactly:\n\n /*\n * Clear fields that should not be set in a subquery RTE. Note that we\n * leave the relid, rellockmode, and perminfoindex fields set, so that the\n * view relation can be appropriately locked before execution and its\n * permissions checked.\n */\n- rte->relkind = 0;\n rte->tablesample = NULL;\n rte->inh = false; /* must not be set for a subquery */\n\nplus adjustment of that comment and probably also the comment\nfor struct RangeTblEntry.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 10 Jun 2023 08:56:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "On Sat, Jun 10, 2023 at 08:56:47AM -0400, Tom Lane wrote:\n>\n> Well, if we're gonna do it we should do it for v16, rather than\n> change the data structure twice. It wouldn't be hard exactly:\n>\n> /*\n> * Clear fields that should not be set in a subquery RTE. Note that we\n> * leave the relid, rellockmode, and perminfoindex fields set, so that the\n> * view relation can be appropriately locked before execution and its\n> * permissions checked.\n> */\n> - rte->relkind = 0;\n> rte->tablesample = NULL;\n> rte->inh = false; /* must not be set for a subquery */\n>\n> plus adjustment of that comment and probably also the comment\n> for struct RangeTblEntry.\n\nand also handle that field in (read|out)funcs.c\n\n\n",
"msg_date": "Sat, 10 Jun 2023 21:18:57 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "Julien Rouhaud <[email protected]> writes:\n> On Sat, Jun 10, 2023 at 08:56:47AM -0400, Tom Lane wrote:\n>> - rte->relkind = 0;\n\n> and also handle that field in (read|out)funcs.c\n\nOh, right. Ugh, that means a catversion bump. It's not like\nwe've never done that during beta, but it's kind of an annoying\ncost for a detail as tiny as this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 10 Jun 2023 09:27:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "On Sat, Jun 10, 2023 at 10:27 PM Tom Lane <[email protected]> wrote:\n> Julien Rouhaud <[email protected]> writes:\n> > On Sat, Jun 10, 2023 at 08:56:47AM -0400, Tom Lane wrote:\n> >> - rte->relkind = 0;\n>\n> > and also handle that field in (read|out)funcs.c\n>\n> Oh, right. Ugh, that means a catversion bump. It's not like\n> we've never done that during beta, but it's kind of an annoying\n> cost for a detail as tiny as this.\n\nOK, so how about the attached?\n\nI considered adding Assert(relkind == RELKIND_VIEW) in all places that\nhave the \"rte->rtekind == RTE_SUBQUERY && OidIsValid(rte->relid)\"\ncondition, but that seemed like an overkill, so only added one in the\n#ifdef USE_ASSERT_CHECKING block in ExecCheckPermissions() that\nf75cec4fff877 added.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 13 Jun 2023 13:09:21 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "On 6/13/23 06:09, Amit Langote wrote:\n> On Sat, Jun 10, 2023 at 10:27 PM Tom Lane <[email protected]> wrote:\n>> Julien Rouhaud <[email protected]> writes:\n>>> On Sat, Jun 10, 2023 at 08:56:47AM -0400, Tom Lane wrote:\n>>>> - rte->relkind = 0;\n>>\n>>> and also handle that field in (read|out)funcs.c\n>>\n>> Oh, right. Ugh, that means a catversion bump. It's not like\n>> we've never done that during beta, but it's kind of an annoying\n>> cost for a detail as tiny as this.\n> \n> OK, so how about the attached?\n\nThe patch looks good to me. I also tested it against pgAudit and \neverything worked. I decided to go with the following because I think it \nis easier to read:\n\n/* We only care about tables/views and can ignore subqueries, etc. */\nif (!(rte->rtekind == RTE_RELATION ||\n (rte->rtekind == RTE_SUBQUERY && rte->relkind == RELKIND_VIEW)))\n continue;\n\n> I considered adding Assert(relkind == RELKIND_VIEW) in all places that\n> have the \"rte->rtekind == RTE_SUBQUERY && OidIsValid(rte->relid)\"\n> condition, but that seemed like an overkill, so only added one in the\n> #ifdef USE_ASSERT_CHECKING block in ExecCheckPermissions() that\n> f75cec4fff877 added.\n\nThis seems like a good place for the assert.\n\nThanks!\n-David\n\n\n",
"msg_date": "Tue, 13 Jun 2023 09:44:44 +0200",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 4:44 PM David Steele <[email protected]> wrote:\n> On 6/13/23 06:09, Amit Langote wrote:\n> > On Sat, Jun 10, 2023 at 10:27 PM Tom Lane <[email protected]> wrote:\n> >> Julien Rouhaud <[email protected]> writes:\n> >>> On Sat, Jun 10, 2023 at 08:56:47AM -0400, Tom Lane wrote:\n> >>>> - rte->relkind = 0;\n> >>\n> >>> and also handle that field in (read|out)funcs.c\n> >>\n> >> Oh, right. Ugh, that means a catversion bump. It's not like\n> >> we've never done that during beta, but it's kind of an annoying\n> >> cost for a detail as tiny as this.\n> >\n> > OK, so how about the attached?\n>\n> The patch looks good to me. I also tested it against pgAudit and\n> everything worked.\n\nThanks for the review.\n\n> I decided to go with the following because I think it\n> is easier to read:\n>\n> /* We only care about tables/views and can ignore subqueries, etc. */\n> if (!(rte->rtekind == RTE_RELATION ||\n> (rte->rtekind == RTE_SUBQUERY && rte->relkind == RELKIND_VIEW)))\n> continue;\n\nIt didn't occur to me so far to mention it but this could be replaced with:\n\n if (rte->perminfoindex != 0)\n\nand turn that condition into an Assert instead, like the loop over\nrange table in ExecCheckPermissions() does.\n\n> > I considered adding Assert(relkind == RELKIND_VIEW) in all places that\n> > have the \"rte->rtekind == RTE_SUBQUERY && OidIsValid(rte->relid)\"\n> > condition, but that seemed like an overkill, so only added one in the\n> > #ifdef USE_ASSERT_CHECKING block in ExecCheckPermissions() that\n> > f75cec4fff877 added.\n>\n> This seems like a good place for the assert.\n\nI added a comment above this Assert.\n\nI'd like to push this tomorrow barring objections.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 13 Jun 2023 17:27:30 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "On 6/13/23 10:27, Amit Langote wrote:\n> On Tue, Jun 13, 2023 at 4:44 PM David Steele <[email protected]> wrote:\n> \n>> I decided to go with the following because I think it\n>> is easier to read:\n>>\n>> /* We only care about tables/views and can ignore subqueries, etc. */\n>> if (!(rte->rtekind == RTE_RELATION ||\n>> (rte->rtekind == RTE_SUBQUERY && rte->relkind == RELKIND_VIEW)))\n>> continue;\n> \n> It didn't occur to me so far to mention it but this could be replaced with:\n> \n> if (rte->perminfoindex != 0)\n> \n> and turn that condition into an Assert instead, like the loop over\n> range table in ExecCheckPermissions() does.\n\nHmmm, that might work, and save us a filter on rte->perminfoindex later \non (to filter out table partitions). Thanks for the tip!\n\n>>> I considered adding Assert(relkind == RELKIND_VIEW) in all places that\n>>> have the \"rte->rtekind == RTE_SUBQUERY && OidIsValid(rte->relid)\"\n>>> condition, but that seemed like an overkill, so only added one in the\n>>> #ifdef USE_ASSERT_CHECKING block in ExecCheckPermissions() that\n>>> f75cec4fff877 added.\n>>\n>> This seems like a good place for the assert.\n> \n> I added a comment above this Assert.\n> \n> I'd like to push this tomorrow barring objections.\n\n+1 for the new comment.\n\nRegards,\n-David\n\n\n",
"msg_date": "Tue, 13 Jun 2023 10:57:36 +0200",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "Note that you changed one comment from \"permission checks\" to\n\"permission hecks\".\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No nos atrevemos a muchas cosas porque son difíciles,\npero son difíciles porque no nos atrevemos a hacerlas\" (Séneca)\n\n\n",
"msg_date": "Tue, 13 Jun 2023 11:33:40 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 6:33 PM Alvaro Herrera <[email protected]> wrote:\n> Note that you changed one comment from \"permission checks\" to\n> \"permission hecks\".\n\nOops, thanks for pointing that out.\n\nFixed in the attached.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 13 Jun 2023 18:38:42 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "On 6/13/23 11:38, Amit Langote wrote:\n> On Tue, Jun 13, 2023 at 6:33 PM Alvaro Herrera <[email protected]> wrote:\n>> Note that you changed one comment from \"permission checks\" to\n>> \"permission hecks\".\n> \n> Oops, thanks for pointing that out.\n> \n> Fixed in the attached.\n\nI have done another (more careful) review of the comments and I don't \nsee any other issues.\n\nRegards,\n-David\n\n\n",
"msg_date": "Tue, 13 Jun 2023 14:40:12 +0200",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 9:40 PM David Steele <[email protected]> wrote:\n> On 6/13/23 11:38, Amit Langote wrote:\n> > On Tue, Jun 13, 2023 at 6:33 PM Alvaro Herrera <[email protected]> wrote:\n> >> Note that you changed one comment from \"permission checks\" to\n> >> \"permission hecks\".\n> >\n> > Oops, thanks for pointing that out.\n> >\n> > Fixed in the attached.\n>\n> I have done another (more careful) review of the comments and I don't\n> see any other issues.\n\nThanks for checking.\n\nSeeing no other comments, I've pushed this after rewriting the\ncomments that needed to be changed to mention \"relkind\" right after\n\"relid\".\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Jun 2023 12:08:55 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 12:08 Amit Langote <[email protected]> wrote:\n\n> On Tue, Jun 13, 2023 at 9:40 PM David Steele <[email protected]> wrote:\n> > On 6/13/23 11:38, Amit Langote wrote:\n> > > On Tue, Jun 13, 2023 at 6:33 PM Alvaro Herrera <\n> [email protected]> wrote:\n> > >> Note that you changed one comment from \"permission checks\" to\n> > >> \"permission hecks\".\n> > >\n> > > Oops, thanks for pointing that out.\n> > >\n> > > Fixed in the attached.\n> >\n> > I have done another (more careful) review of the comments and I don't\n> > see any other issues.\n>\n> Thanks for checking.\n>\n> Seeing no other comments, I've pushed this after rewriting the\n> comments that needed to be changed to mention \"relkind\" right after\n> \"relid\".\n\n\nThis being my first commit, I intently looked to check if everything’s set\nup correctly. While it seemed to have hit gitweb and GitHub, it didn’t\npgsql-committers for some reason.\n\n> --\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\nOn Wed, Jun 14, 2023 at 12:08 Amit Langote <[email protected]> wrote:On Tue, Jun 13, 2023 at 9:40 PM David Steele <[email protected]> wrote:\n> On 6/13/23 11:38, Amit Langote wrote:\n> > On Tue, Jun 13, 2023 at 6:33 PM Alvaro Herrera <[email protected]> wrote:\n> >> Note that you changed one comment from \"permission checks\" to\n> >> \"permission hecks\".\n> >\n> > Oops, thanks for pointing that out.\n> >\n> > Fixed in the attached.\n>\n> I have done another (more careful) review of the comments and I don't\n> see any other issues.\n\nThanks for checking.\n\nSeeing no other comments, I've pushed this after rewriting the\ncomments that needed to be changed to mention \"relkind\" right after\n\"relid\".This being my first commit, I intently looked to check if everything’s set up correctly. While it seemed to have hit gitweb and GitHub, it didn’t pgsql-committers for some reason.\n-- Thanks, Amit LangoteEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 14 Jun 2023 14:34:56 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 02:34:56PM +0900, Amit Langote wrote:\n> This being my first commit, I intently looked to check if everything’s set\n> up correctly. While it seemed to have hit gitweb and GitHub, it didn’t\n> pgsql-committers for some reason.\n\nIt seems to me that the email of pgsql-committers is just being held\nin moderation. Your first commit is in the tree, so this worked fine\nseen from here. Congrats!\n--\nMichael",
"msg_date": "Wed, 14 Jun 2023 15:44:21 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 15:44 Michael Paquier <[email protected]> wrote:\n\n> On Wed, Jun 14, 2023 at 02:34:56PM +0900, Amit Langote wrote:\n> > This being my first commit, I intently looked to check if everything’s\n> set\n> > up correctly. While it seemed to have hit gitweb and GitHub, it didn’t\n> > pgsql-committers for some reason.\n>\n> It seems to me that the email of pgsql-committers is just being held\n> in moderation. Your first commit is in the tree, so this worked fine\n> seen from here. Congrats!\n\n\nAh, did think it might be moderation. Thanks for the confirmation, Michael.\n\n> --\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\nOn Wed, Jun 14, 2023 at 15:44 Michael Paquier <[email protected]> wrote:On Wed, Jun 14, 2023 at 02:34:56PM +0900, Amit Langote wrote:\n> This being my first commit, I intently looked to check if everything’s set\n> up correctly. While it seemed to have hit gitweb and GitHub, it didn’t\n> pgsql-committers for some reason.\n\nIt seems to me that the email of pgsql-committers is just being held\nin moderation. Your first commit is in the tree, so this worked fine\nseen from here. Congrats!Ah, did think it might be moderation. Thanks for the confirmation, Michael.-- Thanks, Amit LangoteEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 14 Jun 2023 15:49:04 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 15:49 Amit Langote <[email protected]> wrote:\n\n> On Wed, Jun 14, 2023 at 15:44 Michael Paquier <[email protected]> wrote:\n>\n>> On Wed, Jun 14, 2023 at 02:34:56PM +0900, Amit Langote wrote:\n>> > This being my first commit, I intently looked to check if everything’s\n>> set\n>> > up correctly. While it seemed to have hit gitweb and GitHub, it didn’t\n>> > pgsql-committers for some reason.\n>>\n>> It seems to me that the email of pgsql-committers is just being held\n>> in moderation. Your first commit is in the tree, so this worked fine\n>> seen from here. Congrats!\n>\n>\n> Ah, did think it might be moderation. Thanks for the confirmation,\n> Michael.\n>\n\nIt’s out now:\n\nhttps://www.postgresql.org/message-id/E1q9Gms-001h5g-8Q%40gemulon.postgresql.org\n\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\nOn Wed, Jun 14, 2023 at 15:49 Amit Langote <[email protected]> wrote:On Wed, Jun 14, 2023 at 15:44 Michael Paquier <[email protected]> wrote:On Wed, Jun 14, 2023 at 02:34:56PM +0900, Amit Langote wrote:\n> This being my first commit, I intently looked to check if everything’s set\n> up correctly. While it seemed to have hit gitweb and GitHub, it didn’t\n> pgsql-committers for some reason.\n\nIt seems to me that the email of pgsql-committers is just being held\nin moderation. Your first commit is in the tree, so this worked fine\nseen from here. Congrats!Ah, did think it might be moderation. Thanks for the confirmation, Michael.It’s out now:https://www.postgresql.org/message-id/E1q9Gms-001h5g-8Q%40gemulon.postgresql.org\n-- Thanks, Amit LangoteEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 14 Jun 2023 19:51:18 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Views no longer in rangeTabls?"
},
{
"msg_contents": "On 6/14/23 12:51, Amit Langote wrote:\n> \n> Ah, did think it might be moderation. Thanks for the confirmation,\n> Michael.\n> \n> It’s out now:\n> \n> https://www.postgresql.org/message-id/E1q9Gms-001h5g-8Q%40gemulon.postgresql.org <https://www.postgresql.org/message-id/E1q9Gms-001h5g-8Q%40gemulon.postgresql.org>\n\nThank you for committing this and congratulations on your first commit!\n\nRegards,\n-David\n\n\n",
"msg_date": "Wed, 14 Jun 2023 14:34:22 +0200",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Views no longer in rangeTabls?"
}
] |
[
{
"msg_contents": "Per discussion at the unconference[0], I started to write a patch that \nremoves the make distprep target. A more detailed explanation of the \nrationale is also in the patch.\n\nIt needs some polishing around the edges, but I wanted to put it out \nhere to get things moving and avoid duplicate work.\n\nOne thing in particular that isn't clear right now is how \"make world\" \nshould behave if the documentation tools are not found. Maybe we should \nmake a build option, like \"--with-docs\", to mirror the meson behavior.\n\n[0]: \nhttps://wiki.postgresql.org/wiki/PgCon_2023_Developer_Unconference#Build_System",
"msg_date": "Fri, 9 Jun 2023 11:10:14 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Remove distprep"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-09 11:10:14 +0200, Peter Eisentraut wrote:\n> Per discussion at the unconference[0], I started to write a patch that\n> removes the make distprep target. A more detailed explanation of the\n> rationale is also in the patch.\n\nThanks for tackling this!\n\n\n> It needs some polishing around the edges, but I wanted to put it out here to\n> get things moving and avoid duplicate work.\n\nIt'd be nice if we could get this in soon, so we could move ahead with the\nsrc/tools/msvc removal.\n\n\n> One thing in particular that isn't clear right now is how \"make world\"\n> should behave if the documentation tools are not found. Maybe we should\n> make a build option, like \"--with-docs\", to mirror the meson behavior.\n\nIsn't that somewhat unrelated to distprep? I see that you removed missing,\nbut I don't really understand why as part of this commit?\n\n\n> -# If there are any files in the source directory that we also generate in the\n> -# build directory, they might get preferred over the newly generated files,\n> -# e.g. because of a #include \"file\", which always will search in the current\n> -# directory first.\n> -message('checking for file conflicts between source and build directory')\n\nYou're thinking this can be removed because distclean is now reliable? There\nwere some pretty annoying to debug issues early on, where people switched from\nan in-tree autoconf build to meson, with some files left over from the source\nbuild, causing problems at a *later* time (when the files should have changed,\nbut the wrong ones were picked up). That's not really related distprep etc,\nso I'd change this separately, if we want to change it.\n\n\n> diff --git a/src/tools/pginclude/cpluspluscheck b/src/tools/pginclude/cpluspluscheck\n> index 4e09c4686b..287395887c 100755\n> --- a/src/tools/pginclude/cpluspluscheck\n> +++ b/src/tools/pginclude/cpluspluscheck\n> @@ -134,6 +134,9 @@ do\n> \ttest \"$f\" = src/interfaces/ecpg/preproc/preproc.h && continue\n> \ttest \"$f\" = src/test/isolation/specparse.h && continue\n>\n> +\t# FIXME\n> +\ttest \"$f\" = src/backend/utils/adt/jsonpath_internal.h && continue\n> +\n> \t# ppport.h is not under our control, so we can't make it standalone.\n> \ttest \"$f\" = src/pl/plperl/ppport.h && continue\n\nHm, what's that about?\n\nWe really ought to replace these scripts with something better, which\nunderstands concurrency...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 12 Jul 2023 16:19:49 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On 13.07.23 01:19, Andres Freund wrote:\n>> One thing in particular that isn't clear right now is how \"make world\"\n>> should behave if the documentation tools are not found. Maybe we should\n>> make a build option, like \"--with-docs\", to mirror the meson behavior.\n> \n> Isn't that somewhat unrelated to distprep? I see that you removed missing,\n> but I don't really understand why as part of this commit?\n\nOk, I put the docs stuff back the way it was and put \"missing\" back in.\n\n>> -# If there are any files in the source directory that we also generate in the\n>> -# build directory, they might get preferred over the newly generated files,\n>> -# e.g. because of a #include \"file\", which always will search in the current\n>> -# directory first.\n>> -message('checking for file conflicts between source and build directory')\n> \n> You're thinking this can be removed because distclean is now reliable? There\n> were some pretty annoying to debug issues early on, where people switched from\n> an in-tree autoconf build to meson, with some files left over from the source\n> build, causing problems at a *later* time (when the files should have changed,\n> but the wrong ones were picked up). That's not really related distprep etc,\n> so I'd change this separately, if we want to change it.\n\nOk, I kept it in.\n\n>> diff --git a/src/tools/pginclude/cpluspluscheck b/src/tools/pginclude/cpluspluscheck\n>> index 4e09c4686b..287395887c 100755\n>> --- a/src/tools/pginclude/cpluspluscheck\n>> +++ b/src/tools/pginclude/cpluspluscheck\n>> @@ -134,6 +134,9 @@ do\n>> \ttest \"$f\" = src/interfaces/ecpg/preproc/preproc.h && continue\n>> \ttest \"$f\" = src/test/isolation/specparse.h && continue\n>>\n>> +\t# FIXME\n>> +\ttest \"$f\" = src/backend/utils/adt/jsonpath_internal.h && continue\n>> +\n>> \t# ppport.h is not under our control, so we can't make it standalone.\n>> \ttest \"$f\" = src/pl/plperl/ppport.h && continue\n> \n> Hm, what's that about?\n\nDon't remember ... ;-) I removed this.\n\nAttached is a new version with the above changes, also updated for the \nrecently added generate-wait_event_types.pl, and I have adjusted all the \nheader file linking to use relative paths consistently. This addresses \nall issues known to me.",
"msg_date": "Fri, 14 Jul 2023 09:54:03 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On 2023-Jul-14, Peter Eisentraut wrote:\n\n> diff --git a/src/backend/parser/Makefile b/src/backend/parser/Makefile\n> index 9f1c4022bb..3d33b082f2 100644\n> --- a/src/backend/parser/Makefile\n> +++ b/src/backend/parser/Makefile\n> @@ -64,8 +64,8 @@ scan.c: FLEX_FIX_WARNING=yes\n> # Force these dependencies to be known even without dependency info built:\n> gram.o scan.o parser.o: gram.h\n> \n> -\n> -# gram.c, gram.h, and scan.c are in the distribution tarball, so they\n> -# are not cleaned here.\n> -clean distclean maintainer-clean:\n> +clean:\n> +\trm -f parser/gram.c \\\n> +\t parser/gram.h \\\n> +\t parser/scan.c\n> \trm -f lex.backup\n\nHmm, this hunk and the equivalents in src/backend/bootstrap and\nsrc/backend/replication are wrong: you moved the rule from the parent\ndirectory's makefile to the directory where the files reside, but didn't\nremove the directory name from the command arguments, so the files\naren't actually deleted.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"El destino baraja y nosotros jugamos\" (A. Schopenhauer)\n\n\n",
"msg_date": "Fri, 14 Jul 2023 10:24:43 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On 14.07.23 09:54, Peter Eisentraut wrote:\n>>> diff --git a/src/tools/pginclude/cpluspluscheck \n>>> b/src/tools/pginclude/cpluspluscheck\n>>> index 4e09c4686b..287395887c 100755\n>>> --- a/src/tools/pginclude/cpluspluscheck\n>>> +++ b/src/tools/pginclude/cpluspluscheck\n>>> @@ -134,6 +134,9 @@ do\n>>> test \"$f\" = src/interfaces/ecpg/preproc/preproc.h && continue\n>>> test \"$f\" = src/test/isolation/specparse.h && continue\n>>>\n>>> + # FIXME\n>>> + test \"$f\" = src/backend/utils/adt/jsonpath_internal.h && continue\n>>> +\n>>> # ppport.h is not under our control, so we can't make it \n>>> standalone.\n>>> test \"$f\" = src/pl/plperl/ppport.h && continue\n>>\n>> Hm, what's that about?\n> \n> Don't remember ... ;-) I removed this.\n\nAh, there was a reason. The headerscheck and cpluspluscheck targets \nneed jsonpath_gram.h to be built first. Which previously happened \nindirectly somehow? I have fixed this in the new patch version. I also \nfixed the issue that Álvaro reported nearby.",
"msg_date": "Fri, 14 Jul 2023 10:56:26 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Ah, there was a reason. The headerscheck and cpluspluscheck targets \n> need jsonpath_gram.h to be built first. Which previously happened \n> indirectly somehow? I have fixed this in the new patch version. I also \n> fixed the issue that Álvaro reported nearby.\n\nHave we yet run this concept past the packagers list? I'm still\nwondering if they will raise any objection to getting rid of all\nthe prebuilt files.\n\nAlso, I personally don't like the fact that you have removed the\ndistinction between distclean and maintainer-clean. I use\ndistclean-and-reconfigure quite a lot to avoid having to rebuild\nbison/flex outputs. This patch seems to have destroyed that\nworkflow optimization in return for not much.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Jul 2023 05:48:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "Re: Tom Lane\n> Have we yet run this concept past the packagers list? I'm still\n> wondering if they will raise any objection to getting rid of all\n> the prebuilt files.\n\nNo problem for Debian, we are building snapshot releases from plain\ngit already without issues. In fact, there are already some explicit\nrm/clean rules in the packages to force rebuilding some (most?) of the\npre-built files.\n\nMost notably, we are also rebuilding \"configure\" using autoconf 2.71\nwithout issues. Perhaps we can get rid of the 2.69 hardcoding there?\n\nThanks for the heads-up,\nChristoph\n\n\n",
"msg_date": "Wed, 9 Aug 2023 11:13:10 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "Christoph Berg <[email protected]> writes:\n> Most notably, we are also rebuilding \"configure\" using autoconf 2.71\n> without issues. Perhaps we can get rid of the 2.69 hardcoding there?\n\nMeh ... the fact that it works fine for you doesn't mean it will work\nfine elsewhere. Since we're trying to get out from under maintaining\nthe autoconf build system, I don't think it makes sense to open\nourselves up to having to do more work on it. A policy of benign\nneglect seems appropriate to me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 09 Aug 2023 10:11:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "Re: Tom Lane\n> Meh ... the fact that it works fine for you doesn't mean it will work\n> fine elsewhere. Since we're trying to get out from under maintaining\n> the autoconf build system, I don't think it makes sense to open\n> ourselves up to having to do more work on it. A policy of benign\n> neglect seems appropriate to me.\n\nUnderstood, I was just pointing out there are more types of generated\nfiles in there.\n\nChristoph\n\n\n",
"msg_date": "Wed, 9 Aug 2023 16:25:28 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "Hi,\n\nThanks for sending the -packagers email Peter!\n\nOn 2023-08-09 16:25:28 +0200, Christoph Berg wrote:\n> Re: Tom Lane\n> > Meh ... the fact that it works fine for you doesn't mean it will work\n> > fine elsewhere. Since we're trying to get out from under maintaining\n> > the autoconf build system, I don't think it makes sense to open\n> > ourselves up to having to do more work on it. A policy of benign\n> > neglect seems appropriate to me.\n> \n> Understood, I was just pointing out there are more types of generated\n> files in there.\n\nThe situation for configure is somewhat different, due to being maintained in\nthe repository, rather than just being included in the tarball...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 9 Aug 2023 07:38:40 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On 14.07.23 11:48, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> Ah, there was a reason. The headerscheck and cpluspluscheck targets\n>> need jsonpath_gram.h to be built first. Which previously happened\n>> indirectly somehow? I have fixed this in the new patch version. I also\n>> fixed the issue that Álvaro reported nearby.\n> \n> Have we yet run this concept past the packagers list? I'm still\n> wondering if they will raise any objection to getting rid of all\n> the prebuilt files.\n\nSo far there hasn't been any feedback from packagers that would appear \nto affect the outcome here.\n\n> Also, I personally don't like the fact that you have removed the\n> distinction between distclean and maintainer-clean. I use\n> distclean-and-reconfigure quite a lot to avoid having to rebuild\n> bison/flex outputs. This patch seems to have destroyed that\n> workflow optimization in return for not much.\n\nThe distclean target has a standard meaning that is baked into \ndownstream build systems, so it would be pretty disruptive if distclean \ndidn't actually clean everything back down to what was in the \ndistribution tarball. We could add a different clean target that cleans \nnot quite everything, if you can suggest a definition of what that \nshould do.\n\n\n\n",
"msg_date": "Wed, 16 Aug 2023 08:25:21 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On Wed, Aug 09, 2023 at 07:38:40AM -0700, Andres Freund wrote:\n> On 2023-08-09 16:25:28 +0200, Christoph Berg wrote:\n>> Understood, I was just pointing out there are more types of generated\n>> files in there.\n> \n> The situation for configure is somewhat different, due to being maintained in\n> the repository, rather than just being included in the tarball...\n\nThis one comes down to Debian that patches autoconf with its own set\nof options, requiring a new ./configure in the tree, right?\n--\nMichael",
"msg_date": "Fri, 18 Aug 2023 10:11:12 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On 2023-08-18 10:11:12 +0900, Michael Paquier wrote:\n> On Wed, Aug 09, 2023 at 07:38:40AM -0700, Andres Freund wrote:\n> > On 2023-08-09 16:25:28 +0200, Christoph Berg wrote:\n> >> Understood, I was just pointing out there are more types of generated\n> >> files in there.\n> > \n> > The situation for configure is somewhat different, due to being maintained in\n> > the repository, rather than just being included in the tarball...\n> \n> This one comes down to Debian that patches autoconf with its own set\n> of options, requiring a new ./configure in the tree, right?\n\nI'm not sure what you're really asking here?\n\n\n",
"msg_date": "Thu, 17 Aug 2023 19:39:40 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "Re: Michael Paquier\n> This one comes down to Debian that patches autoconf with its own set\n> of options, requiring a new ./configure in the tree, right?\n\nYes, mostly. Since autoconf had not seen a new release for so long,\neveryone started to patch it, and one of the things that Debian and\nothers added was --runstatedir=DIR. The toolchain is also using it,\nit's part of the default set of options used by dh_auto_configure.\n\nIn parallel, the standard debhelper toolchain also started to run\nautoreconf by default, so instead of telling dh_auto_configure to omit\n--runstatedir, it was really easier to patch configure.ac to remove\nthe autoconf 2.69 check.\n\nTwo of the other patches are touching configure(.ac) anyway to tweak\ncompiler flags (reproducibility, aarch64 tweaks).\n\nChristoph\n\n\n",
"msg_date": "Fri, 18 Aug 2023 10:22:47 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On Fri, Aug 18, 2023 at 10:22:47AM +0200, Christoph Berg wrote:\n> Yes, mostly. Since autoconf had not seen a new release for so long,\n> everyone started to patch it, and one of the things that Debian and\n> others added was --runstatedir=DIR. The toolchain is also using it,\n> it's part of the default set of options used by dh_auto_configure.\n> \n> In parallel, the standard debhelper toolchain also started to run\n> autoreconf by default, so instead of telling dh_auto_configure to omit\n> --runstatedir, it was really easier to patch configure.ac to remove\n> the autoconf 2.69 check.\n\nAh, I didn't know this part of the story. Thanks for the insights.\n\n> Two of the other patches are touching configure(.ac) anyway to tweak\n> compiler flags (reproducibility, aarch64 tweaks).\n\nIs reproducibility something you've brought to a separate thread?\nFWIW, I'd be interested in improving this area for the in-core code,\nif need be. (Not material for this thread, of course).\n--\nMichael",
"msg_date": "Mon, 21 Aug 2023 16:24:47 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "Re: Michael Paquier\n> Is reproducibility something you've brought to a separate thread?\n> FWIW, I'd be interested in improving this area for the in-core code,\n> if need be. (Not material for this thread, of course).\n\nAll the \"normal\" things like C compilation are actually already\nreproducible.\n\nThe bit addressed by the mentioned patch is that the compiler flags\nare recorded for later output by pg_config, and that includes\n-ffile-prefix-map=/path/to/source=. which does improve C\nreproducibility, but ironically is itself not reproducible when the\nsource is then compiled in a different directory. The patch simply\nremoves that flag from the information stored.\n\nhttps://salsa.debian.org/postgresql/postgresql/-/blob/15/debian/patches/filter-debug-prefix-map\n\nNot sure not much of that would be material for inclusion in PG.\n\nThis fix made PG 10 reproducible in Debian for about a week. Then LLVM\nhappened :D. The .bc files still record the build path, and so far no\none has found a way to prevent that:\n\nhttps://tests.reproducible-builds.org/debian/rb-pkg/unstable/arm64/diffoscope-results/postgresql-15.html\n\nAfaict that's the last part to be resolved (but it's been a while\nsince I checked). clang seems to have learned about -ffile-prefix-map=\nin the meantime, that needs to be tested.\n\nChristoph\n\n\n",
"msg_date": "Mon, 21 Aug 2023 14:23:27 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Reproducibility (Re: Remove distprep)"
},
{
"msg_contents": "On 14.07.23 10:56, Peter Eisentraut wrote:\n> On 14.07.23 09:54, Peter Eisentraut wrote:\n>>>> diff --git a/src/tools/pginclude/cpluspluscheck \n>>>> b/src/tools/pginclude/cpluspluscheck\n>>>> index 4e09c4686b..287395887c 100755\n>>>> --- a/src/tools/pginclude/cpluspluscheck\n>>>> +++ b/src/tools/pginclude/cpluspluscheck\n>>>> @@ -134,6 +134,9 @@ do\n>>>> test \"$f\" = src/interfaces/ecpg/preproc/preproc.h && continue\n>>>> test \"$f\" = src/test/isolation/specparse.h && continue\n>>>>\n>>>> + # FIXME\n>>>> + test \"$f\" = src/backend/utils/adt/jsonpath_internal.h && continue\n>>>> +\n>>>> # ppport.h is not under our control, so we can't make it \n>>>> standalone.\n>>>> test \"$f\" = src/pl/plperl/ppport.h && continue\n>>>\n>>> Hm, what's that about?\n>>\n>> Don't remember ... ;-) I removed this.\n> \n> Ah, there was a reason. The headerscheck and cpluspluscheck targets \n> need jsonpath_gram.h to be built first. Which previously happened \n> indirectly somehow? I have fixed this in the new patch version. I also \n> fixed the issue that Álvaro reported nearby.\n\nApparently, the headerscheck and cpluspluscheck targets still didn't \nwork right in the Cirrus jobs. Here is an updated patch to address \nthat. This is also rebased over some recent changes that affected this \npatch (generated wait events stuff).",
"msg_date": "Wed, 23 Aug 2023 12:46:45 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On Wed, Aug 23, 2023 at 12:46:45PM +0200, Peter Eisentraut wrote:\n> Apparently, the headerscheck and cpluspluscheck targets still didn't work\n> right in the Cirrus jobs. Here is an updated patch to address that. This\n> is also rebased over some recent changes that affected this patch (generated\n> wait events stuff).\n\n-gettext-files: distprep\n+gettext-files: postgres\n\nThis bit in src/backend/nls.mk does not seem right to me. The\nfollowing sequence works on HEAD, but breaks with the patch because\nthe files that should be automatically generated from the perl scripts\naren't anymore:\n$ ./configure ...\n$ make -C src/backend/ gettext-files\n[...]\nIn file included from ../../../../src/include/postgres.h:46,\nfrom brin.c:16:\n../../../../src/include/utils/elog.h:79:10: fatal error:\nutils/errcodes.h: No such file or directory\n79 | #include \"utils/errcodes.h\" \n\n# Technically, this should depend on Makefile.global, but then\n# version.sgml would need to be rebuilt after every configure run,\n# even in distribution tarballs. So this is cheating a bit, but it\n\nThere is this comment in doc/src/sgml/Makefile. Does it still apply?\n\n Note that building <productname>PostgreSQL</productname> from the source\n repository requires reasonably up-to-date versions of <application>bison</application>,\n <application>flex</application>, and <application>Perl</application>.\n These tools are not needed to build from a distribution tarball, because\n the files generated with these tools are included in the tarball.\n Other tool requirements\n\nThis paragraph exists in sourcerepo.sgml, but it should be updated, I\nguess, because now these three binaries would be required when\nbuilding from a tarball.\n\n# specparse.c and specscanner.c are in the distribution tarball,\n# so do not clean them here\n\nThis comment in src/test/isolation/Makefile should be removed.\n--\nMichael",
"msg_date": "Tue, 26 Sep 2023 13:49:08 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-23 12:46:45 +0200, Peter Eisentraut wrote:\n> Subject: [PATCH v4] Remove distprep\n> \n> A PostgreSQL release tarball contains a number of prebuilt files, in\n> particular files produced by bison, flex, perl, and well as html and\n> man documentation. We have done this consistent with established\n> practice at the time to not require these tools for building from a\n> tarball. Some of these tools were hard to get, or get the right\n> version of, from time to time, and shipping the prebuilt output was a\n> convenience to users.\n> \n> Now this has at least two problems:\n> \n> One, we have to make the build system(s) work in two modes: Building\n> from a git checkout and building from a tarball. This is pretty\n> complicated, but it works so far for autoconf/make. It does not\n> currently work for meson; you can currently only build with meson from\n> a git checkout. Making meson builds work from a tarball seems very\n> difficult or impossible. One particular problem is that since meson\n> requires a separate build directory, we cannot make the build update\n> files like gram.h in the source tree. So if you were to build from a\n> tarball and update gram.y, you will have a gram.h in the source tree\n> and one in the build tree, but the way things work is that the\n> compiler will always use the one in the source tree. So you cannot,\n> for example, make any gram.y changes when building from a tarball.\n> This seems impossible to fix in a non-horrible way.\n\nI think it might be possible to fix in a non-horrible way, just that the\neffort doing so could be much better spent on other things.\n\nIt's maybe also worth mentioning that this does *not* work reliably with vpath\nmake builds either...\n\n\n> The make maintainer-clean target, whose job it is to remove the\n> prebuilt files in addition to what make distclean does, is now just an\n> alias to make distprep. (In practice, it is probably obsolete given\n> that git clean is available.)\n\nFWIW, I find a \"full clean\" target useful to be sure that we don't produce\n\"untracked\" build products. Do a full build, then run \"full clean\", then see\nwhat remains.\n\n\n> 88 files changed, 169 insertions(+), 409 deletions(-)\n\nIt might be worthwhile to split this into a bit smaller chunks, e.g. depending\non perl, bison, flex, and then separately the various makefile bits that are\nall over the tree.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 29 Sep 2023 09:00:30 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On 26.09.23 06:49, Michael Paquier wrote:\n> On Wed, Aug 23, 2023 at 12:46:45PM +0200, Peter Eisentraut wrote:\n>> Apparently, the headerscheck and cpluspluscheck targets still didn't work\n>> right in the Cirrus jobs. Here is an updated patch to address that. This\n>> is also rebased over some recent changes that affected this patch (generated\n>> wait events stuff).\n> \n> -gettext-files: distprep\n> +gettext-files: postgres\n> \n> This bit in src/backend/nls.mk does not seem right to me. The\n> following sequence works on HEAD, but breaks with the patch because\n> the files that should be automatically generated from the perl scripts\n> aren't anymore:\n> $ ./configure ...\n> $ make -C src/backend/ gettext-files\n> [...]\n> In file included from ../../../../src/include/postgres.h:46,\n> from brin.c:16:\n> ../../../../src/include/utils/elog.h:79:10: fatal error:\n> utils/errcodes.h: No such file or directory\n> 79 | #include \"utils/errcodes.h\"\n\nOk, I think I found a better way to address this. It requires keeping a \nsubset of the old distprep target in src/backend/Makefile for use by \nnls.mk. I have checked that the above sequence now works, and that the \ngenerated .pot files are identical to before this patch.\n\n> # Technically, this should depend on Makefile.global, but then\n> # version.sgml would need to be rebuilt after every configure run,\n> # even in distribution tarballs. So this is cheating a bit, but it\n> \n> There is this comment in doc/src/sgml/Makefile. Does it still apply?\n\nI have removed the \"even in distribution tarballs\" bit, but the general \nidea is still valid I think.\n\n> Note that building <productname>PostgreSQL</productname> from the source\n> repository requires reasonably up-to-date versions of <application>bison</application>,\n> <application>flex</application>, and <application>Perl</application>.\n> These tools are not needed to build from a distribution tarball, because\n> the files generated with these tools are included in the tarball.\n> Other tool requirements\n> \n> This paragraph exists in sourcerepo.sgml, but it should be updated, I\n> guess, because now these three binaries would be required when\n> building from a tarball.\n\nI have removed that paragraph.\n\n> # specparse.c and specscanner.c are in the distribution tarball,\n> # so do not clean them here\n> \n> This comment in src/test/isolation/Makefile should be removed.\n\ndone\n\nThe attached updated patch is also split up like Andres suggested \nnearby. (Not sure if it would be good to commit it that way, but it's \neasier to look at for now for sure.)",
"msg_date": "Thu, 5 Oct 2023 17:46:46 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On Thu, Oct 05, 2023 at 05:46:46PM +0200, Peter Eisentraut wrote:\n> Ok, I think I found a better way to address this. It requires keeping a\n> subset of the old distprep target in src/backend/Makefile for use by nls.mk.\n> I have checked that the above sequence now works, and that the generated\n> .pot files are identical to before this patch.\n\ngenerated-parser-sources looks like an elegant split.\n\n> (Not sure if it would be good to commit it that way, but it's easier to look\n> at for now for sure.)\n\nNot sure. I'd be OK if the patch set is committed into two pieces as\nwell. I guess that's up to how you feel about that at the end ;)\n\nWhile looking at the last references of the distribution tarball, this\ncame out:\n# This allows removing some files from the distribution tarballs while\n# keeping the dependencies satisfied.\n.SECONDARY: $(GENERATED_SGML)\n.SECONDARY: postgres-full.xml\n.SECONDARY: INSTALL.html INSTALL.xml\n.SECONDARY: postgres-A4.fo postgres-US.fo\n\nThat's not really something for this patch, but I got to ask. What's\nthe final plan for the documentation when it comes to releases? A\nsecond tarball separated from the git-only tarball that includes all\nthat and the HTML docs, generated with a new \"doc-only\" set of meson\ncommands?\n--\nMichael",
"msg_date": "Fri, 6 Oct 2023 11:00:41 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On 06.10.23 04:00, Michael Paquier wrote:\n> That's not really something for this patch, but I got to ask. What's\n> the final plan for the documentation when it comes to releases? A\n> second tarball separated from the git-only tarball that includes all\n> that and the HTML docs, generated with a new \"doc-only\" set of meson\n> commands?\n\nYes, something like that. Some people wanted a tarball of just the HTML \ndocs for download. Similar to the PDFs currently, I suppose.\n\n\n\n",
"msg_date": "Fri, 6 Oct 2023 08:38:31 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On Fri, Oct 06, 2023 at 08:38:31AM +0200, Peter Eisentraut wrote:\n> Yes, something like that. Some people wanted a tarball of just the HTML\n> docs for download. Similar to the PDFs currently, I suppose.\n\nI suspected so.\n\nI've marked the patch as RfC for now.\n--\nMichael",
"msg_date": "Fri, 6 Oct 2023 17:03:40 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "Hi,\n\nOn 2023-10-05 17:46:46 +0200, Peter Eisentraut wrote:\n> The attached updated patch is also split up like Andres suggested nearby.\n\nThanks.\n\n\n> (Not sure if it would be good to commit it that way, but it's easier to look\n> at for now for sure.)\n\nI'd push together, but I think the split is useful when looking later as well.\n\nI played around with these for a bit without finding an issue.\n\n\nThe only thing I wonder is whether we ought to keep a maintainer-clean target\n(as an alias to distclean), so that extensions that added things to\nmaintainer-clean continue to work.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 6 Oct 2023 11:50:34 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On 06.10.23 20:50, Andres Freund wrote:\n> The only thing I wonder is whether we ought to keep a maintainer-clean \n> target (as an alias to distclean), so that extensions that added things \n> to maintainer-clean continue to work.\n\nThe patch does do that.\n\n\n\n",
"msg_date": "Mon, 9 Oct 2023 12:16:23 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "Hi,\n\nOn 2023-10-09 12:16:23 +0200, Peter Eisentraut wrote:\n> On 06.10.23 20:50, Andres Freund wrote:\n> > The only thing I wonder is whether we ought to keep a maintainer-clean\n> > target (as an alias to distclean), so that extensions that added things\n> > to maintainer-clean continue to work.\n> \n> The patch does do that.\n\nIt kinda works, but I'm not sure how well. Because the aliasing happens in\nMakefile.global, we won't know about the \"original\" maintainer-clean target\nonce recursing into a subdir.\n\nThat's perhaps OK, because extensions likely won't utilize subdirectories? But\nI'm not sure. I know that some people build postgres extensions by adding them\nto contrib/, in those cases it won't work.\n\nOTOH, it seems somewhat unlikely that maintainer-clean is utilized much in\nextensions. I see it in things like postgis, but that has it's own configure\netc, even though it also invokes pgxs.\n\n\nI wish we had an easy way of\na) downloading most working open-source extensions\nb) building many of those\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 Oct 2023 14:14:11 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On 09.10.23 17:14, Andres Freund wrote:\n> It kinda works, but I'm not sure how well. Because the aliasing happens in\n> Makefile.global, we won't know about the \"original\" maintainer-clean target\n> once recursing into a subdir.\n> \n> That's perhaps OK, because extensions likely won't utilize subdirectories? But\n> I'm not sure. I know that some people build postgres extensions by adding them\n> to contrib/, in those cases it won't work.\n> \n> OTOH, it seems somewhat unlikely that maintainer-clean is utilized much in\n> extensions. I see it in things like postgis, but that has it's own configure\n> etc, even though it also invokes pgxs.\n\nI thought about this. I don't think this is something that any \nextension would use. If they care about the distinction between \ndistclean and maintainer-clean, are they also doing their own distprep \nand dist? Seems unlikely. I mean, if some extension is actually \naffected, I'm happy to accommodate, but we can deal with that when we \nlearn about it. Moreover, if we are moving forward in this direction, \nwe would presumably also like the extensions to get rid of their \ndistprep step.\n\nSo I think we are ready to move ahead with this patch. There have been \nsome light complaints earlier in this thread that people wanted to keep \nsome way to clean only some of the files. But there hasn't been any \nconcrete follow-up on that, as far as I can see, so I don't know what to \ndo about that.\n\n\n\n",
"msg_date": "Wed, 1 Nov 2023 16:39:24 -0400",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On 2023-11-01 16:39:24 -0400, Peter Eisentraut wrote:\n> > OTOH, it seems somewhat unlikely that maintainer-clean is utilized much in\n> > extensions. I see it in things like postgis, but that has it's own configure\n> > etc, even though it also invokes pgxs.\n> \n> I thought about this. I don't think this is something that any extension\n> would use. If they care about the distinction between distclean and\n> maintainer-clean, are they also doing their own distprep and dist? Seems\n> unlikely. I mean, if some extension is actually affected, I'm happy to\n> accommodate, but we can deal with that when we learn about it. Moreover, if\n> we are moving forward in this direction, we would presumably also like the\n> extensions to get rid of their distprep step.\n> \n> So I think we are ready to move ahead with this patch. There have been some\n> light complaints earlier in this thread that people wanted to keep some way\n> to clean only some of the files. But there hasn't been any concrete\n> follow-up on that, as far as I can see, so I don't know what to do about\n> that.\n\n+1, let's do this. We can add dedicated target for more specific cases later\nif we decide we want that.\n\n\n",
"msg_date": "Thu, 2 Nov 2023 15:34:18 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On 02.11.23 23:34, Andres Freund wrote:\n> On 2023-11-01 16:39:24 -0400, Peter Eisentraut wrote:\n>>> OTOH, it seems somewhat unlikely that maintainer-clean is utilized much in\n>>> extensions. I see it in things like postgis, but that has it's own configure\n>>> etc, even though it also invokes pgxs.\n>>\n>> I thought about this. I don't think this is something that any extension\n>> would use. If they care about the distinction between distclean and\n>> maintainer-clean, are they also doing their own distprep and dist? Seems\n>> unlikely. I mean, if some extension is actually affected, I'm happy to\n>> accommodate, but we can deal with that when we learn about it. Moreover, if\n>> we are moving forward in this direction, we would presumably also like the\n>> extensions to get rid of their distprep step.\n>>\n>> So I think we are ready to move ahead with this patch. There have been some\n>> light complaints earlier in this thread that people wanted to keep some way\n>> to clean only some of the files. But there hasn't been any concrete\n>> follow-up on that, as far as I can see, so I don't know what to do about\n>> that.\n> \n> +1, let's do this. We can add dedicated target for more specific cases later\n> if we decide we want that.\n\ndone\n\n\n\n",
"msg_date": "Mon, 6 Nov 2023 16:21:40 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On Mon, Nov 06, 2023 at 04:21:40PM +0100, Peter Eisentraut wrote:\n> done\n\nNice to see 721856ff24b3 in, thanks!\n--\nMichael",
"msg_date": "Tue, 7 Nov 2023 16:19:24 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On 2023-Nov-07, Michael Paquier wrote:\n\n> On Mon, Nov 06, 2023 at 04:21:40PM +0100, Peter Eisentraut wrote:\n> > done\n> \n> Nice to see 721856ff24b3 in, thanks!\n\nHmm, do we still need to have README.git as a separate file from README?\n\nAlso, looking at README, I see it refers to the INSTALL file in the\nroot, but that doesn't exist. \"make -C doc/src/sgml INSTALL\" creates\nit, but it's not copied to the root directory. Do we need some fixup\nfor that?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Learn about compilers. Then everything looks like either a compiler or\na database, and now you have two problems but one of them is fun.\"\n https://twitter.com/thingskatedid/status/1456027786158776329\n\n\n",
"msg_date": "Tue, 21 Nov 2023 17:13:12 +0100",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Hmm, do we still need to have README.git as a separate file from README?\n\n> Also, looking at README, I see it refers to the INSTALL file in the\n> root, but that doesn't exist. \"make -C doc/src/sgml INSTALL\" creates\n> it, but it's not copied to the root directory. Do we need some fixup\n> for that?\n\nYeah, we clearly need to rethink this area if the plan is that tarballs\nwill be pristine git pulls. I think we want just README at the top\nlevel, and I propose we give up on the text INSTALL file altogether\n(thereby removing a documentation build gotcha that catches people\nevery so often). I propose that in 2023 it ought to be sufficient\nfor the README file to point at build instructions on the web.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 Nov 2023 13:23:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "\nOn 2023-11-21 Tu 13:23, Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n>> Hmm, do we still need to have README.git as a separate file from README?\n>> Also, looking at README, I see it refers to the INSTALL file in the\n>> root, but that doesn't exist. \"make -C doc/src/sgml INSTALL\" creates\n>> it, but it's not copied to the root directory. Do we need some fixup\n>> for that?\n> Yeah, we clearly need to rethink this area if the plan is that tarballs\n> will be pristine git pulls. I think we want just README at the top\n> level, and I propose we give up on the text INSTALL file altogether\n> (thereby removing a documentation build gotcha that catches people\n> every so often). I propose that in 2023 it ought to be sufficient\n> for the README file to point at build instructions on the web.\n>\n> \t\t\t\n\n\n+1\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 21 Nov 2023 16:41:26 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On Thu, Oct 05, 2023 at 05:46:46PM +0200, Peter Eisentraut wrote:\n> --- a/src/backend/Makefile\n> +++ b/src/backend/Makefile\n\n> $(top_builddir)/src/include/storage/lwlocknames.h: storage/lmgr/lwlocknames.h\n> -\tprereqdir=`cd '$(dir $<)' >/dev/null && pwd` && \\\n> -\t cd '$(dir $@)' && rm -f $(notdir $@) && \\\n> -\t $(LN_S) \"$$prereqdir/$(notdir $<)\" .\n> +\trm -f '$@'\n> +\t$(LN_S) ../../backend/$< '$@'\n> \n> $(top_builddir)/src/include/utils/wait_event_types.h: utils/activity/wait_event_types.h\n> -\tprereqdir=`cd '$(dir $<)' >/dev/null && pwd` && \\\n> -\t cd '$(dir $@)' && rm -f $(notdir $@) && \\\n> -\t $(LN_S) \"$$prereqdir/$(notdir $<)\" .\n> +\trm -f '$@'\n> +\t$(LN_S) ../../backend/$< '$@'\n\nThese broke the\nhttps://www.postgresql.org/docs/17/installation-platform-notes.html#INSTALLATION-NOTES-MINGW\nbuild, where LN_S='cp -pR'. On other platforms, \"make LN_S='cp -pR'\"\nreproduces this. Reverting the above lines fixes things. The buildfarm has\nno coverage for that build scenario (fairywren uses Meson).\n\n\n",
"msg_date": "Sun, 16 Jun 2024 12:34:48 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On 2024-Jun-16, Noah Misch wrote:\n\n> These broke the\n> https://www.postgresql.org/docs/17/installation-platform-notes.html#INSTALLATION-NOTES-MINGW\n> build, where LN_S='cp -pR'. On other platforms, \"make LN_S='cp -pR'\"\n> reproduces this. Reverting the above lines fixes things. The buildfarm has\n> no coverage for that build scenario (fairywren uses Meson).\n\nI agree we should revert this change. I had commented on this:\nhttps://postgr.es/m/[email protected]\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La rebeldía es la virtud original del hombre\" (Arthur Schopenhauer)\n\n\n",
"msg_date": "Sun, 16 Jun 2024 21:59:06 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On 16.06.24 21:34, Noah Misch wrote:\n> On Thu, Oct 05, 2023 at 05:46:46PM +0200, Peter Eisentraut wrote:\n>> --- a/src/backend/Makefile\n>> +++ b/src/backend/Makefile\n> \n>> $(top_builddir)/src/include/storage/lwlocknames.h: storage/lmgr/lwlocknames.h\n>> -\tprereqdir=`cd '$(dir $<)' >/dev/null && pwd` && \\\n>> -\t cd '$(dir $@)' && rm -f $(notdir $@) && \\\n>> -\t $(LN_S) \"$$prereqdir/$(notdir $<)\" .\n>> +\trm -f '$@'\n>> +\t$(LN_S) ../../backend/$< '$@'\n>> \n>> $(top_builddir)/src/include/utils/wait_event_types.h: utils/activity/wait_event_types.h\n>> -\tprereqdir=`cd '$(dir $<)' >/dev/null && pwd` && \\\n>> -\t cd '$(dir $@)' && rm -f $(notdir $@) && \\\n>> -\t $(LN_S) \"$$prereqdir/$(notdir $<)\" .\n>> +\trm -f '$@'\n>> +\t$(LN_S) ../../backend/$< '$@'\n> \n> These broke the\n> https://www.postgresql.org/docs/17/installation-platform-notes.html#INSTALLATION-NOTES-MINGW\n> build, where LN_S='cp -pR'. On other platforms, \"make LN_S='cp -pR'\"\n> reproduces this. Reverting the above lines fixes things. The buildfarm has\n> no coverage for that build scenario (fairywren uses Meson).\n\nIs it just these two instances?\n\nCommit 721856ff24b contains a few more hunks that change something about \nLN_S. Are those ok?\n\n\n\n",
"msg_date": "Thu, 20 Jun 2024 09:29:45 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On Thu, Jun 20, 2024 at 09:29:45AM +0200, Peter Eisentraut wrote:\n> On 16.06.24 21:34, Noah Misch wrote:\n> > On Thu, Oct 05, 2023 at 05:46:46PM +0200, Peter Eisentraut wrote:\n> > > --- a/src/backend/Makefile\n> > > +++ b/src/backend/Makefile\n> > \n> > > $(top_builddir)/src/include/storage/lwlocknames.h: storage/lmgr/lwlocknames.h\n> > > -\tprereqdir=`cd '$(dir $<)' >/dev/null && pwd` && \\\n> > > -\t cd '$(dir $@)' && rm -f $(notdir $@) && \\\n> > > -\t $(LN_S) \"$$prereqdir/$(notdir $<)\" .\n> > > +\trm -f '$@'\n> > > +\t$(LN_S) ../../backend/$< '$@'\n> > > $(top_builddir)/src/include/utils/wait_event_types.h: utils/activity/wait_event_types.h\n> > > -\tprereqdir=`cd '$(dir $<)' >/dev/null && pwd` && \\\n> > > -\t cd '$(dir $@)' && rm -f $(notdir $@) && \\\n> > > -\t $(LN_S) \"$$prereqdir/$(notdir $<)\" .\n> > > +\trm -f '$@'\n> > > +\t$(LN_S) ../../backend/$< '$@'\n> > \n> > These broke the\n> > https://www.postgresql.org/docs/17/installation-platform-notes.html#INSTALLATION-NOTES-MINGW\n> > build, where LN_S='cp -pR'. On other platforms, \"make LN_S='cp -pR'\"\n> > reproduces this. Reverting the above lines fixes things. The buildfarm has\n> > no coverage for that build scenario (fairywren uses Meson).\n> \n> Is it just these two instances?\n\nYes.\n\n> Commit 721856ff24b contains a few more hunks that change something about\n> LN_S. Are those ok?\n\nI'm guessing \"make LN_S='cp -pR'\" didn't have a problem with those because\nthey have \".\" as the second argument. \"cp -pR ../foo .\" is compatible with\n\"ln -s ../foo .\" in that both are interpreting \"../bar\" relative to the same\ndirectory. Not so for \"cp -pR ../foo bar/baz\" vs. \"ln -s ../foo bar/baz\".\n\n\n",
"msg_date": "Thu, 20 Jun 2024 07:34:05 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove distprep"
},
{
"msg_contents": "On 20.06.24 16:34, Noah Misch wrote:\n> On Thu, Jun 20, 2024 at 09:29:45AM +0200, Peter Eisentraut wrote:\n>> On 16.06.24 21:34, Noah Misch wrote:\n>>> On Thu, Oct 05, 2023 at 05:46:46PM +0200, Peter Eisentraut wrote:\n>>>> --- a/src/backend/Makefile\n>>>> +++ b/src/backend/Makefile\n>>>\n>>>> $(top_builddir)/src/include/storage/lwlocknames.h: storage/lmgr/lwlocknames.h\n>>>> -\tprereqdir=`cd '$(dir $<)' >/dev/null && pwd` && \\\n>>>> -\t cd '$(dir $@)' && rm -f $(notdir $@) && \\\n>>>> -\t $(LN_S) \"$$prereqdir/$(notdir $<)\" .\n>>>> +\trm -f '$@'\n>>>> +\t$(LN_S) ../../backend/$< '$@'\n>>>> $(top_builddir)/src/include/utils/wait_event_types.h: utils/activity/wait_event_types.h\n>>>> -\tprereqdir=`cd '$(dir $<)' >/dev/null && pwd` && \\\n>>>> -\t cd '$(dir $@)' && rm -f $(notdir $@) && \\\n>>>> -\t $(LN_S) \"$$prereqdir/$(notdir $<)\" .\n>>>> +\trm -f '$@'\n>>>> +\t$(LN_S) ../../backend/$< '$@'\n>>>\n>>> These broke the\n>>> https://www.postgresql.org/docs/17/installation-platform-notes.html#INSTALLATION-NOTES-MINGW\n>>> build, where LN_S='cp -pR'. On other platforms, \"make LN_S='cp -pR'\"\n>>> reproduces this. Reverting the above lines fixes things. The buildfarm has\n>>> no coverage for that build scenario (fairywren uses Meson).\n>>\n>> Is it just these two instances?\n> \n> Yes.\n> \n>> Commit 721856ff24b contains a few more hunks that change something about\n>> LN_S. Are those ok?\n> \n> I'm guessing \"make LN_S='cp -pR'\" didn't have a problem with those because\n> they have \".\" as the second argument. \"cp -pR ../foo .\" is compatible with\n> \"ln -s ../foo .\" in that both are interpreting \"../bar\" relative to the same\n> directory. Not so for \"cp -pR ../foo bar/baz\" vs. \"ln -s ../foo bar/baz\".\n\nOk, fix pushed.\n\n\n\n",
"msg_date": "Fri, 21 Jun 2024 08:19:53 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove distprep"
}
] |
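Editor's note (not part of the archived thread above): a minimal shell sketch of the relative-path behaviour Noah describes, using invented directory and file names purely for illustration; it is not the actual PostgreSQL build rule.

    # With "." as the destination, both commands resolve "../src/foo.h"
    # against the same directory, so LN_S='cp -pR' behaves like a symlink:
    cd build/include
    ln -s ../src/foo.h .        # link target resolved relative to build/include
    cp -pR ../src/foo.h .       # source resolved relative to the cwd, build/include

    # With "include/foo.h" as the destination, they diverge:
    cd ..
    ln -s ../src/foo.h include/foo.h    # "../src" resolved relative to include/, i.e. build/src
    cp -pR ../src/foo.h include/foo.h   # "../src" resolved relative to the cwd build/, a different foo.h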
[
{
"msg_contents": "Hi hackers,\n\nI've always been annoyed by the fact that pg_get_serial_sequence takes\nthe table and returns the sequence as strings rather than regclass. And\nsince identity columns were added, the name is misleading as well (which\nis even acknowledged in the docs, together with a suggestion for a\nbetter name).\n\nSo, instead of making excuses in the documentation, I thought why not\nadd a new function which addresses all of these issues, and document the\nold one as a backward-compatibilty wrapper?\n\nPlease see the attached patch for my stab at this.\n\n- ilmari",
"msg_date": "Fri, 09 Jun 2023 20:19:44 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Adding a pg_get_owned_sequence function?"
},
{
"msg_contents": "On Fri, 9 Jun 2023, at 20:19, Dagfinn Ilmari Mannsåker wrote:\n\n> Hi hackers,\n>\n> I've always been annoyed by the fact that pg_get_serial_sequence takes\n> the table and returns the sequence as strings rather than regclass. And\n> since identity columns were added, the name is misleading as well (which\n> is even acknowledged in the docs, together with a suggestion for a\n> better name).\n> \n> So, instead of making excuses in the documentation, I thought why not\n> add a new function which addresses all of these issues, and document the\n> old one as a backward-compatibilty wrapper?\n> \n> Please see the attached patch for my stab at this.\n\nThis didn't get any replies, so I've created a commitfest entry to make sure it doesn't get lost:\n\nhttps://commitfest.postgresql.org/44/4535/\n\n-- \n- ilmari\n\n\n",
"msg_date": "Thu, 31 Aug 2023 14:24:41 +0100",
"msg_from": "=?UTF-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding a pg_get_owned_sequence function?"
},
{
"msg_contents": "On Fri, Jun 09, 2023 at 08:19:44PM +0100, Dagfinn Ilmari Manns�ker wrote:\n> I've always been annoyed by the fact that pg_get_serial_sequence takes\n> the table and returns the sequence as strings rather than regclass. And\n> since identity columns were added, the name is misleading as well (which\n> is even acknowledged in the docs, together with a suggestion for a\n> better name).\n> \n> So, instead of making excuses in the documentation, I thought why not\n> add a new function which addresses all of these issues, and document the\n> old one as a backward-compatibilty wrapper?\n\nThis sounds generally reasonable to me. That note has been there since\n2006 (2b2a507). I didn't find any further discussion about this on the\nlists.\n\n> + A backwards-compatibility wrapper\n> + for <function>pg_get_owned_sequence</function>, which\n> + uses <type>text</type> for the table and sequence names instead of\n> + <type>regclass</type>. The first parameter is a table name with optional\n\nI wonder if it'd be possible to just remove pg_get_serial_sequence().\nAssuming 2b2a507 removed the last use of it in pg_dump, any dump files\ncreated on versions >= v8.2 shouldn't use it. But I suppose it wouldn't be\ntoo much trouble to keep it around for anyone who happens to need it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 1 Sep 2023 09:42:50 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding a pg_get_owned_sequence function?"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> I wonder if it'd be possible to just remove pg_get_serial_sequence().\n\nA quick search at http://codesearch.debian.net/ finds uses of it\nin packages like gdal, qgis, rails, ... We could maybe get rid of\nit after a suitable deprecation period, but I think we can't just\nsummarily remove it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 01 Sep 2023 13:31:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding a pg_get_owned_sequence function?"
},
{
"msg_contents": "On Fri, Sep 01, 2023 at 01:31:43PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> I wonder if it'd be possible to just remove pg_get_serial_sequence().\n> \n> A quick search at http://codesearch.debian.net/ finds uses of it\n> in packages like gdal, qgis, rails, ... We could maybe get rid of\n> it after a suitable deprecation period, but I think we can't just\n> summarily remove it.\n\nGiven that, I'd still vote for marking it deprecated, but I don't feel\nstrongly about actually removing it anytime soon (or anytime at all,\nreally).\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 1 Sep 2023 10:55:37 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding a pg_get_owned_sequence function?"
},
{
"msg_contents": "Greetings,\n\n* Nathan Bossart ([email protected]) wrote:\n> On Fri, Sep 01, 2023 at 01:31:43PM -0400, Tom Lane wrote:\n> > Nathan Bossart <[email protected]> writes:\n> >> I wonder if it'd be possible to just remove pg_get_serial_sequence().\n> > \n> > A quick search at http://codesearch.debian.net/ finds uses of it\n> > in packages like gdal, qgis, rails, ... We could maybe get rid of\n> > it after a suitable deprecation period, but I think we can't just\n> > summarily remove it.\n\nI don't agree with this- we would only be removing it from the next\nmajor release which is a year away and our other major releases will be\nsupported for years to come. If we do remove it, it'd be great to\nmention it to those projects and ask them to update ahead of the next\nrelease.\n\n> Given that, I'd still vote for marking it deprecated, but I don't feel\n> strongly about actually removing it anytime soon (or anytime at all,\n> really).\n\nWhy would we mark it as deprecated then? If we're not going to get rid\nof it, then we're supporting it and we'll fix issues with it and that\nbasically means it's not deprecated. If it's broken and we're not going\nto fix it, then we should get rid of it.\n\nIf we're going to actually mark it deprecated then it should be, at\nleast, a yearly discussion about removing it. I'm generally more in\nfavor of either just keeping it, or just removing it, though. We've had\nvery little success marking things as deprecated as a way of getting\neveryone to stop using it- some folks will stop using it right away and\nthose are the same people who would just adapt to it being gone in the\nnext major version quickly, and then there's folks who won't do anything\nuntil it's actually gone (and maybe not even then). There really isn't\na serious middle-ground where deprecation is helpful given our yearly\nrelease cycle and long major version support period.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 8 Sep 2023 10:56:15 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding a pg_get_owned_sequence function?"
},
{
"msg_contents": "On Fri, Sep 08, 2023 at 10:56:15AM -0400, Stephen Frost wrote:\n> If we're going to actually mark it deprecated then it should be, at\n> least, a yearly discussion about removing it. I'm generally more in\n> favor of either just keeping it, or just removing it, though. We've had\n> very little success marking things as deprecated as a way of getting\n> everyone to stop using it- some folks will stop using it right away and\n> those are the same people who would just adapt to it being gone in the\n> next major version quickly, and then there's folks who won't do anything\n> until it's actually gone (and maybe not even then). There really isn't\n> a serious middle-ground where deprecation is helpful given our yearly\n> release cycle and long major version support period.\n\nFair point.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 8 Sep 2023 10:53:17 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding a pg_get_owned_sequence function?"
},
{
"msg_contents": "On 09.06.23 21:19, Dagfinn Ilmari Mannsåker wrote:\n> I've always been annoyed by the fact that pg_get_serial_sequence takes\n> the table and returns the sequence as strings rather than regclass. And\n> since identity columns were added, the name is misleading as well (which\n> is even acknowledged in the docs, together with a suggestion for a\n> better name).\n\nIf you are striving for less misleading terminology, note that the \nconcept of an \"owned sequence\" does not exist for users of identity \nsequences, and ALTER SEQUENCE / OWNED BY cannot be used for such sequences.\n\nWould it work to just overload pg_get_serial_sequence with regclass \nargument types?\n\n\n\n",
"msg_date": "Tue, 12 Sep 2023 09:33:54 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding a pg_get_owned_sequence function?"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Would it work to just overload pg_get_serial_sequence with regclass \n> argument types?\n\nProbably not; the parser would have no principled way to resolve\npg_get_serial_sequence('foo', 'bar') as one or the other. I'm\nnot sure offhand if it would throw error or just choose one, but\nif it just chooses one it'd likely be the text variant.\n\nIt's possible that we could get away with just summarily changing\nthe argument type from text to regclass. ISTR that we did exactly\nthat with nextval() years ago, and didn't get too much push-back.\nBut we couldn't do the same for the return type. Also, this\napproach does nothing for the concern about the name being\nmisleading.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Sep 2023 10:40:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding a pg_get_owned_sequence function?"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> Peter Eisentraut <[email protected]> writes:\n>> Would it work to just overload pg_get_serial_sequence with regclass \n>> argument types?\n>\n> Probably not; the parser would have no principled way to resolve\n> pg_get_serial_sequence('foo', 'bar') as one or the other. I'm\n> not sure offhand if it would throw error or just choose one, but\n> if it just chooses one it'd likely be the text variant.\n\nThat works fine, and it prefers the text version:\n\n~=# create function pg_get_serial_sequence(tbl regclass, col name)\n returns regclass strict stable parallel safe\n return pg_get_serial_sequence(tbl::text, col::text)::regclass;\nCREATE FUNCTION\n\n~=# select pg_typeof(pg_get_serial_sequence('identest', 'id'));\n┌───────────┐\n│ pg_typeof │\n├───────────┤\n│ text │\n└───────────┘\n(1 row)\n\n~=# select pg_typeof(pg_get_serial_sequence('identest', 'id'::name));\n┌───────────┐\n│ pg_typeof │\n├───────────┤\n│ regclass │\n└───────────┘\n(1 row)\n\n> It's possible that we could get away with just summarily changing\n> the argument type from text to regclass. ISTR that we did exactly\n> that with nextval() years ago, and didn't get too much push-back.\n> But we couldn't do the same for the return type. Also, this\n> approach does nothing for the concern about the name being\n> misleading.\n\nMaybe we should go all the way the other way, and call it\npg_get_identity_sequence() and claim that \"serial\" is a legacy form of\nidentity columns?\n\n- ilmari\n\n\n",
"msg_date": "Tue, 12 Sep 2023 15:53:28 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding a pg_get_owned_sequence function?"
},
{
"msg_contents": "On Tue, Sep 12, 2023 at 03:53:28PM +0100, Dagfinn Ilmari Manns�ker wrote:\n> Tom Lane <[email protected]> writes:\n>> It's possible that we could get away with just summarily changing\n>> the argument type from text to regclass. ISTR that we did exactly\n>> that with nextval() years ago, and didn't get too much push-back.\n>> But we couldn't do the same for the return type. Also, this\n>> approach does nothing for the concern about the name being\n>> misleading.\n> \n> Maybe we should go all the way the other way, and call it\n> pg_get_identity_sequence() and claim that \"serial\" is a legacy form of\n> identity columns?\n\nHm. Could we split it into two functions, pg_get_owned_sequence() and\npg_get_identity_sequence()? I see that commit 3012061 [0] added support\nfor identity columns to pg_get_serial_sequence() because folks expected\nthat to work, so maybe that's a good reason to keep them together. If we\ndo elect to keep them combined, I'd be okay with renaming it to\npg_get_identity_sequence() along with your other proposed changes.\n\n[0] https://postgr.es/m/20170912212054.25640.55202%40wrigleys.postgresql.org\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 24 Oct 2023 11:29:29 -0500",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding a pg_get_owned_sequence function?"
},
{
"msg_contents": "On Fri, Dec 8, 2023 at 3:43 PM Dagfinn Ilmari Mannsåker\n<[email protected]> wrote:\n>\n> Hi hackers,\n>\n> I've always been annoyed by the fact that pg_get_serial_sequence takes\n> the table and returns the sequence as strings rather than regclass. And\n> since identity columns were added, the name is misleading as well (which\n> is even acknowledged in the docs, together with a suggestion for a\n> better name).\n>\n> So, instead of making excuses in the documentation, I thought why not\n> add a new function which addresses all of these issues, and document the\n> old one as a backward-compatibilty wrapper?\n>\n> Please see the attached patch for my stab at this.\n>\n\nI reviewed the Patch and the compilation looks fine. I tested various\nscenarios and did not find any issues. Also I did RUN 'make check'\nand 'make check-world' and all the test cases passed successfully. I\nfigured out a small typo please have a look at it:-\n\nThis is the name the docs say `pg_get_serial_sequence` sholud have\nhad, and gives us the opportunity to change the return and table\nargument types to `regclass` and the column argument to `name`,\ninstead of using `text` everywhere. This matches what's in catalogs,\nand requires less explaining than the rules for\n`pg_get_serial_sequence`.\n\nHere 'sholud' have been 'should'.\n\nThanks and Regards,\nShubham Khanna.\n\n\n",
"msg_date": "Fri, 8 Dec 2023 16:36:58 +0530",
"msg_from": "Shubham Khanna <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding a pg_get_owned_sequence function?"
},
{
"msg_contents": "On Tue, 24 Oct 2023 at 22:00, Nathan Bossart <[email protected]> wrote:\n>\n> On Tue, Sep 12, 2023 at 03:53:28PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> > Tom Lane <[email protected]> writes:\n> >> It's possible that we could get away with just summarily changing\n> >> the argument type from text to regclass. ISTR that we did exactly\n> >> that with nextval() years ago, and didn't get too much push-back.\n> >> But we couldn't do the same for the return type. Also, this\n> >> approach does nothing for the concern about the name being\n> >> misleading.\n> >\n> > Maybe we should go all the way the other way, and call it\n> > pg_get_identity_sequence() and claim that \"serial\" is a legacy form of\n> > identity columns?\n>\n> Hm. Could we split it into two functions, pg_get_owned_sequence() and\n> pg_get_identity_sequence()? I see that commit 3012061 [0] added support\n> for identity columns to pg_get_serial_sequence() because folks expected\n> that to work, so maybe that's a good reason to keep them together. If we\n> do elect to keep them combined, I'd be okay with renaming it to\n> pg_get_identity_sequence() along with your other proposed changes.\n\nI have changed the status of the patch to \"Waiting on Author\" as\nNathan's comments have not yet been followed up.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 8 Jan 2024 10:54:39 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding a pg_get_owned_sequence function?"
},
{
"msg_contents": "vignesh C <[email protected]> writes:\n\n> On Tue, 24 Oct 2023 at 22:00, Nathan Bossart <[email protected]> wrote:\n>>\n>> On Tue, Sep 12, 2023 at 03:53:28PM +0100, Dagfinn Ilmari Mannsåker wrote:\n>> > Tom Lane <[email protected]> writes:\n>> >> It's possible that we could get away with just summarily changing\n>> >> the argument type from text to regclass. ISTR that we did exactly\n>> >> that with nextval() years ago, and didn't get too much push-back.\n>> >> But we couldn't do the same for the return type. Also, this\n>> >> approach does nothing for the concern about the name being\n>> >> misleading.\n>> >\n>> > Maybe we should go all the way the other way, and call it\n>> > pg_get_identity_sequence() and claim that \"serial\" is a legacy form of\n>> > identity columns?\n>>\n>> Hm. Could we split it into two functions, pg_get_owned_sequence() and\n>> pg_get_identity_sequence()? I see that commit 3012061 [0] added support\n>> for identity columns to pg_get_serial_sequence() because folks expected\n>> that to work, so maybe that's a good reason to keep them together. If we\n>> do elect to keep them combined, I'd be okay with renaming it to\n>> pg_get_identity_sequence() along with your other proposed changes.\n\nWe can't make pg_get_serial_sequence(text, text) not work on identity\ncolumns any more, that would break existing users, and making the new\nfunction not work on serial columns would make it harder for people to\nmigrate to it. If you're suggesting adding two new functions,\npg_get_identity_sequence(regclass, name) and\npg_get_serial_sequence(regclass, name)¹, which only work on the type of\ncolumn corresponding to the name, I don't think that's great for\nusability or migratability either.\n\n> I have changed the status of the patch to \"Waiting on Author\" as\n> Nathan's comments have not yet been followed up.\n\nThanks for the nudge, here's an updated patch that together with the\nabove addresses them. I've changed the commitfest entry back to \"Needs\nreview\".\n\n> Regards,\n> Vignesh\n\n- ilmari",
"msg_date": "Mon, 08 Jan 2024 16:58:02 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding a pg_get_owned_sequence function?"
},
{
"msg_contents": "On Mon, Jan 08, 2024 at 04:58:02PM +0000, Dagfinn Ilmari Manns�ker wrote:\n> We can't make pg_get_serial_sequence(text, text) not work on identity\n> columns any more, that would break existing users, and making the new\n> function not work on serial columns would make it harder for people to\n> migrate to it. If you're suggesting adding two new functions,\n> pg_get_identity_sequence(regclass, name) and\n> pg_get_serial_sequence(regclass, name)�, which only work on the type of\n> column corresponding to the name, I don't think that's great for\n> usability or migratability either.\n\nI think these are reasonable concerns, but with this patch, we now have the\nfollowing functions:\n\n\tpg_get_identity_sequence(table regclass, column name) -> regclass\n\tpg_get_serial_sequence(table text, column text) -> text\n\nIf we only look at the names, it sure sounds like the first one only works\nfor identity columns, and the second only works for serial columns. But\nboth work for identity _and_ serial. The real differences between the two\nare the parameter and return types. Granted, this is described in the\ndocumentation updates, but IMHO this is a kind-of bizarre state to end up\nin.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 8 Jan 2024 15:08:47 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding a pg_get_owned_sequence function?"
},
{
"msg_contents": "On 08.01.24 22:08, Nathan Bossart wrote:\n> I think these are reasonable concerns, but with this patch, we now have the\n> following functions:\n> \n> \tpg_get_identity_sequence(table regclass, column name) -> regclass\n> \tpg_get_serial_sequence(table text, column text) -> text\n> \n> If we only look at the names, it sure sounds like the first one only works\n> for identity columns, and the second only works for serial columns. But\n> both work for identity_and_ serial. The real differences between the two\n> are the parameter and return types. Granted, this is described in the\n> documentation updates, but IMHO this is a kind-of bizarre state to end up\n> in.\n\nYeah, that's really weird.\n\nWould it work to change the signature of pg_get_serial_sequence to\n\n pg_get_serial_sequence(table anyelement, column text) -> anyelement\n\nand then check inside the function code whether text or regclass was passed?\n\n\n",
"msg_date": "Tue, 9 Jan 2024 17:41:14 +0100",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding a pg_get_owned_sequence function?"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Would it work to change the signature of pg_get_serial_sequence to\n> pg_get_serial_sequence(table anyelement, column text) -> anyelement\n> and then check inside the function code whether text or regclass was passed?\n\nProbably not very well, because then we'd get no automatic coercion of\ninputs that were not either type.\n\nMaybe it would work to have both\n\npg_get_serial_sequence(table text, column text) -> text\npg_get_serial_sequence(table regclass, column text) -> regclass\n\nbut I wonder if that would create any situations where the parser\ncouldn't choose between these candidates.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Jan 2024 12:03:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding a pg_get_owned_sequence function?"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> Peter Eisentraut <[email protected]> writes:\n>> Would it work to change the signature of pg_get_serial_sequence to\n>> pg_get_serial_sequence(table anyelement, column text) -> anyelement\n>> and then check inside the function code whether text or regclass was passed?\n>\n> Probably not very well, because then we'd get no automatic coercion of\n> inputs that were not either type.\n>\n> Maybe it would work to have both\n>\n> pg_get_serial_sequence(table text, column text) -> text\n> pg_get_serial_sequence(table regclass, column text) -> regclass\n\nI think it would be more correct to use name for the column argument\ntype, rather than text.\n\n> but I wonder if that would create any situations where the parser\n> couldn't choose between these candidates.\n\nAccording to my my earlier testing¹, the parser prefers the text version\nfor unknown-type arguments, and further testing shows that that's also\nthe case for other types with implicit casts to text such as varchar and\nname. The regclass version gets chosen for oid and (big)int, because of\nthe implicit cast from (big)int to oid and oid to regclass.\n\nThe only case I could foresee failing would be types that have implicit\ncasts to both text and regtype. It turns out that varchar does have\nboth, but the parser still picks the text version without copmlaint.\nDoes it prefer the binary-coercible cast over the one that requires\ncalling a conversion function?\n\n> \t\t\tregards, tom lane\n\n- ilmari\n\n[1]: https://postgr.es/m/[email protected]\n\n\n",
"msg_date": "Tue, 09 Jan 2024 17:44:13 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Adding a pg_get_owned_sequence function?"
},
{
"msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> Maybe it would work to have both\n>> pg_get_serial_sequence(table text, column text) -> text\n>> pg_get_serial_sequence(table regclass, column text) -> regclass\n\n> I think it would be more correct to use name for the column argument\n> type, rather than text.\n\nIn a green field perhaps, but we're not in a green field:\n\tpg_get_serial_sequence(text, text)\nalready exists. That being the case, I'd strongly recommend that if\nwe overload this function name then we stick to text for the column\nargument. If you add\n\tpg_get_serial_sequence(regclass, name)\nthen you will be presenting the parser with situations where one\nalternative is \"more desirable\" for one argument and \"less desirable\"\nfor the other, so that it's unclear which it will choose or whether\nit will throw up its hands and refuse to choose.\n\n> The only case I could foresee failing would be types that have implicit\n> casts to both text and regtype. It turns out that varchar does have\n> both, but the parser still picks the text version without copmlaint.\n> Does it prefer the binary-coercible cast over the one that requires\n> calling a conversion function?\n\nWithout having checked the code, I don't recall that there's any\npreference for binary-coercible casts. But there definitely is\na preference for casting to \"preferred\" types, which text is\nand these other types are not. That's why I'm afraid of your\nuse-name-not-text proposal: it puts the regclass alternative at an\neven greater disadvantage in terms of the cast-choice heuristics.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Jan 2024 15:51:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Adding a pg_get_owned_sequence function?"
}
] |
[
{
"msg_contents": "Fix search_path to a safe value during maintenance operations.\n\nWhile executing maintenance operations (ANALYZE, CLUSTER, REFRESH\nMATERIALIZED VIEW, REINDEX, or VACUUM), set search_path to\n'pg_catalog, pg_temp' to prevent inconsistent behavior.\n\nFunctions that are used for functional indexes, in index expressions,\nor in materialized views and depend on a different search path must be\ndeclared with CREATE FUNCTION ... SET search_path='...'.\n\nThis change addresses a security risk introduced in commit 60684dd834,\nwhere a role with MAINTAIN privileges on a table may be able to\nescalate privileges to the table owner. That commit is not yet part of\nany release, so no need to backpatch.\n\nDiscussion: https://postgr.es/m/e44327179e5c9015c8dda67351c04da552066017.camel%40j-davis.com\nReviewed-by: Greg Stark\nReviewed-by: Nathan Bossart\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/05e17373517114167d002494e004fa0aa32d1fd1\n\nModified Files\n--------------\ncontrib/amcheck/verify_nbtree.c | 2 ++\nsrc/backend/access/brin/brin.c | 2 ++\nsrc/backend/catalog/index.c | 8 ++++++++\nsrc/backend/commands/analyze.c | 2 ++\nsrc/backend/commands/cluster.c | 2 ++\nsrc/backend/commands/indexcmds.c | 6 ++++++\nsrc/backend/commands/matview.c | 2 ++\nsrc/backend/commands/vacuum.c | 2 ++\nsrc/bin/scripts/t/100_vacuumdb.pl | 4 ----\nsrc/include/utils/guc.h | 6 ++++++\nsrc/test/modules/test_oat_hooks/expected/test_oat_hooks.out | 4 ++++\nsrc/test/regress/expected/privileges.out | 12 ++++++------\nsrc/test/regress/expected/vacuum.out | 2 +-\nsrc/test/regress/sql/privileges.sql | 8 ++++----\nsrc/test/regress/sql/vacuum.sql | 2 +-\n15 files changed, 48 insertions(+), 16 deletions(-)",
"msg_date": "Fri, 09 Jun 2023 20:54:09 +0000",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql: Fix search_path to a safe value during maintenance operations."
},
{
"msg_contents": "On Fri, 2023-06-09 at 20:54 +0000, Jeff Davis wrote:\n> Fix search_path to a safe value during maintenance operations.\n\nLooks like this is causing pg_amcheck failures in the buildfarm.\nInvestigating...\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 09 Jun 2023 15:16:11 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Fri, 2023-06-09 at 15:16 -0700, Jeff Davis wrote:\n> On Fri, 2023-06-09 at 20:54 +0000, Jeff Davis wrote:\n> > Fix search_path to a safe value during maintenance operations.\n> \n> Looks like this is causing pg_amcheck failures in the buildfarm.\n> Investigating...\n\nIt looks related to bt_index_check_internal(), which is called by SQL\nfunctions bt_index_check() and bt_index_parent_check(). SQL functions\ncan be called in parallel, so it raises the error:\n\n ERROR: cannot set parameters during a parallel operation\n\nbecause commit 05e1737351 added the SetConfigOption() line. Normally\nthose functions would not be called in parallel, but\ndebug_parallel_mode makes that happen.\n\nAttached a patch to mark those functions as PARALLEL UNSAFE, which\nfixes the problem.\n\nAlternatively, I could just take out that line, as those SQL functions\nare not controlled by the MAINTAIN privilege. But for consistency I\nthink it's a good idea to leave it in so that index functions are\ncalled with the right search path for amcheck.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Fri, 09 Jun 2023 17:37:57 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Fri, 2023-06-09 at 17:37 -0700, Jeff Davis wrote:\n> Attached a patch to mark those functions as PARALLEL UNSAFE, which\n> fixes the problem.\n\nOn second thought, that might not be acceptable for amcheck, depending\non how its run.\n\nI think it's OK if run by pg_amcheck, because that runs it on a single\nindex at a time in each connection, and parallelizes by opening more\nconnections.\n\nBut if some users are calling the check functions across many tables in\na single select statement (e.g. in the targetlist of a query on\npg_class), then they'll experience a regression.\n\n> Alternatively, I could just take out that line, as those SQL\n> functions\n> are not controlled by the MAINTAIN privilege. But for consistency I\n> think it's a good idea to leave it in so that index functions are\n> called with the right search path for amcheck.\n\nIf marking them PARALLEL UNSAFE is not acceptable, then this seems like\na fine solution.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 09 Jun 2023 17:49:00 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> Attached a patch to mark those functions as PARALLEL UNSAFE, which\n> fixes the problem.\n\n> Alternatively, I could just take out that line, as those SQL functions\n> are not controlled by the MAINTAIN privilege. But for consistency I\n> think it's a good idea to leave it in so that index functions are\n> called with the right search path for amcheck.\n\nI concur with the upthread objection that it is way too late in\nthe release cycle to be introducing a breaking change like this.\nI request that you revert it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 10 Jun 2023 01:33:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Sat, Jun 10, 2023 at 01:33:31AM -0400, Tom Lane wrote:\n> Jeff Davis <[email protected]> writes:\n> > Attached a patch to mark those functions as PARALLEL UNSAFE, which\n> > fixes the problem.\n> \n> > Alternatively, I could just take out that line, as those SQL functions\n> > are not controlled by the MAINTAIN privilege. But for consistency I\n> > think it's a good idea to leave it in so that index functions are\n> > called with the right search path for amcheck.\n> \n> I concur with the upthread objection that it is way too late in\n> the release cycle to be introducing a breaking change like this.\n> I request that you revert it.\n\nThe timing was not great, but this is fixing a purported defect in an older\nv16 feature. If the MAINTAIN privilege is actually fine, we're all set for\nv16. If MAINTAIN does have a material problem that $SUBJECT had fixed, we\nshould either revert MAINTAIN, un-revert $SUBJECT, or fix the problem a\ndifferent way.\n\n\n",
"msg_date": "Mon, 12 Jun 2023 13:05:10 -0400",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Mon, Jun 12, 2023 at 1:05 PM Noah Misch <[email protected]> wrote:\n> > I concur with the upthread objection that it is way too late in\n> > the release cycle to be introducing a breaking change like this.\n> > I request that you revert it.\n>\n> The timing was not great, but this is fixing a purported defect in an older\n> v16 feature. If the MAINTAIN privilege is actually fine, we're all set for\n> v16. If MAINTAIN does have a material problem that $SUBJECT had fixed, we\n> should either revert MAINTAIN, un-revert $SUBJECT, or fix the problem a\n> different way.\n\nI wonder why this commit used pg_catalog, pg_temp rather than just the\nempty string, as AutoVacWorkerMain does.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 12 Jun 2023 13:33:45 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Mon, 2023-06-12 at 13:33 -0400, Robert Haas wrote:\n> I wonder why this commit used pg_catalog, pg_temp rather than just\n> the\n> empty string, as AutoVacWorkerMain does.\n\nI followed the rules here for \"Writing SECURITY DEFINER Functions\nSafely\":\n\nhttps://www.postgresql.org/docs/16/sql-createfunction.html\n\nwhich suggests adding pg_temp at the end (otherwise it is searched\nfirst by default).\n\nIt's not exactly like a SECURITY DEFINER function, but running a\nmaintenance command does switch to the table owner, so the risks are\nsimilar.\n\nI don't see a problem with the pg_temp schema in non-interactive\nprocesses, because we trust the non-interactive processes.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 12 Jun 2023 17:20:47 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Mon, 2023-06-12 at 13:05 -0400, Noah Misch wrote:\n> The timing was not great, but this is fixing a purported defect in an\n> older\n> v16 feature. If the MAINTAIN privilege is actually fine, we're all\n> set for\n> v16. If MAINTAIN does have a material problem that $SUBJECT had\n> fixed, we\n> should either revert MAINTAIN, un-revert $SUBJECT, or fix the problem\n> a\n> different way.\n\nSomeone with the MAINTAIN privilege on a table can use search_path\ntricks against the table owner, if the code is susceptible, because\nmaintenance code runs with the privileges of the table owner.\n\nI was concerned enough to bring it up on the -security list, and then\nto -hackers followed by a commit (too late). But perhaps that was\nparanoia: the practical risk is probably quite low, because a user with\nthe MAINTAIN privilege is likely to be highly trusted.\n\nI'd like to hear from others on the topic about the relative risks of\nshipping with/without the search_path changes.\n\nI don't think a full revert of the MAINTAIN privilege is the right\nthing -- the predefined role is very valuable and many other predefined\nroles are much more dangerous than pg_maintain is.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 12 Jun 2023 17:39:40 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Mon, Jun 12, 2023 at 5:40 PM Jeff Davis <[email protected]> wrote:\n\n> On Mon, 2023-06-12 at 13:05 -0400, Noah Misch wrote:\n> > The timing was not great, but this is fixing a purported defect in an\n> > older\n> > v16 feature. If the MAINTAIN privilege is actually fine, we're all\n> > set for\n> > v16. If MAINTAIN does have a material problem that $SUBJECT had\n> > fixed, we\n> > should either revert MAINTAIN, un-revert $SUBJECT, or fix the problem\n> > a\n> > different way.\n>\n> Someone with the MAINTAIN privilege on a table can use search_path\n> tricks against the table owner, if the code is susceptible, because\n> maintenance code runs with the privileges of the table owner.\n>\n>\nOnly change the search_path if someone other than the table owner or\nsuperuser is running the command (which should only be possible via the new\nMAINTAIN privilege)?\n\nDavid J.\n\nOn Mon, Jun 12, 2023 at 5:40 PM Jeff Davis <[email protected]> wrote:On Mon, 2023-06-12 at 13:05 -0400, Noah Misch wrote:\n> The timing was not great, but this is fixing a purported defect in an\n> older\n> v16 feature. If the MAINTAIN privilege is actually fine, we're all\n> set for\n> v16. If MAINTAIN does have a material problem that $SUBJECT had\n> fixed, we\n> should either revert MAINTAIN, un-revert $SUBJECT, or fix the problem\n> a\n> different way.\n\nSomeone with the MAINTAIN privilege on a table can use search_path\ntricks against the table owner, if the code is susceptible, because\nmaintenance code runs with the privileges of the table owner.\nOnly change the search_path if someone other than the table owner or superuser is running the command (which should only be possible via the new MAINTAIN privilege)?David J.",
"msg_date": "Mon, 12 Jun 2023 17:50:32 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Monday, June 12, 2023, David G. Johnston <[email protected]>\nwrote:\n\n> On Mon, Jun 12, 2023 at 5:40 PM Jeff Davis <[email protected]> wrote:\n>\n>> On Mon, 2023-06-12 at 13:05 -0400, Noah Misch wrote:\n>> > The timing was not great, but this is fixing a purported defect in an\n>> > older\n>> > v16 feature. If the MAINTAIN privilege is actually fine, we're all\n>> > set for\n>> > v16. If MAINTAIN does have a material problem that $SUBJECT had\n>> > fixed, we\n>> > should either revert MAINTAIN, un-revert $SUBJECT, or fix the problem\n>> > a\n>> > different way.\n>>\n>> Someone with the MAINTAIN privilege on a table can use search_path\n>> tricks against the table owner, if the code is susceptible, because\n>> maintenance code runs with the privileges of the table owner.\n>>\n>>\n> Only change the search_path if someone other than the table owner or\n> superuser is running the command (which should only be possible via the new\n> MAINTAIN privilege)?\n>\n\nOn a related note, are we OK with someone using this privilege setting\ntheir own default_statistics_target?\n\nhttps://www.postgresql.org/docs/current/runtime-config-query.html#GUC-DEFAULT-STATISTICS-TARGET\n\nMy prior attempt to open up analyze had brought this up as a reason to\navoid having someone besides the table owner allowed to analyze the table.\n\nDavid J.\n\nOn Monday, June 12, 2023, David G. Johnston <[email protected]> wrote:On Mon, Jun 12, 2023 at 5:40 PM Jeff Davis <[email protected]> wrote:On Mon, 2023-06-12 at 13:05 -0400, Noah Misch wrote:\n> The timing was not great, but this is fixing a purported defect in an\n> older\n> v16 feature. If the MAINTAIN privilege is actually fine, we're all\n> set for\n> v16. If MAINTAIN does have a material problem that $SUBJECT had\n> fixed, we\n> should either revert MAINTAIN, un-revert $SUBJECT, or fix the problem\n> a\n> different way.\n\nSomeone with the MAINTAIN privilege on a table can use search_path\ntricks against the table owner, if the code is susceptible, because\nmaintenance code runs with the privileges of the table owner.\nOnly change the search_path if someone other than the table owner or superuser is running the command (which should only be possible via the new MAINTAIN privilege)?On a related note, are we OK with someone using this privilege setting their own default_statistics_target?https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-DEFAULT-STATISTICS-TARGETMy prior attempt to open up analyze had brought this up as a reason to avoid having someone besides the table owner allowed to analyze the table.David J.",
"msg_date": "Mon, 12 Jun 2023 18:31:21 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Mon, Jun 12, 2023 at 8:20 PM Jeff Davis <[email protected]> wrote:\n> I followed the rules here for \"Writing SECURITY DEFINER Functions\n> Safely\":\n>\n> https://www.postgresql.org/docs/16/sql-createfunction.html\n>\n> which suggests adding pg_temp at the end (otherwise it is searched\n> first by default).\n\nInteresting. The issue of \"what is a safe search path?\" is more\nnuanced than I would prefer. :-(\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 13 Jun 2023 11:24:27 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Mon, Jun 12, 2023 at 05:39:40PM -0700, Jeff Davis wrote:\n> On Mon, 2023-06-12 at 13:05 -0400, Noah Misch wrote:\n> > The timing was not great, but this is fixing a purported defect in an\n> > older\n> > v16 feature.� If the MAINTAIN privilege is actually fine, we're all\n> > set for\n> > v16.� If MAINTAIN does have a material problem that $SUBJECT had\n> > fixed, we\n> > should either revert MAINTAIN, un-revert $SUBJECT, or fix the problem\n> > a\n> > different way.\n> \n> Someone with the MAINTAIN privilege on a table can use search_path\n> tricks against the table owner, if the code is susceptible, because\n> maintenance code runs with the privileges of the table owner.\n> \n> I was concerned enough to bring it up on the -security list, and then\n> to -hackers followed by a commit (too late). But perhaps that was\n> paranoia: the practical risk is probably quite low, because a user with\n> the MAINTAIN privilege is likely to be highly trusted.\n> \n> I'd like to hear from others on the topic about the relative risks of\n> shipping with/without the search_path changes.\n\nI find shipping with the search_path change ($SUBJECT) to be lower risk\noverall, though both are fairly low-risk. Expect no new errors in non-FULL\nVACUUM, which doesn't run the relevant kinds of code. Tables not ready for\nthe search_path change in ANALYZE already cause errors in Autovacuum ANALYZE\nand have since 2018-02 (CVE-2018-1058). Hence, $SUBJECT poses less\ncompatibility risk than the CVE-2018-1058 fix.\n\nBest argument for shipping without $SUBJECT: we already have REFERENCES and\nTRIGGER privilege that tend to let the grantee hijack the table owner's\naccount. Adding MAINTAIN to the list, while sad, is defensible. I still\nprefer to ship with $SUBJECT, not without.\n\n\n",
"msg_date": "Tue, 13 Jun 2023 14:29:20 -0400",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Mon, 2023-06-12 at 17:50 -0700, David G. Johnston wrote:\n> Only change the search_path if someone other than the table owner or\n> superuser is running the command (which should only be possible via\n> the new MAINTAIN privilege)?\n\nThat sounds like a reasonable compromise, but a bit messy. If we do it\nthis way, is there hope to clean things up a bit in the future? These\nspecial cases are quite difficult to document in a comprehensible way.\n\nIf others like this approach I'm fine with it.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 13 Jun 2023 12:32:54 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Tue, 2023-06-13 at 11:24 -0400, Robert Haas wrote:\n> Interesting. The issue of \"what is a safe search path?\" is more\n> nuanced than I would prefer. :-(\n\nAs far as I can tell, search_path was designed as a convenience for\ninteractive queries, where a lot of these nuances make sense. But they\ndon't make sense as defaults for code inside functions, in my opinion.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 13 Jun 2023 12:43:41 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Mon, 2023-06-12 at 18:31 -0700, David G. Johnston wrote:\n> On a related note, are we OK with someone using this privilege\n> setting their own default_statistics_target?\n> \n> https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-DEFAULT-STATISTICS-TARGET\n> \n> My prior attempt to open up analyze had brought this up as a reason\n> to avoid having someone besides the table owner allowed to analyze\n> the table.\n\nThank you for bringing it up. I don't see a major concern there, but\nplease link to the prior discussion so I can see the objections.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 13 Jun 2023 12:54:24 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 12:54 PM Jeff Davis <[email protected]> wrote:\n\n> On Mon, 2023-06-12 at 18:31 -0700, David G. Johnston wrote:\n> > On a related note, are we OK with someone using this privilege\n> > setting their own default_statistics_target?\n> >\n> >\n> https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-DEFAULT-STATISTICS-TARGET\n> >\n> > My prior attempt to open up analyze had brought this up as a reason\n> > to avoid having someone besides the table owner allowed to analyze\n> > the table.\n>\n> Thank you for bringing it up. I don't see a major concern there, but\n> please link to the prior discussion so I can see the objections.\n>\n>\nThis is the specific (first?) message I am recalling.\n\nhttps://www.postgresql.org/message-id/A737B7A37273E048B164557ADEF4A58B53803F5A%40ntex2010i.host.magwien.gv.at\n\nDavid J.\n\nOn Tue, Jun 13, 2023 at 12:54 PM Jeff Davis <[email protected]> wrote:On Mon, 2023-06-12 at 18:31 -0700, David G. Johnston wrote:\n> On a related note, are we OK with someone using this privilege\n> setting their own default_statistics_target?\n> \n> https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-DEFAULT-STATISTICS-TARGET\n> \n> My prior attempt to open up analyze had brought this up as a reason\n> to avoid having someone besides the table owner allowed to analyze\n> the table.\n\nThank you for bringing it up. I don't see a major concern there, but\nplease link to the prior discussion so I can see the objections.This is the specific (first?) message I am recalling.https://www.postgresql.org/message-id/A737B7A37273E048B164557ADEF4A58B53803F5A%40ntex2010i.host.magwien.gv.atDavid J.",
"msg_date": "Tue, 13 Jun 2023 13:22:01 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "Noah Misch <[email protected]> writes:\n> Best argument for shipping without $SUBJECT: we already have REFERENCES and\n> TRIGGER privilege that tend to let the grantee hijack the table owner's\n> account. Adding MAINTAIN to the list, while sad, is defensible. I still\n> prefer to ship with $SUBJECT, not without.\n\nWhat I'm concerned about is making such a fundamental semantics change\npost-beta1. It'll basically invalidate any application compatibility\ntesting anybody might have done against beta1. I think this ship has\nsailed as far as v16 is concerned, although we could reconsider it\nin v17.\n\nAlso, I fail to see any connection to the MAINTAIN privilege: the\ncommitted-and-reverted patch would break things whether the user\nwas making any use of that privilege or not. Thus, I do not accept\nthe idea that we're fixing something that's new in 16.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Jun 2023 16:23:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Tue, 2023-06-13 at 13:22 -0700, David G. Johnston wrote:\n> This is the specific (first?) message I am recalling.\n> \n> https://www.postgresql.org/message-id/A737B7A37273E048B164557ADEF4A58B53803F5A%40ntex2010i.host.magwien.gv.at\n\nThe most objection seems to be expressed most succinctly in this\nmessage:\n\nhttps://www.postgresql.org/message-id/16134.1456767564%40sss.pgh.pa.us\n\n\"if we allow non-owners to run ANALYZE, they'd be able to mess things\nup by setting the stats target either much lower or much higher than\nthe table owner expected\"\n\nI have trouble seeing much of a problem here if there is an explicit\nMAINTAIN privilege. If you grant someone MAINTAIN to someone, it's not\nsurprising that you need to coordinate maintenance-related settings\nwith that user; and if you don't, then it's not surprising that the\nstatistics could get messed up.\n\nPerhaps the objections in that thread were because the proposal\ninvolved inferring the privilege to ANALYZE from other privileges,\nrather than having an explicit MAINTAIN privilege?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 13 Jun 2023 13:55:13 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 1:55 PM Jeff Davis <[email protected]> wrote:\n\n> Perhaps the objections in that thread were because the proposal\n> involved inferring the privilege to ANALYZE from other privileges,\n> rather than having an explicit MAINTAIN privilege?\n>\n>\nThat is true; but it seems worth being explicit whether we expect the user\nto only be able to run \"ANALYZE\" using defaults (like auto-analyze would\ndo) or if this additional capability is assumed to be part of the grant. I\ndo imagine you'd want to be able to set the statistic target in order to do\nvacuum --analyze-in-stages with a non-superuser.\n\nDavid J.\n\nOn Tue, Jun 13, 2023 at 1:55 PM Jeff Davis <[email protected]> wrote:Perhaps the objections in that thread were because the proposal\ninvolved inferring the privilege to ANALYZE from other privileges,\nrather than having an explicit MAINTAIN privilege?That is true; but it seems worth being explicit whether we expect the user to only be able to run \"ANALYZE\" using defaults (like auto-analyze would do) or if this additional capability is assumed to be part of the grant. I do imagine you'd want to be able to set the statistic target in order to do vacuum --analyze-in-stages with a non-superuser.David J.",
"msg_date": "Tue, 13 Jun 2023 14:00:32 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> The most objection seems to be expressed most succinctly in this\n> message:\n> https://www.postgresql.org/message-id/16134.1456767564%40sss.pgh.pa.us\n> \"if we allow non-owners to run ANALYZE, they'd be able to mess things\n> up by setting the stats target either much lower or much higher than\n> the table owner expected\"\n\n> I have trouble seeing much of a problem here if there is an explicit\n> MAINTAIN privilege. If you grant someone MAINTAIN to someone, it's not\n> surprising that you need to coordinate maintenance-related settings\n> with that user; and if you don't, then it's not surprising that the\n> statistics could get messed up.\n\nI agree that granting MAINTAIN implies that you trust the grantee\nnot to mess up your stats.\n\n> Perhaps the objections in that thread were because the proposal\n> involved inferring the privilege to ANALYZE from other privileges,\n> rather than having an explicit MAINTAIN privilege?\n\nExactly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Jun 2023 17:18:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Tue, 2023-06-13 at 16:23 -0400, Tom Lane wrote:\n> What I'm concerned about is making such a fundamental semantics\n> change\n> post-beta1.\n\nI have added the patch to the July CF for v17.\n\nIf someone does feel like something should be done for v16, David G.\nJohnston posted one possibility here:\n\nhttps://www.postgresql.org/message-id/CAKFQuwaVJkM9u+qpOaom2UkPE1sz0BASF-E5amxWPxncUhm4Hw@mail.gmail.com\n\nBut as Noah pointed out, there are other privileges that can be abused,\nso a workaround for 16 might not be important if we have a likely fix\nfor MAINTAIN coming in 17.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 14 Jun 2023 21:59:40 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 12:59 AM Jeff Davis <[email protected]> wrote:\n> On Tue, 2023-06-13 at 16:23 -0400, Tom Lane wrote:\n> > What I'm concerned about is making such a fundamental semantics\n> > change\n> > post-beta1.\n>\n> I have added the patch to the July CF for v17.\n>\n> If someone does feel like something should be done for v16, David G.\n> Johnston posted one possibility here:\n>\n> https://www.postgresql.org/message-id/CAKFQuwaVJkM9u+qpOaom2UkPE1sz0BASF-E5amxWPxncUhm4Hw@mail.gmail.com\n>\n> But as Noah pointed out, there are other privileges that can be abused,\n> so a workaround for 16 might not be important if we have a likely fix\n> for MAINTAIN coming in 17.\n\nRather than is_superuser(userid) || userid == ownerid, I think that\nthe test should be has_privs_of_role(userid, ownerid).\n\nI'm inclined to think that this is a real security issue and am not\nvery sanguine about waiting another year to fix it, but at the same\ntime, I'm somewhat worried that the proposed fix might be too narrow\nor wrongly-shaped. I'm not too convinced that we've properly\nunderstood what all of the problems in this area are. :-(\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Jun 2023 16:03:36 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Mon, 2023-06-19 at 16:03 -0400, Robert Haas wrote:\n> I'm inclined to think that this is a real security issue and am not\n\nCan you expand on that a bit? You mean a practical security issue for\nthe intended use cases?\n\n> very sanguine about waiting another year to fix it, but at the same\n> time, I'm somewhat worried that the proposed fix might be too narrow\n> or wrongly-shaped. I'm not too convinced that we've properly\n> understood what all of the problems in this area are. :-(\n\nWould it be acceptable to document that the MAINTAIN privilege (along\nwith TRIGGER and, if I understand correctly, REFERENCES) carries\nprivilege escalation risk for the grantor?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 19 Jun 2023 15:58:55 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "[ emerges from hibernation ]\n\nOn Mon, Jun 19, 2023 at 6:58 PM Jeff Davis <[email protected]> wrote:\n> On Mon, 2023-06-19 at 16:03 -0400, Robert Haas wrote:\n> > I'm inclined to think that this is a real security issue and am not\n>\n> Can you expand on that a bit? You mean a practical security issue for\n> the intended use cases?\n\nYeah. I mean, as things stand, it seems like giving someone the\nMAINTAIN privilege will be sufficient to allow them to escalate to the\ntable owner if there are any expression indexes involved. That seems\nlike a real problem. We shouldn't ship a new feature with a built-in\nsecurity hole like that.\n\nI was pretty outraged when I realized that we'd been shipping releases\nfor years where CREATEROLE let you grab superuser because you could\njust grant yourself pg_execute_server_program and then go to town.\nIdeally, that hazard should have been identified and fixed in some way\nbefore introducing pg_execute_server_program. I don't know whether the\nhazard wasn't realized at the time or whether somebody somehow\nconvinced themselves that that was OK, but it clearly isn't.\n\nNow we're proposing to ship a brand-new feature with a hole that we\ndefinitely already know exists. I can't understand that at all. Should\nwe just go file the CVE against ourselves right now, then? Seriously,\nwhat are we doing?\n\nIf we're not going to fix the feature so that it doesn't break the\nsecurity model, we should probably just revert it. I don't understand\nat all the idea of shipping something that we 100% know is broken.\n\n> > very sanguine about waiting another year to fix it, but at the same\n> > time, I'm somewhat worried that the proposed fix might be too narrow\n> > or wrongly-shaped. I'm not too convinced that we've properly\n> > understood what all of the problems in this area are. :-(\n>\n> Would it be acceptable to document that the MAINTAIN privilege (along\n> with TRIGGER and, if I understand correctly, REFERENCES) carries\n> privilege escalation risk for the grantor?\n\nThat's clearly better than nothing, but also seems like it's pretty\nclearly the wrong approach. If somebody electrocutes themselves on the\ntoaster in the break room, you don't stick a sign on the side of it\nthat says \"this toaster will electrocute you if you try to use it\" and\nthen call it good. You either fix or replace the toaster, or at the\nvery least throw it out, or at the VERY least unplug it. I am failing\nto understand how this situation is any different.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 29 Jun 2023 11:19:38 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On 2023-06-29 Th 11:19, Robert Haas wrote:\n>\n> Now we're proposing to ship a brand-new feature with a hole that we\n> definitely already know exists. I can't understand that at all. Should\n> we just go file the CVE against ourselves right now, then? Seriously,\n> what are we doing?\n>\n> If we're not going to fix the feature so that it doesn't break the\n> security model, we should probably just revert it. I don't understand\n> at all the idea of shipping something that we 100% know is broken.\n>\n>\n\n+1\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-06-29 Th 11:19, Robert Haas\n wrote:\n\n\nNow we're proposing to ship a brand-new feature with a hole that we\ndefinitely already know exists. I can't understand that at all. Should\nwe just go file the CVE against ourselves right now, then? Seriously,\nwhat are we doing?\n\nIf we're not going to fix the feature so that it doesn't break the\nsecurity model, we should probably just revert it. I don't understand\nat all the idea of shipping something that we 100% know is broken.\n\n\n\n\n\n\n+1\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 29 Jun 2023 15:08:35 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Thu, Jun 29, 2023 at 11:19:38AM -0400, Robert Haas wrote:\n> [ emerges from hibernation ]\n\nWelcome back.\n\n> If we're not going to fix the feature so that it doesn't break the\n> security model, we should probably just revert it. I don't understand\n> at all the idea of shipping something that we 100% know is broken.\n\nGiven Jeff's commit followed the precedent set by the fix for\nCVE-2018-1058, I'm inclined to think he was on the right track. Perhaps a\nmore targeted fix, such as only changing search_path when the command is\nnot run by the table owner (as suggested upthread [0]) is worth\nconsidering.\n\n[0] https://postgr.es/m/CAKFQuwaVJkM9u%2BqpOaom2UkPE1sz0BASF-E5amxWPxncUhm4Hw%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 29 Jun 2023 13:29:40 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Thu, 2023-06-29 at 11:19 -0400, Robert Haas wrote:\n> Yeah. I mean, as things stand, it seems like giving someone the\n> MAINTAIN privilege will be sufficient to allow them to escalate to\n> the\n> table owner if there are any expression indexes involved. That seems\n> like a real problem. We shouldn't ship a new feature with a built-in\n> security hole like that.\n\nLet's take David's suggestion[1] then, and only restrict the search\npath for those without owner privileges on the object.\n\nThat would mean no behavior change unless using the MAINTAIN privilege,\nwhich is new, so no breakage. And if someone is using the MAINTAIN\nprivilege, they wouldn't be able to abuse the search_path, so it would\nclose the hole.\n\nPatch attached (created a bit quickly, but seems to work).\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://postgr.es/m/CAKFQuwaVJkM9u%2BqpOaom2UkPE1sz0BASF-E5amxWPxncUhm4Hw%40mail.gmail.com",
"msg_date": "Thu, 29 Jun 2023 17:36:17 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> On Thu, 2023-06-29 at 11:19 -0400, Robert Haas wrote:\n>> We shouldn't ship a new feature with a built-in\n>> security hole like that.\n\n> Let's take David's suggestion[1] then, and only restrict the search\n> path for those without owner privileges on the object.\n\nI think that's a seriously awful kluge. It will mean that things behave\ndifferently for the owner than for MAINTAIN grantees, which pretty much\ndestroys the use-case for that privilege, as well as being very confusing\nand hard to debug. Yes, *if* you're careful about search path cleanliness\nthen you can make it work, but that will be a foot-gun forevermore.\n\n(I'm also less than convinced that this is sufficient to remove all\nsecurity hazards. One pretty obvious question is do we really want\nsuperusers to be treated as owners, rather than MAINTAIN grantees,\nfor this purpose.)\n\nI'm leaning to Robert's thought that we need to revert this for now,\nand think harder about how to make it work cleanly and safely.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Jun 2023 20:53:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Thu, Jun 29, 2023 at 08:53:56PM -0400, Tom Lane wrote:\n> I'm leaning to Robert's thought that we need to revert this for now,\n> and think harder about how to make it work cleanly and safely.\n\nSince it sounds like this is headed towards a revert, here's a patch for\nremoving MAINTAIN and pg_maintain.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 29 Jun 2023 22:09:21 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Thu, 2023-06-29 at 20:53 -0400, Tom Lane wrote:\n> I think that's a seriously awful kluge. It will mean that things\n> behave\n> differently for the owner than for MAINTAIN grantees, which pretty\n> much\n> destroys the use-case for that privilege, as well as being very\n> confusing\n> and hard to debug.\n\nIn version 15, try this:\n\n CREATE USER foo;\n CREATE SCHEMA foo AUTHORIZATION foo;\n CREATE USER bar;\n CREATE SCHEMA bar AUTHORIZATION bar;\n \\c - foo\n CREATE FUNCTION foo.mod10(INT) RETURNS INT IMMUTABLE\n LANGUAGE plpgsql AS $$ BEGIN RETURN mod($1,10); END; $$;\n CREATE TABLE t(i INT);\n -- units digit must be unique\n CREATE UNIQUE INDEX t_idx ON t (foo.mod10(i));\n INSERT INTO t VALUES(7); -- success\n INSERT INTO t VALUES(17); -- fails\n GRANT USAGE ON SCHEMA foo TO bar;\n GRANT INSERT ON t TO bar;\n \\c - bar\n CREATE FUNCTION bar.mod(INT, INT) RETURNS INT IMMUTABLE\n LANGUAGE plpgsql AS $$ BEGIN RETURN $1 + 1000000; END; $$;\n SET search_path = bar, pg_catalog;\n INSERT INTO foo.t VALUES(7); -- succeeds\n \\c - foo\n SELECT * FROM t;\n i \n ---\n 7\n 7\n (2 rows)\n\n\nI'm not sure that everyone in this thread realizes just how broken it\nis to depend on search_path in a functional index at all. And doubly so\nif it depends on a schema other than pg_catalog in the search_path.\n\nLet's also not forget that logical replication always uses\nsearch_path=pg_catalog, so if you depend on a different search_path for\nany function attached to the table (not just functional indexes, also\nfunctions inside expressions or trigger functions), then those are\nalready broken in version 15. And if a superuser is executing\nmaintenance commands, there's little reason to think they'll have the\nsame search path as the user that created the table.\n\nAt some point in the very near future (though I realize that point may\ncome after version 16), we need to lock down the search path in a lot\nof cases (not just maintenance commands), and I don't see any way\naround that.\n\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 30 Jun 2023 00:41:02 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Thu, Jun 29, 2023 at 10:09:21PM -0700, Nathan Bossart wrote:\n> On Thu, Jun 29, 2023 at 08:53:56PM -0400, Tom Lane wrote:\n>> I'm leaning to Robert's thought that we need to revert this for now,\n>> and think harder about how to make it work cleanly and safely.\n> \n> Since it sounds like this is headed towards a revert, here's a patch for\n> removing MAINTAIN and pg_maintain.\n\nI will revert this next week unless opinions change before then. I'm\ncurrently planning to revert on both master and REL_16_STABLE, but another\noption would be to keep it on master while we sort out the remaining\nissues. Thoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 30 Jun 2023 11:35:46 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 11:35:46AM -0700, Nathan Bossart wrote:\n> On Thu, Jun 29, 2023 at 10:09:21PM -0700, Nathan Bossart wrote:\n> > On Thu, Jun 29, 2023 at 08:53:56PM -0400, Tom Lane wrote:\n> >> I'm leaning to Robert's thought that we need to revert this for now,\n> >> and think harder about how to make it work cleanly and safely.\n\nAnother dimension of compromise could be to make MAINTAIN affect fewer\ncommands in v16. Un-revert the part of commit 05e1737 affecting just the\ncommands it still affects. For example, limit MAINTAIN and the 05e1737\nbehavior change to VACUUM, ANALYZE, and REINDEX. Don't worry about VACUUM or\nANALYZE failing under commit 05e1737, since they would have been failing under\nautovacuum since 2018. A problem index expression breaks both autoanalyze and\nREINDEX, hence the inclusion of REINDEX. The already-failing argument doesn't\napply to CLUSTER or REFRESH MATERIALIZED VIEW, so those commands could join\nMAINTAIN in v17.\n\n> > Since it sounds like this is headed towards a revert, here's a patch for\n> > removing MAINTAIN and pg_maintain.\n> \n> I will revert this next week unless opinions change before then. I'm\n> currently planning to revert on both master and REL_16_STABLE, but another\n> option would be to keep it on master while we sort out the remaining\n> issues. Thoughts?\n\n From my reading of the objections, I think they're saying that commit 05e1737\narrived too late and that MAINTAIN is unacceptable without commit 05e1737. I\nthink you'd conform to all objections by pushing the revert to v16 and pushing\na roll-forward of commit 05e1737 to master.\n\n\n",
"msg_date": "Sun, 2 Jul 2023 20:57:31 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Sun, Jul 02, 2023 at 08:57:31PM -0700, Noah Misch wrote:\n> Another dimension of compromise could be to make MAINTAIN affect fewer\n> commands in v16. Un-revert the part of commit 05e1737 affecting just the\n> commands it still affects. For example, limit MAINTAIN and the 05e1737\n> behavior change to VACUUM, ANALYZE, and REINDEX. Don't worry about VACUUM or\n> ANALYZE failing under commit 05e1737, since they would have been failing under\n> autovacuum since 2018. A problem index expression breaks both autoanalyze and\n> REINDEX, hence the inclusion of REINDEX. The already-failing argument doesn't\n> apply to CLUSTER or REFRESH MATERIALIZED VIEW, so those commands could join\n> MAINTAIN in v17.\n\nI'm open to compromise if others are, but I'm skeptical that folks will be\nokay with too much fancy footwork this late in the game.\n\nAnyway, IMO your argument could extend to CLUSTER and REFRESH, too. If\nwe're willing to change behavior under the assumption that autovacuum\nwould've been failing since 2018, then why wouldn't we be willing to change\nit everywhere? I suppose someone could have been manually vacuuming with a\nspecial search_path for 5 years to avoid needing to schema-qualify their\nindex expressions (and would then be surprised that CLUSTER/REFRESH no\nlonger work), but limiting MAINTAIN to VACUUM, etc. would still break their\nuse-case, right?\n\n> From my reading of the objections, I think they're saying that commit 05e1737\n> arrived too late and that MAINTAIN is unacceptable without commit 05e1737. I\n> think you'd conform to all objections by pushing the revert to v16 and pushing\n> a roll-forward of commit 05e1737 to master.\n\nOkay, I'll adjust my plans accordingly.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 3 Jul 2023 11:19:14 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Mon, Jul 03, 2023 at 11:19:14AM -0700, Nathan Bossart wrote:\n> On Sun, Jul 02, 2023 at 08:57:31PM -0700, Noah Misch wrote:\n> > Another dimension of compromise could be to make MAINTAIN affect fewer\n> > commands in v16. Un-revert the part of commit 05e1737 affecting just the\n> > commands it still affects. For example, limit MAINTAIN and the 05e1737\n> > behavior change to VACUUM, ANALYZE, and REINDEX. Don't worry about VACUUM or\n> > ANALYZE failing under commit 05e1737, since they would have been failing under\n> > autovacuum since 2018. A problem index expression breaks both autoanalyze and\n> > REINDEX, hence the inclusion of REINDEX. The already-failing argument doesn't\n> > apply to CLUSTER or REFRESH MATERIALIZED VIEW, so those commands could join\n> > MAINTAIN in v17.\n> \n> I'm open to compromise if others are, but I'm skeptical that folks will be\n> okay with too much fancy footwork this late in the game.\n\nGot it.\n\n> Anyway, IMO your argument could extend to CLUSTER and REFRESH, too. If\n> we're willing to change behavior under the assumption that autovacuum\n> would've been failing since 2018, then why wouldn't we be willing to change\n> it everywhere? I suppose someone could have been manually vacuuming with a\n> special search_path for 5 years to avoid needing to schema-qualify their\n> index expressions (and would then be surprised that CLUSTER/REFRESH no\n> longer work), but limiting MAINTAIN to VACUUM, etc. would still break their\n> use-case, right?\n\nYes, limiting MAINTAIN to VACUUM would still break a site that has used manual\nVACUUM to work around associated loss of autovacuum. I'm not sympathetic to a\nuser who neglected to benefit from the last five years of prep time on this\nissue as it affects VACUUM and ANALYZE. REFRESH runs more than index\nexpressions, e.g. function calls in the targetlist of the materialized view\nquery. Those targetlist expressions haven't been putting ERRORs in the log\nduring autovacuum, so REFRESH hasn't had the sort of advance warning that\nVACUUM and ANALYZE got.\n\n\n",
"msg_date": "Mon, 3 Jul 2023 22:19:43 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Thu, 2023-06-29 at 22:09 -0700, Nathan Bossart wrote:\n> On Thu, Jun 29, 2023 at 08:53:56PM -0400, Tom Lane wrote:\n> > I'm leaning to Robert's thought that we need to revert this for\n> > now,\n> > and think harder about how to make it work cleanly and safely.\n> \n> Since it sounds like this is headed towards a revert, here's a patch\n> for\n> removing MAINTAIN and pg_maintain.\n\nIt was difficult to review standalone, so I tried a quick version\nmyself and ended up with very similar results. The only substantial\ndifference was that I put back:\n\n\n+ if (!vacuum_is_relation_owner(relid, classForm,\noptions))\n+ continue;\n\n\nin get_all_vacuum_rels() whereas your patch left it out -- double-check\nthat we're doing the right thing there.\n\nAlso remember to bump the catversion. Other than that, it looks good to\nme.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 06 Jul 2023 00:55:14 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Thu, Jul 06, 2023 at 12:55:14AM -0700, Jeff Davis wrote:\n> It was difficult to review standalone, so I tried a quick version\n> myself and ended up with very similar results.\n\nThanks for taking a look.\n\n> The only substantial\n> difference was that I put back:\n> \n> \n> + if (!vacuum_is_relation_owner(relid, classForm,\n> options))\n> + continue;\n> \n> \n> in get_all_vacuum_rels() whereas your patch left it out -- double-check\n> that we're doing the right thing there.\n\nThe privilege check was moved in d46a979, which I think still makes sense,\nso I left it there. That might be why it looks like I removed it.\n\n> Also remember to bump the catversion. Other than that, it looks good to\n> me.\n\nWill do.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 6 Jul 2023 10:20:04 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "Here is a new version of the patch that I think is ready for commit (except\nit still needs a catversion bump). The only real difference from v1 is in\nAdjustUpgrade.pm. From my cross-version pg_upgrade testing, I believe we\ncan remove the other \"privilege-set discrepancy\" rule as well.\n\nSince MAINTAIN will no longer exist in v16, we'll also need the following\nchange applied to v17devel:\n\n\tdiff --git a/src/test/perl/PostgreSQL/Test/AdjustUpgrade.pm b/src/test/perl/PostgreSQL/Test/AdjustUpgrade.pm\n\tindex 843f65b448..d435812c06 100644\n\t--- a/src/test/perl/PostgreSQL/Test/AdjustUpgrade.pm\n\t+++ b/src/test/perl/PostgreSQL/Test/AdjustUpgrade.pm\n\t@@ -274,7 +274,7 @@ sub adjust_old_dumpfile\n\t \t\t$dump = _mash_view_qualifiers($dump);\n\t \t}\n\t \n\t-\tif ($old_version >= 14 && $old_version < 16)\n\t+\tif ($old_version >= 14 && $old_version < 17)\n\t \t{\n\t \t\t# Fix up some privilege-set discrepancies.\n\t \t\t$dump =~\n\nOn Thu, Jul 06, 2023 at 10:20:04AM -0700, Nathan Bossart wrote:\n> On Thu, Jul 06, 2023 at 12:55:14AM -0700, Jeff Davis wrote:\n>> Also remember to bump the catversion. Other than that, it looks good to\n>> me.\n> \n> Will do.\n\nSince we are only reverting from v16, the REL_16_STABLE catversion will be\nbumped ahead of the one on master. AFAICT that is okay, but there is also\na chance that someone bumps the catversion on master to the same value.\nI'm not sure if this is problem is worth worrying about, but I thought I'd\nraise it just in case. I could bump the catversion on master to the\nfollowing value to help prevent this scenario, but I'm not wild about\nadding unnecessary catversion bumps.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 6 Jul 2023 22:14:27 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Thu, 2023-07-06 at 22:14 -0700, Nathan Bossart wrote:\n> Since we are only reverting from v16, the REL_16_STABLE catversion\n> will be\n> bumped ahead of the one on master.\n\nI don't object to you doing it this way, but FWIW, I'd just revert in\nboth branches to avoid this kind of weirdness.\n\nAlso I'm not quite sure how quickly my search_path fix will be\ncommitted. Hopefully soon, because the current state is not great, but\nit's hard for me to say for sure.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 07 Jul 2023 09:22:22 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Fri, Jul 07, 2023 at 09:22:22AM -0700, Jeff Davis wrote:\n> On Thu, 2023-07-06 at 22:14 -0700, Nathan Bossart wrote:\n>> Since we are only reverting from v16, the REL_16_STABLE catversion\n>> will be\n>> bumped ahead of the one on master.\n> \n> I don't object to you doing it this way, but FWIW, I'd just revert in\n> both branches to avoid this kind of weirdness.\n> \n> Also I'm not quite sure how quickly my search_path fix will be\n> committed. Hopefully soon, because the current state is not great, but\n> it's hard for me to say for sure.\n\nYeah, I guess I should just revert it in both. Given your fix will\nhopefully be committed soon, I was hoping to avoid reverting and\nun-reverting in quick succession to prevent affecting git-blame too much.\n\nI found an example of a post-beta2 revert on both master and a stable\nbranch where Tom set the catversions to different values (20b6847,\ne256312). I'll do the same here.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 7 Jul 2023 09:57:10 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Fri, Jul 07, 2023 at 09:57:10AM -0700, Nathan Bossart wrote:\n> Yeah, I guess I should just revert it in both. Given your fix will\n> hopefully be committed soon, I was hoping to avoid reverting and\n> un-reverting in quick succession to prevent affecting git-blame too much.\n> \n> I found an example of a post-beta2 revert on both master and a stable\n> branch where Tom set the catversions to different values (20b6847,\n> e256312). I'll do the same here.\n\nreverted\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 7 Jul 2023 11:30:59 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 3:41 AM Jeff Davis <[email protected]> wrote:\n> I'm not sure that everyone in this thread realizes just how broken it\n> is to depend on search_path in a functional index at all. And doubly so\n> if it depends on a schema other than pg_catalog in the search_path.\n>\n> Let's also not forget that logical replication always uses\n> search_path=pg_catalog, so if you depend on a different search_path for\n> any function attached to the table (not just functional indexes, also\n> functions inside expressions or trigger functions), then those are\n> already broken in version 15. And if a superuser is executing\n> maintenance commands, there's little reason to think they'll have the\n> same search path as the user that created the table.\n>\n> At some point in the very near future (though I realize that point may\n> come after version 16), we need to lock down the search path in a lot\n> of cases (not just maintenance commands), and I don't see any way\n> around that.\n\nI agree. I think there are actually two interrelated problems here.\n\nOne is that virtually all code needs to run with the originally\nintended search_path rather than some search_path chosen at another\ntime and maybe by a different user. If not, it's going to break, or\ncompromise security, depending on the situation. The other is that\nrunning arbitrary code written by somebody else as yourself is\nbasically instant death, from a security perspective.\n\nIt's a little hard to imagine a world in which these problems don't\nexist at all, but it somehow feels like the design of the system\npushes you toward doing this stuff incorrectly rather than doing it\ncorrectly. For instance, you can imagine a system where when you run\nCREATE OR REPLACE FUNCTION, the prevailing search_path is captured and\nautomatically included in proconfig. Then the default behavior would\nbe to run functions and procedures with the search_path that was in\neffect when they were created, rather than what we actually have,\nwhere it's the one in effect at execution time as it is currently.\n\nIt's a little harder to imagine something similar around all the user\nswitching behavior, just because we have so many ways of triggering\narbitrary code execution: views, triggers, event triggers, expression\nindexes, constraints, etc. But you can maybe imagine a system where\nall code associated with a table is run as the table owner in all\ncases, regardless of SECURITY INVOKER/DEFINER, which I think would at\nleast close some holes.\n\nThe difficulty is that it's a bit hard to imagine making these kinds\nof definitional changes now, because they'd probably be breaking\nchanges for pretty significant numbers of users. But on the other\nhand, if we don't start thinking about systemic changes here, it feels\nlike we're just playing whack-a-mole.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 31 Jul 2023 12:53:37 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On 7/31/23 12:53, Robert Haas wrote:\n> On Fri, Jun 30, 2023 at 3:41 AM Jeff Davis <[email protected]> wrote:\n>> I'm not sure that everyone in this thread realizes just how broken it\n>> is to depend on search_path in a functional index at all. And doubly so\n>> if it depends on a schema other than pg_catalog in the search_path.\n>>\n>> Let's also not forget that logical replication always uses\n>> search_path=pg_catalog, so if you depend on a different search_path for\n>> any function attached to the table (not just functional indexes, also\n>> functions inside expressions or trigger functions), then those are\n>> already broken in version 15. And if a superuser is executing\n>> maintenance commands, there's little reason to think they'll have the\n>> same search path as the user that created the table.\n>>\n>> At some point in the very near future (though I realize that point may\n>> come after version 16), we need to lock down the search path in a lot\n>> of cases (not just maintenance commands), and I don't see any way\n>> around that.\n> \n> I agree. I think there are actually two interrelated problems here.\n> \n> One is that virtually all code needs to run with the originally\n> intended search_path rather than some search_path chosen at another\n> time and maybe by a different user. If not, it's going to break, or\n> compromise security, depending on the situation. The other is that\n> running arbitrary code written by somebody else as yourself is\n> basically instant death, from a security perspective.\n\nI agree too.\n\nBut the analysis of the issue needs to go one step further. Even if the \nsearch_path does not change from the originally intended one, a newly \ncreated function can shadow the intended one based on argument coercion \nrules.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Mon, 31 Jul 2023 13:17:59 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Mon, Jul 31, 2023 at 1:18 PM Joe Conway <[email protected]> wrote:\n> But the analysis of the issue needs to go one step further. Even if the\n> search_path does not change from the originally intended one, a newly\n> created function can shadow the intended one based on argument coercion\n> rules.\n\nYeah, this is a complicated issue. As the system works today, if you\ninclude in your search_path a schema to which some other user can\nwrite, you are pretty much agreeing to execute code provided by that\nuser. If that user has strictly greater privileges than you, e.g. they\nare the super-user, then that's fine, because they can compromise your\naccount anyway, but otherwise, you're probably doomed. Not only can\nthey try to capture references with similarly-named objects, they can\nalso do things like create objects whose names are common\nmis-spellings of the objects that are supposed to be there and hope\nyou access the wrong one by mistake. Maybe there are other attacks as\nwell, but even if not, I think it's already a pretty hopeless\nsituation. I think the UNIX equivalent would be including a directory\nin your PATH that is world-writable and hoping your account will stay\nsecure. Not very likely.\n\nWe have already taken an important step in terms of preventing this\nattack in commit b073c3ccd06e4cb845e121387a43faa8c68a7b62, which\nremoved PUBLIC CREATE from the public schema. Before that, we were\nshipping a configuration analogous to a UNIX system where /usr/bin was\nworld-writable -- something which no actual UNIX system has likely\ndone any time in the last 40 years, because it's so clearly insane. We\ncould maybe go a step further by changing the default search_path to\nnot even include public, to further discourage people from using that\nas a catch-all where everybody can just dump their objects. Right now,\nanybody can revert that change with a single GRANT statement, and if\nwe removed public from the default search_path as well, people would\nhave one extra step to restore that insecure configuration. I don't\nknow such a change is worthwhile, though. It would still be super-easy\nfor users to create insecure configurations: as soon as user A can\nwrite a schema and user B has it in the search_path, B is in a lot of\ntrouble if A turns out to be malicious.\n\nOne thing we might be able to do to prevent that sort of thing is to\nhave a feature to prevent \"accidental\" code execution, as in the\n\"function trust\" mechanism proposed previously. Say I trust all users\nwho can SET ROLE to me and/or who inherit my privileges. Additionally\nI can decide to trust users who do neither of those things by some\nsort of explicit declaration. If I don't trust a user then if I do\nanything that would cause code supplied by that user to get executed,\nit just errors out:\n\nERROR: role \"rhaas\" should not execute arbitrary code provided by role \"jconway\"\nHINT: If this should be allowed, use the TRUST command to permit it.\n\nEven if we do this, I still think we need the kinds of fixes that I\nmentioned earlier. An error like this is fine if you're trying to do\nsomething to a table owned by another user and they've got a surprise\ntrigger on there or something, but a maintenance command like VACUUM\nshould find a way to work that is both secure and non-astonishing. And\nwe probably also still need to find ways to control search_path in a\nlot more widely than we do today. 
Otherwise, even if stuff is\ntechnically secure, it may just not work.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 31 Jul 2023 16:06:23 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Mon, 2023-07-31 at 16:06 -0400, Robert Haas wrote:\n> if you\n> include in your search_path a schema to which some other user can\n> write, you are pretty much agreeing to execute code provided by that\n> user.\n\nAgreed on all counts here. I don't think it's reasonable for us to try\nto make such a setup secure, and I don't think users have much need for\nsuch a setup anyway.\n\n> One thing we might be able to do to prevent that sort of thing is to\n> have a feature to prevent \"accidental\" code execution, as in the\n> \"function trust\" mechanism proposed previously. Say I trust all users\n> who can SET ROLE to me and/or who inherit my privileges. Additionally\n> I can decide to trust users who do neither of those things by some\n> sort of explicit declaration. If I don't trust a user then if I do\n> anything that would cause code supplied by that user to get executed,\n> it just errors out:\n> \n> ERROR: role \"rhaas\" should not execute arbitrary code provided by\n> role \"jconway\"\n> HINT: If this should be allowed, use the TRUST command to permit it.\n\n+1, though I'm not sure we need an extensive trust mechanism beyond\nwhat we already have with the SET ROLE privilege.\n\n> And\n> we probably also still need to find ways to control search_path in a\n> lot more widely than we do today. Otherwise, even if stuff is\n> technically secure, it may just not work.\n\n+1.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 31 Jul 2023 14:15:49 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Mon, 2023-07-31 at 12:53 -0400, Robert Haas wrote:\n> I agree. I think there are actually two interrelated problems here.\n> \n> One is that virtually all code needs to run with the originally\n> intended search_path rather than some search_path chosen at another\n> time and maybe by a different user. If not, it's going to break, or\n> compromise security, depending on the situation. The other is that\n> running arbitrary code written by somebody else as yourself is\n> basically instant death, from a security perspective.\n\nGood framing.\n\nThe search_path is a particularly nasty problem in our system because\nit means that users can't even trust the code that they write\nthemselves! A function author has no way to know how their own function\nwill behave under a different search_path.\n\n> It's a little hard to imagine a world in which these problems don't\n> exist at all, but it somehow feels like the design of the system\n> pushes you toward doing this stuff incorrectly rather than doing it\n> correctly. For instance, you can imagine a system where when you run\n> CREATE OR REPLACE FUNCTION, the prevailing search_path is captured\n> and\n> automatically included in proconfig.\n\nCapturing the environment is not ideal either, in my opinion. It makes\nit easy to carelessly depend on a schema that others might not have\nUSAGE privileges on, which would then create a runtime problem for\nother callers. Also, I don't think we could just depend on the raw\nsearch_path, we'd need to do some processing for $user, and there are\nprobably a few other annoyances.\n\nIt's one possibility and we don't have a lot of great options, so I\ndon't want to rule it out though. If nothing else it could be a\ntransition path to something better.\n\n\n> But you can maybe imagine a system where\n> all code associated with a table is run as the table owner in all\n> cases, regardless of SECURITY INVOKER/DEFINER, which I think would at\n> least close some holes.\n> \n> The difficulty is that it's a bit hard to imagine making these kinds\n> of definitional changes now, because they'd probably be breaking\n> changes for pretty significant numbers of users.\n\nI believe we can get close to a good model with minimal breakage. And\nwhen we make the problem small enough I believe other solutions will\nemerge. We will probably have to hedge with some compatibility GUCs.\n\n> But on the other\n> hand, if we don't start thinking about systemic changes here, it\n> feels\n> like we're just playing whack-a-mole.\n\nExactly. If we can agree on where we're going then I think we can get\nthere.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 31 Jul 2023 15:10:32 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Mon, 2023-07-31 at 13:17 -0400, Joe Conway wrote:\n> But the analysis of the issue needs to go one step further. Even if\n> the \n> search_path does not change from the originally intended one, a newly\n> created function can shadow the intended one based on argument\n> coercion \n> rules.\n\nThere are quite a few issues going down this path:\n\n* The set of objects in each schema can change. Argument coercion is a\nparticularly subtle one, but there are other ways that it could find\nthe wrong object. The temp namespace also has some subtle issues.\n\n* Schema USAGE privileges may vary over time or from caller to caller,\naffecting which items in the search path are searched at all. The same\ngoes if theres an object access hook in place.\n\n* $user should be resolved to a specific schema (or perhaps not in some\ncases?)\n\n* There are other GUCs and environment that can affect function\nbehavior. Is it worth trying to lock those down?\n\nI agree that each of these is some potential problem, but these are\nmuch smaller problems than allowing the caller to have complete control\nover the search_path.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 31 Jul 2023 15:22:07 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Mon, Jul 31, 2023 at 5:15 PM Jeff Davis <[email protected]> wrote:\n> > ERROR: role \"rhaas\" should not execute arbitrary code provided by\n> > role \"jconway\"\n> > HINT: If this should be allowed, use the TRUST command to permit it.\n>\n> +1, though I'm not sure we need an extensive trust mechanism beyond\n> what we already have with the SET ROLE privilege.\n\nFWIW, I think it would be a good idea. It might not be absolutely\nmandatory but I think it would be smart.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 1 Aug 2023 10:51:10 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Mon, Jul 31, 2023 at 6:10 PM Jeff Davis <[email protected]> wrote:\n> Capturing the environment is not ideal either, in my opinion. It makes\n> it easy to carelessly depend on a schema that others might not have\n> USAGE privileges on, which would then create a runtime problem for\n> other callers. Also, I don't think we could just depend on the raw\n> search_path, we'd need to do some processing for $user, and there are\n> probably a few other annoyances.\n>\n> It's one possibility and we don't have a lot of great options, so I\n> don't want to rule it out though. If nothing else it could be a\n> transition path to something better.\n\nHere is my thought about this. Right now, we basically do one of two\nthings. In some cases, we parse statements when they're submitted, and\nthen store the resulting node trees. In such cases, references are\nfixed: the statements will always refer to the objects to which they\nreferred previously. In functions and procedures, except for the new\nBEGIN ATOMIC stuff, we just store the statements as a string and they\nget parsed at execution time. Then, the problem arises of statements\nbeing possibly parsed in an environment that differs from the original\none. It can differ either by search_path being different so that we\nlook in different schemas, or, as you point out here, if the contents\nof the schemas themselves have been modified.\n\nI think that a lot of people would like it if we moved more in the\ndirection of parsing statements at object definition time. Possibly\nbecause EDB deals with a lot of people coming from Oracle, I've heard\na lot of complaints about the PG behavior. However, there are some\nfairly significant problems with that idea. First, it would break some\nuse cases, such as creating a temporary table and then running DML\ncommands on it, or more generally any use case where a function or\nprocedure might need to reference objects that don't exist at time of\ndefinition. Second, while we have clear separation of parsing and\nexecution for queries, the same is not true for DDL commands; it's not\nthe case, I believe, that you can parse an arbitrary DDL command such\nthat all object references are resolved, and then later execute it.\nWe'd need to change a bunch of code to get there. Third, we'd have to\ndeal with dependency loops: right now, because functions and\nprocedures don't parse their bodies at definition time, they also\ndon't depend on the objects that they are going to end up accessing,\nwhich means that a function or procedure can be restored by pg_dump\nwithout worrying about whether those objects exist yet. That would\nhave to change, and that would mean creating dependency loops for\npg_dump, which we'd have to then find a way to break. I'm not trying\nto say that any of these problems are intractable, but I do think\nchanging stuff like this would be quite a bit of work -- and that's\nassuming the user impact was judged to be acceptable, which I'm not at\nall sure that it would be. We'd certainly need to provide some\nworkaround for people who want to do stuff like create and use\ntemporary tables inside of a function or procedure.\n\nNow, if we don't go in the direction of resolving everything at parse\ntime, then I think capturing search_path is probably the next best\nthing, or at least the next best thing that I've thought up so far. 
It\ndoesn't hold constant the meaning of the code to the same degree that\nparsing at definition time would do, but it gets us closer to that\nthan the status quo. Crucially, if the user is using a secure\nsearch_path, then any changes to the meaning of the code that captures\nthat search_path will have to be made by that user or someone with a\nsuperset of their privileges, which is a lot less serious than what\nyou get when there's no search_path setting at all, where the *caller*\ncan change the meaning of the called code. That is not, however, to\nsay that this idea is really good enough. To be honest, I think it's a\nbit of a kludge, and dropping a kludge on top of our entire user base\nand maybe also breaking a lot of things is not particularly where I\nwant to be. I just don't have an idea I like better at the moment.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 1 Aug 2023 13:41:42 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Tue, Aug 1, 2023 at 10:42 AM Robert Haas <[email protected]> wrote:\n\n> Now, if we don't go in the direction of resolving everything at parse\n> time, then I think capturing search_path is probably the next best\n> thing, or at least the next best thing that I've thought up so far.\n\n\nI'd much rather strongly encourage the user to write a safe and\nself-sufficient function definition. Specifically, if there is no\nsearch_path attached to the function then the search path that will be in\nplace will be temp + pg_catalog only. Though I wonder whether it would be\nadvantageous to allow a function to have a temporary schema separate from\nthe session-scoped one...\n\nThey can use ALTER FUNCTION and the existing \"FROM CURRENT\" specification\nto get back to current behavior if desired.\n\nGoing further, I could envision an \"explicit\" mode that both performs a\nparse-time check for object existence and optionally reports an error if\nthe lookup happened via an inexact match - forcing lots more type casts to\noccur (we'd probably need to invent something to say \"must match the\nanyelement function signature\") but ensuring at parse time you've correctly\nidentified everything you intend to be using. Sure, the meanings of those\nthings could change but the surface is much smaller than plugging a new\nfunction that matches earlier in the lookup resolution process.\n\nDavid J.\n\nOn Tue, Aug 1, 2023 at 10:42 AM Robert Haas <[email protected]> wrote:Now, if we don't go in the direction of resolving everything at parse\ntime, then I think capturing search_path is probably the next best\nthing, or at least the next best thing that I've thought up so far.I'd much rather strongly encourage the user to write a safe and self-sufficient function definition. Specifically, if there is no search_path attached to the function then the search path that will be in place will be temp + pg_catalog only. Though I wonder whether it would be advantageous to allow a function to have a temporary schema separate from the session-scoped one...They can use ALTER FUNCTION and the existing \"FROM CURRENT\" specification to get back to current behavior if desired.Going further, I could envision an \"explicit\" mode that both performs a parse-time check for object existence and optionally reports an error if the lookup happened via an inexact match - forcing lots more type casts to occur (we'd probably need to invent something to say \"must match the anyelement function signature\") but ensuring at parse time you've correctly identified everything you intend to be using. Sure, the meanings of those things could change but the surface is much smaller than plugging a new function that matches earlier in the lookup resolution process.David J.",
"msg_date": "Tue, 1 Aug 2023 11:16:19 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Tue, 2023-08-01 at 11:16 -0700, David G. Johnston wrote:\n> They can use ALTER FUNCTION and the existing \"FROM CURRENT\"\n> specification to get back to current behavior if desired.\n\nThe current behavior is that the search_path comes from the environment\neach execution. FROM CURRENT saves the search_path at definition time\nand uses that each execution.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 01 Aug 2023 14:38:30 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Tue, Aug 1, 2023 at 2:38 PM Jeff Davis <[email protected]> wrote:\n\n> On Tue, 2023-08-01 at 11:16 -0700, David G. Johnston wrote:\n> > They can use ALTER FUNCTION and the existing \"FROM CURRENT\"\n> > specification to get back to current behavior if desired.\n>\n> The current behavior is that the search_path comes from the environment\n> each execution. FROM CURRENT saves the search_path at definition time\n> and uses that each execution.\n>\n>\nRight...I apparently misread \"create\" as \"the\" in \"when CREATE FUNCTION is\nexecuted\".\n\nThe overall point stands, it just requires defining a similar \"FROM\nSESSION\" to allow for explicitly specifying the current default (missing)\nbehavior.\n\nDavid J.\n\nOn Tue, Aug 1, 2023 at 2:38 PM Jeff Davis <[email protected]> wrote:On Tue, 2023-08-01 at 11:16 -0700, David G. Johnston wrote:\n> They can use ALTER FUNCTION and the existing \"FROM CURRENT\"\n> specification to get back to current behavior if desired.\n\nThe current behavior is that the search_path comes from the environment\neach execution. FROM CURRENT saves the search_path at definition time\nand uses that each execution.Right...I apparently misread \"create\" as \"the\" in \"when CREATE FUNCTION is executed\".The overall point stands, it just requires defining a similar \"FROM SESSION\" to allow for explicitly specifying the current default (missing) behavior.David J.",
"msg_date": "Tue, 1 Aug 2023 14:47:47 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Tue, 2023-08-01 at 13:41 -0400, Robert Haas wrote:\n> In functions and procedures, except for the new\n> BEGIN ATOMIC stuff, we just store the statements as a string and they\n> get parsed at execution time.\n\n...\n\n> I think that a lot of people would like it if we moved more in the\n> direction of parsing statements at object definition time.\n\nDo you mean that we'd introduce some BEGIN ATOMIC version of plpgsql\n(and other trusted languages)?\n\n> However, there are some\n> fairly significant problems with that idea.\n\nTo satisfy intended use cases of things like default expressions, CHECK\nconstraints, index expressions, etc., there is no need to call\nfunctions that would be subject to the problems you list.\n\nOne problem is that functions are too general a concept -- we have no\nidea whether the user is trying to do something simple or trying to do\nsomething \"interesting\".\n\n> Now, if we don't go in the direction of resolving everything at parse\n> time,\n\nIt would be useful to pursue this approach, but I don't think it will\nbe enough. We still need to solve our search_path problems.\n\n> then I think capturing search_path is probably the next best\n> thing,\n\n+0.5.\n\n> To be honest, I think it's\n> a\n> bit of a kludge, and dropping a kludge on top of our entire user base\n> and maybe also breaking a lot of things is not particularly where I\n> want to be. I just don't have an idea I like better at the moment.\n\nWe will also be fixing things for a lot of users who just haven't run\ninto a problem *yet* (or perhaps did, and just don't know it).\n\nRegards,\n\tJeff Davis\n\n\n\n\n\n",
"msg_date": "Tue, 01 Aug 2023 15:00:16 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
},
{
"msg_contents": "On Tue, 2023-08-01 at 14:47 -0700, David G. Johnston wrote:\n> The overall point stands, it just requires defining a similar \"FROM\n> SESSION\" to allow for explicitly specifying the current default\n> (missing) behavior.\n\nThat sounds useful as a way to future-proof function definitions that\nintend to use the session search_path.\n\nIt seems like we're moving in the direction of search_path defaulting\nto FROM CURRENT (probably with a long road of compatibility GUCs to\nminimize breakage...) and everything else defaulting to FROM SESSION\n(as before)?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 01 Aug 2023 16:40:45 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix search_path to a safe value during maintenance\n operations."
}
] |
[
{
"msg_contents": "While working on 2dcd157, I noticed cfbot failures for Windows due to tests\nwith commands that had non-options specified before options. For example,\n\"createuser myrole -a myadmin\" specified a non-option (myrole) before the\noption/argument pair (-a admin). To get the tests passing for Windows,\nnon-options must be placed at the end (e.g., \"createuser -a myadmin\nmyrole\"). This same problem was encountered while working on 08951a7 [0],\nbut it was again fortunately caught with cfbot. Others have not been so\nlucky [1] [2] [3].\n\nThe reason for this discrepancy is because Windows uses the in-tree\nimplementation of getopt_long(), which differs from the other\nimplementations I've found in that it doesn't reorder non-options to the\nend of argv by default. Instead, it returns -1 as soon as the first\nnon-option is found, even if there are other options listed afterwards. By\nmoving non-options to the end of argv, getopt_long() can parse all\nspecified options and return -1 when only non-options remain. The\nimplementations I reviewed all reorder argv unless the POSIXLY_CORRECT\nenvironment variable is set or the \"optstring\" argument begins with '+'.\n\nThe best reasons I can think of to keep the current behavior are 1)\nreordering involves writing to the original argv array, which could be\nrisky (as noted by Tom [4]) and 2) any systems with a getopt_long()\nimplementation that doesn't reorder non-options could begin failing tests\nthat take advantage of this behavior. However, this quirk in the in-tree\ngetopt_long() is periodically missed by hackers, the only systems I'm aware\nof that use it are Windows and AIX, and every other implementation of\ngetopt_long() I saw reorders non-options by default. Furthermore, C99\nomits const decorations in main()'s signature, so modifying argv might not\nbe too scary.\n\nThus, I propose introducing this non-option reordering behavior but\nallowing it to be disabled via the POSIXLY_CORRECT environment variable.\nThe attached patch is my first attempt at implementing this proposal. I\ndon't think we need to bother with handling '+' at the beginning of\n\"optstring\" since it seems unlikely to be used in PostgreSQL, but it would\nprobably be easy enough to add if folks want it.\n\nI briefly looked at getopt() and concluded that we should probably retain\nits POSIX-compliant behavior for non-options, as reordering support seems\nmuch less universal than with getopt_long(). AFAICT all client utilities\nuse getopt_long(), anyway.\n\nThoughts?\n\n[0] https://postgr.es/m/20220525.110752.305692011781436338.horikyota.ntt%40gmail.com\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=869aa40\n[2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=ffd3980\n[3] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=d9ddc50\n[4] https://postgr.es/m/20539.1237314382%40sss.pgh.pa.us\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 9 Jun 2023 16:22:57 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "At Fri, 9 Jun 2023 16:22:57 -0700, Nathan Bossart <[email protected]> wrote in \n> While working on 2dcd157, I noticed cfbot failures for Windows due to tests\n> with commands that had non-options specified before options. For example,\n> \"createuser myrole -a myadmin\" specified a non-option (myrole) before the\n> option/argument pair (-a admin). To get the tests passing for Windows,\n> non-options must be placed at the end (e.g., \"createuser -a myadmin\n> myrole\"). This same problem was encountered while working on 08951a7 [0],\n> but it was again fortunately caught with cfbot. Others have not been so\n> lucky [1] [2] [3].\n\nWhile I don't see it as reason to change the behavior, I do believe\nthe change could be beneficial from a user's perspective.\n\n> The reason for this discrepancy is because Windows uses the in-tree\n> implementation of getopt_long(), which differs from the other\n> implementations I've found in that it doesn't reorder non-options to the\n> end of argv by default. Instead, it returns -1 as soon as the first\n> non-option is found, even if there are other options listed afterwards. By\n> moving non-options to the end of argv, getopt_long() can parse all\n> specified options and return -1 when only non-options remain. The\n> implementations I reviewed all reorder argv unless the POSIXLY_CORRECT\n> environment variable is set or the \"optstring\" argument begins with '+'.\n>\n> The best reasons I can think of to keep the current behavior are 1)\n> reordering involves writing to the original argv array, which could be\n> risky (as noted by Tom [4]) and 2) any systems with a getopt_long()\n> implementation that doesn't reorder non-options could begin failing tests\n> that take advantage of this behavior. However, this quirk in the in-tree\n> getopt_long() is periodically missed by hackers, the only systems I'm aware\n> of that use it are Windows and AIX, and every other implementation of\n> getopt_long() I saw reorders non-options by default. Furthermore, C99\n> omits const decorations in main()'s signature, so modifying argv might not\n> be too scary.\n\nPOSIXLY_CORRECT appears to be intended for debugging or feature\nvalidation. If we know we can always rearrange argv on those\nplatforms, we don't need it. I would suggest that we turn on the new\nfeature at the compile time on those platforms where we know this\nrearrangement works, instead of switching at runtime.\n\n> Thus, I propose introducing this non-option reordering behavior but\n> allowing it to be disabled via the POSIXLY_CORRECT environment variable.\n> The attached patch is my first attempt at implementing this proposal. I\n> don't think we need to bother with handling '+' at the beginning of\n> \"optstring\" since it seems unlikely to be used in PostgreSQL, but it would\n> probably be easy enough to add if folks want it.\n> \n> I briefly looked at getopt() and concluded that we should probably retain\n> its POSIX-compliant behavior for non-options, as reordering support seems\n> much less universal than with getopt_long(). AFAICT all client utilities\n> use getopt_long(), anyway.\n> \n> Thoughts?\n\nAs far as I can see, getopt_long on Rocky9 does *not* rearrange argv\nuntil it reaches the end of the array. But it won't matter much.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 13 Jun 2023 12:00:01 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
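For readers who have not watched this behavior in action, here is a small stand-alone demo (not part of any patch in this thread) of the permutation that a GNU-style getopt_long(), such as glibc's, performs: options that appear after non-options are still parsed, and the non-options end up at the end of argv. Setting POSIXLY_CORRECT or starting optstring with '+' disables the permutation, as noted in the proposal quoted above. The option names below are purely illustrative.

    #include <stdio.h>
    #include <getopt.h>

    int
    main(int argc, char *argv[])
    {
        int         c;
        static struct option long_options[] = {
            {"admin", required_argument, NULL, 'a'},
            {NULL, 0, NULL, 0}
        };

        /* glibc permutes argv, so "./demo myrole -a myadmin" still parses -a */
        while ((c = getopt_long(argc, argv, "a:", long_options, NULL)) != -1)
        {
            if (c == 'a')
                printf("option -a = %s\n", optarg);
        }

        /* by now any non-options have been moved to the end of argv */
        for (int i = optind; i < argc; i++)
            printf("non-option: %s\n", argv[i]);

        return 0;
    }

With glibc this prints the -a argument followed by "myrole" as the sole non-option; the in-tree implementation, as described above, would instead stop parsing at the first non-option.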
{
"msg_contents": "On Tue, Jun 13, 2023 at 12:00:01PM +0900, Kyotaro Horiguchi wrote:\n> POSIXLY_CORRECT appears to be intended for debugging or feature\n> validation. If we know we can always rearrange argv on those\n> platforms, we don't need it. I would suggest that we turn on the new\n> feature at the compile time on those platforms where we know this\n> rearrangement works, instead of switching at runtime.\n\nI'd be okay with leaving it out wherever possible. I'm curious whether any\nsupported systems do not allow this.\n\n> As far as I can see, getopt_long on Rocky9 does *not* rearrange argv\n> until it reaches the end of the array. But it won't matter much.\n\nDo you mean that it rearranges argv once all the options have been\nreturned, or that it doesn't rearrange argv at all?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 12 Jun 2023 22:13:43 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "At Mon, 12 Jun 2023 22:13:43 -0700, Nathan Bossart <[email protected]> wrote in \n> On Tue, Jun 13, 2023 at 12:00:01PM +0900, Kyotaro Horiguchi wrote:\n> > POSIXLY_CORRECT appears to be intended for debugging or feature\n> > validation. If we know we can always rearrange argv on those\n> > platforms, we don't need it. I would suggest that we turn on the new\n> > feature at the compile time on those platforms where we know this\n> > rearrangement works, instead of switching at runtime.\n> \n> I'd be okay with leaving it out wherever possible. I'm curious whether any\n> supported systems do not allow this.\n\nHmm. from the initial mail, I got the impression that AIX and Windows\nallow this, so I thought that we can do that for them. While there\ncould be other platforms that allow it, perhaps we don't need to go as\nfar as extending this until we come across another platform that does.\n\n> > As far as I can see, getopt_long on Rocky9 does *not* rearrange argv\n> > until it reaches the end of the array. But it won't matter much.\n> \n> Do you mean that it rearranges argv once all the options have been\n> returned, or that it doesn't rearrange argv at all?\n\nI meant the former. argv remains unaltered until getopt_long returns\n-1. Once it does, non-optional arguments are being collected at the\nend of argv. (But in my opinion, that behavior isn't very significant\nin this context..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 13 Jun 2023 16:02:01 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 04:02:01PM +0900, Kyotaro Horiguchi wrote:\n> Hmm. from the initial mail, I got the impression that AIX and Windows\n> allow this, so I thought that we can do that for them. While there\n> could be other platforms that allow it, perhaps we don't need to go as\n> far as extending this until we come across another platform that does.\n\nWindows seems to allow rearranging argv, based upon cfbot's results. I do\nnot know about AIX. In any case, C99 explicitly mentions that argv should\nbe modifiable.\n\n>> > As far as I can see, getopt_long on Rocky9 does *not* rearrange argv\n>> > until it reaches the end of the array. But it won't matter much.\n>> \n>> Do you mean that it rearranges argv once all the options have been\n>> returned, or that it doesn't rearrange argv at all?\n> \n> I meant the former. argv remains unaltered until getopt_long returns\n> -1. Once it does, non-optional arguments are being collected at the\n> end of argv. (But in my opinion, that behavior isn't very significant\n> in this context..)\n\nGot it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 13 Jun 2023 15:36:57 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 03:36:57PM -0700, Nathan Bossart wrote:\n> Windows seems to allow rearranging argv, based upon cfbot's results. I do\n> not know about AIX. In any case, C99 explicitly mentions that argv should\n> be modifiable.\n\nFew people have AIX machines around these days, but looking around it\nseems like the answer to this question would be no:\nhttps://github.com/nodejs/node/pull/10633\n\nNoah, do you have an idea?\n\n> Got it.\n\nMaking the internal implementation of getopt_long more flexible would\nbe really nice in the long-term. This is something that people have\nstepped on for many years, like ffd3980.\n\n(TBH, I think that there is little value in spending resources on AIX\nthese days. For one, few have an access to it, and getopt is not the\nonly tweak in the tree related to it. On top of that, C99 is required\nsince v12.)\n--\nMichael",
"msg_date": "Wed, 14 Jun 2023 08:52:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 08:52:27AM +0900, Michael Paquier wrote:\n> On Tue, Jun 13, 2023 at 03:36:57PM -0700, Nathan Bossart wrote:\n>> Windows seems to allow rearranging argv, based upon cfbot's results. I do\n>> not know about AIX. In any case, C99 explicitly mentions that argv should\n>> be modifiable.\n> \n> Few people have AIX machines around these days, but looking around it\n> seems like the answer to this question would be no:\n> https://github.com/nodejs/node/pull/10633\n> \n> Noah, do you have an idea?\n\nForgot to add Noah in CC about this part.\n--\nMichael",
"msg_date": "Wed, 14 Jun 2023 09:03:22 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 09:03:22AM +0900, Michael Paquier wrote:\n> On Wed, Jun 14, 2023 at 08:52:27AM +0900, Michael Paquier wrote:\n> > On Tue, Jun 13, 2023 at 03:36:57PM -0700, Nathan Bossart wrote:\n> >> Windows seems to allow rearranging argv, based upon cfbot's results. I do\n> >> not know about AIX. In any case, C99 explicitly mentions that argv should\n> >> be modifiable.\n> > \n> > Few people have AIX machines around these days, but looking around it\n> > seems like the answer to this question would be no:\n> > https://github.com/nodejs/node/pull/10633\n> > \n> > Noah, do you have an idea?\n\nNo, I don't have specific knowledge about mutating argv on AIX. PostgreSQL\nincludes this, which makes some statement about it working:\n\n#elif ... || defined(_AIX) || ...\n#define PS_USE_CLOBBER_ARGV\n\nIf you have a test program to be run, I can run it on AIX.\n\n\n",
"msg_date": "Tue, 13 Jun 2023 17:17:37 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 05:17:37PM -0700, Noah Misch wrote:\n> If you have a test program to be run, I can run it on AIX.\n\nThanks. The patch above [0] adjusts 040_createuser.pl to test modifying\nargv, so that's one test program. And here's a few lines for reversing\nargv:\n\n\t#include <stdio.h>\n\n\tint\n\tmain(int argc, char *argv[])\n\t{\n\t\tfor (int i = 0; i < argc / 2; i++)\n\t\t{\n\t\t\tchar\t *tmp = argv[i];\n\n\t\t\targv[i] = argv[argc - i - 1];\n\t\t\targv[argc - i - 1] = tmp;\n\t\t}\n\n\t\tfor (int i = 0; i < argc; i++)\n\t\t\tprintf(\"%s \", argv[i]);\n\t\tprintf(\"\\n\");\n\t}\n\n[0] https://postgr.es/m/attachment/147420/v1-0001-Teach-in-tree-getopt_long-to-move-non-options-to-.patch\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 14 Jun 2023 14:28:16 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 02:28:16PM -0700, Nathan Bossart wrote:\n> On Tue, Jun 13, 2023 at 05:17:37PM -0700, Noah Misch wrote:\n> > If you have a test program to be run, I can run it on AIX.\n> \n> Thanks. The patch above [0] adjusts 040_createuser.pl to test modifying\n> argv, so that's one test program. And here's a few lines for reversing\n> argv:\n> \n> \t#include <stdio.h>\n> \n> \tint\n> \tmain(int argc, char *argv[])\n> \t{\n> \t\tfor (int i = 0; i < argc / 2; i++)\n> \t\t{\n> \t\t\tchar\t *tmp = argv[i];\n> \n> \t\t\targv[i] = argv[argc - i - 1];\n> \t\t\targv[argc - i - 1] = tmp;\n> \t\t}\n> \n> \t\tfor (int i = 0; i < argc; i++)\n> \t\t\tprintf(\"%s \", argv[i]);\n> \t\tprintf(\"\\n\");\n> \t}\n\nHere's some output from this program (on AIX 7.1, same output when compiled\n32-bit or 64-bit):\n\n$ ./a.out a b c d e f\nf e d c b a ./a.out\n\nInteresting discussion here, too:\nhttps://github.com/libuv/libuv/pull/1187\n\n\n",
"msg_date": "Wed, 14 Jun 2023 15:11:54 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 03:11:54PM -0700, Noah Misch wrote:\n> Here's some output from this program (on AIX 7.1, same output when compiled\n> 32-bit or 64-bit):\n> \n> $ ./a.out a b c d e f\n> f e d c b a ./a.out\n\nThanks again.\n\n> Interesting discussion here, too:\n> https://github.com/libuv/libuv/pull/1187\n\nHm. IIUC modifying the argv pointers on AIX will modify the process title,\nwhich could cause 'ps' to temporarily show duplicate/missing arguments\nduring option parsing. That doesn't seem too terrible, but if pointer\nassignments aren't atomic, maybe 'ps' could be sent off to another part of\nmemory, which does seem terrible.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 14 Jun 2023 15:46:08 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "At Wed, 14 Jun 2023 15:46:08 -0700, Nathan Bossart <[email protected]> wrote in \n> On Wed, Jun 14, 2023 at 03:11:54PM -0700, Noah Misch wrote:\n> > Here's some output from this program (on AIX 7.1, same output when compiled\n> > 32-bit or 64-bit):\n> > \n> > $ ./a.out a b c d e f\n> > f e d c b a ./a.out\n> \n> Thanks again.\n> \n> > Interesting discussion here, too:\n> > https://github.com/libuv/libuv/pull/1187\n> \n> Hm. IIUC modifying the argv pointers on AIX will modify the process title,\n> which could cause 'ps' to temporarily show duplicate/missing arguments\n> during option parsing. That doesn't seem too terrible, but if pointer\n> assignments aren't atomic, maybe 'ps' could be sent off to another part of\n> memory, which does seem terrible.\n\nHmm, the discussion seems to be based on the assumption that argv[0]\ncan be safely redirected to a different memory location. If that's the\ncase, we can prpbably rearrange the array, even if there's a small\nwindow where ps might display a confusing command line, right?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 15 Jun 2023 14:30:34 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 02:30:34PM +0900, Kyotaro Horiguchi wrote:\n> At Wed, 14 Jun 2023 15:46:08 -0700, Nathan Bossart <[email protected]> wrote in \n>> Hm. IIUC modifying the argv pointers on AIX will modify the process title,\n>> which could cause 'ps' to temporarily show duplicate/missing arguments\n>> during option parsing. That doesn't seem too terrible, but if pointer\n>> assignments aren't atomic, maybe 'ps' could be sent off to another part of\n>> memory, which does seem terrible.\n> \n> Hmm, the discussion seems to be based on the assumption that argv[0]\n> can be safely redirected to a different memory location. If that's the\n> case, we can prpbably rearrange the array, even if there's a small\n> window where ps might display a confusing command line, right?\n\nIf that's the extent of the breakage, then it seems alright to me. I've\nattached a new version of the patch that omits the POSIXLY_CORRECT stuff.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 15 Jun 2023 17:09:59 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 05:09:59PM -0700, Nathan Bossart wrote:\n> On Thu, Jun 15, 2023 at 02:30:34PM +0900, Kyotaro Horiguchi wrote:\n>> Hmm, the discussion seems to be based on the assumption that argv[0]\n>> can be safely redirected to a different memory location. If that's the\n>> case, we can prpbably rearrange the array, even if there's a small\n>> window where ps might display a confusing command line, right?\n> \n> If that's the extent of the breakage, then it seems alright to me.\n\nOkay by me to live with this burden.\n\n> I've attached a new version of the patch that omits the\n> POSIXLY_CORRECT stuff.\n\nThis looks OK at quick glance, though you may want to document at the\ntop of getopt_long() the reordering business with the non-options?\n--\nMichael",
"msg_date": "Fri, 16 Jun 2023 10:30:09 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Fri, Jun 16, 2023 at 10:30:09AM +0900, Michael Paquier wrote:\n> On Thu, Jun 15, 2023 at 05:09:59PM -0700, Nathan Bossart wrote:\n>> I've attached a new version of the patch that omits the\n>> POSIXLY_CORRECT stuff.\n> \n> This looks OK at quick glance, though you may want to document at the\n> top of getopt_long() the reordering business with the non-options?\n\nI added a comment to this effect in v3. I also noticed that '-' wasn't\nproperly handled as a non-option, so I've tried to fix that as well.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 15 Jun 2023 21:58:28 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "At Thu, 15 Jun 2023 21:58:28 -0700, Nathan Bossart <[email protected]> wrote in \n> On Fri, Jun 16, 2023 at 10:30:09AM +0900, Michael Paquier wrote:\n> > On Thu, Jun 15, 2023 at 05:09:59PM -0700, Nathan Bossart wrote:\n> >> I've attached a new version of the patch that omits the\n> >> POSIXLY_CORRECT stuff.\n> > \n> > This looks OK at quick glance, though you may want to document at the\n> > top of getopt_long() the reordering business with the non-options?\n> \n> I added a comment to this effect in v3. I also noticed that '-' wasn't\n> properly handled as a non-option, so I've tried to fix that as well.\n\n(Honestly, the rearrangement code looks somewhat tricky to grasp..)\n\nIt doesn't work properly if '--' is involved.\n\nFor example, consider the following options (even though they don't\nwork for the command).\n\npsql -t -T hoho -a hoge -- -1 hage hige huge\n\nAfter the function returns -1, the argv array looks like this. The\n\"[..]\" indicates the position of optind.\n\npsql -t -T hoho -a -- [-1] hage hige huge hoge\n\nThis is clearly incorrect since \"hoge\" gets moved to the end. The\nrearrangement logic needs to take into account the '--'. For your\ninformation, the glibc version yields the following result for the\nsame options.\n\npsql -t -T hoho -a -- [hoge] -1 hage hige huge\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 16 Jun 2023 16:51:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Fri, Jun 16, 2023 at 04:51:38PM +0900, Kyotaro Horiguchi wrote:\n> (Honestly, the rearrangement code looks somewhat tricky to grasp..)\n\nYeah, I think there's still some room for improvement here.\n\n> It doesn't work properly if '--' is involved.\n> \n> For example, consider the following options (even though they don't\n> work for the command).\n> \n> psql -t -T hoho -a hoge -- -1 hage hige huge\n> \n> After the function returns -1, the argv array looks like this. The\n> \"[..]\" indicates the position of optind.\n> \n> psql -t -T hoho -a -- [-1] hage hige huge hoge\n> \n> This is clearly incorrect since \"hoge\" gets moved to the end. The\n> rearrangement logic needs to take into account the '--'. For your\n> information, the glibc version yields the following result for the\n> same options.\n> \n> psql -t -T hoho -a -- [hoge] -1 hage hige huge\n\nAh, so it effectively retains the non-option ordering, even if there is a\n'--'. I think this behavior is worth keeping. I attempted to fix this in\nthe attached patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 16 Jun 2023 11:28:47 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "At Fri, 16 Jun 2023 11:28:47 -0700, Nathan Bossart <[email protected]> wrote in \n> On Fri, Jun 16, 2023 at 04:51:38PM +0900, Kyotaro Horiguchi wrote:\n> > (Honestly, the rearrangement code looks somewhat tricky to grasp..)\n> \n> Yeah, I think there's still some room for improvement here.\n\nThe argv elements get shuffled around many times with the\npatch. However, I couldn't find a way to decrease the count without\nresorting to a forward scan. So I've concluded the current approach\nis them most effeicient, considering the complexity.\n\n> Ah, so it effectively retains the non-option ordering, even if there is a\n> '--'. I think this behavior is worth keeping. I attempted to fix this in\n> the attached patch.\n\nI tried some patterns with the patch and it generates the same results\nwith the glibc version.\n\nThe TAP test looks fine and it catches the change.\n\nEverything looks fine to me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 20 Jun 2023 14:12:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 02:12:44PM +0900, Kyotaro Horiguchi wrote:\n> The argv elements get shuffled around many times with the\n> patch. However, I couldn't find a way to decrease the count without\n> resorting to a forward scan. So I've concluded the current approach\n> is them most effeicient, considering the complexity.\n\nYeah, I'm not sure it's worth doing anything more sophisticated.\n\n> I tried some patterns with the patch and it generates the same results\n> with the glibc version.\n> \n> The TAP test looks fine and it catches the change.\n> \n> Everything looks fine to me.\n\nThanks for reviewing.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 21 Jun 2023 10:56:24 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "I spent some time tidying up the patch and adding a more detailed commit\nmessage.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 7 Jul 2023 20:52:24 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "At Fri, 7 Jul 2023 20:52:24 -0700, Nathan Bossart <[email protected]> wrote in \n> I spent some time tidying up the patch and adding a more detailed commit\n> message.\n\nThe commit message and the change to TAP script looks good.\n\n\nTwo conditions are to be reversed and one of them look somewhat\nunintuitive to me.\n\n+\t\t\tif (!force_nonopt && place[0] == '-' && place[1])\n+\t\t\t{\n+\t\t\t\tif (place[1] != '-' || place[2])\n+\t\t\t\t\tbreak;\n+\n+\t\t\t\toptind++;\n+\t\t\t\tforce_nonopt = true;\n+\t\t\t\tcontinue;\n+\t\t\t}\n\nThe first if looks good to me, but the second if is a bit hard to get the meaning at a glance. \"!(place[1] == '-' && place[2] == 0)\" is easier to read *to me*. Or I'm fine with the following structure here.\n\n> if (!force_nonopt ... )\n> {\n> if (place[1] == '-' && place[2] == 0)\n> {\n> optind+;\n> force_nonopt = true;\n> continue;\n> }\n> break;\n> }\n\n(To be honest, I see the goto looked clear than for(;;)..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 10 Jul 2023 16:57:11 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 04:57:11PM +0900, Kyotaro Horiguchi wrote:\n> +\t\t\tif (!force_nonopt && place[0] == '-' && place[1])\n> +\t\t\t{\n> +\t\t\t\tif (place[1] != '-' || place[2])\n> +\t\t\t\t\tbreak;\n> +\n> +\t\t\t\toptind++;\n> +\t\t\t\tforce_nonopt = true;\n> +\t\t\t\tcontinue;\n> +\t\t\t}\n> \n> The first if looks good to me, but the second if is a bit hard to get the meaning at a glance. \"!(place[1] == '-' && place[2] == 0)\" is easier to read *to me*. Or I'm fine with the following structure here.\n\nI'd readily admit that it's tough to read these lines. I briefly tried to\nmake use of strcmp() to improve readability, but I wasn't having much luck.\nI'll give it another try.\n\n>> if (!force_nonopt ... )\n>> {\n>> if (place[1] == '-' && place[2] == 0)\n>> {\n>> optind+;\n>> force_nonopt = true;\n>> continue;\n>> }\n>> break;\n>> }\n> \n> (To be honest, I see the goto looked clear than for(;;)..)\n\nOkay. I don't mind adding it back if it improves readability.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 10 Jul 2023 11:12:37 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "Here's a new version of the patch with the latest feedback addressed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 10 Jul 2023 13:06:58 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "At Mon, 10 Jul 2023 13:06:58 -0700, Nathan Bossart <[email protected]> wrote in \n> Here's a new version of the patch with the latest feedback addressed.\n\nThanks!\n\n+\t\t * An argument is a non-option if it meets any of the following\n+\t\t * criteria: it follows an argument that is equivalent to the string\n+\t\t * \"--\", it is equivalent to the string \"-\", or it does not start with\n+\t\t * '-'. When we encounter a non-option, we move it to the end of argv\n+\t\t * (after shifting all remaining arguments over to make room), and\n+\t\t * then we try again with the next argument.\n+\t\t */\n+\t\tif (force_nonopt || strcmp(\"-\", place) == 0 || place[0] != '-')\n...\n+\t\telse if (strcmp(\"--\", place) == 0)\n\nI like it. We don't need to overcomplicate things just for the sake of\nspeed here. Plus, this version looks the most simple to me. That being\nsaid, it might be better if the last term is positioned in the second\nplace. This is mainly because a lone hyphen is less common than words\nthat starts with characters other than a pyphen.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 11 Jul 2023 16:16:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
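The comment quoted above describes the operation of shifting the remaining arguments and moving the non-option to the end of argv. As a rough, stand-alone sketch only (the function name and shape are illustrative and not taken from the committed patch), that step amounts to:

    /*
     * Rotate the non-option at argv[nonopt_index] to the end of argv,
     * shifting the remaining arguments left by one to make room.
     */
    static void
    move_nonopt_to_end(int argc, char **argv, int nonopt_index)
    {
        char   *nonopt = argv[nonopt_index];

        for (int i = nonopt_index; i < argc - 1; i++)
            argv[i] = argv[i + 1];
        argv[argc - 1] = nonopt;
    }

Doing this once for every non-option encountered is what produces the O(n^2) worst case mentioned later in the thread, which was judged acceptable given how few non-options the affected programs typically see.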
{
"msg_contents": "On Tue, Jul 11, 2023 at 04:16:09PM +0900, Kyotaro Horiguchi wrote:\n> I like it. We don't need to overcomplicate things just for the sake of\n> speed here. Plus, this version looks the most simple to me. That being\n> said, it might be better if the last term is positioned in the second\n> place. This is mainly because a lone hyphen is less common than words\n> that starts with characters other than a pyphen.\n\nSure. І did it this way in v7.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 11 Jul 2023 09:32:32 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 09:32:32AM -0700, Nathan Bossart wrote:\n> Sure. І did it this way in v7.\n\nAfter a couple more small edits, I've committed this. I looked through all\nuses of getopt_long() in PostgreSQL earlier today, and of the programs that\naccepted non-options, most accepted only one, some others accepted 2-3, and\necpg and pg_regress accepted any number. Given this analysis, I'm not too\nworried about the O(n^2) behavior that the patch introduces. You'd likely\nneed to provide an enormous number of non-options for it to be noticeable,\nand I'm dubious such use-cases exist.\n\nDuring my analysis, I noticed that pg_ctl contains a workaround for the\nlack of argument reordering. I think this can now be removed. Patch\nattached.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 12 Jul 2023 20:49:03 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 08:49:03PM -0700, Nathan Bossart wrote:\n> After a couple more small edits, I've committed this. I looked through all\n> uses of getopt_long() in PostgreSQL earlier today, and of the programs that\n> accepted non-options, most accepted only one, some others accepted 2-3, and\n> ecpg and pg_regress accepted any number. Given this analysis, I'm not too\n> worried about the O(n^2) behavior that the patch introduces. You'd likely\n> need to provide an enormous number of non-options for it to be noticeable,\n> and I'm dubious such use-cases exist.\n> \n> During my analysis, I noticed that pg_ctl contains a workaround for the\n> lack of argument reordering. I think this can now be removed. Patch\n> attached.\n\nInteresting piece of history that you have found here, coming from\nf3d6d94 back in 2004. The old pg_ctl.sh before that did not need any\nof that. It looks sensible to do something about that.\n\nSomething does not seem to be right seen from here, a CI run with\nWindows 2019 fails when using pg_ctl at the beginning of most of the\ntests, like:\n[04:56:07.404] ------------------------------------- 8< -------------------------------------\n[04:56:07.404] stderr:\n[04:56:07.404] pg_ctl: too many command-line arguments (first is \"-D\")\n--\nMichael",
"msg_date": "Thu, 13 Jul 2023 14:39:32 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "At Thu, 13 Jul 2023 14:39:32 +0900, Michael Paquier <[email protected]> wrote in \n> [04:56:07.404] pg_ctl: too many command-line arguments (first is \"-D\")\n\nMmm. It checks, for example, for \"pg_ctl initdb -D $tempdir/data -o\n-N\". This version of getopt_long() returns -1 as soon as it meets the\nfirst non-option \"initdb\". This is somewhat different from the last\ntime what I saw the patch and looks strange at a glance..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 13 Jul 2023 17:25:48 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 02:39:32PM +0900, Michael Paquier wrote:\n> Something does not seem to be right seen from here, a CI run with\n> Windows 2019 fails when using pg_ctl at the beginning of most of the\n> tests, like:\n> [04:56:07.404] ------------------------------------- 8< -------------------------------------\n> [04:56:07.404] stderr:\n> [04:56:07.404] pg_ctl: too many command-line arguments (first is \"-D\")\n\nAssuming you are referring to [0], it looks like you are missing 411b720.\n\n[0] https://github.com/michaelpq/postgres/commits/getopt_test\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 13 Jul 2023 07:57:12 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 07:57:12AM -0700, Nathan Bossart wrote:\n> Assuming you are referring to [0], it looks like you are missing 411b720.\n> \n> [0] https://github.com/michaelpq/postgres/commits/getopt_test\n\nIndeed, it looks like I've fat-fingered a rebase here. I am able to\nget a clean CI run when running this patch, sorry for the noise.\n\nAnyway, this introduces a surprising behavior when specifying too many\nsubcommands. On HEAD:\n$ pg_ctl stop -D $PGDATA kill -t 20 start\npg_ctl: too many command-line arguments (first is \"stop\")\nTry \"pg_ctl --help\" for more information.\n$ pg_ctl stop -D $PGDATA -t 20 start\npg_ctl: too many command-line arguments (first is \"stop\")\nTry \"pg_ctl --help\" for more information.\n\nWith the patch:\n$ pg_ctl stop -D $PGDATA -t 20 start\npg_ctl: too many command-line arguments (first is \"start\")\nTry \"pg_ctl --help\" for more information.\n$ pg_ctl stop -D $PGDATA kill -t 20 start\npg_ctl: too many command-line arguments (first is \"kill\")\nTry \"pg_ctl --help\" for more information.\n\nSo the error message reported is incorrect now, referring to an\nincorrect first subcommand.\n--\nMichael",
"msg_date": "Fri, 14 Jul 2023 13:27:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Fri, Jul 14, 2023 at 01:27:26PM +0900, Michael Paquier wrote:\n> Indeed, it looks like I've fat-fingered a rebase here. I am able to\n> get a clean CI run when running this patch, sorry for the noise.\n> \n> Anyway, this introduces a surprising behavior when specifying too many\n> subcommands. On HEAD:\n> $ pg_ctl stop -D $PGDATA kill -t 20 start\n> pg_ctl: too many command-line arguments (first is \"stop\")\n> Try \"pg_ctl --help\" for more information.\n> $ pg_ctl stop -D $PGDATA -t 20 start\n> pg_ctl: too many command-line arguments (first is \"stop\")\n> Try \"pg_ctl --help\" for more information.\n> \n> With the patch:\n> $ pg_ctl stop -D $PGDATA -t 20 start\n> pg_ctl: too many command-line arguments (first is \"start\")\n> Try \"pg_ctl --help\" for more information.\n> $ pg_ctl stop -D $PGDATA kill -t 20 start\n> pg_ctl: too many command-line arguments (first is \"kill\")\n> Try \"pg_ctl --help\" for more information.\n> \n> So the error message reported is incorrect now, referring to an\n> incorrect first subcommand.\n\nI did notice this, but I had the opposite reaction. Take the following\nexamples of client programs that accept one non-option:\n\n\t~$ pg_resetwal a b c\n\tpg_resetwal: error: too many command-line arguments (first is \"b\")\n\tpg_resetwal: hint: Try \"pg_resetwal --help\" for more information.\n\n\t~$ createuser a b c\n\tcreateuser: error: too many command-line arguments (first is \"b\")\n\tcreateuser: hint: Try \"createuser --help\" for more information.\n\n\t~$ pgbench a b c\n\tpgbench: error: too many command-line arguments (first is \"b\")\n\tpgbench: hint: Try \"pgbench --help\" for more information.\n\n\t~$ pg_restore a b c\n\tpg_restore: error: too many command-line arguments (first is \"b\")\n\tpg_restore: hint: Try \"pg_restore --help\" for more information.\n\nYet pg_ctl gives:\n\n\t~$ pg_ctl start a b c\n\tpg_ctl: too many command-line arguments (first is \"start\")\n\tTry \"pg_ctl --help\" for more information.\n\nIn this example, isn't \"a\" the first extra non-option that should be\nreported?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 13 Jul 2023 21:38:42 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 09:38:42PM -0700, Nathan Bossart wrote:\n> I did notice this, but I had the opposite reaction.\n\nAhah, well ;)\n\n> Take the following examples of client programs that accept one non-option:\n> \n> \t~$ pg_resetwal a b c\n> \tpg_resetwal: error: too many command-line arguments (first is \"b\")\n> \tpg_resetwal: hint: Try \"pg_resetwal --help\" for more information.\n> \n> Yet pg_ctl gives:\n> \n> \t~$ pg_ctl start a b c\n> \tpg_ctl: too many command-line arguments (first is \"start\")\n> \tTry \"pg_ctl --help\" for more information.\n> \n> In this example, isn't \"a\" the first extra non-option that should be\n> reported?\n\nGood point. This is interpreting \"first\" as being the first option\nthat's invalid. Here my first impression was that pg_ctl got that\nright, where \"first\" refers to the first subcommand that would be\nvalid. Objection withdrawn.\n--\nMichael",
"msg_date": "Fri, 14 Jul 2023 14:02:28 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Fri, Jul 14, 2023 at 02:02:28PM +0900, Michael Paquier wrote:\n> Objection withdrawn.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 14 Jul 2023 12:42:52 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> On Thu, Jul 13, 2023 at 09:38:42PM -0700, Nathan Bossart wrote:\n>> Take the following examples of client programs that accept one non-option:\n>> \n>> ~$ pg_resetwal a b c\n>> pg_resetwal: error: too many command-line arguments (first is \"b\")\n>> pg_resetwal: hint: Try \"pg_resetwal --help\" for more information.\n>> \n>> Yet pg_ctl gives:\n>> \n>> ~$ pg_ctl start a b c\n>> pg_ctl: too many command-line arguments (first is \"start\")\n>> Try \"pg_ctl --help\" for more information.\n>> \n>> In this example, isn't \"a\" the first extra non-option that should be\n>> reported?\n\n> Good point. This is interpreting \"first\" as being the first option\n> that's invalid. Here my first impression was that pg_ctl got that\n> right, where \"first\" refers to the first subcommand that would be\n> valid. Objection withdrawn.\n\nWe just had a user complaint that seems to trace to exactly this\nbogus reporting in pg_ctl [1]. Although I was originally not\nvery pleased with changing our getopt_long to do switch reordering,\nI'm now wondering if we should back-patch these changes as bug\nfixes. It's probably not worth the risk, but ...\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CANzqJaDCagH5wNkPQ42%3DFx3mJPR-YnB3PWFdCAYAVdb9%3DQ%2Bt-A%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 18 Dec 2023 14:41:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Mon, Dec 18, 2023 at 02:41:22PM -0500, Tom Lane wrote:\n> We just had a user complaint that seems to trace to exactly this\n> bogus reporting in pg_ctl [1]. Although I was originally not\n> very pleased with changing our getopt_long to do switch reordering,\n> I'm now wondering if we should back-patch these changes as bug\n> fixes. It's probably not worth the risk, but ...\n\nI'm not too concerned about the risks of back-patching these commits, but\nif this 19-year-old bug was really first reported today, I'd agree that\nfixing it in the stable branches is probably not worth it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 18 Dec 2023 15:52:01 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Mon, Dec 18, 2023 at 02:41:22PM -0500, Tom Lane wrote:\n>> We just had a user complaint that seems to trace to exactly this\n>> bogus reporting in pg_ctl [1]. Although I was originally not\n>> very pleased with changing our getopt_long to do switch reordering,\n>> I'm now wondering if we should back-patch these changes as bug\n>> fixes. It's probably not worth the risk, but ...\n\n> I'm not too concerned about the risks of back-patching these commits, but\n> if this 19-year-old bug was really first reported today, I'd agree that\n> fixing it in the stable branches is probably not worth it.\n\nAgreed, if it actually is 19 years old. I'm wondering a little bit\nif there could be some moderately-recent glibc behavior change\ninvolved. I'm not excited enough about it to go trawl their change\nlog, but we should keep our ears cocked for similar reports.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Dec 2023 21:31:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
},
{
"msg_contents": "On Mon, Dec 18, 2023 at 09:31:54PM -0500, Tom Lane wrote:\n> Agreed, if it actually is 19 years old. I'm wondering a little bit\n> if there could be some moderately-recent glibc behavior change\n> involved. I'm not excited enough about it to go trawl their change\n> log, but we should keep our ears cocked for similar reports.\n\n From a brief glance, I believe this is long-standing behavior. Even though\nwe advance optind at the bottom of the loop, the next getopt_long() call\nseems to reset it to the first non-option (which was saved in a previous\ncall).\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 19 Dec 2023 10:44:52 -0600",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: add non-option reordering to in-tree getopt_long"
}
]
[
{
"msg_contents": "Hi,\n\nThis message is not about inventing backend threads, but I was\nreminded to post this by some thought-provoking hallway track\ndiscussion of that large topic at PGCon, mostly with Heikki.\n\n1. We should completely remove --disable-thread-safety and the\nvarious !defined(ENABLE_THREAD_SAFETY) code paths. There are no\ncomputers that need it, it's not being tested, and support was already\ndropped/hardcoded in meson.build. Here is a mostly mechanical patch\nto finish that job, originally posted in one of the big 'historical\nbaggage' threads -- I just didn't get around to committing in time for\n16. Also a couple of related tiny cleanups for port/thread.c, which\nis by now completely bogus.\n\n2. I don't like the way we have to deal with POSIX vs Windows at\nevery site where we use threads, and each place has a different style\nof wrappers. I considered a few different approaches to cleaning this\nup:\n\n* provide centralised and thorough pthread emulation for Windows; I\ndon't like this, I don't even like all of pthreads and there are many\ndetails to get lost in\n* adopt C11 <threads.h>; unfortunately it is too early, so you'd need\nto write/borrow replacements for at least 3 of our 11 target systems\n* invent our own mini-abstraction for a carefully controlled subset of stuff\n\nHere is an attempt at that last thing. Since I don't really like\nmaking up new names for things, I just used all the C11 names but with\npg_ in front, and declared it all in \"port/pg_threads.h\" (actually I\ntried to write C11 replacements and then noped out and added the pg_\nprefixes). I suppose the idea is that it and the prefixes might\neventually go away. Note: here I am talking only about very basic\noperations like create, exit, join, explicit thread locals -- the\nstuff that we actually use today in frontend code. I'm not talking\nabout other stuff like C11 atomics, memory models, and the\nthread_local storage class, which are all very good and interesting\ntopics for another day.\n\nOne mystery still eludes me on Windows: while trying to fix the\nancient bug that ECPG leaks memory on Windows, because it doesn't call\nthread-local destructors, I discovered that if you use FlsAlloc\ninstead of TlsAlloc you can pass in a destructor (that's\n\"fibre-local\", but we're not using fibres so it's works out the same\nas thread local AFAICT). It seems to crash in strange ways if you\nuncomment the line FlsAlloc(destructor). Maybe it calls the\ndestructor even for NULL value? Or maybe it suffers from some kind of\nWindowsian multiple-heaps-active problem. I noticed that some other\nattempts at implementing tss on Windows use TlsAlloc but do their own\nbookkeeping of destructors and run them at thread exit, which I was\nhoping to avoid, but ...\n\nBefore I spend time fixing that issue or seeking Windowsian help...\ndo you think this direction is an improvement?",
"msg_date": "Sat, 10 Jun 2023 14:23:52 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cleaning up threading code"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-10 14:23:52 +1200, Thomas Munro wrote:\n> 1. We should completely remove --disable-thread-safety and the\n> various !defined(ENABLE_THREAD_SAFETY) code paths.\n\nYes, please!\n\n\n> 2. I don't like the way we have to deal with POSIX vs Windows at\n> every site where we use threads, and each place has a different style\n> of wrappers. I considered a few different approaches to cleaning this\n> up:\n> \n> * provide centralised and thorough pthread emulation for Windows; I\n> don't like this, I don't even like all of pthreads and there are many\n> details to get lost in\n> * adopt C11 <threads.h>; unfortunately it is too early, so you'd need\n> to write/borrow replacements for at least 3 of our 11 target systems\n> * invent our own mini-abstraction for a carefully controlled subset of stuff\n> \n> Here is an attempt at that last thing. Since I don't really like\n> making up new names for things, I just used all the C11 names but with\n> pg_ in front, and declared it all in \"port/pg_threads.h\" (actually I\n> tried to write C11 replacements and then noped out and added the pg_\n> prefixes). I suppose the idea is that it and the prefixes might\n> eventually go away. Note: here I am talking only about very basic\n> operations like create, exit, join, explicit thread locals -- the\n> stuff that we actually use today in frontend code.\n\nUnsurprisingly, I like this.\n\n\n> I'm not talking\n> about other stuff like C11 atomics, memory models, and the\n> thread_local storage class, which are all very good and interesting\n> topics for another day.\n\nHm. I agree on C11 atomics and memory models, but I don't see a good reason to\nnot add support for thread_local?\n\nIn fact, I'd rather add support for thread_local and not add support for\n\"thread keys\" than the other way round. Afaict most, if not all, the places\nusing keys would look simpler with thread_local than with keys.\n\n\n> One mystery still eludes me on Windows: while trying to fix the\n> ancient bug that ECPG leaks memory on Windows, because it doesn't call\n> thread-local destructors, I discovered that if you use FlsAlloc\n> instead of TlsAlloc you can pass in a destructor (that's\n> \"fibre-local\", but we're not using fibres so it's works out the same\n> as thread local AFAICT). It seems to crash in strange ways if you\n> uncomment the line FlsAlloc(destructor).\n\nDo you have a backtrace available?\n\n\n> Subject: [PATCH 1/5] Remove obsolete comments and code from fe-auth.c.\n> Subject: [PATCH 2/5] Rename port/thread.c to port/user.c.\n\nLGTM\n\n\n> -###############################################################\n> -# Threading\n> -###############################################################\n> -\n> -# XXX: About to rely on thread safety in the autoconf build, so not worth\n> -# implementing a fallback.\n> -cdata.set('ENABLE_THREAD_SAFETY', 1)\n\nI wonder if we should just unconditionally set that in c.h or such? 
It'd not\nbe crazy for external projects to rely on that being set.\n\n\n\n> From ca74df4ff11ce0fd1e51786eccaeca810921fc6d Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <[email protected]>\n> Date: Sat, 10 Jun 2023 09:14:07 +1200\n> Subject: [PATCH 4/5] Add port/pg_threads.h for a common threading API.\n> \n> Loosely based on C11's <threads.h>, but with pg_ prefixes, this will\n> allow us to clean up many places that have to cope with POSIX and\n> Windows threads.\n> ---\n> src/include/port/pg_threads.h | 252 +++++++++++++++++++++++++++++++\n> src/port/Makefile | 1 +\n> src/port/meson.build | 1 +\n> src/port/pg_threads.c | 117 ++++++++++++++\n> src/tools/pgindent/typedefs.list | 7 +\n> 5 files changed, 378 insertions(+)\n> create mode 100644 src/include/port/pg_threads.h\n> create mode 100644 src/port/pg_threads.c\n> \n> diff --git a/src/include/port/pg_threads.h b/src/include/port/pg_threads.h\n> new file mode 100644\n> index 0000000000..1706709994\n> --- /dev/null\n> +++ b/src/include/port/pg_threads.h\n> @@ -0,0 +1,252 @@\n> +/*\n> + * A multi-threading API abstraction loosely based on the C11 standard's\n> + * <threads.h> header. The identifiers have a pg_ prefix. Perhaps one day\n> + * we'll use standard C threads directly, and we'll drop the prefixes.\n> + *\n> + * Exceptions:\n> + * - pg_thrd_barrier_t is not based on C11\n> + */\n> +\n> +#ifndef PG_THREADS_H\n> +#define PG_THREADS_H\n> +\n> +#ifdef WIN32\n\nI guess we could try to use C11 thread.h style threading on msvc 2022:\nhttps://developercommunity.visualstudio.com/t/finalize-c11-support-for-threading/1629487\n\n\n> +#define WIN32_LEAN_AND_MEAN\n\nWhy do we need this again here - shouldn't the define in\nsrc/include/port/win32_port.h already take care of this?\n\n\n> +#include <windows.h>\n> +#include <processthreadsapi.h>\n> +#include <fibersapi.h>\n> +#include <synchapi.h>\n> +#else\n> +#include <errno.h>\n> +#include \"port/pg_pthread.h\"\n> +#endif\n\nSeems somewhat odd to have port pg_threads.h and pg_pthread.h - why not merge\nthese?\n\n\n> +#include <stdint.h>\n\nI think we widely rely stdint.h, errno.h to be included via c.h.\n\n\n> +#ifdef WIN32\n> +typedef HANDLE pg_thrd_t;\n> +typedef CRITICAL_SECTION pg_mtx_t;\n> +typedef CONDITION_VARIABLE pg_cnd_t;\n> +typedef SYNCHRONIZATION_BARRIER pg_thrd_barrier_t;\n> +typedef DWORD pg_tss_t;\n> +typedef INIT_ONCE pg_once_flag;\n> +#define PG_ONCE_FLAG_INIT INIT_ONCE_STATIC_INIT\n> +#else\n> +typedef pthread_t pg_thrd_t;\n> +typedef pthread_mutex_t pg_mtx_t;\n> +typedef pthread_cond_t pg_cnd_t;\n> +typedef pthread_barrier_t pg_thrd_barrier_t;\n> +typedef pthread_key_t pg_tss_t;\n> +typedef pthread_once_t pg_once_flag;\n> +#define PG_ONCE_FLAG_INIT PTHREAD_ONCE_INIT\n> +#endif\n> +\n> +typedef int (*pg_thrd_start_t) (void *);\n> +typedef void (*pg_tss_dtor_t) (void *);\n> +typedef void (*pg_call_once_function_t) (void);\n> +\n> +enum\n> +{\n> +\tpg_thrd_success = 0,\n> +\tpg_thrd_nomem = 1,\n> +\tpg_thrd_timedout = 2,\n> +\tpg_thrd_busy = 3,\n> +\tpg_thrd_error = 4\n> +};\n> +\n> +enum\n> +{\n> +\tpg_mtx_plain = 0\n> +};\n\nI think we add typedefs for enums as well.\n\n\n> +#ifndef WIN32\n> +static inline int\n> +pg_thrd_maperror(int error)\n\nSeems like that should have pthread in the name?\n\n> +{\n> +\tif (error == 0)\n> +\t\treturn pg_thrd_success;\n> +\tif (error == ENOMEM)\n> +\t\treturn pg_thrd_nomem;\n> +\treturn pg_thrd_error;\n> +}\n> +#endif\n> +\n> +#ifdef WIN32\n> +BOOL\t\tpg_call_once_trampoline(pg_once_flag *flag, void *parameter, void **context);\n> 
+#endif\n> +\n> +static inline void\n> +pg_call_once(pg_once_flag *flag, pg_call_once_function_t function)\n> +{\n> +#ifdef WIN32\n> +\tInitOnceExecuteOnce(flag, pg_call_once_trampoline, (void *) function, NULL);\n> +#else\n> +\tpthread_once(flag, function);\n> +#endif\n> +}\n> +\n> +static inline int\n> +pg_thrd_equal(pg_thrd_t lhs, pg_thrd_t rhs)\n> +{\n> +#ifdef WIN32\n> +\treturn lhs == rhs;\n> +#else\n> +\treturn pthread_equal(lhs, rhs);\n> +#endif\n> +}\n> +\n> +static inline int\n> +pg_tss_create(pg_tss_t *key, pg_tss_dtor_t destructor)\n> +{\n> +#ifdef WIN32\n> +\t//*key = FlsAlloc(destructor);\n> +\t*key = FlsAlloc(NULL);\n> +\treturn pg_thrd_success;\n\nAfaict FlsAlloc() can return errors:\n> If the function fails, the return value is FLS_OUT_OF_INDEXES. To get\n> extended error information, call GetLastError.\n\n\n> +#else\n> +\treturn pg_thrd_maperror(pthread_key_create(key, destructor));\n> +#endif\n> +}\n\n\n> +static inline int\n> +pg_tss_set(pg_tss_t key, void *value)\n> +{\n> +#ifdef WIN32\n> +\treturn FlsSetValue(key, value) ? pg_thrd_success : pg_thrd_error;\n> +#else\n> +\treturn pg_thrd_maperror(pthread_setspecific(key, value));\n> +#endif\n> +}\n\nIt's somewhat annoying that this can return errors.\n\n\n> +static inline int\n> +pg_mtx_init(pg_mtx_t *mutex, int type)\n\nI am somewhat confused by the need for these three letter\nabbreviations... Blaming C11, not you, to be clear.\n\n\n\n> \n> -# port needs to be in include path due to pthread-win32.h\n> +# XXX why do we need this?\n> libpq_inc = include_directories('.', '../../port')\n> libpq_c_args = ['-DSO_MAJOR_VERSION=5']\n\nHistorically we need it because random places outside of libpq include\npthread-win32.h - but pthread-win32.h is in src/port, not in\nsrc/include/port. Why on earth it was done this way, I do not know.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 9 Jun 2023 22:26:46 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up threading code"
},
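To illustrate the storage-class alternative mentioned above (a thread_local-style keyword instead of tss keys), a minimal sketch could look like the following. The macro name pg_thread_local is hypothetical here, and the keywords shown are the long-standing compiler extensions rather than C11 _Thread_local:

    #if defined(_MSC_VER)
    #define pg_thread_local __declspec(thread)
    #else
    #define pg_thread_local __thread
    #endif

    /* one copy per thread, with no key management or lookup calls */
    static pg_thread_local int last_error;

The catch, noted later in the thread, is that C offers no destructor hook for such variables, so code like ECPG that frees per-thread allocations would still need tss-style destructors or an explicit thread-exit hook.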
{
"msg_contents": "On 6/10/23 01:26, Andres Freund wrote:\n> On 2023-06-10 14:23:52 +1200, Thomas Munro wrote:\n>> I'm not talking\n>> about other stuff like C11 atomics, memory models, and the\n>> thread_local storage class, which are all very good and interesting\n>> topics for another day.\n> \n> Hm. I agree on C11 atomics and memory models, but I don't see a good reason to\n> not add support for thread_local?\n\nPerhaps obvious to most/all on this thread, but something that bit me \nrecently to keep in mind as we contemplate threads.\n\nWhen a shared object library (e.g. extensions) is loaded, the init() \nfunction, and any functions marked with the constructor attribute, are \nexecuted. However they are not run again during thread_start().\n\nSo if there is thread local state that needs to be initialized, the \ninitialization needs to be manually redone in each thread. Unless there \nis some mechanism similar to a constructor that I missed?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Sat, 10 Jun 2023 11:19:46 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up threading code"
},
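A minimal sketch of the usual workaround for the point raised above: the key is created once per process, but every thread has to populate its own slot lazily on first use, because the library constructor only ran in the thread that loaded the shared object. All names here are illustrative.

    #include <pthread.h>
    #include <stdlib.h>

    typedef struct per_thread_state
    {
        int         initialized;
        int         counter;
    } per_thread_state;

    static pthread_key_t state_key;
    static pthread_once_t state_once = PTHREAD_ONCE_INIT;

    static void
    make_state_key(void)
    {
        /* destructor frees each thread's copy at thread exit */
        (void) pthread_key_create(&state_key, free);
    }

    static per_thread_state *
    get_thread_state(void)
    {
        per_thread_state *state;

        (void) pthread_once(&state_once, make_state_key);
        state = pthread_getspecific(state_key);
        if (state == NULL)
        {
            /* per-thread initialization, redone in every thread */
            state = calloc(1, sizeof(*state));
            if (state != NULL)
            {
                state->initialized = 1;
                (void) pthread_setspecific(state_key, state);
            }
        }
        return state;
    }

This lazy-initialization pattern is essentially what any pg_tss_*-style wrapper would end up encapsulating on POSIX systems.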
{
"msg_contents": "On 10.06.23 07:26, Andres Freund wrote:\n>> -###############################################################\n>> -# Threading\n>> -###############################################################\n>> -\n>> -# XXX: About to rely on thread safety in the autoconf build, so not worth\n>> -# implementing a fallback.\n>> -cdata.set('ENABLE_THREAD_SAFETY', 1)\n> \n> I wonder if we should just unconditionally set that in c.h or such? It'd not\n> be crazy for external projects to rely on that being set.\n\nWe definitely should keep the mention in ecpg_config.h.in, since that is \nexplicitly put there for client code to use. We keep HAVE_LONG_LONG_INT \netc. there for similar reasons.\n\nAnother comment on patch 0003: I think this removal in configure.ac is \ntoo much:\n\n- else\n- LDAP_LIBS_FE=\"-lldap $EXTRA_LDAP_LIBS\"\n\nThis could would still be reachable for $enable_thread_safety=yes and \n$thread_safe_libldap=yes, which I suppose is the normal case?\n\n\n",
"msg_date": "Mon, 3 Jul 2023 09:29:34 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up threading code"
},
{
"msg_contents": "On 03/07/2023 10:29, Peter Eisentraut wrote:\n> On 10.06.23 07:26, Andres Freund wrote:\n>>> -###############################################################\n>>> -# Threading\n>>> -###############################################################\n>>> -\n>>> -# XXX: About to rely on thread safety in the autoconf build, so not worth\n>>> -# implementing a fallback.\n>>> -cdata.set('ENABLE_THREAD_SAFETY', 1)\n>>\n>> I wonder if we should just unconditionally set that in c.h or such? It'd not\n>> be crazy for external projects to rely on that being set.\n> \n> We definitely should keep the mention in ecpg_config.h.in, since that is\n> explicitly put there for client code to use. We keep HAVE_LONG_LONG_INT\n> etc. there for similar reasons.\n\n+1. Patches 1-3 look good to me, with the things Andres & Peter already \npointed out.\n\nThe docs at https://www.postgresql.org/docs/current/libpq-threading.html \nneeds updating. It's technically still accurate, but it ought to at \nleast mention that libpq on v17 and above is always thread-safe. I \npropose the attached. It moves the note on \"you can only use one PGConn \nfrom one thread at a time\" to the top, before the description of the \nPQisthreadsafe() function.\n\nOn 10/06/2023 05:23, Thomas Munro wrote:\n> \n> 2. I don't like the way we have to deal with POSIX vs Windows at\n> every site where we use threads, and each place has a different style\n> of wrappers. I considered a few different approaches to cleaning this\n> up:\n> \n> * provide centralised and thorough pthread emulation for Windows; I\n> don't like this, I don't even like all of pthreads and there are many\n> details to get lost in\n> * adopt C11 <threads.h>; unfortunately it is too early, so you'd need\n> to write/borrow replacements for at least 3 of our 11 target systems\n> * invent our own mini-abstraction for a carefully controlled subset of stuff\n\nGoogle search on \"c11 threads on Windows\" found some emulation wrappers: \nhttps://github.com/jtsiomb/c11threads and \nhttps://github.com/tinycthread/tinycthread, for example. Would either of \nthose work for us?\n\nEven if we use an existing emulation wrapper, I wouldn't mind having our \nown pg_* abstration on top of it, to document which subset of the POSIX \nor C11 functions we actually use.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Mon, 3 Jul 2023 11:43:14 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up threading code"
},
{
"msg_contents": "Thanks all for the reviews. I pushed the first two small patches.\nHere is a new version of the main --disable-thread-safety removal\npart, with these changes:\n\n* fixed LDAP_LIBS_FE snafu reported by Peter E\n* defined ENABLE_THREAD_SAFETY 1 in c.h, for the benefit of extensions\n* defined ENABLE_THREAD_SAFETY 1 ecpg_config.h, for the benefit of ECPG clients\n* added Heikki's doc change as a separate patch (with minor xml snafu fixed)\n\nI'll follow up with a new version of the later pg_threads.h proposal\nand responses to feedback on that after getting this basic clean-up\nwork in. Any other thoughts on this part?",
"msg_date": "Mon, 10 Jul 2023 10:45:04 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up threading code"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 10:45 AM Thomas Munro <[email protected]> wrote:\n> * defined ENABLE_THREAD_SAFETY 1 in c.h, for the benefit of extensions\n\nI may lack imagination but I couldn't think of any use for that\nvestigial macro in backend/extension code, and client code doesn't see\nc.h and might not get the right answer anyway if it's dynamically\nlinked which is the usual case. I took it out for now. Open to\ndiscussing further if someone can show what kinds of realistic\nexternal code would be affected.\n\n> * defined ENABLE_THREAD_SAFETY 1 ecpg_config.h, for the benefit of ECPG clients\n\nI left this one in. I'm not sure if it could really be needed.\nPerhaps at a stretch, perhaps ECPG code that is statically linked\nmight test that instead of calling PQisthreadsafe().\n\nPushed.\n\n\n",
"msg_date": "Wed, 12 Jul 2023 08:58:29 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up threading code"
},
{
"msg_contents": "On 2023-07-12 08:58:29 +1200, Thomas Munro wrote:\n> On Mon, Jul 10, 2023 at 10:45 AM Thomas Munro <[email protected]> wrote:\n> > * defined ENABLE_THREAD_SAFETY 1 in c.h, for the benefit of extensions\n> \n> I may lack imagination but I couldn't think of any use for that\n> vestigial macro in backend/extension code, and client code doesn't see\n> c.h and might not get the right answer anyway if it's dynamically\n> linked which is the usual case. I took it out for now. Open to\n> discussing further if someone can show what kinds of realistic\n> external code would be affected.\n\nWFM.\n\n\n",
"msg_date": "Tue, 11 Jul 2023 15:49:53 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up threading code"
},
{
"msg_contents": "Apparently \"drongo\" didn't like something about this commit, and an\necpg test failed, but I can't immediately see why. Just \"server\nclosed the connection unexpectedly\". drongo is running the new Meson\nbuildfarm variant on Windows+MSVC, so, being still in development, I\nwonder if some diagnostic clue/log/backtrace etc is not being uploaded\nyet. FWIW Meson+Windows+MSVC passes on CI.\n\n\n",
"msg_date": "Wed, 12 Jul 2023 15:21:53 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up threading code"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 3:21 PM Thomas Munro <[email protected]> wrote:\n> Apparently \"drongo\" didn't like something about this commit, and an\n> ecpg test failed, but I can't immediately see why. Just \"server\n> closed the connection unexpectedly\". drongo is running the new Meson\n> buildfarm variant on Windows+MSVC, so, being still in development, I\n> wonder if some diagnostic clue/log/backtrace etc is not being uploaded\n> yet. FWIW Meson+Windows+MSVC passes on CI.\n\nOh, that's probably unrelated to this commit. It's failed 6 times\nlike that in the past ~3 months.\n\n\n",
"msg_date": "Wed, 12 Jul 2023 15:34:20 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up threading code"
},
{
"msg_contents": "On Wed, Jul 12, 2023 at 3:34 PM Thomas Munro <[email protected]> wrote:\n> On Wed, Jul 12, 2023 at 3:21 PM Thomas Munro <[email protected]> wrote:\n> > Apparently \"drongo\" didn't like something about this commit, and an\n> > ecpg test failed, but I can't immediately see why. Just \"server\n> > closed the connection unexpectedly\". drongo is running the new Meson\n> > buildfarm variant on Windows+MSVC, so, being still in development, I\n> > wonder if some diagnostic clue/log/backtrace etc is not being uploaded\n> > yet. FWIW Meson+Windows+MSVC passes on CI.\n>\n> Oh, that's probably unrelated to this commit. It's failed 6 times\n> like that in the past ~3 months.\n\nAh, right, that is a symptom of the old Windows TCP linger vs process\nexit thing. Something on my list to try to investigate again, but not\ntoday.\n\nhttps://www.postgresql.org/message-id/flat/16678-253e48d34dc0c376%40postgresql.org\n\n\n",
"msg_date": "Wed, 12 Jul 2023 17:27:49 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up threading code"
},
{
"msg_contents": "On Mon, Jul 3, 2023 at 8:43 PM Heikki Linnakangas <[email protected]> wrote:\n> On 10/06/2023 05:23, Thomas Munro wrote:\n> > 2. I don't like the way we have to deal with POSIX vs Windows at\n> > every site where we use threads, and each place has a different style\n> > of wrappers. I considered a few different approaches to cleaning this\n> > up:\n> >\n> > * provide centralised and thorough pthread emulation for Windows; I\n> > don't like this, I don't even like all of pthreads and there are many\n> > details to get lost in\n> > * adopt C11 <threads.h>; unfortunately it is too early, so you'd need\n> > to write/borrow replacements for at least 3 of our 11 target systems\n> > * invent our own mini-abstraction for a carefully controlled subset of stuff\n>\n> Google search on \"c11 threads on Windows\" found some emulation wrappers:\n> https://github.com/jtsiomb/c11threads and\n> https://github.com/tinycthread/tinycthread, for example. Would either of\n> those work for us?\n>\n> Even if we use an existing emulation wrapper, I wouldn't mind having our\n> own pg_* abstration on top of it, to document which subset of the POSIX\n> or C11 functions we actually use.\n\nYeah. I am still interested in getting our thread API tidied up, and\nI intend to do that for PG 18. The patch I posted on that front,\nwhich can be summarised as a very lightweight subset of standard\n<threads.h> except with pg_ prefixes everywhere mapping to Windows or\nPOSIX threads, still needs one tiny bit more work: figuring out how to\nget the TLS destructors to run on Windows FLS or similar, or\nimplementing destructors myself (= little destructor callback list\nthat a thread exit hook would run, work I was hoping to avoid by using\nsomething from the OS libs... I will try again soon). Andres also\nopined upthread that we should think about offering a thread_local\nstorage class and getting away from TLS with keys.\n\nOne thing to note is that the ECPG code is using TLS with destructors\n(in fact they are b0rked in master, git grep \"FIXME: destructor\" so\nECPG leaks memory on Windows, so the thing that I'm trying to fix in\npg_threads.h is actually fixing a long-standing bug), and although\nthread_local has destructors in C++ it doesn't in C, so if we decided\nto add the storage class but not bother with the tss_create feature,\nthen ECPG would need to do cleanup another way. I will look into that\noption.\n\nOne new development since last time I wrote the above stuff is that\nthe Microsoft toolchain finally implemented the library components of\nC11 <threads.h>:\n\nhttps://devblogs.microsoft.com/cppblog/c11-threads-in-visual-studio-2022-version-17-8-preview-2/\n\nIt seems like it would be a long time before we could contemplate\nusing that stuff though, especially given our duty of keeping libpq\nand ECPG requirements low and reasonable. However, it seems fairly\nstraightforward to say that we require C99 + some way to access a\nC11-like thread local storage class. In practice that means a\npg_thread_local macro that points to MSVC __declspec(thread) (which\nworks since MSVC 2014?) or GCC/Clang __thread_local (which works since\nGCC4.8 in 2013?) or whatever. Of course every serious toolchain\nsupports this stuff somehow or other, having been required to for C11.\n\nI can't immediately see any build farm animals that would be affected.\nGrison's compiler info is out of date, it's really running\n8.something. 
The old OpenBSD GCC 4.2 animal was upgraded, and antique\nAIX got the boot: that's not even a coincidence, those retirements\ncame about because those systems didn't support arbitrary alignment,\nanother C11 feature that we now effectively require. (We could have\nworked around it it we had to but on but they just weren't reasonable\ntargets.)\n\nSo I'll go ahead and add the storage class to the next version, and\ncontemplate a couple of different options for the tss stuff, including\nperhaps leaving it out if that seems doable.\n\nhttps://stackoverflow.com/questions/29399494/what-is-the-current-state-of-support-for-thread-local-across-platforms\n\n\n",
"msg_date": "Sun, 14 Apr 2024 15:16:56 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up threading code"
},
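To make the storage-class mapping described above concrete, here is a minimal sketch of what such a pg_thread_local macro could look like. The name pg_thread_local comes from the message above; the exact compiler conditionals and the C11 fallback are assumptions of this sketch, not the actual patch.

    /* Hypothetical sketch only, not the proposed pg_threads.h. */
    #if defined(_MSC_VER)
    #define pg_thread_local __declspec(thread)
    #elif defined(__GNUC__) || defined(__clang__)
    #define pg_thread_local __thread
    #else
    #define pg_thread_local _Thread_local   /* C11 keyword */
    #endif

    /* Example use: a per-thread flag that needs no destructor. */
    static pg_thread_local int thread_state_initialized = 0;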
{
"msg_contents": "On Sun, Apr 14, 2024 at 3:16 PM Thomas Munro <[email protected]> wrote:\n> So I'll go ahead and add the storage class to the next version, and\n> contemplate a couple of different options for the tss stuff, including\n> perhaps leaving it out if that seems doable.\n\nHere is a new attempt at pg_threads.h. Still work in progress, but\npassing tests, with storage class and working TSS, showing various\nusers.\n\nI eventually understood first that my TSS destructor problems on\nWindows came from mismatched calling conventions, and that you can't\nreally trampoline your way around that, at least not without doing\nsome pretty unreasonable things, and that is why nobody can emulate\neither tss_create() or pthread_key_create() directly with Windows'\nFlsAlloc(), so everybody who tries finishes up building their own\ninfrastructure to track destructors, or in ECPG's case just leaks all\nthe memory instead.\n\nHere's the simplest implementation I could come up with so far. I\ndon't have Windows so I made it possible to use emulated TSS\ndestructors on my local machine with a special macro for testing, and\nthen argued with CI for a while until the other machines agreed.\nOtherwise, it's all a fairly thin wrapper and hopefully not suprising.\n\nIn one place, an ECPG thread-local variable has no destructor, so we\ncan use it as the first example of the new pg_thread_local storage\nclass.\n\nOne thing this would need to be complete, at least the way I've\nimplemented it, is memory barriers, for non-TSO hardware, which would\nrequire lifting the ban on atomics.h in frontend code, or at least\nparts of it. Only 64 bit emulation is actually tied to the backend\nnow (because it calls spinlock stuff, that itself is backend-only, but\nalso it doesn't actually need to be either). Or maybe I can figure\nout a different scheme that doesn't need that. Or something...\n\nWIP patch attached.",
"msg_date": "Thu, 22 Aug 2024 00:51:53 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up threading code"
},
{
"msg_contents": "On 21/08/2024 15:51, Thomas Munro wrote:\n> On Sun, Apr 14, 2024 at 3:16 PM Thomas Munro <[email protected]> wrote:\n>> So I'll go ahead and add the storage class to the next version, and\n>> contemplate a couple of different options for the tss stuff, including\n>> perhaps leaving it out if that seems doable.\n> \n> Here is a new attempt at pg_threads.h. Still work in progress, but\n> passing tests, with storage class and working TSS, showing various\n> users.\n\nLooks good, thanks!\n\n> I eventually understood first that my TSS destructor problems on\n> Windows came from mismatched calling conventions, and that you can't\n> really trampoline your way around that, at least not without doing\n> some pretty unreasonable things, and that is why nobody can emulate\n> either tss_create() or pthread_key_create() directly with Windows'\n> FlsAlloc(), so everybody who tries finishes up building their own\n> infrastructure to track destructors, or in ECPG's case just leaks all\n> the memory instead.\n> \n> Here's the simplest implementation I could come up with so far. I\n> don't have Windows so I made it possible to use emulated TSS\n> destructors on my local machine with a special macro for testing, and\n> then argued with CI for a while until the other machines agreed.\n> Otherwise, it's all a fairly thin wrapper and hopefully not suprising.\n\nIn the Windows implementation of pg_tss_create(), how do threads and \nFlsAlloc() interact? If I understand correctly, one thread can run \nmultiple \"fibers\" in a co-operative fashion. It's a legacy feature, not \nwidely used nowadays, but what will happen if someone does try to use \nfibers? I think that will result in chaos. You're using FlsAlloc() to \ninstall the destructor callback but TlsAlloc() for allocating the actual \nthread-local values. So all fibers running on the same thread will share \nthe storage, but the destructor will be called whenever any of the \nfibers exit, clearing the TSS storage for all fibers.\n\nI don't know much about Windows or fibers but mixing the Tls* and Fls* \nfunctions seems unwise.\n\nTo be honest, I think we should just accept the limitation that TSS \ndestructors don't run on Windows. Yes, it means we'll continue to leak \nmemory on ecpg, but that's not a new issue. We can address that as a \nseparate patch later, if desired. Or more likely, do nothing until C11 \nthreads with proper destructors become widely available on Windows.\n\n> One thing this would need to be complete, at least the way I've\n> implemented it, is memory barriers, for non-TSO hardware, which would\n> require lifting the ban on atomics.h in frontend code, or at least\n> parts of it.\n\n+1 on doing that. Although it becomes moot if you just give up on the \ndestructors on Windows.\n\n> Only 64 bit emulation is actually tied to the backend\n> now (because it calls spinlock stuff, that itself is backend-only, but\n> also it doesn't actually need to be either). Or maybe I can figure\n> out a different scheme that doesn't need that. Or something...\n\nYou could use a pg_mtx now instead of a spinlock. 
I wonder if there are \nany supported platforms left that don't have 64-bit atomics though.\n\n> + * We have some extensions of our own, not present in C11:\n> + *\n> + * - pg_rwlock_t for read/write locks\n> + * - pg_mtx_t static initializer PG_MTX_STATIC_INIT\n> + * - pg_barrier_t\n\nHmm, I don't see anything wrong with those extensions as such, but I \nwonder how badly we really need them?\n\npg_rwlock is used by:\n- Windows implementation of pg_mtx_t, to provide PG_MTX_STATIC_INIT\n- the custom Thread-Specific Storage implementation (i.e. Windows)\n\nPG_MTX_STATIC_INIT is used by:\n- ecpg\n- libpq\n\npg_barrier_t is used by:\n- pgbench\n\npg_rwlock goes away if you give up on the TSS destructors on Windows, \nand implement pg_mtx directly with SRWLOCK on Windows.\n\nThe barrier implementation could easily be moved into pgbench, if we \ndon't expect to need it in other places soon. Having a generic \nimplementation seems fine too though, it's pretty straightforward.\n\nPG_MTX_STATIC_INIT seems hard to get rid of. I suppose you could use \ncall_once() to ensure it's initialized only once, but a static \ninitializer is a lot nicer to use.\n\n> diff --git a/src/interfaces/ecpg/ecpglib/ecpglib_extern.h b/src/interfaces/ecpg/ecpglib/ecpglib_extern.h\n> index 01b4309a71..d8416b19e3 100644\n> --- a/src/interfaces/ecpg/ecpglib/ecpglib_extern.h\n> +++ b/src/interfaces/ecpg/ecpglib/ecpglib_extern.h\n> @@ -169,7 +169,7 @@ bool\t\tecpg_get_data(const PGresult *, int, int, int, enum ECPGttype type,\n> \t\t\t\t\t\t enum ECPGttype, char *, char *, long, long, long,\n> \t\t\t\t\t\t enum ARRAY_TYPE, enum COMPAT_MODE, bool);\n> \n> -void\t\tecpg_pthreads_init(void);\n> +#define ecpg_pthreads_init()\n> struct connection *ecpg_get_connection(const char *connection_name);\n> char\t *ecpg_alloc(long size, int lineno);\n> char\t *ecpg_auto_alloc(long size, int lineno);\n\nIs it safe to remove the function, or might it be referenced by existing \nbinaries that were built against an older ecpglib version? I believe \nit's OK to remove, it's not part of the public API and should not be \ncalled directly from a binary. But in that case, we don't need the dummy \n#define either.\n\n> +/*-------------------------------------------------------------------------\n> + *\n> + * Barriers. Not in C11. Apple currently lacks the POSIX version.\n> + * We assume that the OS might know a better way to implement it that\n> + * we do, so we only provide our own if we have to.\n> + *\n> + *-------------------------------------------------------------------------\n> + */\n> +\n> +#ifdef PG_THREADS_WIN32\n> +typedef SYNCHRONIZATION_BARRIER pg_barrier_t;\n> +#elif defined(HAVE_PTHREAD_BARRIER)\n> +typedef pthread_barrier_t pg_barrier_t;\n> +#else\n> +typedef struct pg_barrier_t\n> +{\n> +\tbool\t\tsense;\n> +\tint\t\t\texpected;\n> +\tint\t\t\tarrived;\n> +\tpg_mtx_t\tmutex;\n> +\tpg_cnd_t\tcond;\n> +} pg_barrier_t;\n> +#endif\n\nI'd suggest calling this pg_thrd_barrier_t or even pg_thread_barrier_t. \nIt's a little more verbose, but would rhyme with pthread_barrier_t. And \nto avoid confusing it with memory barriers and ProcSignal barriers.\n\n\nComments on the TSS implementation follow, which largely become moot if \nyou give up on the destructor support on Windows:\n\n> +void\n> +pg_tss_dtor_delete(pg_tss_t tss_id)\n\nShould this be pg_tss_delete(), since the C11 function is tss_delete()? \nOr is this different? There are actually no callers currently, so maybe \njust leave it out. 
pg_tss_dtor_delete() also seems racy, if run \nconcurrently with pg_tss_run_destructors().\n\n> +\t * Make sure our destructor hook is registered with the operating system\n> +\t * in this process. This happens only once in the whole process. Making\n> +\t * sure it will run actually in each thread happens in\n> +\t * pg_tss_ensure_destructors_will_run().\n\nthe function is called pg_tss_ensure_destructors_in_this_thread(), not \npg_tss_ensure_destructors_will_run().\n\n> +/*\n> + * Called every time pg_tss_set() installs a non-NULL value.\n> + */\n> +void\n> +pg_tss_ensure_destructors_in_this_thread(void)\n> +{\n> +\t/*\n> +\t * Pairs with pg_tss_install_run_destructors(), called by pg_tss_create().\n> +\t * This makes sure that we know if the tss_id being set could possibly\n> +\t * have a destructor. We don't want to pay the cost of checking, but we\n> +\t * can check with a simple load if *any* tss_id has a destructor. If so,\n> +\t * we make sure that pg_tss_destructor_hook has a non-NULL value in *this*\n> +\t * thread, because both Windows and POSIX will only call a destructor for\n> +\t * a non-NULL value.\n> +\t */\n> +\tpg_read_barrier();\n> +\tif (pg_tss_run_destructors_installed)\n> +\t{\n> +#ifdef PG_THREADS_WIN32\n> +\t\tif (FlsGetValue(pg_tss_destructor_hook) == NULL)\n> +\t\t\tFlsSetValue(pg_tss_destructor_hook, (void *) 1);\n> +#else\n> +\t\tif (pthread_getspecific(pg_tss_destructor_hook) == NULL)\n> +\t\t\tpthread_setspecific(pg_tss_destructor_hook, (void *) 1);\n> +#endif\n> +\t}\n> +}\n> +#endif\n\nI believe this cannot get called if !pg_tss_run_destructors_installed, \nno need to check that. Because pg_tss_create() will fail \npg_tss_run_destructors is set. Maybe turn it into an Assert. Or \nalternatively, skip the call to pg_tss_install_run_destructors when \npg_tss_create() is called with destructor==NULL. I think that may have \nbeen the idea here, based on the comment above.\n\nThe write barriers in pg_tss_install_run_destructors() seem excessive. \nThe function will only ever run in one thread, protected by \npg_call_once(). pg_call_once() surely provides all the necessary barriers.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 26 Aug 2024 12:59:10 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up threading code"
}
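As a reference for the suggestion above to implement pg_mtx directly with SRWLOCK on Windows, a bare-bones version of such a wrapper could look roughly like this. The names pg_mtx_t and PG_MTX_STATIC_INIT are taken from the review; the macro layer and the omission of error handling are assumptions of this sketch, not the patch itself.

    #ifdef WIN32
    #include <windows.h>
    typedef SRWLOCK pg_mtx_t;
    #define PG_MTX_STATIC_INIT SRWLOCK_INIT
    #define pg_mtx_lock(m)   AcquireSRWLockExclusive(m)
    #define pg_mtx_unlock(m) ReleaseSRWLockExclusive(m)
    #else
    #include <pthread.h>
    typedef pthread_mutex_t pg_mtx_t;
    #define PG_MTX_STATIC_INIT PTHREAD_MUTEX_INITIALIZER
    #define pg_mtx_lock(m)   pthread_mutex_lock(m)
    #define pg_mtx_unlock(m) pthread_mutex_unlock(m)
    #endif

    /* Usage: static pg_mtx_t lock = PG_MTX_STATIC_INIT; pg_mtx_lock(&lock); ... */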
] |
[
{
"msg_contents": "I've previously noted in \"Add last commit LSN to\npg_last_committed_xact()\" [1] that it's not possible to monitor how\nmany bytes of WAL behind a logical replication slot is (computing such\nis obviously trivial for physical slots) because the slot doesn't need\nto replicate beyond the last commit. In some cases it's possible for\nthe current WAL location to be far beyond the last commit. A few\nexamples:\n\n- An idle server with checkout_timeout at a lower value (so lots of\nempty WAL advance).\n- Very large transactions: particularly insidious because committing a\n1 GB transaction after a small transaction may show almost zero time\nlag even though quite a bit of data needs to be processed and sent\nover the wire (so time to replay is significantly different from\ncurrent lag).\n- A cluster with multiple databases complicates matters further,\nbecause while physical replication is cluster-wide, the LSNs that\nmatter for logical replication are database specific.\n\nSince we don't expose the most recent commit's LSN there's no way to\nsay \"the WAL is currently 1250, the last commit happened at 1000, the\nslot has flushed up to 800, therefore there are at most 200 bytes\nreplication needs to read through to catch up.\n\nIn the aforementioned thread [1] I'd proposed a patch that added a SQL\nfunction pg_last_commit_lsn() to expose the most recent commit's LSN.\nRobert Haas didn't think the initial version's modifications to\ncommit_ts made sense, and a subsequent approach adding the value to\nPGPROC didn't have strong objections, from what I can see, but it also\ndidn't generate any enthusiasm.\n\nAs I was thinking about how to improve things, I realized that this\ninformation (since it's for monitoring anyway) fits more naturally\ninto the stats system. I'd originally thought of exposing it in\npg_stat_wal, but that's per-cluster rather than per-database (indeed,\nthis is a flaw I hadn't considered in the original patch), so I think\npg_stat_database is the correct location.\n\nI've attached a patch to track the latest commit's LSN in pg_stat_database.\n\nRegards,\nJames Coleman\n\n1: https://www.postgresql.org/message-id/flat/CAAaqYe9QBiAu+j8rBun_JKBRe-3HeKLUhfVVsYfsxQG0VqLXsA@mail.gmail.com",
"msg_date": "Fri, 9 Jun 2023 21:26:54 -0500",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add last_commit_lsn to pg_stat_database"
},
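The kind of monitoring query this proposal is meant to enable could look like the sketch below. Only last_commit_lsn is hypothetical (it is the column the patch adds to pg_stat_database); pg_wal_lsn_diff() and the pg_replication_slots columns already exist.

    SELECT s.slot_name,
           pg_wal_lsn_diff(d.last_commit_lsn, s.confirmed_flush_lsn) AS bytes_behind
    FROM pg_replication_slots s
    JOIN pg_stat_database d ON d.datname = s.database
    WHERE s.slot_type = 'logical';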
{
"msg_contents": "Hi,\n\n> [...]\n> As I was thinking about how to improve things, I realized that this\n> information (since it's for monitoring anyway) fits more naturally\n> into the stats system. I'd originally thought of exposing it in\n> pg_stat_wal, but that's per-cluster rather than per-database (indeed,\n> this is a flaw I hadn't considered in the original patch), so I think\n> pg_stat_database is the correct location.\n>\n> I've attached a patch to track the latest commit's LSN in pg_stat_database.\n\nThanks for the patch. It was marked as \"Needs Review\" so I decided to\ntake a brief look.\n\nAll in all the code seems to be fine but I have a couple of nitpicks:\n\n- If you are modifying pg_stat_database the corresponding changes to\nthe documentation are needed.\n- pg_stat_get_db_last_commit_lsn() has the same description as\npg_stat_get_db_xact_commit() which is confusing.\n- pg_stat_get_db_last_commit_lsn() is marked as STABLE which is\nprobably correct. But I would appreciate a second opinion on this.\n- Wouldn't it be appropriate to add a test or two?\n- `if (!XLogRecPtrIsInvalid(commit_lsn))` - I suggest adding\nXLogRecPtrIsValid() macro for better readability\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 19 Sep 2023 15:16:24 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add last_commit_lsn to pg_stat_database"
},
{
"msg_contents": "On Sat, 10 Jun 2023 at 07:57, James Coleman <[email protected]> wrote:\n>\n> I've previously noted in \"Add last commit LSN to\n> pg_last_committed_xact()\" [1] that it's not possible to monitor how\n> many bytes of WAL behind a logical replication slot is (computing such\n> is obviously trivial for physical slots) because the slot doesn't need\n> to replicate beyond the last commit. In some cases it's possible for\n> the current WAL location to be far beyond the last commit. A few\n> examples:\n>\n> - An idle server with checkout_timeout at a lower value (so lots of\n> empty WAL advance).\n> - Very large transactions: particularly insidious because committing a\n> 1 GB transaction after a small transaction may show almost zero time\n> lag even though quite a bit of data needs to be processed and sent\n> over the wire (so time to replay is significantly different from\n> current lag).\n> - A cluster with multiple databases complicates matters further,\n> because while physical replication is cluster-wide, the LSNs that\n> matter for logical replication are database specific.\n>\n> Since we don't expose the most recent commit's LSN there's no way to\n> say \"the WAL is currently 1250, the last commit happened at 1000, the\n> slot has flushed up to 800, therefore there are at most 200 bytes\n> replication needs to read through to catch up.\n>\n> In the aforementioned thread [1] I'd proposed a patch that added a SQL\n> function pg_last_commit_lsn() to expose the most recent commit's LSN.\n> Robert Haas didn't think the initial version's modifications to\n> commit_ts made sense, and a subsequent approach adding the value to\n> PGPROC didn't have strong objections, from what I can see, but it also\n> didn't generate any enthusiasm.\n>\n> As I was thinking about how to improve things, I realized that this\n> information (since it's for monitoring anyway) fits more naturally\n> into the stats system. I'd originally thought of exposing it in\n> pg_stat_wal, but that's per-cluster rather than per-database (indeed,\n> this is a flaw I hadn't considered in the original patch), so I think\n> pg_stat_database is the correct location.\n>\n> I've attached a patch to track the latest commit's LSN in pg_stat_database.\n\nI have changed the status of commitfest entry to \"Returned with\nFeedback\" as Aleksander's comments have not yet been resolved. Please\nfeel free to post an updated version of the patch and update the\ncommitfest entry accordingly.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 14 Jan 2024 16:31:05 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add last_commit_lsn to pg_stat_database"
},
{
"msg_contents": "Hello,\n\nThanks for reviewing!\n\nOn Tue, Sep 19, 2023 at 8:16 AM Aleksander Alekseev\n<[email protected]> wrote:\n>\n> Hi,\n>\n> > [...]\n> > As I was thinking about how to improve things, I realized that this\n> > information (since it's for monitoring anyway) fits more naturally\n> > into the stats system. I'd originally thought of exposing it in\n> > pg_stat_wal, but that's per-cluster rather than per-database (indeed,\n> > this is a flaw I hadn't considered in the original patch), so I think\n> > pg_stat_database is the correct location.\n> >\n> > I've attached a patch to track the latest commit's LSN in pg_stat_database.\n>\n> Thanks for the patch. It was marked as \"Needs Review\" so I decided to\n> take a brief look.\n>\n> All in all the code seems to be fine but I have a couple of nitpicks:\n>\n> - If you are modifying pg_stat_database the corresponding changes to\n> the documentation are needed.\n\nUpdated.\n\n> - pg_stat_get_db_last_commit_lsn() has the same description as\n> pg_stat_get_db_xact_commit() which is confusing.\n\nI've fixed this.\n\n> - pg_stat_get_db_last_commit_lsn() is marked as STABLE which is\n> probably correct. But I would appreciate a second opinion on this.\n\nSounds good.\n\n> - Wouldn't it be appropriate to add a test or two?\n\nAdded.\n\n> - `if (!XLogRecPtrIsInvalid(commit_lsn))` - I suggest adding\n> XLogRecPtrIsValid() macro for better readability\n\nWe have 36 usages of !XLogRecPtrIsInvalid(...) already, so I think we\nshould avoid making this change in this patch.\n\nv2 is attached.\n\nRegards,\nJames Coleman",
"msg_date": "Wed, 17 Jan 2024 21:11:35 -0500",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add last_commit_lsn to pg_stat_database"
},
{
"msg_contents": "On Sun, Jan 14, 2024 at 6:01 AM vignesh C <[email protected]> wrote:\n>\n> On Sat, 10 Jun 2023 at 07:57, James Coleman <[email protected]> wrote:\n> >\n> > I've previously noted in \"Add last commit LSN to\n> > pg_last_committed_xact()\" [1] that it's not possible to monitor how\n> > many bytes of WAL behind a logical replication slot is (computing such\n> > is obviously trivial for physical slots) because the slot doesn't need\n> > to replicate beyond the last commit. In some cases it's possible for\n> > the current WAL location to be far beyond the last commit. A few\n> > examples:\n> >\n> > - An idle server with checkout_timeout at a lower value (so lots of\n> > empty WAL advance).\n> > - Very large transactions: particularly insidious because committing a\n> > 1 GB transaction after a small transaction may show almost zero time\n> > lag even though quite a bit of data needs to be processed and sent\n> > over the wire (so time to replay is significantly different from\n> > current lag).\n> > - A cluster with multiple databases complicates matters further,\n> > because while physical replication is cluster-wide, the LSNs that\n> > matter for logical replication are database specific.\n> >\n> > Since we don't expose the most recent commit's LSN there's no way to\n> > say \"the WAL is currently 1250, the last commit happened at 1000, the\n> > slot has flushed up to 800, therefore there are at most 200 bytes\n> > replication needs to read through to catch up.\n> >\n> > In the aforementioned thread [1] I'd proposed a patch that added a SQL\n> > function pg_last_commit_lsn() to expose the most recent commit's LSN.\n> > Robert Haas didn't think the initial version's modifications to\n> > commit_ts made sense, and a subsequent approach adding the value to\n> > PGPROC didn't have strong objections, from what I can see, but it also\n> > didn't generate any enthusiasm.\n> >\n> > As I was thinking about how to improve things, I realized that this\n> > information (since it's for monitoring anyway) fits more naturally\n> > into the stats system. I'd originally thought of exposing it in\n> > pg_stat_wal, but that's per-cluster rather than per-database (indeed,\n> > this is a flaw I hadn't considered in the original patch), so I think\n> > pg_stat_database is the correct location.\n> >\n> > I've attached a patch to track the latest commit's LSN in pg_stat_database.\n>\n> I have changed the status of commitfest entry to \"Returned with\n> Feedback\" as Aleksander's comments have not yet been resolved. Please\n> feel free to post an updated version of the patch and update the\n> commitfest entry accordingly.\n\nThanks for reminding me; I'd lost track of this patch.\n\nRegards,\nJames Coleman\n\n\n",
"msg_date": "Wed, 17 Jan 2024 21:12:10 -0500",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add last_commit_lsn to pg_stat_database"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\", but it seems like\nthere was some CFbot test failure last time it was run [1]. Please\nhave a look and post an updated version if necessary.\n\n======\n[1] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4355\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 14:26:03 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add last_commit_lsn to pg_stat_database"
},
{
"msg_contents": "On Sun, Jan 21, 2024 at 10:26 PM Peter Smith <[email protected]> wrote:\n>\n> 2024-01 Commitfest.\n>\n> Hi, This patch has a CF status of \"Needs Review\", but it seems like\n> there was some CFbot test failure last time it was run [1]. Please\n> have a look and post an updated version if necessary.\n>\n> ======\n> [1] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4355\n>\n> Kind Regards,\n> Peter Smith.\n\nUpdated patch attached,\n\nThanks,\nJames Coleman",
"msg_date": "Mon, 22 Jan 2024 19:29:30 -0500",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add last_commit_lsn to pg_stat_database"
},
{
"msg_contents": "Hi James,\n\nThanks for the updated patch. I don't have a clear opinion on the\nfeature and whether this is the way to implement it, but I have two\nsimple questions.\n\n1) Do we really need to modify RecordTransactionCommitPrepared and\nXactLogCommitRecord to return the LSN of the commit record? Can't we\njust use XactLastRecEnd? It's good enough for advancing\nreplorigin_session_origin_lsn, why wouldn't it be good enough for this?\n\n\n2) Earlier in this thread it was claimed the function returning the\nlast_commit_lsn is STABLE, but I have to admit it's not clear to me why\nthat's the case :-( All the pg_stat_get_db_* functions are marked as\nstable, so I guess it's thanks to the pgstat system ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 18 Feb 2024 02:28:06 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add last_commit_lsn to pg_stat_database"
},
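For readers wanting to check how the existing functions are marked, the volatility flag lives in pg_proc ('i' = immutable, 's' = stable, 'v' = volatile); the query below is purely illustrative.

    SELECT proname, provolatile
    FROM pg_proc
    WHERE proname LIKE 'pg_stat_get_db%';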
{
"msg_contents": "On Sun, Feb 18, 2024 at 02:28:06AM +0100, Tomas Vondra wrote:\n> Thanks for the updated patch. I don't have a clear opinion on the\n> feature and whether this is the way to implement it, but I have two\n> simple questions.\n\nSome users I know of would be really happy to be able to get this\ninformation for each database in a single view, so that feels natural\nto plug the information into pg_stat_database.\n\n> 1) Do we really need to modify RecordTransactionCommitPrepared and\n> XactLogCommitRecord to return the LSN of the commit record? Can't we\n> just use XactLastRecEnd? It's good enough for advancing\n> replorigin_session_origin_lsn, why wouldn't it be good enough for this?\n\nOr XactLastCommitEnd? The changes in AtEOXact_PgStat() are not really\nattractive for what's a static variable in all the backends.\n\n> 2) Earlier in this thread it was claimed the function returning the\n> last_commit_lsn is STABLE, but I have to admit it's not clear to me why\n> that's the case :-( All the pg_stat_get_db_* functions are marked as\n> stable, so I guess it's thanks to the pgstat system ...\n\nThe consistency of the shared stats data depends on\nstats_fetch_consistency. The default of \"cache\" makes sure that the\nvalues don't change across a scan, until the end of a transaction.\n\nI have not paid much attention about that until now, but note that it\nwould not be the case of \"none\" where the data is retrieved each time\nit is requested. So it seems to me that these functions should be\nactually volatile, not stable, because they could deliver different\nvalues across the same scan as an effect of the design behind\npgstat_fetch_entry() and a non-default stats_fetch_consistency. I may\nbe missing something, of course.\n--\nMichael",
"msg_date": "Mon, 19 Feb 2024 15:57:38 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add last_commit_lsn to pg_stat_database"
},
{
"msg_contents": "On 2/19/24 07:57, Michael Paquier wrote:\n> On Sun, Feb 18, 2024 at 02:28:06AM +0100, Tomas Vondra wrote:\n>> Thanks for the updated patch. I don't have a clear opinion on the\n>> feature and whether this is the way to implement it, but I have two\n>> simple questions.\n> \n> Some users I know of would be really happy to be able to get this\n> information for each database in a single view, so that feels natural\n> to plug the information into pg_stat_database.\n> \n\nOK\n\n>> 1) Do we really need to modify RecordTransactionCommitPrepared and\n>> XactLogCommitRecord to return the LSN of the commit record? Can't we\n>> just use XactLastRecEnd? It's good enough for advancing\n>> replorigin_session_origin_lsn, why wouldn't it be good enough for this?\n> \n> Or XactLastCommitEnd?\n\nBut that's not set in RecordTransactionCommitPrepared (or twophase.c in\ngeneral), and the patch seems to need that.\n\n> The changes in AtEOXact_PgStat() are not really\n> attractive for what's a static variable in all the backends.\n> \n\nI'm sorry, I'm not sure what \"changes not being attractive\" means :-(\n\n>> 2) Earlier in this thread it was claimed the function returning the\n>> last_commit_lsn is STABLE, but I have to admit it's not clear to me why\n>> that's the case :-( All the pg_stat_get_db_* functions are marked as\n>> stable, so I guess it's thanks to the pgstat system ...\n> \n> The consistency of the shared stats data depends on\n> stats_fetch_consistency. The default of \"cache\" makes sure that the\n> values don't change across a scan, until the end of a transaction.\n> \n> I have not paid much attention about that until now, but note that it\n> would not be the case of \"none\" where the data is retrieved each time\n> it is requested. So it seems to me that these functions should be\n> actually volatile, not stable, because they could deliver different\n> values across the same scan as an effect of the design behind\n> pgstat_fetch_entry() and a non-default stats_fetch_consistency. I may\n> be missing something, of course.\n\nRight. If I do this:\n\ncreate or replace function get_db_lsn(oid) returns pg_lsn as $$\ndeclare\n v_lsn pg_lsn;\nbegin\n select last_commit_lsn into v_lsn from pg_stat_database\n where datid = $1;\n return v_lsn;\nend; $$ language plpgsql;\n\nselect min(l), max(l) from (select i, get_db_lsn(16384) AS l from\ngenerate_series(1,100000) s(i)) foo;\n\nand run this concurrently with a pgbench on the same database (with OID\n16384), I get:\n\n- stats_fetch_consistency=cache: always the same min/max value\n\n- stats_fetch_consistency=none: min != max\n\nWhich would suggest you're right and this should be VOLATILE and not\nSTABLE. But then again - the existing pg_stat_get_db_* functions are all\nmarked as STABLE, so (a) is that correct and (b) why should this\nfunction be marked different?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 19 Feb 2024 10:26:43 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add last_commit_lsn to pg_stat_database"
},
{
"msg_contents": "On Mon, Feb 19, 2024 at 10:26:43AM +0100, Tomas Vondra wrote:\n> On 2/19/24 07:57, Michael Paquier wrote:\n> > On Sun, Feb 18, 2024 at 02:28:06AM +0100, Tomas Vondra wrote:\n>>> 1) Do we really need to modify RecordTransactionCommitPrepared and\n>>> XactLogCommitRecord to return the LSN of the commit record? Can't we\n>>> just use XactLastRecEnd? It's good enough for advancing\n>>> replorigin_session_origin_lsn, why wouldn't it be good enough for this?\n>> \n>> Or XactLastCommitEnd?\n> \n> But that's not set in RecordTransactionCommitPrepared (or twophase.c in\n> general), and the patch seems to need that.\n\nHmm. Okay.\n\n> > The changes in AtEOXact_PgStat() are not really\n> > attractive for what's a static variable in all the backends.\n> \n> I'm sorry, I'm not sure what \"changes not being attractive\" means :-(\n\nIt just means that I am not much a fan of the signature changes done\nfor RecordTransactionCommit() and AtEOXact_PgStat_Database(), adding a\ndependency to a specify LSN. Your suggestion to switching to\nXactLastRecEnd should avoid that.\n\n> - stats_fetch_consistency=cache: always the same min/max value\n> \n> - stats_fetch_consistency=none: min != max\n> \n> Which would suggest you're right and this should be VOLATILE and not\n> STABLE. But then again - the existing pg_stat_get_db_* functions are all\n> marked as STABLE, so (a) is that correct and (b) why should this\n> function be marked different?\n\nThis can look like an oversight of 5891c7a8ed8f to me. I've skimmed\nthrough the threads related to this commit and messages around [1]\nexplain why this GUC exists and why we have both behaviors, but I'm\nnot seeing a discussion about the volatibility of these functions.\nThe current situation is not bad either, the default makes these\nfunctions stable, and I'd like to assume that users of this GUC know\nwhat they do. Perhaps Andres or Horiguchi-san can comment on that.\n\nhttps://www.postgresql.org/message-id/[email protected]\n--\nMichael",
"msg_date": "Tue, 20 Feb 2024 07:56:28 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add last_commit_lsn to pg_stat_database"
},
{
"msg_contents": "> I've previously noted in \"Add last commit LSN to\n> pg_last_committed_xact()\" [1] that it's not possible to monitor how\n> many bytes of WAL behind a logical replication slot is (computing such\n> is obviously trivial for physical slots) because the slot doesn't need\n> to replicate beyond the last commit. In some cases it's possible for\n> the current WAL location to be far beyond the last commit. A few\n> examples:\n> \n> - An idle server with checkout_timeout at a lower value (so lots of\n> empty WAL advance).\n> - Very large transactions: particularly insidious because committing a\n> 1 GB transaction after a small transaction may show almost zero time\n> lag even though quite a bit of data needs to be processed and sent\n> over the wire (so time to replay is significantly different from\n> current lag).\n> - A cluster with multiple databases complicates matters further,\n> because while physical replication is cluster-wide, the LSNs that\n> matter for logical replication are database specific.\n> \n> \n> Since we don't expose the most recent commit's LSN there's no way to\n> say \"the WAL is currently 1250, the last commit happened at 1000, the\n> slot has flushed up to 800, therefore there are at most 200 bytes\n> replication needs to read through to catch up.\n\nI'm not sure I fully understand the problem. What are you doing \ncurrently to measure the lag? If you look at pg_replication_slots today, \nconfirmed_flush_lsn advances also when you do pg_switch_wal(), so \nlooking at the diff between confirmed_flush_lsn and pg_current_wal_lsn() \nworks, no?\n\nAnd on the other hand, even if you expose the database's last commit \nLSN, you can have an publication that includes only a subset of tables. \nOr commits that don't write to any table at all. So I'm not sure why the \ndatabase's last commit LSN is relevant. Getting the last LSN that did \nsomething that needs to be replicated through the publication might be \nuseful, but that's not what what this patch does.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 7 Mar 2024 19:30:35 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add last_commit_lsn to pg_stat_database"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nHello\r\n\r\nI think it is convenient to know the last commit LSN of a database for troubleshooting and monitoring purposes similar to the \"pd_lsn\" field in a page header that records the last LSN that modifies this page. I am not sure if it can help determine the WAL location to resume / catch up in logical replication as the \"confirmed_flush_lsn\" and \"restart_lsn\" in a logical replication slot are already there to figure out the resume / catchup point as I understand. Anyhow, I had a look at the patch, in general it looks good to me. Also ran it against the CI bot, which also turned out fine. Just one small comment though. The patch supports the recording of last commit lsn from 2 phase commit as well, but the test does not seem to have a test on 2 phase commit. In my opinion, it should test whether the last commit lsn increments when a prepared transaction is committed in addition to a regular transaction.\r\n\r\nthank you\r\n-----------------\r\nCary Huang\r\nwww.highgo.ca",
"msg_date": "Fri, 05 Apr 2024 22:27:51 +0000",
"msg_from": "Cary Huang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add last_commit_lsn to pg_stat_database"
},
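For reference, the two-phase path the review above asks the test to cover comes down to a sequence like the following (it needs max_prepared_transactions > 0; the table name is illustrative only). The suggested test would assert that last_commit_lsn advances after the COMMIT PREPARED as well as after a plain COMMIT.

    BEGIN;
    INSERT INTO test_tab VALUES (1);
    PREPARE TRANSACTION 'tx1';
    COMMIT PREPARED 'tx1';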
{
"msg_contents": "Hi James,\n\nThere are some review in the thread that need to be addressed.\nit seems that we need to mark this entry \"Waiting on Author\" and move to\nthe next CF [0].\n\nThanks\n\n[0] https://commitfest.postgresql.org/47/4355/\n\nOn Sat, 10 Jun 2023 at 05:27, James Coleman <[email protected]> wrote:\n\n> I've previously noted in \"Add last commit LSN to\n> pg_last_committed_xact()\" [1] that it's not possible to monitor how\n> many bytes of WAL behind a logical replication slot is (computing such\n> is obviously trivial for physical slots) because the slot doesn't need\n> to replicate beyond the last commit. In some cases it's possible for\n> the current WAL location to be far beyond the last commit. A few\n> examples:\n>\n> - An idle server with checkout_timeout at a lower value (so lots of\n> empty WAL advance).\n> - Very large transactions: particularly insidious because committing a\n> 1 GB transaction after a small transaction may show almost zero time\n> lag even though quite a bit of data needs to be processed and sent\n> over the wire (so time to replay is significantly different from\n> current lag).\n> - A cluster with multiple databases complicates matters further,\n> because while physical replication is cluster-wide, the LSNs that\n> matter for logical replication are database specific.\n>\n> Since we don't expose the most recent commit's LSN there's no way to\n> say \"the WAL is currently 1250, the last commit happened at 1000, the\n> slot has flushed up to 800, therefore there are at most 200 bytes\n> replication needs to read through to catch up.\n>\n> In the aforementioned thread [1] I'd proposed a patch that added a SQL\n> function pg_last_commit_lsn() to expose the most recent commit's LSN.\n> Robert Haas didn't think the initial version's modifications to\n> commit_ts made sense, and a subsequent approach adding the value to\n> PGPROC didn't have strong objections, from what I can see, but it also\n> didn't generate any enthusiasm.\n>\n> As I was thinking about how to improve things, I realized that this\n> information (since it's for monitoring anyway) fits more naturally\n> into the stats system. I'd originally thought of exposing it in\n> pg_stat_wal, but that's per-cluster rather than per-database (indeed,\n> this is a flaw I hadn't considered in the original patch), so I think\n> pg_stat_database is the correct location.\n>\n> I've attached a patch to track the latest commit's LSN in pg_stat_database.\n>\n> Regards,\n> James Coleman\n>\n> 1:\n> https://www.postgresql.org/message-id/flat/CAAaqYe9QBiAu+j8rBun_JKBRe-3HeKLUhfVVsYfsxQG0VqLXsA@mail.gmail.com\n>\n\nHi James,\nThere are some review in the thread that need to be addressed.it seems that we need to mark this entry \"Waiting on Author\" and move to the next CF [0].\nThanks[0] https://commitfest.postgresql.org/47/4355/On Sat, 10 Jun 2023 at 05:27, James Coleman <[email protected]> wrote:I've previously noted in \"Add last commit LSN to\npg_last_committed_xact()\" [1] that it's not possible to monitor how\nmany bytes of WAL behind a logical replication slot is (computing such\nis obviously trivial for physical slots) because the slot doesn't need\nto replicate beyond the last commit. In some cases it's possible for\nthe current WAL location to be far beyond the last commit. 
A few\nexamples:\n\n- An idle server with checkout_timeout at a lower value (so lots of\nempty WAL advance).\n- Very large transactions: particularly insidious because committing a\n1 GB transaction after a small transaction may show almost zero time\nlag even though quite a bit of data needs to be processed and sent\nover the wire (so time to replay is significantly different from\ncurrent lag).\n- A cluster with multiple databases complicates matters further,\nbecause while physical replication is cluster-wide, the LSNs that\nmatter for logical replication are database specific.\n\nSince we don't expose the most recent commit's LSN there's no way to\nsay \"the WAL is currently 1250, the last commit happened at 1000, the\nslot has flushed up to 800, therefore there are at most 200 bytes\nreplication needs to read through to catch up.\n\nIn the aforementioned thread [1] I'd proposed a patch that added a SQL\nfunction pg_last_commit_lsn() to expose the most recent commit's LSN.\nRobert Haas didn't think the initial version's modifications to\ncommit_ts made sense, and a subsequent approach adding the value to\nPGPROC didn't have strong objections, from what I can see, but it also\ndidn't generate any enthusiasm.\n\nAs I was thinking about how to improve things, I realized that this\ninformation (since it's for monitoring anyway) fits more naturally\ninto the stats system. I'd originally thought of exposing it in\npg_stat_wal, but that's per-cluster rather than per-database (indeed,\nthis is a flaw I hadn't considered in the original patch), so I think\npg_stat_database is the correct location.\n\nI've attached a patch to track the latest commit's LSN in pg_stat_database.\n\nRegards,\nJames Coleman\n\n1: https://www.postgresql.org/message-id/flat/CAAaqYe9QBiAu+j8rBun_JKBRe-3HeKLUhfVVsYfsxQG0VqLXsA@mail.gmail.com",
"msg_date": "Mon, 8 Apr 2024 15:21:30 +0300",
"msg_from": "Kirill Reshke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add last_commit_lsn to pg_stat_database"
},
{
"msg_contents": "On Thu, Mar 7, 2024 at 10:30 AM Heikki Linnakangas <[email protected]> wrote:\n>\n> > I've previously noted in \"Add last commit LSN to\n> > pg_last_committed_xact()\" [1] that it's not possible to monitor how\n> > many bytes of WAL behind a logical replication slot is (computing such\n> > is obviously trivial for physical slots) because the slot doesn't need\n> > to replicate beyond the last commit. In some cases it's possible for\n> > the current WAL location to be far beyond the last commit. A few\n> > examples:\n> >\n> > - An idle server with checkout_timeout at a lower value (so lots of\n> > empty WAL advance).\n> > - Very large transactions: particularly insidious because committing a\n> > 1 GB transaction after a small transaction may show almost zero time\n> > lag even though quite a bit of data needs to be processed and sent\n> > over the wire (so time to replay is significantly different from\n> > current lag).\n> > - A cluster with multiple databases complicates matters further,\n> > because while physical replication is cluster-wide, the LSNs that\n> > matter for logical replication are database specific.\n> >\n> >\n> > Since we don't expose the most recent commit's LSN there's no way to\n> > say \"the WAL is currently 1250, the last commit happened at 1000, the\n> > slot has flushed up to 800, therefore there are at most 200 bytes\n> > replication needs to read through to catch up.\n>\n> I'm not sure I fully understand the problem. What are you doing\n> currently to measure the lag? If you look at pg_replication_slots today,\n> confirmed_flush_lsn advances also when you do pg_switch_wal(), so\n> looking at the diff between confirmed_flush_lsn and pg_current_wal_lsn()\n> works, no?\n\nNo, it's not that simple because of the \"large, uncommitted\ntransaction\" case I noted in the OP. Suppose I have a batch system,\nand a job in that system has written 1 GB of data but not yet\ncommitted (or rolled back). In this case confirmed_flush_lsn cannot\nadvance, correct?\n\nAnd so having a \"last commit LSN\" allows me to know how far back the\n\"last possibly replicatable write\"\n\n> And on the other hand, even if you expose the database's last commit\n> LSN, you can have an publication that includes only a subset of tables.\n> Or commits that don't write to any table at all. So I'm not sure why the\n> database's last commit LSN is relevant. Getting the last LSN that did\n> something that needs to be replicated through the publication might be\n> useful, but that's not what what this patch does.\n\nI think that's fine, because as you noted earlier the\nconfirmed_flush_lsn could advance beyond that point anyway (if there's\nnothing to replicate), but in the case where:\n\n1. confirmed_flush_lsn is not advancing, and\n2. last_commit_lsn is not advancing, and\n3. pg_current_wal_lsn() has advanced a lot\n\nthen you can probably infer that there's a large amount of data that\nsimply cannot be completed by the subscriber, and so there's no \"real\"\ndelay. It also gives you an idea of how much data you will need to\nchurn through (even if not replicated) once the transaction commits.\n\nCertainly understanding the data here will be simplest in the case\nwhere 1.) there's a single database and 2.) all tables are in the\nreplication set, but I don't think the value is limited to that\nsituation either.\n\nRegards,\nJames Coleman\n\n\n",
"msg_date": "Tue, 28 May 2024 09:11:11 -0600",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add last_commit_lsn to pg_stat_database"
},
{
"msg_contents": "On Mon, Feb 19, 2024 at 3:56 PM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Feb 19, 2024 at 10:26:43AM +0100, Tomas Vondra wrote:\n> > On 2/19/24 07:57, Michael Paquier wrote:\n> > > On Sun, Feb 18, 2024 at 02:28:06AM +0100, Tomas Vondra wrote:\n> >>> 1) Do we really need to modify RecordTransactionCommitPrepared and\n> >>> XactLogCommitRecord to return the LSN of the commit record? Can't we\n> >>> just use XactLastRecEnd? It's good enough for advancing\n> >>> replorigin_session_origin_lsn, why wouldn't it be good enough for this?\n> >>\n> >> Or XactLastCommitEnd?\n> >\n> > But that's not set in RecordTransactionCommitPrepared (or twophase.c in\n> > general), and the patch seems to need that.\n>\n> Hmm. Okay.\n>\n> > > The changes in AtEOXact_PgStat() are not really\n> > > attractive for what's a static variable in all the backends.\n> >\n> > I'm sorry, I'm not sure what \"changes not being attractive\" means :-(\n>\n> It just means that I am not much a fan of the signature changes done\n> for RecordTransactionCommit() and AtEOXact_PgStat_Database(), adding a\n> dependency to a specify LSN. Your suggestion to switching to\n> XactLastRecEnd should avoid that.\n\nAttached is an updated patch that uses XactLastCommitEnd and therefore\navoids all of those signature changes.\n\nWe can't use XactLastCommitEnd because it gets set to 0 immediately\nafter RecordTransactionCommit() sets XactLastCommitEnd to its value.\n\nI added a test for two-phase commit, as well, and in so doing I\ndiscovered something that I found curious: when I do \"COMMIT PREPARED\n't1'\", not only does RecordTransactionCommitPrepared() get called, but\neventually RecordTransactionCommit() is called as well before the\ncommand returns (despite the comments for those functions describing\nthem as two equivalents you need to change at the same time).\n\nSo it appears that we don't need to set XactLastCommitEnd in\nRecordTransactionCommitPrepared() for this to work, and, indeed,\nadding some logging to verify, the value of XactLastRecEnd we'd use to\nupdate XactLastCommitEnd is the same at the end of both of those\nfunctions during COMMIT PREPARED.\n\nI'd like to have someone weigh in on whether relying on\nRecordTransactionCommit() being called during COMMIT PREPARED is\ncorrect or not (if perchance there are non-obvious reasons why we\nshouldn't).\n\nRegards,\nJames Coleman",
"msg_date": "Tue, 28 May 2024 09:45:43 -0600",
"msg_from": "James Coleman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add last_commit_lsn to pg_stat_database"
},
{
"msg_contents": "At Tue, 20 Feb 2024 07:56:28 +0900, Michael Paquier <[email protected]> wrote in \n> On Mon, Feb 19, 2024 at 10:26:43AM +0100, Tomas Vondra wrote:\n> It just means that I am not much a fan of the signature changes done\n> for RecordTransactionCommit() and AtEOXact_PgStat_Database(), adding a\n> dependency to a specify LSN. Your suggestion to switching to\n> XactLastRecEnd should avoid that.\n> \n> > - stats_fetch_consistency=cache: always the same min/max value\n> > \n> > - stats_fetch_consistency=none: min != max\n> > \n> > Which would suggest you're right and this should be VOLATILE and not\n> > STABLE. But then again - the existing pg_stat_get_db_* functions are all\n> > marked as STABLE, so (a) is that correct and (b) why should this\n> > function be marked different?\n> \n> This can look like an oversight of 5891c7a8ed8f to me. I've skimmed\n> through the threads related to this commit and messages around [1]\n> explain why this GUC exists and why we have both behaviors, but I'm\n> not seeing a discussion about the volatibility of these functions.\n> The current situation is not bad either, the default makes these\n> functions stable, and I'd like to assume that users of this GUC know\n> what they do. Perhaps Andres or Horiguchi-san can comment on that.\n> \n> https://www.postgresql.org/message-id/[email protected]\n\nI agree that we cannot achieve, nor do we need, perfect MVCC behavior,\nand that completely volatile behavior is unusable. I believe the\nfunctions are kept marked as stable, as this is the nearest and most\nusable volatility for the default behavior, since function volatility\nis not switchable on-the-fly. This approach seems least trouble-prone\nto me.\n\nThe consistency of the functions are discussed here.\n\nhttps://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-STATS-VIEWS\n\n> This is a feature, not a bug, because ... Conversely, if it's known\n> that statistics are only accessed once, caching accessed statistics is\n> unnecessary and can be avoided by setting stats_fetch_consistency to\n> none.\n\nIt seems to me that this description already implies such an\nincongruity in the functions' behavior from the \"stable\" behavior, but\nwe might want to explicitly mention that incongruity.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 05 Jun 2024 14:25:15 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add last_commit_lsn to pg_stat_database"
}
] |
[
{
"msg_contents": "On Wed, Aug 24, 2022 at 6:42 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Tue, 16 Aug 2022 18:40:49 +0200, Alvaro Herrera <[email protected]> wrote in\n> > On 2022-Aug-16, Andrew Dunstan wrote:\n> >\n> > > I don't think there's a hard and fast rule about it. Certainly the case\n> > > would be more compelling if the functions were used across different TAP\n> > > suites. The SSL suite has suite-specific modules. That's a pattern also\n> > > worth considering. e.g something like.\n> > >\n> > > use FindBin qw($Bin);\n> > > use lib $Bin;\n> > > use MySuite;\n> > >\n> > > and then you put your common routines in MySuite.pm in the same\n> > > directory as the TAP test files.\n> >\n> > Yeah, I agree with that for advance_wal. Regarding find_in_log, that\n> > one seems general enough to warrant being in Cluster.pm -- consider\n> > issues_sql_like, which also slurps_file($log). That could be unified a\n> > little bit, I think.\n>\n> +1\n\nWith the generalized function for find_in_log() has been added as part\nof https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=e25e5f7fc6b74c9d4ce82627e9145ef5537412e2,\nI'm proposing a generalized function for advance_wal(). Please find\nthe attached patch.\n\nI tried to replace the existing tests with the new cluster function\nadvance_wal(). Please let me know if I'm missing any other tests.\nAlso, this new function can be used by an in-progress feature -\nhttps://commitfest.postgresql.org/43/3663/.\n\nThoughts?\n\nFWIW, it's discussed here -\nhttps://www.postgresql.org/message-id/ZIKVd%2Ba43UfsIWJE%40paquier.xyz.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 11 Jun 2023 07:14:54 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "Mmm. It seems like the email I thought I'd sent failed to go out.\r\n\r\nAt Sun, 11 Jun 2023 07:14:54 +0530, Bharath Rupireddy <[email protected]> wrote in \r\n> On Wed, Aug 24, 2022 at 6:42 AM Kyotaro Horiguchi\r\n> <[email protected]> wrote:\r\n> >\r\n> > At Tue, 16 Aug 2022 18:40:49 +0200, Alvaro Herrera <[email protected]> wrote in\r\n> > > On 2022-Aug-16, Andrew Dunstan wrote:\r\n> > > Yeah, I agree with that for advance_wal. Regarding find_in_log, that\r\n> > > one seems general enough to warrant being in Cluster.pm -- consider\r\n> > > issues_sql_like, which also slurps_file($log). That could be unified a\r\n> > > little bit, I think.\r\n> >\r\n> > +1\r\n> \r\n> With the generalized function for find_in_log() has been added as part\r\n> of https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=e25e5f7fc6b74c9d4ce82627e9145ef5537412e2,\r\n> I'm proposing a generalized function for advance_wal(). Please find\r\n> the attached patch.\r\n> \r\n> I tried to replace the existing tests with the new cluster function\r\n> advance_wal(). Please let me know if I'm missing any other tests.\r\n> Also, this new function can be used by an in-progress feature -\r\n> https://commitfest.postgresql.org/43/3663/.\r\n> \r\n> Thoughts?\r\n\r\nThanks!\r\n\r\n+\t\t\t\"CREATE TABLE tt (); DROP TABLE tt; SELECT pg_switch_wal();\");\r\n\r\nAt least since 11, we can utilize pg_logical_emit_message() for this\r\npurpose. It's more lightweight and seems appropriate, not only because\r\nit doesn't cause any side effects but also bacause we don't have to\r\nworry about name conflicts.\r\n\r\n\r\n-\t\t SELECT 'finished';\",\r\n-\t\ttimeout => $PostgreSQL::Test::Utils::timeout_default));\r\n-is($result[1], 'finished', 'check if checkpoint command is not blocked');\r\n-\r\n+$node_primary2->advance_wal(1);\r\n+$node_primary2->safe_psql('postgres', 'CHECKPOINT;');\r\n\r\nThis test anticipates that the checkpoint could get blocked. Shouldn't\r\nwe keep the timeout?\r\n\r\n\r\n-$node_primary->safe_psql(\r\n-\t'postgres', \"create table retain_test(a int);\r\n-\t\t\t\t\t\t\t\t\t select pg_switch_wal();\r\n-\t\t\t\t\t\t\t\t\t insert into retain_test values(1);\r\n-\t\t\t\t\t\t\t\t\t checkpoint;\");\r\n+$node_primary->advance_wal(1);\r\n+$node_primary->safe_psql('postgres', \"checkpoint;\");\r\n\r\nThe original test generated some WAL after the segment switch, which\r\nappears to be a significant characteristics of the test.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Thu, 15 Jun 2023 13:40:15 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 01:40:15PM +0900, Kyotaro Horiguchi wrote:\n> +\t\t\t\"CREATE TABLE tt (); DROP TABLE tt; SELECT pg_switch_wal();\");\n> \n> At least since 11, we can utilize pg_logical_emit_message() for this\n> purpose. It's more lightweight and seems appropriate, not only because\n> it doesn't cause any side effects but also bacause we don't have to\n> worry about name conflicts.\n\nMaking this as cheap as possible by design is a good concept for a\ncommon routine. +1.\n\n> -\t\t SELECT 'finished';\",\n> -\t\ttimeout => $PostgreSQL::Test::Utils::timeout_default));\n> -is($result[1], 'finished', 'check if checkpoint command is not blocked');\n> -\n> +$node_primary2->advance_wal(1);\n> +$node_primary2->safe_psql('postgres', 'CHECKPOINT;');\n> \n> This test anticipates that the checkpoint could get blocked. Shouldn't\n> we keep the timeout?\n\nIndeed, this would partially invalidate what's getting tested in light\nof 1816a1c6 where we run a secondary command after the checkpoint. So\nthe last SELECT should remain around.\n\n> -$node_primary->safe_psql(\n> - 'postgres', \"create table retain_test(a int);\n> - select pg_switch_wal();\n> - insert into retain_test values(1);\n> - checkpoint;\");\n> +$node_primary->advance_wal(1);\n> +$node_primary->safe_psql('postgres', \"checkpoint;\");\n> \n> The original test generated some WAL after the segment switch, which\n> appears to be a significant characteristics of the test.\n\nStill it does not matter for this specific case? The logical slot has\nbeen already invalidated, so we don't care much about logical changes\nin WAL, do we?\n--\nMichael",
"msg_date": "Fri, 16 Jun 2023 11:30:15 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "Thanks for the comments.\n\nAt Fri, 16 Jun 2023 11:30:15 +0900, Michael Paquier <[email protected]> wrote in \n> > -$node_primary->safe_psql(\n> > - 'postgres', \"create table retain_test(a int);\n> > - select pg_switch_wal();\n> > - insert into retain_test values(1);\n> > - checkpoint;\");\n> > +$node_primary->advance_wal(1);\n> > +$node_primary->safe_psql('postgres', \"checkpoint;\");\n> > \n> > The original test generated some WAL after the segment switch, which\n> > appears to be a significant characteristics of the test.\n> \n> Still it does not matter for this specific case? The logical slot has\n> been already invalidated, so we don't care much about logical changes\n> in WAL, do we?\n\nThe change itself doesn't seem to matter, but it seems intended to let\ncheckpoint trigger the removal of the last segment. However, I'm\nunsure how the insert would influence this that way. If my\nunderstanding is correct, then I'd support its removal.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 16 Jun 2023 13:30:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "On Fri, Jun 16, 2023 at 8:00 AM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Jun 15, 2023 at 01:40:15PM +0900, Kyotaro Horiguchi wrote:\n> > + \"CREATE TABLE tt (); DROP TABLE tt; SELECT pg_switch_wal();\");\n> >\n> > At least since 11, we can utilize pg_logical_emit_message() for this\n> > purpose. It's more lightweight and seems appropriate, not only because\n> > it doesn't cause any side effects but also bacause we don't have to\n> > worry about name conflicts.\n>\n> Making this as cheap as possible by design is a good concept for a\n> common routine. +1.\n\nWhile it seems reasonable to use pg_logical_emit_message, it won't\nwork for all the cases - what if someone wants to advance WAL by a few\nWAL files? I think pg_switch_wal() is the way, no? For instance, the\nreplslot_limit.pl test increases the WAL in a very calculated way - it\nincreases by 5 WAL files. So, -1 to use pg_logical_emit_message.\n\nI understand the naming conflicts for the table name used ('tt' in\nthis case). If the table name 'tt' looks so simple and easy for\nsomeone to have tables with that name in their tests file, we can\ngenerate a random table name in advance_wal, something like in the\nattached v2 patch.\n\n> > - SELECT 'finished';\",\n> > - timeout => $PostgreSQL::Test::Utils::timeout_default));\n> > -is($result[1], 'finished', 'check if checkpoint command is not blocked');\n> > -\n> > +$node_primary2->advance_wal(1);\n> > +$node_primary2->safe_psql('postgres', 'CHECKPOINT;');\n> >\n> > This test anticipates that the checkpoint could get blocked. Shouldn't\n> > we keep the timeout?\n>\n> Indeed, this would partially invalidate what's getting tested in light\n> of 1816a1c6 where we run a secondary command after the checkpoint. So\n> the last SELECT should remain around.\n\nChanged.\n\n> > -$node_primary->safe_psql(\n> > - 'postgres', \"create table retain_test(a int);\n> > - select pg_switch_wal();\n> > - insert into retain_test values(1);\n> > - checkpoint;\");\n> > +$node_primary->advance_wal(1);\n> > +$node_primary->safe_psql('postgres', \"checkpoint;\");\n> >\n> > The original test generated some WAL after the segment switch, which\n> > appears to be a significant characteristics of the test.\n>\n> Still it does not matter for this specific case? The logical slot has\n> been already invalidated, so we don't care much about logical changes\n> in WAL, do we?\n\nCorrect, the slot has already been invalidated and the test is\nverifying that WAL isn't retained by the invalidated slot, so\nessentially what it needs is to generate \"some\" wal. So, using\nadvance_wal there seems fine to me. CFBot doesn't complain anything -\nhttps://github.com/BRupireddy/postgres/tree/add_a_TAP_test_function_to_generate_WAL_v2.\n\nAttached the v2 patch. Thoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 19 Jul 2023 16:11:13 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 4:11 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Attached the v2 patch. Thoughts?\n\nRebase needed, attached v3 patch.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 18 Dec 2023 11:09:41 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "On Mon, Dec 18, 2023, at 2:39 AM, Bharath Rupireddy wrote:\n> Rebase needed, attached v3 patch.\n\nI think you don't understand the suggestion proposed by Michael and Kyotaro. If\nyou do a comparison with the following SQL commands:\n\neuler=# select pg_walfile_name(pg_current_wal_lsn());\n pg_walfile_name \n--------------------------\n000000010000000000000040\n(1 row)\n\neuler=# select pg_logical_emit_message(true, 'prefix', 'message4');\npg_logical_emit_message \n-------------------------\n0/400000A8\n(1 row)\n\neuler=# select pg_switch_wal();\npg_switch_wal \n---------------\n0/400000F0\n(1 row)\n\neuler=# create table cc (b int);\nCREATE TABLE\neuler=# drop table cc;\nDROP TABLE\neuler=# select pg_switch_wal();\npg_switch_wal \n---------------\n0/41017C88\n(1 row)\n\neuler=# select pg_walfile_name(pg_current_wal_lsn());\n pg_walfile_name \n--------------------------\n000000010000000000000041\n(1 row)\n\nYou get\n\n$ pg_waldump 000000010000000000000040\nrmgr: Standby len (rec/tot): 50/ 50, tx: 0, lsn: 0/40000028, prev 0/3F0001C0, desc: RUNNING_XACTS nextXid 295180 latestCompletedXid 295179 oldestRunningXid 295180\nrmgr: LogicalMessage len (rec/tot): 65/ 65, tx: 295180, lsn: 0/40000060, prev 0/40000028, desc: MESSAGE transactional, prefix \"prefix\"; payload (8 bytes): 6D 65 73 73 61 67 65 34\nrmgr: Transaction len (rec/tot): 46/ 46, tx: 295180, lsn: 0/400000A8, prev 0/40000060, desc: COMMIT 2023-12-18 08:35:06.821322 -03\nrmgr: XLOG len (rec/tot): 24/ 24, tx: 0, lsn: 0/400000D8, prev 0/400000A8, desc: SWITCH \n\n$ pg_waldump 000000010000000000000041\nrmgr: Standby len (rec/tot): 50/ 50, tx: 0, lsn: 0/41000028, prev 0/400000D8, desc: RUNNING_XACTS nextXid 295181 latestCompletedXid 295180 oldestRunningXid 295181\nrmgr: Storage len (rec/tot): 42/ 42, tx: 0, lsn: 0/41000060, prev 0/41000028, desc: CREATE base/33287/88102\nrmgr: Heap2 len (rec/tot): 60/ 60, tx: 295181, lsn: 0/41000090, prev 0/41000060, desc: NEW_CID rel: 1663/33287/1247, tid: 14/16, cmin: 0, cmax: 4294967295, combo: 4294967295\nrmgr: Heap len (rec/tot): 54/ 3086, tx: 295181, lsn: 0/410000D0, prev 0/41000090, desc: INSERT off: 16, flags: 0x00, blkref #0: rel 1663/33287/1247 blk 14 FPW\nrmgr: Btree len (rec/tot): 53/ 5133, tx: 295181, lsn: 0/41000CE0, prev 0/410000D0, desc: INSERT_LEAF off: 252, blkref #0: rel 1663/33287/2703 blk 2 FPW\n.\n.\n.\nrmgr: Btree len (rec/tot): 72/ 72, tx: 295181, lsn: 0/41016E48, prev 0/41014F00, desc: INSERT_LEAF off: 111, blkref #0: rel 1663/33287/2674 blk 7\nrmgr: Heap2 len (rec/tot): 69/ 69, tx: 295181, lsn: 0/41016E90, prev 0/41016E48, desc: PRUNE snapshotConflictHorizon: 295177, nredirected: 0, ndead: 7, nunused: 0, redirected: [], dead: [20, 21, 22, 23, 24, 26, 27], unused: [], blkref #0: rel 1663/33287/1249 blk 17\nrmgr: Transaction len (rec/tot): 385/ 385, tx: 295181, lsn: 0/41016ED8, prev 0/41016E90, desc: INVALIDATION ; inval msgs: catcache 80 catcache 79 catcache 80 catcache 79 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 snapshot 2608 relcache 88102\nrmgr: Standby len (rec/tot): 42/ 42, tx: 295181, lsn: 0/41017060, prev 0/41016ED8, desc: LOCK xid 295181 db 33287 rel 88102 \nrmgr: Transaction len (rec/tot): 405/ 405, tx: 295181, lsn: 0/41017090, prev 0/41017060, desc: COMMIT 2023-12-18 08:35:22.342122 -03; inval msgs: catcache 80 catcache 79 catcache 80 catcache 79 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 
catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 snapshot 2608 relcache 88102\nrmgr: Standby len (rec/tot): 42/ 42, tx: 295182, lsn: 0/41017228, prev 0/41017090, desc: LOCK xid 295182 db 33287 rel 88102 \nrmgr: Heap2 len (rec/tot): 61/ 61, tx: 295182, lsn: 0/41017258, prev 0/41017228, desc: PRUNE snapshotConflictHorizon: 295177, nredirected: 0, ndead: 3, nunused: 0, redirected: [], dead: [9, 12, 15], unused: [], blkref #0: rel 1663/33287/2608 blk 3\nrmgr: Heap2 len (rec/tot): 60/ 60, tx: 295182, lsn: 0/41017298, prev 0/41017258, desc: NEW_CID rel: 1663/33287/1247, tid: 14/17, cmin: 4294967295, cmax: 0, combo: 4294967295\nrmgr: Heap len (rec/tot): 54/ 54, tx: 295182, lsn: 0/410172D8, prev 0/41017298, desc: DELETE xmax: 295182, off: 17, infobits: [KEYS_UPDATED], flags: 0x00, blkref #0: rel 1663/33287/1247 blk 14\n.\n.\n.\nrmgr: Heap2 len (rec/tot): 60/ 60, tx: 295182, lsn: 0/410178D8, prev 0/410178A0, desc: NEW_CID rel: 1663/33287/2608, tid: 3/24, cmin: 4294967295, cmax: 2, combo: 4294967295\nrmgr: Heap len (rec/tot): 54/ 54, tx: 295182, lsn: 0/41017918, prev 0/410178D8, desc: DELETE xmax: 295182, off: 24, infobits: [KEYS_UPDATED], flags: 0x00, blkref #0: rel 1663/33287/2608 blk 3\nrmgr: Transaction len (rec/tot): 321/ 321, tx: 295182, lsn: 0/41017950, prev 0/41017918, desc: INVALIDATION ; inval msgs: catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 relcache 88102 snapshot 2608\nrmgr: Transaction len (rec/tot): 469/ 469, tx: 295182, lsn: 0/41017A98, prev 0/41017950, desc: COMMIT 2023-12-18 08:35:25.053905 -03; rels: base/33287/88102; dropped stats: 2/33287/88102; inval msgs: catcache 80 catcache 79 catcache 80 catcache 79 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 55 catcache 54 snapshot 2608 snapshot 2608 relcache 88102 snapshot 2608\nrmgr: XLOG len (rec/tot): 24/ 24, tx: 0, lsn: 0/41017C70, prev 0/41017A98, desc: SWITCH\n\nThe difference is\n\neuler=# select '0/400000A8'::pg_lsn - '0/40000028'::pg_lsn;\n?column? \n----------\n 128\n(1 row)\n\neuler=# select '0/41017A98'::pg_lsn - '0/41000028'::pg_lsn;\n?column? \n----------\n 96880\n(1 row)\n\n\nIt is cheaper.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 18 Dec 2023 08:48:09 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "On Mon, Dec 18, 2023 at 08:48:09AM -0300, Euler Taveira wrote:\n> It is cheaper.\n\nAgreed that this could just use a set of pg_logical_emit_message()\nwhen jumping across N segments. Another thing that seems quite\nimportant to me is to force a flush of WAL with the last segment\nswitch, and the new \"flush\" option of pg_logical_emit_message() can\nbe very handy for this purpose.\n--\nMichael",
"msg_date": "Tue, 19 Dec 2023 13:21:02 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "On Tue, Dec 19, 2023 at 9:51 AM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Dec 18, 2023 at 08:48:09AM -0300, Euler Taveira wrote:\n> > It is cheaper.\n>\n> Agreed that this could just use a set of pg_logical_emit_message()\n> when jumping across N segments.\n\nThanks. I missed the point of using pg_logical_emit_message() over\nCREATE .. DROP TABLE to generate WAL. And, I agree that it's better\nand relatively cheaper in terms of amount of WAL generated.\n\n> Another thing that seems quite\n> important to me is to force a flush of WAL with the last segment\n> switch, and the new \"flush\" option of pg_logical_emit_message() can\n> be very handy for this purpose.\n\nI used pg_logical_emit_message() in non-transactional mode without\nneeding an explicit WAL flush as the pg_switch_wal() does a WAL flush\nat the end [1].\n\nAttached v4 patch.\n\n[1]\n /*\n * If this was an XLOG_SWITCH record, flush the record and the empty\n * padding space that fills the rest of the segment, and perform\n * end-of-segment actions (eg, notifying archiver).\n */\n if (class == WALINSERT_SPECIAL_SWITCH)\n {\n TRACE_POSTGRESQL_WAL_SWITCH();\n XLogFlush(EndPos);\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 19 Dec 2023 11:25:50 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "On Tue, Dec 19, 2023 at 11:25:50AM +0530, Bharath Rupireddy wrote:\n> I used pg_logical_emit_message() in non-transactional mode without\n> needing an explicit WAL flush as the pg_switch_wal() does a WAL flush\n> at the end [1].\n\nIndeed, that should be enough to answer my comment.\n\n> Attached v4 patch.\n\nLGTM, thanks. Euler, what do you think?\n--\nMichael",
"msg_date": "Wed, 20 Dec 2023 08:00:19 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "On Tue, Dec 19, 2023, at 8:00 PM, Michael Paquier wrote:\n> On Tue, Dec 19, 2023 at 11:25:50AM +0530, Bharath Rupireddy wrote:\n> > I used pg_logical_emit_message() in non-transactional mode without\n> > needing an explicit WAL flush as the pg_switch_wal() does a WAL flush\n> > at the end [1].\n> \n> Indeed, that should be enough to answer my comment.\n> \n> > Attached v4 patch.\n> \n> LGTM, thanks. Euler, what do you think?\n> \n\nLGTM.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Tue, Dec 19, 2023, at 8:00 PM, Michael Paquier wrote:On Tue, Dec 19, 2023 at 11:25:50AM +0530, Bharath Rupireddy wrote:> I used pg_logical_emit_message() in non-transactional mode without> needing an explicit WAL flush as the pg_switch_wal() does a WAL flush> at the end [1].Indeed, that should be enough to answer my comment.> Attached v4 patch.LGTM, thanks. Euler, what do you think?LGTM.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Wed, 20 Dec 2023 00:24:04 -0300",
"msg_from": "\"Euler Taveira\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 12:24:04AM -0300, Euler Taveira wrote:\n> LGTM.\n\nI was eyeing at 020_messages.pl and it has a pg_switch_wal() after a\ntransaction rollbacked. 020_archive_status.pl creates a table, does \none segment switch, then a checkpoint (table is used afterwards).\nPerhaps these could be changed with the new routine, but it does not\nseem like this improves the readability of the tests, either, contrary\nto the ones updated here where a fake table is created to force some\nrecords. What do you think?\n\nWe have a few more pg_switch_wal() calls, as well, but these rely on\nWAL being already generated beforehand.\n\nI have added a comment about pg_logical_emit_message() being in\nnon-transactional mode and the flush implied by pg_switch_wal() as\nthat's important, edited a bit the whole, then applied the patch.\n--\nMichael",
"msg_date": "Thu, 21 Dec 2023 10:21:45 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> I have added a comment about pg_logical_emit_message() being in\n> non-transactional mode and the flush implied by pg_switch_wal() as\n> that's important, edited a bit the whole, then applied the patch.\n\nBuildfarm member skink has failed 3 times in\n035_standby_logical_decoding.pl in the last couple of days:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-01-03%2020%3A07%3A15\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-01-03%2017%3A09%3A27\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-01-01%2020%3A10%3A18\n\nThese all look like\n\n# poll_query_until timed out executing this query:\n# select (confl_active_logicalslot = 1) from pg_stat_database_conflicts where datname = 'testdb'\n# expecting this output:\n# t\n# last actual query output:\n# f\n\nalthough it's notable that two different tests are involved\n(vacuum vs. vacuum full).\n\nI am not real sure what is happening there, but I see that c161ab74f\nchanged some details of how that test works, so I wonder if it's\nresponsible for these failures. The timing isn't a perfect match,\nsince this commit went in two weeks ago, but I don't see any\nmore-recent commits that seem like plausible explanations.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Jan 2024 18:39:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "On Wed, Jan 03, 2024 at 06:39:29PM -0500, Tom Lane wrote:\n> I am not real sure what is happening there, but I see that c161ab74f\n> changed some details of how that test works, so I wonder if it's\n> responsible for these failures. The timing isn't a perfect match,\n> since this commit went in two weeks ago, but I don't see any\n> more-recent commits that seem like plausible explanations.\n\nLikely the INSERT query on retain_test that has been removed from the\ntest is impacting the slot conflict analysis that we'd expect.\n--\nMichael",
"msg_date": "Thu, 4 Jan 2024 09:02:56 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "Hello Tom,\n\n04.01.2024 02:39, Tom Lane wrote:\n> Buildfarm member skink has failed 3 times in\n> 035_standby_logical_decoding.pl in the last couple of days:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-01-03%2020%3A07%3A15\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-01-03%2017%3A09%3A27\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-01-01%2020%3A10%3A18\n>\n> These all look like\n>\n> # poll_query_until timed out executing this query:\n> # select (confl_active_logicalslot = 1) from pg_stat_database_conflicts where datname = 'testdb'\n> # expecting this output:\n> # t\n> # last actual query output:\n> # f\n>\n> although it's notable that two different tests are involved\n> (vacuum vs. vacuum full).\n>\n> I am not real sure what is happening there, but I see that c161ab74f\n> changed some details of how that test works, so I wonder if it's\n> responsible for these failures. The timing isn't a perfect match,\n> since this commit went in two weeks ago, but I don't see any\n> more-recent commits that seem like plausible explanations.\n\nReproduced here.\nAs I can see in the failure logs you referenced, the first problem is:\n# Failed test 'inactiveslot slot invalidation is logged with vacuum on pg_authid'\n# at t/035_standby_logical_decoding.pl line 224.\n\nIt reminded me of:\nhttps://www.postgresql.org/message-id/b2a1f7d0-7629-72c0-2da7-e9c4e336b010%40gmail.com\n\nIt seems that it's not something new, and maybe that my analysis is still\nvalid. If so, VACUUM FREEZE/autovacuum = off should fix the issue.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Thu, 4 Jan 2024 16:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "On Thu, Jan 04, 2024 at 04:00:01PM +0300, Alexander Lakhin wrote:\n> Reproduced here.\n\nDid you just make the run slow enough to show the failure with\nvalgrind?\n\n> As I can see in the failure logs you referenced, the first problem is:\n> # Failed test 'inactiveslot slot invalidation is logged with vacuum on pg_authid'\n> # at t/035_standby_logical_decoding.pl line 224.\n> It reminded me of:\n> https://www.postgresql.org/message-id/b2a1f7d0-7629-72c0-2da7-e9c4e336b010%40gmail.com\n> \n> It seems that it's not something new, and maybe that my analysis is still\n> valid. If so, VACUUM FREEZE/autovacuum = off should fix the issue.\n\nNot sure about that yet.\n--\nMichael",
"msg_date": "Fri, 5 Jan 2024 08:48:33 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "05.01.2024 02:48, Michael Paquier wrote:\n> On Thu, Jan 04, 2024 at 04:00:01PM +0300, Alexander Lakhin wrote:\n>> Reproduced here.\n> Did you just make the run slow enough to show the failure with\n> valgrind?\n\nYes, I just run several test instances (with no extra modifications) under\nValgrind with parallel as follows:\nfor i in `seq 20`; do cp -r src/test/recovery/ src/test/recovery_$i/; sed \"s|src/test/recovery|src/test/recovery_$i|\" -i \nsrc/test/recovery_$i/Makefile; done\n\nfor i in `seq 20`; do echo \"iteration $i\"; parallel --halt now,fail=1 -j20 --linebuffer --tag PROVE_TESTS=\"t/035*\" \nNO_TEMP_INSTALL=1 make check -s -C src/test/recovery_{} PROVE_FLAGS=\"--timer\" ::: `seq 20` || break; done\n\nThe test run fails for me on iterations 9, 8, 14 like so:\n...\niteration 9\n...\n5\n5 # Failed test 'inactiveslot slot invalidation is logged with vacuum on pg_authid'\n5 # at t/035_standby_logical_decoding.pl line 224.\n5\n5 # Failed test 'activeslot slot invalidation is logged with vacuum on pg_authid'\n5 # at t/035_standby_logical_decoding.pl line 229.\n13 [07:35:53] t/035_standby_logical_decoding.pl .. ok 432930 ms ( 0.02 usr 0.00 sys + 292.08 cusr 13.05 csys = \n305.15 CPU)\n13 [07:35:53]\n13 All tests successful.\n...\n\nI've also reproduced it without Valgrind in a VM with CPU slowed down to\n5% (on iterations 2, 5, 4), where average test duration is about 800 sec:\n\n4\n4 # Failed test 'inactiveslot slot invalidation is logged with vacuum on pg_authid'\n4 # at t/035_standby_logical_decoding.pl line 222.\n4\n4 # Failed test 'activeslot slot invalidation is logged with vacuum on pg_authid'\n4 # at t/035_standby_logical_decoding.pl line 227.\n6 [15:16:53] t/035_standby_logical_decoding.pl .. ok 763168 ms ( 0.68 usr 0.10 sys + 19.55 cusr 102.59 csys = \n122.92 CPU)\n\n>\n>> As I can see in the failure logs you referenced, the first problem is:\n>> # Failed test 'inactiveslot slot invalidation is logged with vacuum on pg_authid'\n>> # at t/035_standby_logical_decoding.pl line 224.\n>> It reminded me of:\n>> https://www.postgresql.org/message-id/b2a1f7d0-7629-72c0-2da7-e9c4e336b010%40gmail.com\n>>\n>> It seems that it's not something new, and maybe that my analysis is still\n>> valid. If so, VACUUM FREEZE/autovacuum = off should fix the issue.\n> Not sure about that yet.\n>\n\nYour suspicion was proved right. After\ngit show c161ab74f src/test/recovery/t/035_standby_logical_decoding.pl | git apply -R\n20 iterations with 20 tests in parallel performed successfully for me\n(twice).\n\nSo it looks like c161ab74f really made the things worse.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 5 Jan 2024 23:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "On Fri, Jan 05, 2024 at 11:00:00PM +0300, Alexander Lakhin wrote:\n> Your suspicion was proved right. After\n> git show c161ab74f src/test/recovery/t/035_standby_logical_decoding.pl | git apply -R\n> 20 iterations with 20 tests in parallel performed successfully for me\n> (twice).\n> \n> So it looks like c161ab74f really made the things worse.\n\nWe have two different failures here, one when VACUUM fails for a\nshared relation:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-01-03%2017%3A09%3A27\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-01-01%2020%3A10%3A18\n\nAnd the second failure happens for VACUUM FULL with a shared relation:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-01-03%2020%3A07%3A15\n\nIn the second case, the VACUUM FULL happens *BEFORE* the new\nadvance_wal(), making c161ab74f unrelated, no?\n\nAnyway, if one looks at the buildfarm logs, this failure is more\nancient than c161ab74f. We have many of them before that, some\nreported back in October:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2023-10-19%2000%3A44%3A58\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2023-10-30%2013%3A39%3A20\n\nI suspect on the contrary that c161ab74f may be actually helping here,\nbecause we've switched the CREATE TABLE/INSERT queries to not use a\nsnapshot anymore, reducing the reasons why a slot conflict would\nhappen? Or maybe that's just a matter of luck because the test is\nracy anyway.\n\nAnyway, this has the smell of a legit bug to me. I am also a bit\ndubious about the choice of pg_authid as shared catalog to choose for\nthe slot invalidation check. Isn't that potentially racy with the\nscans we may do on it at connection startup? Something else should be\nchosen, like pg_shdescription as it is non-critical? I am adding in\nCC Bertrand and Andres, as author and committer behind befcd77d53217b.\n--\nMichael",
"msg_date": "Sun, 7 Jan 2024 16:10:50 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "07.01.2024 10:10, Michael Paquier wrote:\n> On Fri, Jan 05, 2024 at 11:00:00PM +0300, Alexander Lakhin wrote:\n>> Your suspicion was proved right. After\n>> git show c161ab74f src/test/recovery/t/035_standby_logical_decoding.pl | git apply -R\n>> 20 iterations with 20 tests in parallel performed successfully for me\n>> (twice).\n>>\n>> So it looks like c161ab74f really made the things worse.\n\nAfter more waiting, I saw the test failure (with c161ab74f reverted) on\niteration 17 in VM where one test run takes up to 800 seconds.\n\n> We have two different failures here, one when VACUUM fails for a\n> shared relation:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-01-03%2017%3A09%3A27\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-01-01%2020%3A10%3A18\n>\n> And the second failure happens for VACUUM FULL with a shared relation:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2024-01-03%2020%3A07%3A15\n>\n> In the second case, the VACUUM FULL happens *BEFORE* the new\n> advance_wal(), making c161ab74f unrelated, no?\n>\n> Anyway, if one looks at the buildfarm logs, this failure is more\n> ancient than c161ab74f. We have many of them before that, some\n> reported back in October:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2023-10-19%2000%3A44%3A58\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2023-10-30%2013%3A39%3A20\n\nYes, I wrote exactly about that upthread and referenced my previous\ninvestigation. But what I'm observing now, is that the failure probability\ngreatly increased with c161ab74f, so something really changed in the test\nbehaviour. (I need a couple of days to investigate this.)\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sun, 7 Jan 2024 17:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "On Sun, Jan 07, 2024 at 05:00:00PM +0300, Alexander Lakhin wrote:\n> Yes, I wrote exactly about that upthread and referenced my previous\n> investigation. But what I'm observing now, is that the failure probability\n> greatly increased with c161ab74f, so something really changed in the test\n> behaviour. (I need a couple of days to investigate this.)\n\nAs far as I've cross-checked the logs between successful and failed\nruns on skink and my own machines (not reproduced it locally\nunfortunately), I did not notice a correlation with autovacuum running\nwhile VACUUM (with or without FULL) is executed on the catalogs.\nPerhaps a next sensible step would be to plug-in pg_waldump or\npg_walinspect and get some sense from the WAL records if we fail to\ndetect an invalidation from the log contents, from a LSN retrieved\nslightly at the beginning of each scenario.\n\nI would be tempted to add more increments of $Test::Builder::Level as\nwell in the subroutines of the test because it is kind of hard to find\nout from where a failure comes now. One needs to grep for the \nslot names, whose strings are built from prefixes and suffixes defined\nas arguments of these subroutines...\n--\nMichael",
"msg_date": "Mon, 8 Jan 2024 09:16:19 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "Hi,\n\nOn Mon, Jan 08, 2024 at 09:16:19AM +0900, Michael Paquier wrote:\n> On Sun, Jan 07, 2024 at 05:00:00PM +0300, Alexander Lakhin wrote:\n> > Yes, I wrote exactly about that upthread and referenced my previous\n> > investigation. But what I'm observing now, is that the failure probability\n> > greatly increased with c161ab74f, so something really changed in the test\n> > behaviour. (I need a couple of days to investigate this.)\n> \n> As far as I've cross-checked the logs between successful and failed\n> runs on skink and my own machines (not reproduced it locally\n> unfortunately), I did not notice a correlation with autovacuum running\n> while VACUUM (with or without FULL) is executed on the catalogs.\n\nIf one is able to reproduce, would it be possible to change the test and launch\nthe vacuum in verbose mode? That way, we could see if this is somehow due to [1]\n(means something holding global xmin).\n\nBTW, I think we should resume working on [1] and having it fixed in all the cases.\n\n[1]: https://www.postgresql.org/message-id/d40d015f-03a4-1cf2-6c1f-2b9aca860762%40gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 8 Jan 2024 07:34:48 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "Hello Bertrand,\n\n08.01.2024 10:34, Bertrand Drouvot wrote:\n> If one is able to reproduce, would it be possible to change the test and launch\n> the vacuum in verbose mode? That way, we could see if this is somehow due to [1]\n> (means something holding global xmin).\n\nYes, I've added (VERBOSE) and also cut down the test to catch the failure faster.\nThe difference between a successful and a failed run:\n2024-01-08 11:58:30.679 UTC [668363] 035_standby_logical_decoding.pl INFO: vacuuming \"testdb.pg_catalog.pg_authid\"\n2024-01-08 11:58:30.679 UTC [668363] 035_standby_logical_decoding.pl INFO: finished vacuuming \n\"testdb.pg_catalog.pg_authid\": index scans: 1\n pages: 0 removed, 1 remain, 1 scanned (100.00% of total)\n tuples: 1 removed, 15 remain, 0 are dead but not yet removable\n removable cutoff: 767, which was 0 XIDs old when operation ended\n new relfrozenxid: 767, which is 39 XIDs ahead of previous value\n frozen: 0 pages from table (0.00% of total) had 0 tuples frozen\n index scan needed: 1 pages from table (100.00% of total) had 1 dead item identifiers removed\n---\n2024-01-08 12:00:59.903 UTC [669119] 035_standby_logical_decoding.pl LOG: statement: VACUUM (VERBOSE) pg_authid;\n2024-01-08 12:00:59.904 UTC [669119] 035_standby_logical_decoding.pl INFO: vacuuming \"testdb.pg_catalog.pg_authid\"\n2024-01-08 12:00:59.904 UTC [669119] 035_standby_logical_decoding.pl INFO: finished vacuuming \n\"testdb.pg_catalog.pg_authid\": index scans: 0\n pages: 0 removed, 1 remain, 1 scanned (100.00% of total)\n tuples: 0 removed, 16 remain, 1 are dead but not yet removable\n removable cutoff: 766, which was 1 XIDs old when operation ended\n new relfrozenxid: 765, which is 37 XIDs ahead of previous value\n frozen: 0 pages from table (0.00% of total) had 0 tuples frozen\n index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed\n\nThe difference in WAL is essentially the same as I observed before [1].\nrmgr: Transaction len (rec/tot): 82/ 82, tx: 766, lsn: 0/0403D418, prev 0/0403D3D8, desc: COMMIT \n2024-01-08 11:58:30.679035 UTC; inval msgs: catcache 11 catcache 10\nrmgr: Heap2 len (rec/tot): 57/ 57, tx: 0, lsn: 0/0403D470, prev 0/0403D418, desc: PRUNE \nsnapshotConflictHorizon: 766, nredirected: 0, ndead: 1, isCatalogRel: T, nunused: 0, redirected: [], dead: [16], unused: \n[], blkref #0: rel 1664/0/1260 blk 0\nrmgr: Btree len (rec/tot): 52/ 52, tx: 0, lsn: 0/0403D4B0, prev 0/0403D470, desc: VACUUM ndeleted: \n1, nupdated: 0, deleted: [1], updated: [], blkref #0: rel 1664/0/2676 blk 1\nrmgr: Btree len (rec/tot): 52/ 52, tx: 0, lsn: 0/0403D4E8, prev 0/0403D4B0, desc: VACUUM ndeleted: \n1, nupdated: 0, deleted: [16], updated: [], blkref #0: rel 1664/0/2677 blk 1\nrmgr: Heap2 len (rec/tot): 50/ 50, tx: 0, lsn: 0/0403D520, prev 0/0403D4E8, desc: VACUUM nunused: \n1, unused: [16], blkref #0: rel 1664/0/1260 blk 0\nrmgr: Heap2 len (rec/tot): 64/ 8256, tx: 0, lsn: 0/0403D558, prev 0/0403D520, desc: VISIBLE \nsnapshotConflictHorizon: 0, flags: 0x07, blkref #0: rel 1664/0/1260 fork vm blk 0 FPW, blkref #1: rel 1664/0/1260 blk 0\nrmgr: Heap len (rec/tot): 225/ 225, tx: 0, lsn: 0/0403F5B0, prev 0/0403D558, desc: INPLACE off: 26, \nblkref #0: rel 1663/16384/16410 blk 4\nvs\nrmgr: Transaction len (rec/tot): 82/ 82, tx: 766, lsn: 0/0403F480, prev 0/0403F440, desc: COMMIT \n2024-01-08 12:00:59.901582 UTC; inval msgs: catcache 11 catcache 10\nrmgr: XLOG len (rec/tot): 49/ 8241, tx: 0, lsn: 0/0403F4D8, prev 0/0403F480, desc: FPI_FOR_HINT , \nblkref #0: 
rel 1664/0/1260 fork fsm blk 2 FPW\nrmgr: XLOG len (rec/tot): 49/ 8241, tx: 0, lsn: 0/04041528, prev 0/0403F4D8, desc: FPI_FOR_HINT , \nblkref #0: rel 1664/0/1260 fork fsm blk 1 FPW\nrmgr: XLOG len (rec/tot): 49/ 8241, tx: 0, lsn: 0/04043578, prev 0/04041528, desc: FPI_FOR_HINT , \nblkref #0: rel 1664/0/1260 fork fsm blk 0 FPW\nrmgr: Heap len (rec/tot): 225/ 225, tx: 0, lsn: 0/040455C8, prev 0/04043578, desc: INPLACE off: 26, \nblkref #0: rel 1663/16384/16410 blk 4\n\n(Complete logs and the modified test are attached.)\n\nWith FREEZE, 10 iterations with 20 tests in parallel succeeded for me\n(while without it, I get failures on iterations 1,2).\n\n[1] https://www.postgresql.org/message-id/b2a1f7d0-7629-72c0-2da7-e9c4e336b010%40gmail.com\n\nBest regards,\nAlexander",
"msg_date": "Mon, 8 Jan 2024 20:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "On Mon, Jan 08, 2024 at 08:00:00PM +0300, Alexander Lakhin wrote:\n> Yes, I've added (VERBOSE) and also cut down the test to catch the failure faster.\n> The difference between a successful and a failed run:\n> tuples: 1 removed, 15 remain, 0 are dead but not yet removable\n> [...]\n> tuples: 0 removed, 16 remain, 1 are dead but not yet removable\n\nYep, it's clear that the horizon is not stable.\n\n> With FREEZE, 10 iterations with 20 tests in parallel succeeded for me\n> (while without it, I get failures on iterations 1,2).\n> \n> [1] https://www.postgresql.org/message-id/b2a1f7d0-7629-72c0-2da7-e9c4e336b010%40gmail.com\n\nAlexander, does the test gain in stability once you begin using the\npatch posted on [2], mentioned by Bertrand?\n\n(Also, perhaps we'd better move the discussion to the other thread\nwhere the patch has been sent.)\n\n[2]: https://www.postgresql.org/message-id/[email protected]\n--\nMichael",
"msg_date": "Tue, 9 Jan 2024 13:59:08 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
},
{
"msg_contents": "Hi,\n\nOn Tue, Jan 09, 2024 at 01:59:08PM +0900, Michael Paquier wrote:\n> On Mon, Jan 08, 2024 at 08:00:00PM +0300, Alexander Lakhin wrote:\n> > Yes, I've added (VERBOSE) and also cut down the test to catch the failure faster.\n\nThanks Alexander!\n\n> > The difference between a successful and a failed run:\n> > ������� tuples: 1 removed, 15 remain, 0 are dead but not yet removable\n> > [...]\n> > ������� tuples: 0 removed, 16 remain, 1 are dead but not yet removable\n> \n> Yep, it's clear that the horizon is not stable.\n\nYeap.\n\n> \n> > With FREEZE, 10 iterations with 20 tests in parallel succeeded for me\n> > (while without it, I get failures on iterations 1,2).\n> > \n> > [1] https://www.postgresql.org/message-id/b2a1f7d0-7629-72c0-2da7-e9c4e336b010%40gmail.com\n> \n> Alexander, does the test gain in stability once you begin using the\n> patch posted on [2], mentioned by Bertrand?\n\nAlexander, pleae find attached v3 which is more or less a rebased version of it.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 9 Jan 2024 05:29:09 +0000",
"msg_from": "Bertrand Drouvot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add a perl function in Cluster.pm to generate WAL"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile testing some stuff, I noticed heapam_estimate_rel_size (or rather\ntable_block_relation_estimate_size it calls) ignores fillfactor, so that\nfor a table without statistics it ends up with reltuple estimate much\nhigher than reality. For example, with fillfactor=10 the estimate is\nabout 10x higher.\n\nI ran into this while doing some tests with hash indexes, where I use\nfillfactor to make the table look bigger (as if the tuples were wider),\nand I ran into this:\n\n drop table hash_test;\n create table hash_test (a int, b text) with (fillfactor=10);\n insert into hash_test select 1 + ((i - 1) / 10000), md5(i::text)\n from generate_series(1, 1000000) s(i);\n -- analyze hash_test;\n create index hash_test_idx on hash_test using hash (a);\n\n select pg_size_pretty(pg_relation_size('hash_test_idx'));\n\nIf you run it like this (without the analyze), the index will be 339MB.\nWith the analyze, it's 47MB.\n\nThis only happens if there are no relpages/reltuples statistics yet, in\nwhich case table_block_relation_estimate_size estimates density from\ntuple width etc.\n\nSo it seems the easiest \"fix\" is to do ANALYZE before creating the index\n(and not after it, as I had in my scripts). But I wonder how many people\nfail to realize this - it sure took me a while to realize the indexes\nare too large and even longer what is causing it. I wouldn't be very\nsurprised if many people had bloated hash indexes after bulk loads.\n\nSo maybe we should make table_block_relation_estimate_size smarter to\nalso consider the fillfactor in the \"no statistics\" branch, per the\nattached patch.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 11 Jun 2023 14:41:27 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Should heapam_estimate_rel_size consider fillfactor?"
},
{
"msg_contents": ">\n> So maybe we should make table_block_relation_estimate_size smarter to\n> also consider the fillfactor in the \"no statistics\" branch, per the\n> attached patch.\n>\n\nI like this a lot. The reasoning is obvious, the fix is simple,it doesn't\nupset any make-check-world tests, and in order to get a performance\nregression we'd need a table whose fillfactor has been changed after the\ndata was loaded but before an analyze happens, and that's a narrow enough\ncase to accept.\n\nMy only nitpick is to swap\n\n(usable_bytes_per_page * fillfactor / 100) / tuple_width\n\nwith\n\n(usable_bytes_per_page * fillfactor) / (tuple_width * 100)\n\n\nas this will eliminate the extra remainder truncation, and it also gets the\narguments \"in order\" algebraically.\n\nSo maybe we should make table_block_relation_estimate_size smarter to\nalso consider the fillfactor in the \"no statistics\" branch, per the\nattached patch.I like this a lot. The reasoning is obvious, the fix is simple,it doesn't upset any make-check-world tests, and in order to get a performance regression we'd need a table whose fillfactor has been changed after the data was loaded but before an analyze happens, and that's a narrow enough case to accept.My only nitpick is to swap(usable_bytes_per_page * fillfactor / 100) / tuple_widthwith(usable_bytes_per_page * fillfactor) / (tuple_width * 100)as this will eliminate the extra remainder truncation, and it also gets the arguments \"in order\" algebraically.",
"msg_date": "Wed, 14 Jun 2023 14:41:37 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should heapam_estimate_rel_size consider fillfactor?"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-11 14:41:27 +0200, Tomas Vondra wrote:\n> While testing some stuff, I noticed heapam_estimate_rel_size (or rather\n> table_block_relation_estimate_size it calls) ignores fillfactor, so that\n> for a table without statistics it ends up with reltuple estimate much\n> higher than reality. For example, with fillfactor=10 the estimate is\n> about 10x higher.\n> \n> I ran into this while doing some tests with hash indexes, where I use\n> fillfactor to make the table look bigger (as if the tuples were wider),\n> and I ran into this:\n> \n> drop table hash_test;\n> create table hash_test (a int, b text) with (fillfactor=10);\n> insert into hash_test select 1 + ((i - 1) / 10000), md5(i::text)\n> from generate_series(1, 1000000) s(i);\n> -- analyze hash_test;\n> create index hash_test_idx on hash_test using hash (a);\n> \n> select pg_size_pretty(pg_relation_size('hash_test_idx'));\n> \n> If you run it like this (without the analyze), the index will be 339MB.\n> With the analyze, it's 47MB.\n> \n> This only happens if there are no relpages/reltuples statistics yet, in\n> which case table_block_relation_estimate_size estimates density from\n> tuple width etc.\n> \n> So it seems the easiest \"fix\" is to do ANALYZE before creating the index\n> (and not after it, as I had in my scripts). But I wonder how many people\n> fail to realize this - it sure took me a while to realize the indexes\n> are too large and even longer what is causing it. I wouldn't be very\n> surprised if many people had bloated hash indexes after bulk loads.\n> \n> So maybe we should make table_block_relation_estimate_size smarter to\n> also consider the fillfactor in the \"no statistics\" branch, per the\n> attached patch.\n\nSeems like a good idea - I can't think of a reason why we shouldn't do so.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 14 Jun 2023 12:53:53 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should heapam_estimate_rel_size consider fillfactor?"
},
{
"msg_contents": "On 14.06.23 20:41, Corey Huinker wrote:\n> So maybe we should make table_block_relation_estimate_size smarter to\n> also consider the fillfactor in the \"no statistics\" branch, per the\n> attached patch.\n> \n> \n> I like this a lot. The reasoning is obvious, the fix is simple,it \n> doesn't upset any make-check-world tests, and in order to get a \n> performance regression we'd need a table whose fillfactor has been \n> changed after the data was loaded but before an analyze happens, and \n> that's a narrow enough case to accept.\n> \n> My only nitpick is to swap\n> \n> (usable_bytes_per_page * fillfactor / 100) / tuple_width\n> \n> with\n> \n> (usable_bytes_per_page * fillfactor) / (tuple_width * 100)\n> \n> \n> as this will eliminate the extra remainder truncation, and it also gets \n> the arguments \"in order\" algebraically.\n\nThe fillfactor is in percent, so it makes sense to divide it by 100 \nfirst before doing any further arithmetic with it. I find your version \nof this to be more puzzling without any explanation. You could do \nfillfactor/100.0 to avoid the integer division, but then, the comment \nabove that says \"integer division is intentional here\", without any \nfurther explanation. I think a bit more explanation of all the \nsubtleties here would be good in any case.\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 08:46:00 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should heapam_estimate_rel_size consider fillfactor?"
},
{
"msg_contents": "\n\nOn 7/3/23 08:46, Peter Eisentraut wrote:\n> On 14.06.23 20:41, Corey Huinker wrote:\n>> So maybe we should make table_block_relation_estimate_size smarter to\n>> also consider the fillfactor in the \"no statistics\" branch, per the\n>> attached patch.\n>>\n>>\n>> I like this a lot. The reasoning is obvious, the fix is simple,it\n>> doesn't upset any make-check-world tests, and in order to get a\n>> performance regression we'd need a table whose fillfactor has been\n>> changed after the data was loaded but before an analyze happens, and\n>> that's a narrow enough case to accept.\n>>\n>> My only nitpick is to swap\n>>\n>> (usable_bytes_per_page * fillfactor / 100) / tuple_width\n>>\n>> with\n>>\n>> (usable_bytes_per_page * fillfactor) / (tuple_width * 100)\n>>\n>>\n>> as this will eliminate the extra remainder truncation, and it also\n>> gets the arguments \"in order\" algebraically.\n> \n> The fillfactor is in percent, so it makes sense to divide it by 100\n> first before doing any further arithmetic with it. I find your version\n> of this to be more puzzling without any explanation. You could do\n> fillfactor/100.0 to avoid the integer division, but then, the comment\n> above that says \"integer division is intentional here\", without any\n> further explanation. I think a bit more explanation of all the\n> subtleties here would be good in any case.\n> \n\nYeah, I guess the formula should be doing (fillfactor / 100.0).\n\nThe \"integer division is intentional here\" comment is unrelated to this\nchange, it refers to the division by \"tuple_width\" (and yeah, it'd be\nnice to have it explain why it's intentional).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 3 Jul 2023 09:00:18 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should heapam_estimate_rel_size consider fillfactor?"
},
{
"msg_contents": "\n\nOn 7/3/23 09:00, Tomas Vondra wrote:\n> \n> \n> On 7/3/23 08:46, Peter Eisentraut wrote:\n>> On 14.06.23 20:41, Corey Huinker wrote:\n>>> So maybe we should make table_block_relation_estimate_size smarter to\n>>> also consider the fillfactor in the \"no statistics\" branch, per the\n>>> attached patch.\n>>>\n>>>\n>>> I like this a lot. The reasoning is obvious, the fix is simple,it\n>>> doesn't upset any make-check-world tests, and in order to get a\n>>> performance regression we'd need a table whose fillfactor has been\n>>> changed after the data was loaded but before an analyze happens, and\n>>> that's a narrow enough case to accept.\n>>>\n>>> My only nitpick is to swap\n>>>\n>>> (usable_bytes_per_page * fillfactor / 100) / tuple_width\n>>>\n>>> with\n>>>\n>>> (usable_bytes_per_page * fillfactor) / (tuple_width * 100)\n>>>\n>>>\n>>> as this will eliminate the extra remainder truncation, and it also\n>>> gets the arguments \"in order\" algebraically.\n>>\n>> The fillfactor is in percent, so it makes sense to divide it by 100\n>> first before doing any further arithmetic with it. I find your version\n>> of this to be more puzzling without any explanation. You could do\n>> fillfactor/100.0 to avoid the integer division, but then, the comment\n>> above that says \"integer division is intentional here\", without any\n>> further explanation. I think a bit more explanation of all the\n>> subtleties here would be good in any case.\n>>\n> \n> Yeah, I guess the formula should be doing (fillfactor / 100.0).\n> \n> The \"integer division is intentional here\" comment is unrelated to this\n> change, it refers to the division by \"tuple_width\" (and yeah, it'd be\n> nice to have it explain why it's intentional).\n> \n\nFWIW the reason why the integer division is intentional is most likely\nthat we want \"floor\" semantics - if there's 10.23 rows per page, that\nreally means 10 rows per page.\n\nI doubt it makes a huge difference in this particular place, considering\nwe're calculating the estimate from somewhat unreliable values, and then\nuse it for rough estimate of relation size.\n\nBut from this POV, I think it's more correct to do it \"my\" way:\n\n density = (usable_bytes_per_page * fillfactor / 100) / tuple_width;\n\nbecause that's doing *two* separate integer divisions, with floor\nsemantics. First we calculate \"usable bytes\" (rounded down), then\naverage number of rows per page (also rounded down).\n\nCorey's formula would do just one integer division. I don't think it\nmakes a huge difference, though. I mean, it's just an estimate and so we\ncan't expect to be 100% accurate.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 3 Jul 2023 11:40:58 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should heapam_estimate_rel_size consider fillfactor?"
},
{
"msg_contents": "On 7/3/23 11:40, Tomas Vondra wrote:\n> ...\n>\n> FWIW the reason why the integer division is intentional is most likely\n> that we want \"floor\" semantics - if there's 10.23 rows per page, that\n> really means 10 rows per page.\n> \n> I doubt it makes a huge difference in this particular place, considering\n> we're calculating the estimate from somewhat unreliable values, and then\n> use it for rough estimate of relation size.\n> \n> But from this POV, I think it's more correct to do it \"my\" way:\n> \n> density = (usable_bytes_per_page * fillfactor / 100) / tuple_width;\n> \n> because that's doing *two* separate integer divisions, with floor\n> semantics. First we calculate \"usable bytes\" (rounded down), then\n> average number of rows per page (also rounded down).\n> \n> Corey's formula would do just one integer division. I don't think it\n> makes a huge difference, though. I mean, it's just an estimate and so we\n> can't expect to be 100% accurate.\n> \n\nPushed, using the formula with two divisions (as in the original patch).\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 3 Jul 2023 19:54:19 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should heapam_estimate_rel_size consider fillfactor?"
}
]
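To make the arithmetic in the closing exchange easy to reproduce, here is a small standalone C sketch (not taken from the PostgreSQL sources) that evaluates both orderings of the no-statistics density formula discussed above. The values of usable_bytes_per_page, fillfactor, and tuple_width are illustrative assumptions, not numbers from the server.

#include <stdio.h>

/*
 * Standalone sketch of the "no statistics" density estimate discussed in
 * the thread.  All constants below are assumptions chosen for illustration.
 */
int
main(void)
{
    int usable_bytes_per_page = 8168;   /* assumed usable space on an 8kB page */
    int fillfactor = 10;                /* table fillfactor, in percent */
    int tuple_width = 60;               /* assumed average tuple width, bytes */

    /* committed form: two integer divisions, each rounding down */
    int density_two_divs =
        (usable_bytes_per_page * fillfactor / 100) / tuple_width;

    /* alternative form from the thread: a single integer division */
    int density_one_div =
        (usable_bytes_per_page * fillfactor) / (tuple_width * 100);

    printf("two divisions: %d tuples/page\n", density_two_divs);
    printf("one division:  %d tuples/page\n", density_one_div);
    return 0;
}

For these inputs both expressions evaluate to 13 tuples per page, so the choice of ordering is a readability question rather than a correctness one, consistent with the thread's conclusion that it makes no practical difference to the estimate.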